From: Pankaj Gupta <pagupta@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, qemu-devel@nongnu.org, linux-nvdimm@ml01.01.org
Cc: jack@suse.cz, stefanha@redhat.com, dan.j.williams@intel.com, riel@surriel.com, haozhong.zhang@intel.com, nilal@redhat.com, kwolf@redhat.com, pbonzini@redhat.com, ross.zwisler@intel.com, david@redhat.com, xiaoguangrong.eric@gmail.com, hch@infradead.org, mst@redhat.com, niteshnarayanlal@hotmail.com, lcapitulino@redhat.com, imammedo@redhat.com, eblake@redhat.com, pagupta@redhat.com
Subject: [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem
Date: Fri, 13 Jul 2018 13:22:30 +0530
Message-Id: <20180713075232.9575-2-pagupta@redhat.com>
In-Reply-To: <20180713075232.9575-1-pagupta@redhat.com>
References: <20180713075232.9575-1-pagupta@redhat.com>

This patch adds the ability to perform a flush from guest to host over VIRTIO. We register a flush callback based on the 'nd_region' type: the virtio_pmem driver requires this special flush interface, while for the rest of the region types we register the existing flush function. The error returned by the virtio flush interface is also reported back through the bio status.
Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
 drivers/nvdimm/nd.h          |  1 +
 drivers/nvdimm/pmem.c        |  4 ++--
 drivers/nvdimm/region_devs.c | 24 ++++++++++++++++++------
 include/linux/libnvdimm.h    |  5 ++++-
 4 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 32e0364..1b62f79 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -159,6 +159,7 @@ struct nd_region {
 	struct badblocks bb;
 	struct nd_interleave_set *nd_set;
 	struct nd_percpu_lane __percpu *lane;
+	int (*flush)(struct device *dev);
 	struct nd_mapping mapping[0];
 };

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 9d71492..29fd2cd 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -180,7 +180,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
 	struct nd_region *nd_region = to_region(pmem);

 	if (bio->bi_opf & REQ_FLUSH)
-		nvdimm_flush(nd_region);
+		bio->bi_status = nvdimm_flush(nd_region);

 	do_acct = nd_iostat_start(bio, &start);
 	bio_for_each_segment(bvec, bio, iter) {
@@ -196,7 +196,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
 		nd_iostat_end(bio, start);

 	if (bio->bi_opf & REQ_FUA)
-		nvdimm_flush(nd_region);
+		bio->bi_status = nvdimm_flush(nd_region);

 	bio_endio(bio);
 	return BLK_QC_T_NONE;

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index a612be6..124aae7 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -1025,6 +1025,7 @@ static struct nd_region *nd_region_create(struct nvdimm_bus *nvdimm_bus,
 	dev->of_node = ndr_desc->of_node;
 	nd_region->ndr_size = resource_size(ndr_desc->res);
 	nd_region->ndr_start = ndr_desc->res->start;
+	nd_region->flush = ndr_desc->flush;
 	nd_device_register(dev);

 	return nd_region;
@@ -1065,13 +1066,10 @@ struct nd_region *nvdimm_volatile_region_create(struct nvdimm_bus *nvdimm_bus,
 }
 EXPORT_SYMBOL_GPL(nvdimm_volatile_region_create);

-/**
- * nvdimm_flush - flush any posted write queues between the cpu and pmem media
- * @nd_region: blk or interleaved pmem region
- */
-void nvdimm_flush(struct nd_region *nd_region)
+void pmem_flush(struct device *dev)
 {
-	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
+	struct nd_region_data *ndrd = dev_get_drvdata(dev);
+	struct nd_region *nd_region = to_nd_region(dev);
 	int i, idx;

 	/*
@@ -1094,6 +1092,20 @@ void nvdimm_flush(struct nd_region *nd_region)
 		writeq(1, ndrd_get_flush_wpq(ndrd, i, idx));
 	wmb();
 }
+
+/**
+ * nvdimm_flush - flush any posted write queues between the cpu and pmem media
+ * @nd_region: blk or interleaved pmem region
+ */
+int nvdimm_flush(struct nd_region *nd_region)
+{
+	if (nd_region->flush)
+		return nd_region->flush(&nd_region->dev);
+
+	pmem_flush(&nd_region->dev);
+
+	return 0;
+}
 EXPORT_SYMBOL_GPL(nvdimm_flush);

 /**
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index 097072c..33b617f 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -126,6 +126,7 @@ struct nd_region_desc {
 	int numa_node;
 	unsigned long flags;
 	struct device_node *of_node;
+	int (*flush)(struct device *dev);
 };

 struct device;
@@ -201,7 +202,9 @@ unsigned long nd_blk_memremap_flags(struct nd_blk_region *ndbr);
 unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
 void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
 u64 nd_fletcher64(void *addr, size_t len, bool le);
-void nvdimm_flush(struct nd_region *nd_region);
+int nvdimm_flush(struct nd_region *nd_region);
+void pmem_set_flush(struct nd_region *nd_region, void (*flush)
+		(struct device *));
 int nvdimm_has_flush(struct nd_region *nd_region);
 int nvdimm_has_cache(struct nd_region *nd_region);
--
2.9.3