Date: Mon, 16 Jul 2018 04:13:47 -0400 (EDT)
From: Pankaj Gupta
To: Luiz Capitulino
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, qemu-devel@nongnu.org,
    linux-nvdimm@ml01.01.org, jack@suse.cz, stefanha@redhat.com,
    dan j williams, riel@surriel.com, nilal@redhat.com, kwolf@redhat.com,
    pbonzini@redhat.com, ross zwisler, david@redhat.com, xiaoguangrong eric,
    hch@infradead.org, mst@redhat.com, niteshnarayanlal@hotmail.com,
    imammedo@redhat.com, eblake@redhat.com
Message-ID: <9008103.50980761.1531728827436.JavaMail.zimbra@redhat.com>
In-Reply-To: <20180713163559.692aca30@doriath>
References: <20180713075232.9575-1-pagupta@redhat.com>
 <20180713075232.9575-2-pagupta@redhat.com>
 <20180713163559.692aca30@doriath>
Subject: Re: [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem
List-ID: <linux-kernel.vger.kernel.org>

Hi Luiz,

> > This patch adds functionality to perform a flush from guest to host
> > over VIRTIO. We register a callback based on the 'nd_region' type.
> > Since the virtio_pmem driver requires this special flush interface, we
> > register the existing flush function for the rest of the region types.
> > We also report the error returned by the virtio flush interface.
>
> This patch doesn't apply against latest upstream. A few more comments
> below.

My bad, I tested it with 4.17-rc1. Will rebase it.
> >
> > Signed-off-by: Pankaj Gupta
> > ---
> >  drivers/nvdimm/nd.h          |  1 +
> >  drivers/nvdimm/pmem.c        |  4 ++--
> >  drivers/nvdimm/region_devs.c | 24 ++++++++++++++++++------
> >  include/linux/libnvdimm.h    |  5 ++++-
> >  4 files changed, 25 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> > index 32e0364..1b62f79 100644
> > --- a/drivers/nvdimm/nd.h
> > +++ b/drivers/nvdimm/nd.h
> > @@ -159,6 +159,7 @@ struct nd_region {
> >  	struct badblocks bb;
> >  	struct nd_interleave_set *nd_set;
> >  	struct nd_percpu_lane __percpu *lane;
> > +	int (*flush)(struct device *dev);
> >  	struct nd_mapping mapping[0];
> >  };
> >
> > diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> > index 9d71492..29fd2cd 100644
> > --- a/drivers/nvdimm/pmem.c
> > +++ b/drivers/nvdimm/pmem.c
> > @@ -180,7 +180,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
> >  	struct nd_region *nd_region = to_region(pmem);
> >
> >  	if (bio->bi_opf & REQ_FLUSH)
> > -		nvdimm_flush(nd_region);
> > +		bio->bi_status = nvdimm_flush(nd_region);
> >
> >  	do_acct = nd_iostat_start(bio, &start);
> >  	bio_for_each_segment(bvec, bio, iter) {
> > @@ -196,7 +196,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
> >  	nd_iostat_end(bio, start);
> >
> >  	if (bio->bi_opf & REQ_FUA)
> > -		nvdimm_flush(nd_region);
> > +		bio->bi_status = nvdimm_flush(nd_region);
> >
> >  	bio_endio(bio);
> >  	return BLK_QC_T_NONE;
> >
> > diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> > index a612be6..124aae7 100644
> > --- a/drivers/nvdimm/region_devs.c
> > +++ b/drivers/nvdimm/region_devs.c
> > @@ -1025,6 +1025,7 @@ static struct nd_region *nd_region_create(struct nvdimm_bus *nvdimm_bus,
> >  	dev->of_node = ndr_desc->of_node;
> >  	nd_region->ndr_size = resource_size(ndr_desc->res);
> >  	nd_region->ndr_start = ndr_desc->res->start;
> > +	nd_region->flush = ndr_desc->flush;
> >  	nd_device_register(dev);
> >
> >  	return nd_region;
> > @@ -1065,13 +1066,10 @@ struct nd_region *nvdimm_volatile_region_create(struct nvdimm_bus *nvdimm_bus,
> >  }
> >  EXPORT_SYMBOL_GPL(nvdimm_volatile_region_create);
> >
> > -/**
> > - * nvdimm_flush - flush any posted write queues between the cpu and pmem media
> > - * @nd_region: blk or interleaved pmem region
> > - */
> > -void nvdimm_flush(struct nd_region *nd_region)
> > +void pmem_flush(struct device *dev)
> >  {
> > -	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
> > +	struct nd_region_data *ndrd = dev_get_drvdata(dev);
> > +	struct nd_region *nd_region = to_nd_region(dev);
> >  	int i, idx;
> >
> >  	/*
> > @@ -1094,6 +1092,20 @@ void nvdimm_flush(struct nd_region *nd_region)
> >  		writeq(1, ndrd_get_flush_wpq(ndrd, i, idx));
> >  	wmb();
> >  }
> > +
> > +/**
> > + * nvdimm_flush - flush any posted write queues between the cpu and pmem media
> > + * @nd_region: blk or interleaved pmem region
> > + */
> > +int nvdimm_flush(struct nd_region *nd_region)
> > +{
> > +	if (nd_region->flush)
> > +		return(nd_region->flush(&nd_region->dev));
> > +
> > +	pmem_flush(&nd_region->dev);
>
> IMHO, a better way of doing this would be to allow nvdimm_flush() to
> be overridden. That is, in nd_region_create() you set nd_region->flush
> to the original nvdimm_flush() if ndr_desc->flush is NULL. And then
> always call nd_region->flush() where nvdimm_flush() is called today.

I wanted to keep the changes to the existing 'nvdimm_flush' function minimal,
because it does not return an error or a status for fsync. So I needed to
differentiate between 'fake DAX' and 'NVDIMM' at the time of calling 'flush';
otherwise I would have had to change 'nvdimm_flush' to return zero for all the
existing calls. But it looks like I am already doing most of this, so I will
change it as suggested.
>
> > +
> > +	return 0;
> > +}
> >  EXPORT_SYMBOL_GPL(nvdimm_flush);
> >
> >  /**
> > diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
> > index 097072c..33b617f 100644
> > --- a/include/linux/libnvdimm.h
> > +++ b/include/linux/libnvdimm.h
> > @@ -126,6 +126,7 @@ struct nd_region_desc {
> >  	int numa_node;
> >  	unsigned long flags;
> >  	struct device_node *of_node;
> > +	int (*flush)(struct device *dev);
> >  };
> >
> >  struct device;
> > @@ -201,7 +202,9 @@ unsigned long nd_blk_memremap_flags(struct nd_blk_region *ndbr);
> >  unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
> >  void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
> >  u64 nd_fletcher64(void *addr, size_t len, bool le);
> > -void nvdimm_flush(struct nd_region *nd_region);
> > +int nvdimm_flush(struct nd_region *nd_region);
> > +void pmem_set_flush(struct nd_region *nd_region, void (*flush)
> > +		(struct device *));
>
> It seems pmem_set_flush() doesn't exist.

Sorry! Will remove it.

> >  int nvdimm_has_flush(struct nd_region *nd_region);
> >  int nvdimm_has_cache(struct nd_region *nd_region);
> >

Thanks,
Pankaj