Date: Thu, 15 Jun 2017 10:09:11 +0200
From: Jan Kara
To: Dan Williams
Cc: linux-nvdimm@lists.01.org, Jan Kara, Matthew Wilcox, x86@kernel.org,
	linux-kernel@vger.kernel.org, Jeff Moyer, linux-fsdevel@vger.kernel.org,
	Ross Zwisler, Christoph Hellwig
Subject: Re: [PATCH v4 13/14] libnvdimm, pmem: gate cache management on
 QUEUE_FLAG_WC in pmem_dax_flush()
Message-ID: <20170615080911.GC1764@quack2.suse.cz>
References: <149703989611.20620.6907872165215640212.stgit@dwillia2-desk3.amr.corp.intel.com>
 <149748185051.10107.3746460181917620940.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <149748185051.10107.3746460181917620940.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed 14-06-17 16:11:26, Dan Williams wrote:
> Some platforms arrange for cpu caches to be flushed on power-fail. On
> those platforms there is no requirement that the kernel track and flush
> potentially dirty cache lines. Given that we still insert entries into
> the radix for locking purposes this patch only disables the cache flush
> loop, not the dirty tracking.
>
> Userspace can override the default cache setting via the block device
> queue "write_cache" attribute in sysfs.
>
> Cc: Jeff Moyer
> Cc: Christoph Hellwig
> Cc: Matthew Wilcox
> Cc: Ross Zwisler
> Suggested-by: Jan Kara
> Signed-off-by: Dan Williams

Looks good.
You can add:

Reviewed-by: Jan Kara

								Honza

> ---
> Changes since v3:
> * move the check of QUEUE_FLAG_WC into the pmem driver directly (Jan)
>
>  drivers/nvdimm/pmem.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index 06f6c27ec1e9..49938b246a7b 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -244,7 +244,16 @@ static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
>  static void pmem_dax_flush(struct dax_device *dax_dev, pgoff_t pgoff,
>  		void *addr, size_t size)
>  {
> -	arch_wb_cache_pmem(addr, size);
> +	struct pmem_device *pmem = dax_get_private(dax_dev);
> +	struct gendisk *disk = pmem->disk;
> +	struct request_queue *q = disk->queue;
> +
> +	/*
> +	 * Only perform cache management when the queue has caching
> +	 * enabled.
> +	 */
> +	if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
> +		arch_wb_cache_pmem(addr, size);
>  }
>
>  static const struct dax_operations pmem_dax_ops = {

-- 
Jan Kara
SUSE Labs, CR