From: Ross Zwisler
To: linux-kernel@vger.kernel.org
Cc: Ross Zwisler, Dan Williams, Dave Jiang, linux-nvdimm@lists.01.org
Subject: [PATCH v2 1/3] libnvdimm: unconditionally deep flush on *sync
Date: Tue, 5 Jun 2018 17:58:00 -0600
Message-Id: <20180605235802.14531-1-ross.zwisler@linux.intel.com>

Prior to this commit we would only do a "deep flush" in response to an
msync/fsync/sync call if nvdimm_has_cache() returned true at the time we
were setting up the request queue, because that value was the write cache
flag passed in to blk_queue_write_cache().  We do have a "write_cache"
sysfs entry for namespaces, i.e.:

  /sys/bus/nd/devices/pfn0.1/block/pmem0/dax/write_cache

which can be used to control whether the kernel thinks a given namespace
has a write cache, but changing it did not modify the deep flush behavior
that we set up when the driver was initialized.  Instead, it only changed
whether DAX would flush CPU caches in response to *sync calls.

Simplify this by making the *sync "deep flush" always happen, regardless
of the write cache setting of a namespace.  The DAX CPU cache flushing
will be controlled by the write_cache setting combined with whether the
platform supports flush-on-fail CPU caches.

Signed-off-by: Ross Zwisler
Suggested-by: Dan Williams
---
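A note on the mechanics, for reviewers: the gating happened in the block
layer, not in this driver.  When a queue is registered with
blk_queue_write_cache(q, false, ...), the block layer clears
REQ_PREFLUSH/REQ_FUA from incoming bios and completes empty flush bios
without ever calling the driver, so the deep flush below was silently
skipped.  A minimal sketch of the bio path in question (simplified from
pmem_make_request() in this file, not verbatim):

  /*
   * Simplified sketch of the pmem bio path.  The "deep flush" is
   * nvdimm_flush(), which writes to the flush hint addresses of the
   * DIMMs in the region to drain their write pending queues.
   */
  static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
  {
          struct pmem_device *pmem = q->queuedata;
          struct nd_region *nd_region = to_region(pmem);

          /*
           * Only reached if the queue advertises a write cache; with
           * blk_queue_write_cache(q, false, ...) the block layer strips
           * REQ_PREFLUSH before the bio ever gets here.
           */
          if (bio->bi_opf & REQ_PREFLUSH)
                  nvdimm_flush(nd_region);

          /* ... transfer the data payload to/from persistent memory ... */

          if (bio->bi_opf & REQ_FUA)
                  nvdimm_flush(nd_region);

          bio_endio(bio);
          return BLK_QC_T_NONE;
  }
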
 drivers/nvdimm/pmem.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 9d714926ecf5..a152dd9e4134 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -299,7 +299,7 @@ static int pmem_attach_disk(struct device *dev,
 {
 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
 	struct nd_region *nd_region = to_nd_region(dev->parent);
-	int nid = dev_to_node(dev), fua, wbc;
+	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
 	struct resource bb_res;
 	struct nd_pfn *nd_pfn = NULL;
@@ -335,7 +335,6 @@ static int pmem_attach_disk(struct device *dev,
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 		fua = 0;
 	}
-	wbc = nvdimm_has_cache(nd_region);
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -382,7 +381,7 @@ static int pmem_attach_disk(struct device *dev,
 		return PTR_ERR(addr);
 	pmem->virt_addr = addr;
 
-	blk_queue_write_cache(q, wbc, fua);
+	blk_queue_write_cache(q, true, fua);
 	blk_queue_make_request(q, pmem_make_request);
 	blk_queue_physical_block_size(q, PAGE_SIZE);
 	blk_queue_logical_block_size(q, pmem_sector_size(ndns));
@@ -413,7 +412,7 @@ static int pmem_attach_disk(struct device *dev,
 		put_disk(disk);
 		return -ENOMEM;
 	}
-	dax_write_cache(dax_dev, wbc);
+	dax_write_cache(dax_dev, nvdimm_has_cache(nd_region));
 	pmem->dax_dev = dax_dev;
 	gendev = disk_to_dev(disk);
 
-- 
2.14.4
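
For completeness, the DAX-side flushing that write_cache continues to gate
after this change looks roughly like the following (a simplified sketch
modeled on dax_flush() in drivers/dax/super.c of this era, not a verbatim
copy):

  /*
   * Sketch of the CPU-cache flush path that the write_cache attribute
   * still controls.
   */
  void dax_flush(struct dax_device *dax_dev, void *addr, size_t size)
  {
          /*
           * dax_write_cache_enabled() reports the flag set by
           * dax_write_cache(), i.e. nvdimm_has_cache() at attach time or
           * a later write to the write_cache sysfs attribute.
           */
          if (!dax_write_cache_enabled(dax_dev))
                  return;

          /* Write back (without invalidating) the CPU cache lines. */
          arch_wb_cache_pmem(addr, size);
  }

With the patch applied, toggling write_cache only affects this path; the
block-layer flush request always reaches pmem_make_request() and triggers
the deep flush.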