From: Dan Williams
To: linux-nvdimm@ml01.01.org
Cc: linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Fri, 21 Apr 2017 16:48:57 -0700
Subject: [PATCH] libnvdimm, region: sysfs trigger for nvdimm_flush()
Message-ID: <149281853758.22910.2919981036906495309.stgit@dwillia2-desk3.amr.corp.intel.com>

The nvdimm_flush() mechanism helps to reduce the impact of an ADR
(asynchronous-dimm-refresh) failure. The ADR mechanism handles flushing
platform WPQ (write-pending-queue) buffers when power is removed. The
nvdimm_flush() mechanism performs that same function on-demand.

When a pmem namespace is associated with a block device, an
nvdimm_flush() is triggered with every block-layer REQ_FUA, or REQ_FLUSH
request. However, when a namespace is in device-dax mode, or namespaces
are disabled, userspace needs another path.

The new 'flush' attribute is visible when it can be determined that the
interleave-set either does, or does not have DIMMs that expose WPQ-flush
addresses, "flush-hints" in ACPI NFIT terminology. Reading it returns
"1" and flushes DIMMs, or returns "0" when the flush operation is a
platform nop.
Signed-off-by: Dan Williams
---
 drivers/nvdimm/region_devs.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index 8de5a04644a1..3495b4c23941 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -255,6 +255,19 @@ static ssize_t size_show(struct device *dev,
 }
 static DEVICE_ATTR_RO(size);
 
+static ssize_t flush_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct nd_region *nd_region = to_nd_region(dev);
+
+	if (nvdimm_has_flush(nd_region)) {
+		nvdimm_flush(nd_region);
+		return sprintf(buf, "1\n");
+	}
+	return sprintf(buf, "0\n");
+}
+static DEVICE_ATTR_RO(flush);
+
 static ssize_t mappings_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
@@ -474,6 +487,7 @@ static DEVICE_ATTR_RO(resource);
 
 static struct attribute *nd_region_attributes[] = {
 	&dev_attr_size.attr,
+	&dev_attr_flush.attr,
 	&dev_attr_nstype.attr,
 	&dev_attr_mappings.attr,
 	&dev_attr_btt_seed.attr,
@@ -508,6 +522,9 @@ static umode_t region_visible(struct kobject *kobj, struct attribute *a, int n)
 	if (!is_nd_pmem(dev) && a == &dev_attr_resource.attr)
 		return 0;
 
+	if (a == &dev_attr_flush.attr && nvdimm_has_flush(nd_region) < 0)
+		return 0;
+
 	if (a != &dev_attr_set_cookie.attr
 			&& a != &dev_attr_available_size.attr)
 		return a->mode;