Subject: [PATCH 11/13] libnvdimm, pmem: Initialize the memmap in the background
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Ross Zwisler, Vishal Verma, Dave Jiang, hch@lst.de,
 linux-nvdimm@lists.01.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 04 Jul 2018 23:50:01 -0700
Message-ID: <153077340108.40830.8427794791878610916.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Arrange for the pmem driver to call memmap_sync() when it is asked to
produce a valid pfn. The infrastructure is housed in the 'nd_pfn' device
which implies that the async init support only exists for platform
defined persistent memory, not the legacy / debug memmap=ss!nn facility.
Another reason to restrict the capability to the 'nd_pfn' device case is
that nd_pfn devices have sysfs infrastructure to communicate the memmap
initialization state to userspace. The sysfs publication of memmap init
state is saved for a later patch.

Cc: Ross Zwisler
Cc: Vishal Verma
Cc: Dave Jiang
Signed-off-by: Dan Williams
---
 drivers/nvdimm/nd.h             |    2 ++
 drivers/nvdimm/pmem.c           |   16 ++++++++++++----
 drivers/nvdimm/pmem.h           |    1 +
 tools/testing/nvdimm/pmem-dax.c |    7 ++++++-
 4 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 32e0364b48b9..ee4f76fb0cb5 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -12,6 +12,7 @@
  */
 #ifndef __ND_H__
 #define __ND_H__
+#include
 #include
 #include
 #include
@@ -208,6 +209,7 @@ struct nd_pfn {
 	unsigned long npfns;
 	enum nd_pfn_mode mode;
 	struct nd_pfn_sb *pfn_sb;
+	struct memmap_async_state async;
 	struct nd_namespace_common *ndns;
 };

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index c430536320a5..a1158181adc2 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -228,8 +229,13 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 					PFN_PHYS(nr_pages))))
 		return -EIO;
 	*kaddr = pmem->virt_addr + offset;
-	if (pfn)
+	if (pfn) {
+		struct dev_pagemap *pgmap = &pmem->pgmap;
+		struct memmap_async_state *async = pgmap->async;
+
 		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+		memmap_sync(*pfn, nr_pages, async);
+	}

 	/*
 	 * If badblocks are present, limit known good range to the
@@ -310,13 +316,15 @@ static void fsdax_pagefree(struct page *page, void *data)
 	wake_up_var(&page->_refcount);
 }

-static int setup_pagemap_fsdax(struct device *dev, struct dev_pagemap *pgmap)
+static int setup_pagemap_fsdax(struct device *dev, struct dev_pagemap *pgmap,
+		struct memmap_async_state *async)
 {
 	dev_pagemap_get_ops();
 	if (devm_add_action_or_reset(dev, pmem_release_pgmap_ops, pgmap))
 		return -ENOMEM;
 	pgmap->type = MEMORY_DEVICE_FS_DAX;
 	pgmap->page_free = fsdax_pagefree;
+	pgmap->async = async;
 	return 0;
 }

@@ -379,7 +387,7 @@ static int pmem_attach_disk(struct device *dev,
 	pmem->pfn_flags = PFN_DEV;
 	pmem->pgmap.ref = &q->q_usage_counter;
 	if (is_nd_pfn(dev)) {
-		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
+		if (setup_pagemap_fsdax(dev, &pmem->pgmap, &nd_pfn->async))
 			return -ENOMEM;
 		addr = devm_memremap_pages(dev, &pmem->pgmap,
 				pmem_freeze_queue);
@@ -393,7 +401,7 @@ static int pmem_attach_disk(struct device *dev,
 	} else if (pmem_should_map_pages(dev)) {
 		memcpy(&pmem->pgmap.res, &nsio->res, sizeof(pmem->pgmap.res));
 		pmem->pgmap.altmap_valid = false;
-		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
+		if (setup_pagemap_fsdax(dev, &pmem->pgmap, NULL))
 			return -ENOMEM;
 		addr = devm_memremap_pages(dev, &pmem->pgmap,
 				pmem_freeze_queue);
diff --git a/drivers/nvdimm/pmem.h b/drivers/nvdimm/pmem.h
index a64ebc78b5df..93d226ea1006 100644
--- a/drivers/nvdimm/pmem.h
+++ b/drivers/nvdimm/pmem.h
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef __NVDIMM_PMEM_H__
 #define __NVDIMM_PMEM_H__
+#include
 #include
 #include
 #include
diff --git a/tools/testing/nvdimm/pmem-dax.c b/tools/testing/nvdimm/pmem-dax.c
index d4cb5281b30e..63151b75615c 100644
--- a/tools/testing/nvdimm/pmem-dax.c
+++ b/tools/testing/nvdimm/pmem-dax.c
@@ -42,8 +42,13 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 	}

 	*kaddr = pmem->virt_addr + offset;
-	if (pfn)
+	if (pfn) {
+		struct dev_pagemap *pgmap = &pmem->pgmap;
+		struct memmap_async_state *async = pgmap->async;
+
 		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+		memmap_sync(*pfn, nr_pages, async);
+	}

 	/*
 	 * If badblocks are present, limit known good range to the
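
For readers following along, the sketch below (not part of the patch) restates
the contract the __pmem_direct_access() call sites above place on memmap_sync(),
assuming the memmap_async_state infrastructure introduced earlier in this
series; the helper name example_produce_pfn() is hypothetical and used only for
illustration.

/*
 * Sketch only, not part of this patch: before a pfn is handed back to a
 * dax consumer, the driver waits for background memmap initialization of
 * the pages covering that pfn range.
 */
static void example_produce_pfn(struct pmem_device *pmem,
		resource_size_t offset, long nr_pages, pfn_t *pfn)
{
	/* 'async' is only non-NULL when an nd_pfn device registered one */
	struct memmap_async_state *async = pmem->pgmap.async;

	*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);

	/*
	 * Block until the struct pages backing [*pfn, *pfn + nr_pages) are
	 * initialized; presumably a no-op when 'async' is NULL (the legacy
	 * memmap=ss!nn case, which never arms async init).
	 */
	memmap_sync(*pfn, nr_pages, async);
}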