Subject: [PATCH v5 02/11] device-dax: Enable page_mapping()
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: Jan Kara, hch@lst.de, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, jack@suse.cz, ross.zwisler@linux.intel.com
Date: Wed, 04 Jul 2018 14:40:34 -0700
Message-ID: <153074043405.27838.13174871137034713893.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153074042316.27838.17319837331947007626.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153074042316.27838.17319837331947007626.stgit@dwillia2-desk3.amr.corp.intel.com>

In support of enabling memory_failure() handling for device-dax
mappings, set the ->mapping association of pages backing device-dax
mappings. The rmap implementation requires page_mapping() to return the
address_space hosting the vmas that map the page.

The ->mapping pointer is never cleared. There is no possibility for the
page to become associated with another address_space while the device is
enabled. When the device is disabled, the 'struct page' array for the
device is destroyed and later reinitialized to zero.

Reviewed-by: Jan Kara
Signed-off-by: Dan Williams
---
 drivers/dax/device.c |   55 +++++++++++++++++++++++++++++++++++---------------
 1 file changed, 38 insertions(+), 17 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index ad5e7b4a15dc..95cfcfd612df 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -245,12 +245,11 @@ __weak phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
 }
 
 static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
-				struct vm_fault *vmf)
+				struct vm_fault *vmf, pfn_t *pfn)
 {
 	struct device *dev = &dev_dax->dev;
 	struct dax_region *dax_region;
 	phys_addr_t phys;
-	pfn_t pfn;
 	unsigned int fault_size = PAGE_SIZE;
 
 	if (check_vma(dev_dax, vmf->vma, __func__))
@@ -272,20 +271,19 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
 		return VM_FAULT_SIGBUS;
 	}
 
-	pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
 
-	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
+	return vmf_insert_mixed(vmf->vma, vmf->address, *pfn);
 }
 
 static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
-				struct vm_fault *vmf)
+				struct vm_fault *vmf, pfn_t *pfn)
 {
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	struct device *dev = &dev_dax->dev;
 	struct dax_region *dax_region;
 	phys_addr_t phys;
 	pgoff_t pgoff;
-	pfn_t pfn;
 	unsigned int fault_size = PMD_SIZE;
 
 	if (check_vma(dev_dax, vmf->vma, __func__))
@@ -321,22 +319,21 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
 		return VM_FAULT_SIGBUS;
 	}
 
-	pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
 
-	return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd, pfn,
+	return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd, *pfn,
 			vmf->flags & FAULT_FLAG_WRITE);
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
-				struct vm_fault *vmf)
+				struct vm_fault *vmf, pfn_t *pfn)
 {
 	unsigned long pud_addr = vmf->address & PUD_MASK;
 	struct device *dev = &dev_dax->dev;
 	struct dax_region *dax_region;
 	phys_addr_t phys;
 	pgoff_t pgoff;
-	pfn_t pfn;
 	unsigned int fault_size = PUD_SIZE;
 
 
@@ -373,14 +370,14 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
 		return VM_FAULT_SIGBUS;
 	}
 
-	pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
 
-	return vmf_insert_pfn_pud(vmf->vma, vmf->address, vmf->pud, pfn,
+	return vmf_insert_pfn_pud(vmf->vma, vmf->address, vmf->pud, *pfn,
 			vmf->flags & FAULT_FLAG_WRITE);
 }
 #else
 static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
-				struct vm_fault *vmf)
+				struct vm_fault *vmf, pfn_t *pfn)
 {
 	return VM_FAULT_FALLBACK;
 }
@@ -389,8 +386,10 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
 static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
 		enum page_entry_size pe_size)
 {
-	int rc, id;
 	struct file *filp = vmf->vma->vm_file;
+	unsigned long fault_size;
+	int rc, id;
+	pfn_t pfn;
 	struct dev_dax *dev_dax = filp->private_data;
 
 	dev_dbg(&dev_dax->dev, "%s: %s (%#lx - %#lx) size = %d\n", current->comm,
@@ -400,17 +399,39 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
 	id = dax_read_lock();
 	switch (pe_size) {
 	case PE_SIZE_PTE:
-		rc = __dev_dax_pte_fault(dev_dax, vmf);
+		fault_size = PAGE_SIZE;
+		rc = __dev_dax_pte_fault(dev_dax, vmf, &pfn);
 		break;
 	case PE_SIZE_PMD:
-		rc = __dev_dax_pmd_fault(dev_dax, vmf);
+		fault_size = PMD_SIZE;
+		rc = __dev_dax_pmd_fault(dev_dax, vmf, &pfn);
 		break;
 	case PE_SIZE_PUD:
-		rc = __dev_dax_pud_fault(dev_dax, vmf);
+		fault_size = PUD_SIZE;
+		rc = __dev_dax_pud_fault(dev_dax, vmf, &pfn);
 		break;
 	default:
 		rc = VM_FAULT_SIGBUS;
 	}
+
+	if (rc == VM_FAULT_NOPAGE) {
+		unsigned long i;
+
+		/*
+		 * In the device-dax case the only possibility for a
+		 * VM_FAULT_NOPAGE result is when device-dax capacity is
+		 * mapped. No need to consider the zero page, or racing
+		 * conflicting mappings.
+		 */
+		for (i = 0; i < fault_size / PAGE_SIZE; i++) {
+			struct page *page;
+
+			page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
+			if (page->mapping)
+				continue;
+			page->mapping = filp->f_mapping;
+		}
+	}
 	dax_read_unlock(id);
 
 	return rc;
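
For reference, the consumer of the ->mapping association set up above is
the rmap walk that memory_failure() performs to find the processes
mapping a poisoned page. The following is a minimal, illustrative sketch
of such a walk, assuming the 4.18-era helpers page_mapping(),
page_to_pgoff(), i_mmap_lock_read(), and vma_interval_tree_foreach().
The function name sketch_walk_mappers() is hypothetical; the real path
through collect_procs() in mm/memory-failure.c carries additional
locking and filtering.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static void sketch_walk_mappers(struct page *page)
{
	struct address_space *mapping = page_mapping(page);
	pgoff_t pgoff = page_to_pgoff(page);
	struct vm_area_struct *vma;

	/*
	 * Without the ->mapping association established by this patch,
	 * device-dax pages would bail out here and the walk could never
	 * discover the vmas mapping the page.
	 */
	if (!mapping)
		return;

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
		/*
		 * Each vma returned here maps the page; vma->vm_mm
		 * identifies a process that memory_failure() would
		 * signal (e.g. SIGBUS) for this range.
		 */
	}
	i_mmap_unlock_read(mapping);
}

Because this walk only runs while the device, and therefore its 'struct
page' array, is live, never clearing the ->mapping pointer is safe,
which is the rationale given in the changelog above.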