From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org,
    stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Ross Zwisler, Dave Hansen, Alexander Viro,
    Christoph Hellwig, Dan Williams, Dave Chinner, Jan Kara,
    Matthew Wilcox, Andrew Morton, Linus Torvalds, Ben Hutchings
Subject: [PATCH 4.9 154/157] mm: add follow_pte_pmd()
Date: Mon, 24 Jan 2022 19:44:04 +0100
Message-Id: <20220124183937.642327629@linuxfoundation.org>
In-Reply-To: <20220124183932.787526760@linuxfoundation.org>
References: <20220124183932.787526760@linuxfoundation.org>

From: Ross Zwisler

commit 097963959594c5eccaba42510f7033f703211bda upstream.

Patch series "Write protect DAX PMDs in *sync path".

Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pmd_t of a DAX PMD entry during an *sync operation.  This can
result in data loss, as detailed in patch 2.

This series is based on Dan's "libnvdimm-pending" branch, which is the
current home for Jan's "dax: Page invalidation fixes" series.  You can
find a working tree here:

https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_clean

This patch (of 2):

Similar to follow_pte(), follow_pte_pmd() allows either a PTE leaf or a
huge page PMD leaf to be found and returned.
Link: http://lkml.kernel.org/r/1482272586-21177-2-git-send-email-ross.zwisler@linux.intel.com
Signed-off-by: Ross Zwisler
Suggested-by: Dave Hansen
Cc: Alexander Viro
Cc: Christoph Hellwig
Cc: Dan Williams
Cc: Dave Chinner
Cc: Jan Kara
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[bwh: Backported to 4.9: adjust context]
Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/mm.h |    2 ++
 mm/memory.c        |   37 ++++++++++++++++++++++++++++++-------
 2 files changed, 32 insertions(+), 7 deletions(-)

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1269,6 +1269,8 @@ int copy_page_range(struct mm_struct *ds
 		struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3780,8 +3780,8 @@ int __pmd_alloc(struct mm_struct *mm, pu
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-static int __follow_pte(struct mm_struct *mm, unsigned long address,
-		pte_t **ptepp, spinlock_t **ptlp)
+static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	pud_t *pud;
@@ -3798,11 +3798,20 @@ static int __follow_pte(struct mm_struct
 	pmd = pmd_offset(pud, address);
 	VM_BUG_ON(pmd_trans_huge(*pmd));
-	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
-		goto out;
 
-	/* We cannot handle huge page PFN maps.  Luckily they don't exist. */
-	if (pmd_huge(*pmd))
+	if (pmd_huge(*pmd)) {
+		if (!pmdpp)
+			goto out;
+
+		*ptlp = pmd_lock(mm, pmd);
+		if (pmd_huge(*pmd)) {
+			*pmdpp = pmd;
+			return 0;
+		}
+		spin_unlock(*ptlp);
+	}
+
+	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
@@ -3825,9 +3834,23 @@ static inline int follow_pte(struct mm_s
 	/* (void) is needed to make gcc happy */
 	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte(mm, address, ptepp, ptlp)));
+			   !(res = __follow_pte_pmd(mm, address, ptepp, NULL,
+						    ptlp)));
+	return res;
+}
+
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+{
+	int res;
+
+	/* (void) is needed to make gcc happy */
+	(void) __cond_lock(*ptlp,
+			   !(res = __follow_pte_pmd(mm, address, ptepp, pmdpp,
+						    ptlp)));
 	return res;
 }
+EXPORT_SYMBOL(follow_pte_pmd);
 
 /**
  * follow_pfn - look up PFN at a user virtual address