From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, hannes@cmpxchg.org,
    mhocko@kernel.org, vdavydov.dev@gmail.com, kirill.shutemov@linux.intel.com,
    mika.penttila@nextfour.com, david@redhat.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    songmuchun@bytedance.com, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v1 2/2] mm: remove redundant smp_wmb()
Date: Sat, 28 Aug 2021 12:23:06 +0800
Message-Id: <20210828042306.42886-3-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20210828042306.42886-1-zhengqi.arch@bytedance.com>
References: <20210828042306.42886-1-zhengqi.arch@bytedance.com>

The smp_wmb() in __pte_alloc() is used to ensure that all pte setup is
visible before the pte is made visible to other CPUs by being put into
page tables. We only need this barrier when the pte is actually
populated, so move it to pmd_install(). __pte_alloc_kernel(),
__p4d_alloc(), __pud_alloc() and __pmd_alloc() are similar cases.

We can also defer the smp_wmb() to the place where the pmd entry is
really populated by the preallocated pte. There are two kinds of users
of the preallocated pte: one is filemap & finish_fault(), the other is
THP. The former does not need another smp_wmb() because the barrier is
already issued by pmd_install(). Fortunately, the latter does not need
another smp_wmb() either, because there is already an smp_wmb() before
populating the new pte when THP uses a preallocated pte to split a huge
pmd.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memory.c         | 47 ++++++++++++++++++++---------------------
 mm/sparse-vmemmap.c |  2 +-
 2 files changed, 21 insertions(+), 28 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ef7b1762e996..9c7534187454 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -439,6 +439,20 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
 
 	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
 		mm_inc_nr_ptes(mm);
+		/*
+		 * Ensure all pte setup (eg. pte page lock and page clearing) are
+		 * visible before the pte is made visible to other CPUs by being
+		 * put into page tables.
+		 *
+		 * The other side of the story is the pointer chasing in the page
+		 * table walking code (when walking the page table without locking;
+		 * ie. most of the time). Fortunately, these data accesses consist
+		 * of a chain of data-dependent loads, meaning most CPUs (alpha
+		 * being the notable exception) will already guarantee loads are
+		 * seen in-order. See the alpha page table accessors for the
+		 * smp_rmb() barriers in page table walking code.
+		 */
+		smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
 		pmd_populate(mm, pmd, *pte);
 		*pte = NULL;
 	}
@@ -451,21 +465,6 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 	if (!new)
 		return -ENOMEM;
 
-	/*
-	 * Ensure all pte setup (eg. pte page lock and page clearing) are
-	 * visible before the pte is made visible to other CPUs by being
-	 * put into page tables.
-	 *
-	 * The other side of the story is the pointer chasing in the page
-	 * table walking code (when walking the page table without locking;
-	 * ie. most of the time). Fortunately, these data accesses consist
-	 * of a chain of data-dependent loads, meaning most CPUs (alpha
-	 * being the notable exception) will already guarantee loads are
-	 * seen in-order. See the alpha page table accessors for the
-	 * smp_rmb() barriers in page table walking code.
-	 */
-	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
-
 	pmd_install(mm, pmd, &new);
 	if (new)
 		pte_free(mm, new);
@@ -478,10 +477,9 @@ int __pte_alloc_kernel(pmd_t *pmd)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	spin_lock(&init_mm.page_table_lock);
 	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
+		smp_wmb(); /* See comment in pmd_install() */
 		pmd_populate_kernel(&init_mm, pmd, new);
 		new = NULL;
 	}
@@ -3857,7 +3855,6 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
-		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
 	ret = vma->vm_ops->fault(vmf);
@@ -3919,7 +3916,6 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
-		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
@@ -4144,7 +4140,6 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
-		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
 	return vmf->vma->vm_ops->map_pages(vmf, start_pgoff, end_pgoff);
@@ -4819,13 +4814,13 @@ int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	spin_lock(&mm->page_table_lock);
 	if (pgd_present(*pgd))		/* Another has populated it */
 		p4d_free(mm, new);
-	else
+	else {
+		smp_wmb(); /* See comment in pmd_install() */
 		pgd_populate(mm, pgd, new);
+	}
 	spin_unlock(&mm->page_table_lock);
 	return 0;
 }
@@ -4842,11 +4837,10 @@ int __pud_alloc(struct mm_struct *mm, p4d_t *p4d, unsigned long address)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	spin_lock(&mm->page_table_lock);
 	if (!p4d_present(*p4d)) {
 		mm_inc_nr_puds(mm);
+		smp_wmb(); /* See comment in pmd_install() */
 		p4d_populate(mm, p4d, new);
 	} else	/* Another has populated it */
 		pud_free(mm, new);
@@ -4867,11 +4861,10 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	ptl = pud_lock(mm, pud);
 	if (!pud_present(*pud)) {
 		mm_inc_nr_pmds(mm);
+		smp_wmb(); /* See comment in pmd_install() */
 		pud_populate(mm, pud, new);
 	} else	/* Another has populated it */
 		pmd_free(mm, new);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index bdce883f9286..db6df27c852a 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -76,7 +76,7 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 
-	/* Make pte visible before pmd. See comment in __pte_alloc(). */
+	/* Make pte visible before pmd. See comment in pmd_install(). */
 	smp_wmb();
 	pmd_populate_kernel(&init_mm, pmd, pgtable);
 
-- 
2.11.0
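
[Editor's note: for readers less familiar with the ordering argument in the
commit message, below is a small illustrative userspace sketch. It is not part
of the patch, and every name in it (pte_page, pmd_slot, publish_pte_page,
walk_pte_page) is hypothetical. It mimics the "set up the pte page first, then
publish the pmd entry behind a write barrier" pattern that pmd_install() now
implements, with C11 release/consume atomics standing in for smp_wmb() and the
dependent loads performed by lockless page table walkers.]

#include <stdatomic.h>
#include <stdio.h>

struct pte_page {                       /* hypothetical stand-in for a pte page */
	int initialized;
};

static _Atomic(struct pte_page *) pmd_slot;     /* stand-in for a pmd entry */

static void publish_pte_page(struct pte_page *new)
{
	new->initialized = 1;           /* all "pte setup" happens first */

	/*
	 * Release store: everything written above becomes visible before the
	 * pointer itself, mirroring smp_wmb() followed by pmd_populate() in
	 * pmd_install().
	 */
	atomic_store_explicit(&pmd_slot, new, memory_order_release);
}

static void walk_pte_page(void)
{
	/* Lockless reader: a dependent load through the published pointer. */
	struct pte_page *p = atomic_load_explicit(&pmd_slot, memory_order_consume);

	if (p)
		printf("initialized = %d\n", p->initialized);   /* prints 1 */
}

int main(void)
{
	static struct pte_page page;

	publish_pte_page(&page);
	walk_pte_page();
	return 0;
}

[The point, as in pmd_install(), is that the barrier is only needed on the
path that actually publishes the new table, which is why the preallocation
sites can drop their smp_wmb() calls.]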