From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de,
	kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
	david@redhat.com, jgg@nvidia.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, songmuchun@bytedance.com,
	zhouchengming@bytedance.com, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v3 15/15] mm/pte_ref: use mmu_gather to free PTE page table pages
Date: Wed, 10 Nov 2021 18:54:28 +0800
Message-Id: <20211110105428.32458-16-zhengqi.arch@bytedance.com>
In-Reply-To: <20211110105428.32458-1-zhengqi.arch@bytedance.com>
References: <20211110105428.32458-1-zhengqi.arch@bytedance.com>

In unmap_region() and other paths, we can reuse @tlb to free the PTE
page tables, which reduces the number of TLB flushes.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
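(Editorial note, not part of the applied patch: the standalone C
program below models the two release paths that free_user_pte_table()
now has. All names in it, such as pte_page, mmu_gather_sim and
pte_put_sim, are invented for the illustration; only the batching idea
mirrors the kernel's mmu_gather. Without a gather, a refcount that
drops to zero triggers an immediate flush and free, i.e. one flush per
page table page; with a gather, such pages are only queued, and a
single flush at finish time covers all of them, which is where the
reduction in TLB flushes comes from.)

/* Illustrative userspace model, not kernel code; names are invented. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct pte_page {			/* stands in for a PTE page table page */
	atomic_int refcount;
};

struct mmu_gather_sim {			/* stands in for struct mmu_gather */
	struct pte_page *batch[64];
	int nr;
};

static void tlb_flush_sim(void)
{
	puts("TLB flush");
}

/* Models __pte_put_many(): the last put releases the page. */
static void pte_put_sim(struct mmu_gather_sim *tlb, struct pte_page *pte,
			int nr)
{
	/* atomic_fetch_sub() returns the old value, like sub_and_test */
	if (atomic_fetch_sub(&pte->refcount, nr) != nr)
		return;			/* refcount has not reached zero */

	if (!tlb) {
		tlb_flush_sim();	/* one flush for this page alone */
		free(pte);
	} else {
		tlb->batch[tlb->nr++] = pte;	/* just queue the page */
	}
}

/* Models the gather teardown: one flush, then free everything queued. */
static void tlb_finish_sim(struct mmu_gather_sim *tlb)
{
	tlb_flush_sim();
	for (int i = 0; i < tlb->nr; i++)
		free(tlb->batch[i]);
	tlb->nr = 0;
}

int main(void)
{
	struct mmu_gather_sim tlb = { .nr = 0 };
	int i;

	for (i = 0; i < 3; i++) {
		struct pte_page *pte = malloc(sizeof(*pte));

		if (!pte)
			return 1;
		atomic_init(&pte->refcount, 1);
		pte_put_sim(&tlb, pte, 1);	/* queued, no flush yet */
	}
	tlb_finish_sim(&tlb);	/* prints "TLB flush" once for all three */
	return 0;
}

(The Kconfig hunk follows from the same reasoning: once PTE pages can
be queued on the mmu_gather, x86 selects MMU_GATHER_RCU_TABLE_FREE
whenever FREE_USER_PTE is enabled so that batched page table pages are
still freed via RCU; the !tlb path keeps its own call_rcu().)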
 Documentation/vm/pte_ref.rst | 58 +++++++++++++++++++++++---------------------
 arch/x86/Kconfig             |  2 +-
 include/linux/pte_ref.h      | 34 ++++++++++++++++++++------
 mm/madvise.c                 |  4 +--
 mm/memory.c                  |  4 +--
 mm/mmu_gather.c              | 40 +++++++++++++-----------------
 mm/pte_ref.c                 | 13 +++++++---
 7 files changed, 90 insertions(+), 65 deletions(-)

diff --git a/Documentation/vm/pte_ref.rst b/Documentation/vm/pte_ref.rst
index c5323a263464..d304c0bfaae1 100644
--- a/Documentation/vm/pte_ref.rst
+++ b/Documentation/vm/pte_ref.rst
@@ -183,30 +183,34 @@ GUP as an example::
 4. Helpers
 ==========
 
-+---------------------+-------------------------------------------------+
-| pte_ref_init        | Initialize the pte_refcount and pmd             |
-+---------------------+-------------------------------------------------+
-| pte_to_pmd          | Get the corresponding pmd                       |
-+---------------------+-------------------------------------------------+
-| pte_update_pmd      | Update the corresponding pmd                    |
-+---------------------+-------------------------------------------------+
-| pte_get             | Increment a pte_refcount                        |
-+---------------------+-------------------------------------------------+
-| pte_get_many        | Add a value to a pte_refcount                   |
-+---------------------+-------------------------------------------------+
-| pte_get_unless_zero | Increment a pte_refcount unless it is 0         |
-+---------------------+-------------------------------------------------+
-| pte_try_get         | Try to increment a pte_refcount                 |
-+---------------------+-------------------------------------------------+
-| pte_tryget_map      | Try to increment a pte_refcount before          |
-|                     | pte_offset_map()                                |
-+---------------------+-------------------------------------------------+
-| pte_tryget_map_lock | Try to increment a pte_refcount before          |
-|                     | pte_offset_map_lock()                           |
-+---------------------+-------------------------------------------------+
-| pte_put             | Decrement a pte_refcount                        |
-+---------------------+-------------------------------------------------+
-| pte_put_many        | Sub a value to a pte_refcount                   |
-+---------------------+-------------------------------------------------+
-| pte_put_vmf         | Decrement a pte_refcount in the page fault path |
-+---------------------+-------------------------------------------------+
++---------------------+-------------------------------------------------------+
+| pte_ref_init        | Initialize the pte_refcount and pmd                   |
++---------------------+-------------------------------------------------------+
+| pte_to_pmd          | Get the corresponding pmd                             |
++---------------------+-------------------------------------------------------+
+| pte_update_pmd      | Update the corresponding pmd                          |
++---------------------+-------------------------------------------------------+
+| pte_get             | Increment a pte_refcount                              |
++---------------------+-------------------------------------------------------+
+| pte_get_many        | Add a value to a pte_refcount                         |
++---------------------+-------------------------------------------------------+
+| pte_get_unless_zero | Increment a pte_refcount unless it is 0               |
++---------------------+-------------------------------------------------------+
+| pte_try_get         | Try to increment a pte_refcount                       |
++---------------------+-------------------------------------------------------+
+| pte_tryget_map      | Try to increment a pte_refcount before                |
+|                     | pte_offset_map()                                      |
++---------------------+-------------------------------------------------------+
+| pte_tryget_map_lock | Try to increment a pte_refcount before                |
+|                     | pte_offset_map_lock()                                 |
++---------------------+-------------------------------------------------------+
+| __pte_put           | Decrement a pte_refcount                              |
++---------------------+-------------------------------------------------------+
+| __pte_put_many      | Sub a value to a pte_refcount                         |
++---------------------+-------------------------------------------------------+
+| pte_put             | Decrement a pte_refcount (without tlb parameter)      |
++---------------------+-------------------------------------------------------+
+| pte_put_many        | Sub a value to a pte_refcount (without tlb parameter) |
++---------------------+-------------------------------------------------------+
+| pte_put_vmf         | Decrement a pte_refcount in the page fault path       |
++---------------------+-------------------------------------------------------+
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ca5bfe83ec61..69ea13437947 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -233,7 +233,7 @@ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
+	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT || FREE_USER_PTE
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
diff --git a/include/linux/pte_ref.h b/include/linux/pte_ref.h
index 8a26eaba83ef..dc3923bb38f6 100644
--- a/include/linux/pte_ref.h
+++ b/include/linux/pte_ref.h
@@ -22,7 +22,8 @@ enum pte_tryget_type pte_try_get(pmd_t *pmd);
 bool pte_get_unless_zero(pmd_t *pmd);
 
 #ifdef CONFIG_FREE_USER_PTE
-void free_user_pte_table(struct mm_struct *mm, pmd_t *pmdp, unsigned long addr);
+void free_user_pte_table(struct mmu_gather *tlb, struct mm_struct *mm,
+			 pmd_t *pmd, unsigned long addr);
 
 static inline void pte_ref_init(pgtable_t pte, pmd_t *pmd, int count)
 {
@@ -48,14 +49,21 @@ static inline void pte_get_many(pmd_t *pmd, unsigned int nr)
 	atomic_add(nr, &pte->pte_refcount);
 }
 
-static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long addr, unsigned int nr)
+static inline void __pte_put_many(struct mmu_gather *tlb, struct mm_struct *mm,
+				  pmd_t *pmd, unsigned long addr,
+				  unsigned int nr)
 {
 	pgtable_t pte = pmd_pgtable(*pmd);
 
 	VM_BUG_ON(!PageTable(pte));
 	if (atomic_sub_and_test(nr, &pte->pte_refcount))
-		free_user_pte_table(mm, pmd, addr & PMD_MASK);
+		free_user_pte_table(tlb, mm, pmd, addr & PMD_MASK);
+}
+
+static inline void __pte_put(struct mmu_gather *tlb, struct mm_struct *mm,
+			     pmd_t *pmd, unsigned long addr)
+{
+	__pte_put_many(tlb, mm, pmd, addr, 1);
 }
 #else
 static inline void pte_ref_init(pgtable_t pte, pmd_t *pmd, int count)
@@ -75,8 +83,14 @@ static inline void pte_get_many(pmd_t *pmd, unsigned int nr)
 {
 }
 
-static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long addr, unsigned int nr)
+static inline void __pte_put_many(struct mmu_gather *tlb, struct mm_struct *mm,
+				  pmd_t *pmd, unsigned long addr,
+				  unsigned int nr)
+{
+}
+
+static inline void __pte_put(struct mmu_gather *tlb, struct mm_struct *mm,
+			     pmd_t *pmd, unsigned long addr)
 {
 }
 #endif /* CONFIG_FREE_USER_PTE */
@@ -110,6 +124,12 @@ static inline pte_t *pte_tryget_map_lock(struct mm_struct *mm, pmd_t *pmd,
 	return pte_offset_map_lock(mm, pmd, address, ptlp);
 }
 
+static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, unsigned int nr)
+{
+	__pte_put_many(NULL, mm, pmd, addr, nr);
+}
+
 /*
  * pte_put - Decrement refcount for the PTE page table.
  * @mm: the mm_struct of the target address space.
@@ -120,7 +140,7 @@ static inline pte_t *pte_tryget_map_lock(struct mm_struct *mm, pmd_t *pmd,
  */
 static inline void pte_put(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
 {
-	pte_put_many(mm, pmd, addr, 1);
+	__pte_put(NULL, mm, pmd, addr);
 }
 
 #endif
diff --git a/mm/madvise.c b/mm/madvise.c
index 5cf2832abb98..b51254305bb2 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -477,7 +477,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
-	pte_put(vma->vm_mm, pmd, start);
+	__pte_put(tlb, vma->vm_mm, pmd, start);
 	if (pageout)
 		reclaim_pages(&page_list);
 	cond_resched();
 
@@ -710,7 +710,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (nr_put)
-		pte_put_many(mm, pmd, start, nr_put);
+		__pte_put_many(tlb, mm, pmd, start, nr_put);
 	cond_resched();
next:
 	return 0;
diff --git a/mm/memory.c b/mm/memory.c
index 4d1ede78d1b0..1bdae3b0f877 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1469,7 +1469,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	}
 
 	if (nr_put)
-		pte_put_many(mm, pmd, start, nr_put);
+		__pte_put_many(tlb, mm, pmd, start, nr_put);
 
 	return addr;
 }
@@ -1515,7 +1515,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 		if (pte_try_get(pmd))
 			goto next;
 		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
-		pte_put(tlb->mm, pmd, addr);
+		__pte_put(tlb, tlb->mm, pmd, addr);
next:
 		cond_resched();
 	} while (pmd++, addr = next, addr != end);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 1b9837419bf9..1bd9fa889421 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -134,42 +134,42 @@ static void __tlb_remove_table_free(struct mmu_table_batch *batch)
  *
  */
 
-static void tlb_remove_table_smp_sync(void *arg)
+static void tlb_remove_table_rcu(struct rcu_head *head)
 {
-	/* Simply deliver the interrupt */
+	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
 }
 
-static void tlb_remove_table_sync_one(void)
+static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
-	/*
-	 * This isn't an RCU grace period and hence the page-tables cannot be
-	 * assumed to be actually RCU-freed.
-	 *
-	 * It is however sufficient for software page-table walkers that rely on
-	 * IRQ disabling.
-	 */
-	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
+	call_rcu(&batch->rcu, tlb_remove_table_rcu);
 }
 
-static void tlb_remove_table_rcu(struct rcu_head *head)
+static void tlb_remove_table_one_rcu(struct rcu_head *head)
 {
-	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
+	struct page *page = container_of(head, struct page, rcu_head);
+
+	__tlb_remove_table(page);
 }
 
-static void tlb_remove_table_free(struct mmu_table_batch *batch)
+static void tlb_remove_table_one(void *table)
 {
-	call_rcu(&batch->rcu, tlb_remove_table_rcu);
+	pgtable_t page = (pgtable_t)table;
+
+	call_rcu(&page->rcu_head, tlb_remove_table_one_rcu);
 }
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);
 }
 
+static void tlb_remove_table_one(void *table)
+{
+	__tlb_remove_table(table);
+}
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 /*
@@ -187,12 +187,6 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 	}
 }
 
-static void tlb_remove_table_one(void *table)
-{
-	tlb_remove_table_sync_one();
-	__tlb_remove_table(table);
-}
-
 static void tlb_table_flush(struct mmu_gather *tlb)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
diff --git a/mm/pte_ref.c b/mm/pte_ref.c
index 728e61cea25e..f9650ad23c7c 100644
--- a/mm/pte_ref.c
+++ b/mm/pte_ref.c
@@ -8,6 +8,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 
 #ifdef CONFIG_FREE_USER_PTE
@@ -117,7 +119,8 @@ static void pte_free_rcu(struct rcu_head *rcu)
 	__free_page(page);
 }
 
-void free_user_pte_table(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
+void free_user_pte_table(struct mmu_gather *tlb, struct mm_struct *mm,
+			 pmd_t *pmd, unsigned long addr)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
 	spinlock_t *ptl;
@@ -125,10 +128,14 @@ void free_user_pte_table(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
 
 	ptl = pmd_lock(mm, pmd);
 	pmdval = pmdp_huge_get_and_clear(mm, addr, pmd);
-	flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+	if (!tlb)
+		flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+	else
+		pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
 	spin_unlock(ptl);
 
 	pte_free_debug(pmdval);
 	mm_dec_nr_ptes(mm);
-	call_rcu(&pmd_pgtable(pmdval)->rcu_head, pte_free_rcu);
+	if (!tlb)
+		call_rcu(&pmd_pgtable(pmdval)->rcu_head, pte_free_rcu);
 }
-- 
2.11.0