Message-ID: <980c4e1f-116b-0113-65ee-4e77fdd3e7b4@arm.com>
Date: Mon, 17 Jul 2023 16:55:54 +0100
Subject: Re: [PATCH v1 3/3] mm: Batch-zap large anonymous folio PTE mappings
From: Ryan Roberts <ryan.roberts@arm.com>
To: Zi Yan
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
 Yu Zhao, Yang Shi, "Huang, Ying", linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
References: <20230717143110.260162-1-ryan.roberts@arm.com>
 <20230717143110.260162-4-ryan.roberts@arm.com>
 <5A282984-F3AD-41E3-8EF2-BA0A77DD1A3A@nvidia.com>
In-Reply-To: <5A282984-F3AD-41E3-8EF2-BA0A77DD1A3A@nvidia.com>

On 17/07/2023 16:25, Zi Yan wrote:
> On 17 Jul 2023, at 10:31, Ryan Roberts wrote:
>
>> This allows batching the rmap removal with folio_remove_rmap_range(),
>> which means we avoid spuriously adding a partially unmapped folio to the
>> deferred split queue in the common case, which reduces split queue lock
>> contention.
>>
>> Previously each page was removed from the rmap individually with
>> page_remove_rmap(). If the first page belonged to a large folio, this
>> would cause page_remove_rmap() to conclude that the folio was now
>> partially mapped and add the folio to the deferred split queue. But
>> subsequent calls would cause the folio to become fully unmapped, meaning
>> there is no value to adding it to the split queue.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  mm/memory.c | 119 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 119 insertions(+)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 01f39e8144ef..6facb8c8807a 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1391,6 +1391,95 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
>>  		pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
>>  }
>>
>> +static inline unsigned long page_addr(struct page *page,
>> +				struct page *anchor, unsigned long anchor_addr)
>> +{
>> +	unsigned long offset;
>> +	unsigned long addr;
>> +
>> +	offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
>> +	addr = anchor_addr + offset;
>> +
>> +	if (anchor > page) {
>> +		if (addr > anchor_addr)
>> +			return 0;
>> +	} else {
>> +		if (addr < anchor_addr)
>> +			return ULONG_MAX;
>> +	}
>> +
>> +	return addr;
>> +}
>> +
>> +static int calc_anon_folio_map_pgcount(struct folio *folio,
>> +				struct page *page, pte_t *pte,
>> +				unsigned long addr, unsigned long end)
>> +{
>> +	pte_t ptent;
>> +	int floops;
>> +	int i;
>> +	unsigned long pfn;
>> +
>> +	end = min(page_addr(&folio->page + folio_nr_pages(folio), page, addr),
>> +		  end);
>> +	floops = (end - addr) >> PAGE_SHIFT;
>> +	pfn = page_to_pfn(page);
>> +	pfn++;
>> +	pte++;
>> +
>> +	for (i = 1; i < floops; i++) {
>> +		ptent = ptep_get(pte);
>> +
>> +		if (!pte_present(ptent) ||
>> +		    pte_pfn(ptent) != pfn) {
>> +			return i;
>> +		}
>> +
>> +		pfn++;
>> +		pte++;
>> +	}
>> +
>> +	return floops;
>> +}
>> +
>> +static unsigned long zap_anon_pte_range(struct mmu_gather *tlb,
>> +				struct vm_area_struct *vma,
>> +				struct page *page, pte_t *pte,
>> +				unsigned long addr, unsigned long end,
>> +				bool *full_out)
>> +{
>> +	struct folio *folio = page_folio(page);
>> +	struct mm_struct *mm = tlb->mm;
>> +	pte_t ptent;
>> +	int pgcount;
>> +	int i;
>> +	bool full;
>> +
>> +	pgcount = calc_anon_folio_map_pgcount(folio, page, pte, addr, end);
>> +
>> +	for (i = 0; i < pgcount;) {
>> +		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
>> +		tlb_remove_tlb_entry(tlb, pte, addr);
>> +		full = __tlb_remove_page(tlb, page, 0);
>> +
>> +		if (unlikely(page_mapcount(page) < 1))
>> +			print_bad_pte(vma, addr, ptent, page);
>> +
>> +		i++;
>> +		page++;
>> +		pte++;
>> +		addr += PAGE_SIZE;
>> +
>> +		if (unlikely(full))
>> +			break;
>> +	}
>> +
>> +	folio_remove_rmap_range(folio, page - i, i, vma);
>> +
>> +	*full_out = full;
>> +	return i;
>> +}
>> +
>>  static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>  				struct vm_area_struct *vma, pmd_t *pmd,
>>  				unsigned long addr, unsigned long end,
>> @@ -1428,6 +1517,36 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>  			page = vm_normal_page(vma, addr, ptent);
>>  			if (unlikely(!should_zap_page(details, page)))
>>  				continue;
>> +
>> +			/*
>> +			 * Batch zap large anonymous folio mappings. This allows
>> +			 * batching the rmap removal, which means we avoid
>> +			 * spuriously adding a partially unmapped folio to the
>> +			 * deferred split queue in the common case, which
>> +			 * reduces split queue lock contention. Require the VMA
>> +			 * to be anonymous to ensure that none of the PTEs in
>> +			 * the range require zap_install_uffd_wp_if_needed().
>> +			 */
>> +			if (page && PageAnon(page) && vma_is_anonymous(vma)) {
>> +				bool full;
>> +				int pgcount;
>> +
>> +				pgcount = zap_anon_pte_range(tlb, vma,
>> +						page, pte, addr, end, &full);
>
> Are you trying to zap as many ptes as possible if all these ptes are
> within a folio?

Yes.
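To make the batching concrete: for a large folio whose nr pages are all
mapped and zapped by this range, the rmap side changes roughly as below.
This is a simplified, untested sketch for illustration, not code from the
patch; nr, page, folio and vma are assumed to be set up as in
zap_anon_pte_range().

	/*
	 * Old behaviour: per-page removal. After the first call the folio
	 * appears partially mapped, so page_remove_rmap() adds it to the
	 * deferred split queue, even though the remaining iterations fully
	 * unmap it moments later.
	 */
	for (i = 0; i < nr; i++)
		page_remove_rmap(page + i, vma, false);

	/*
	 * Batched behaviour (this series): a single call covering all nr
	 * pages, so the folio goes straight from fully mapped to fully
	 * unmapped and is never queued for deferred splitting.
	 */
	folio_remove_rmap_range(folio, page, nr, vma);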
> If so, why not calculate end before calling zap_anon_pte_range()?
> That would make zap_anon_pte_range() simpler.

I'm not sure I follow. That's currently done in calc_anon_folio_map_pgcount().
I could move it here, but I'm not sure that makes things simpler; it just puts
more code in the caller and less in the callee.

> Also check if page is part of a large folio first to make sure you can
> batch.

Yeah, that's fair. I'd be inclined to put that check in zap_anon_pte_range() to
short-circuit calc_anon_folio_map_pgcount(). But ultimately
zap_anon_pte_range() would still zap the single pte.

>
>> +
>> +				rss[mm_counter(page)] -= pgcount;
>> +				pgcount--;
>> +				pte += pgcount;
>> +				addr += pgcount << PAGE_SHIFT;
>> +
>> +				if (unlikely(full)) {
>> +					force_flush = 1;
>> +					addr += PAGE_SIZE;
>> +					break;
>> +				}
>> +				continue;
>> +			}
>> +
>>  		ptent = ptep_get_and_clear_full(mm, addr, pte,
>>  							tlb->fullmm);
>>  		tlb_remove_tlb_entry(tlb, pte, addr);
>> --
>> 2.25.1
>
>
> --
> Best Regards,
> Yan, Zi
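For completeness, a rough, untested sketch of the caller-side alternative
discussed above, i.e. checking for a large folio and clamping end in
zap_pte_range() before calling zap_anon_pte_range(). It reuses the patch's
helpers (page_addr(), zap_anon_pte_range()); the surrounding code is
illustrative only and not from the thread:

	if (page && PageAnon(page) && vma_is_anonymous(vma)) {
		struct folio *folio = page_folio(page);

		/* Only bother batching when the folio spans multiple pages. */
		if (folio_test_large(folio)) {
			/* Clamp the batch to the end of this folio's mapping. */
			unsigned long folio_end = page_addr(&folio->page +
					folio_nr_pages(folio), page, addr);

			pgcount = zap_anon_pte_range(tlb, vma, page, pte, addr,
						     min(folio_end, end), &full);
			/* ... rss accounting and pte/addr advance as in the patch ... */
		}
	}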