From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
    Yu Zhao, Yang Shi, "Huang, Ying", Zi Yan
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 3/3] mm: Batch-zap large anonymous folio PTE mappings
Date: Thu, 20 Jul 2023 12:29:55 +0100
Message-Id: <20230720112955.643283-4-ryan.roberts@arm.com>
In-Reply-To: <20230720112955.643283-1-ryan.roberts@arm.com>
References: <20230720112955.643283-1-ryan.roberts@arm.com>

This allows batching the rmap removal with folio_remove_rmap_range(),
which means that in the common case we avoid spuriously adding a
partially unmapped folio to the deferred split queue, reducing split
queue lock contention.

Previously, each page was removed from the rmap individually with
page_remove_rmap(). If the first page belonged to a large folio, this
would cause page_remove_rmap() to conclude that the folio was now
partially mapped and to add the folio to the deferred split queue. But
subsequent calls would cause the folio to become fully unmapped, so
there was no value in adding it to the split queue in the first place.
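To make the failure mode concrete, here is a minimal userspace sketch
(an illustration only: the mapcount counter, remove_rmap_one() and
remove_rmap_range() below are invented stand-ins for the folio state
and rmap helpers, not kernel APIs) of why per-page removal spuriously
queues a fully-unmapped large folio while one batched removal does not:

#include <stdio.h>
#include <stdbool.h>

#define NR_PAGES 16			/* stand-in for a 16-page (order-4) folio */

static int mapcount = NR_PAGES;		/* pages of the folio still mapped */
static bool queued;			/* on the deferred split queue? */

/* Per-page removal: models the old page_remove_rmap() behaviour. */
static void remove_rmap_one(void)
{
	mapcount--;
	if (mapcount > 0 && !queued) {	/* folio looks partially mapped */
		queued = true;
		printf("spuriously queued after the first page\n");
	}
}

/* Batched removal: models folio_remove_rmap_range() over the whole run. */
static void remove_rmap_range(int nr)
{
	mapcount -= nr;
	if (mapcount > 0 && !queued)	/* only queued if genuinely partial */
		queued = true;
}

int main(void)
{
	int i;

	for (i = 0; i < NR_PAGES; i++)	/* old path: queues after page 0 */
		remove_rmap_one();

	mapcount = NR_PAGES;		/* reset and replay the new path */
	queued = false;
	remove_rmap_range(NR_PAGES);	/* fully unmapped: never queues */
	printf("batched path queued = %d\n", queued);
	return 0;
}

Run as-is, this prints the spurious queueing once for the per-page path
and queued = 0 for the batched path; in the real kernel the cost is not
the queueing itself but the contention on the split queue lock.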
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/memory.c | 120 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 01f39e8144ef..189b1cfd823d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1391,6 +1391,94 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
 		pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
 }
 
+static inline unsigned long page_cont_mapped_vaddr(struct page *page,
+				struct page *anchor, unsigned long anchor_vaddr)
+{
+	unsigned long offset;
+	unsigned long vaddr;
+
+	offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
+	vaddr = anchor_vaddr + offset;
+
+	if (anchor > page) {
+		if (vaddr > anchor_vaddr)
+			return 0;
+	} else {
+		if (vaddr < anchor_vaddr)
+			return ULONG_MAX;
+	}
+
+	return vaddr;
+}
+
+static int folio_nr_pages_cont_mapped(struct folio *folio,
+				      struct page *page, pte_t *pte,
+				      unsigned long addr, unsigned long end)
+{
+	pte_t ptent;
+	int floops;
+	int i;
+	unsigned long pfn;
+	struct page *folio_end;
+
+	if (!folio_test_large(folio))
+		return 1;
+
+	folio_end = &folio->page + folio_nr_pages(folio);
+	end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
+	floops = (end - addr) >> PAGE_SHIFT;
+	pfn = page_to_pfn(page);
+	pfn++;
+	pte++;
+
+	for (i = 1; i < floops; i++) {
+		ptent = ptep_get(pte);
+
+		if (!pte_present(ptent) || pte_pfn(ptent) != pfn)
+			break;
+
+		pfn++;
+		pte++;
+	}
+
+	return i;
+}
+
+static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
+					    struct vm_area_struct *vma,
+					    struct folio *folio,
+					    struct page *page, pte_t *pte,
+					    unsigned long addr, int nr_pages,
+					    struct zap_details *details)
+{
+	struct mm_struct *mm = tlb->mm;
+	pte_t ptent;
+	bool full;
+	int i;
+
+	for (i = 0; i < nr_pages;) {
+		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+		zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
+		full = __tlb_remove_page(tlb, page, 0);
+
+		if (unlikely(page_mapcount(page) < 1))
+			print_bad_pte(vma, addr, ptent, page);
+
+		i++;
+		page++;
+		pte++;
+		addr += PAGE_SIZE;
+
+		if (unlikely(full))
+			break;
+	}
+
+	folio_remove_rmap_range(folio, page - i, i, vma);
+
+	return i;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1428,6 +1516,38 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
+
+			/*
+			 * Batch zap large anonymous folio mappings. This allows
+			 * batching the rmap removal, which means we avoid
+			 * spuriously adding a partially unmapped folio to the
+			 * deferred split queue in the common case, which
+			 * reduces split queue lock contention.
+			 */
+			if (page && PageAnon(page)) {
+				struct folio *folio = page_folio(page);
+				int nr_pages_req, nr_pages;
+
+				nr_pages_req = folio_nr_pages_cont_mapped(
+						folio, page, pte, addr, end);
+
+				nr_pages = try_zap_anon_pte_range(tlb, vma,
+						folio, page, pte, addr,
+						nr_pages_req, details);
+
+				rss[mm_counter(page)] -= nr_pages;
+				nr_pages--;
+				pte += nr_pages;
+				addr += nr_pages << PAGE_SHIFT;
+
+				if (unlikely(nr_pages < nr_pages_req)) {
+					force_flush = 1;
+					addr += PAGE_SIZE;
+					break;
+				}
+				continue;
+			}
+
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
 							tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);
-- 
2.25.1
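For reference, the PFN-continuity scan at the heart of
folio_nr_pages_cont_mapped() can be modelled in userspace as follows.
This is a simplified sketch, not the kernel code: struct fake_pte and
nr_cont_mapped() are invented stand-ins, and the clamp against the
folio boundary is passed in directly as a page count rather than
derived via page_cont_mapped_vaddr().

#include <stdio.h>
#include <stdbool.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

struct fake_pte {
	bool present;
	unsigned long pfn;
};

/*
 * Starting from pte[0] (already known to map 'pfn'), count how many
 * subsequent entries map consecutive pfns, never scanning past the
 * folio or past the caller's [addr, end) range.
 */
static int nr_cont_mapped(const struct fake_pte *pte, unsigned long pfn,
			  int folio_pages_left, unsigned long addr,
			  unsigned long end)
{
	int range_pages = (int)((end - addr) >> PAGE_SHIFT);
	int floops = folio_pages_left < range_pages ? folio_pages_left
						    : range_pages;
	int i;

	for (i = 1; i < floops; i++) {
		if (!pte[i].present || pte[i].pfn != pfn + i)
			break;		/* hole or non-consecutive pfn */
	}
	return i;			/* length of the run, always >= 1 */
}

int main(void)
{
	/* pfns 100..102 are contiguous; the fourth entry breaks the run. */
	struct fake_pte ptes[] = {
		{ true, 100 }, { true, 101 }, { true, 102 }, { true, 200 },
	};

	printf("run length = %d\n",
	       nr_cont_mapped(ptes, 100, 16, 0x1000,
			      0x1000 + 4 * PAGE_SIZE));	/* prints 3 */
	return 0;
}

The returned run length plays the role of nr_pages_req in the patch:
it is the most the zap loop may batch, and try_zap_anon_pte_range()
may still stop early if the mmu_gather batch fills up, which is why
zap_pte_range() compares nr_pages against nr_pages_req and forces a
flush when they differ.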