Message-ID: <3acd2e94-7ae4-4272-8e43-b496c0d26e55@arm.com>
Date: Mon, 11 Dec 2023 16:15:21 +0000
Subject: Re: [PATCH v1 02/39] mm/rmap: introduce and use hugetlb_remove_rmap()
From: Ryan Roberts
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins,
 Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
References: <20231211155652.131054-1-david@redhat.com> <20231211155652.131054-3-david@redhat.com>
In-Reply-To: <20231211155652.131054-3-david@redhat.com>

On 11/12/2023 15:56, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code.
> For example, hugetlb currently only supports entire mappings, and treats
> any mapping as mapped using a single "logical PTE". Let's move it out
> of the way so we can overhaul our "ordinary" rmap
> implementation/interface.
> 
> Let's introduce and use hugetlb_remove_rmap() and remove the hugetlb
> code from page_remove_rmap(). This effectively removes one check on the
> small-folio path as well.
> 
> Note: all possible candidates that need care are page_remove_rmap() that
> pass compound=true.
> 
> Reviewed-by: Yin Fengwei
> Signed-off-by: David Hildenbrand

Reviewed-by: Ryan Roberts

> ---
>  include/linux/rmap.h |  5 +++++
>  mm/hugetlb.c         |  4 ++--
>  mm/rmap.c            | 17 ++++++++---------
>  3 files changed, 15 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 0bfea866f39b..d85bd1d4de04 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -213,6 +213,11 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
>  void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>  		unsigned long address);
>  
> +static inline void hugetlb_remove_rmap(struct folio *folio)
> +{
> +	atomic_dec(&folio->_entire_mapcount);
> +}
> +
>  static inline void __page_dup_rmap(struct page *page, bool compound)
>  {
>  	if (compound) {
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 305f3ca1dee6..ef48ae673890 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5676,7 +5676,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  					make_pte_marker(PTE_MARKER_UFFD_WP),
>  					sz);
>  		hugetlb_count_sub(pages_per_huge_page(h), mm);
> -		page_remove_rmap(page, vma, true);
> +		hugetlb_remove_rmap(page_folio(page));
>  
>  		spin_unlock(ptl);
>  		tlb_remove_page_size(tlb, page, huge_page_size(h));
> @@ -5987,7 +5987,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  
>  		/* Break COW or unshare */
>  		huge_ptep_clear_flush(vma, haddr, ptep);
> -		page_remove_rmap(&old_folio->page, vma, true);
> +		hugetlb_remove_rmap(old_folio);
>  		hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
>  		if (huge_pte_uffd_wp(pte))
>  			newpte = huge_pte_mkuffd_wp(newpte);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 80d42c31281a..4e60c1f38eaa 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1482,13 +1482,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>  
>  	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>  
> -	/* Hugetlb pages are not counted in NR_*MAPPED */
> -	if (unlikely(folio_test_hugetlb(folio))) {
> -		/* hugetlb pages are always mapped with pmds */
> -		atomic_dec(&folio->_entire_mapcount);
> -		return;
> -	}
> -
>  	/* Is page being unmapped by PTE? Is this its last map to be removed? */
>  	if (likely(!compound)) {
>  		last = atomic_add_negative(-1, &page->_mapcount);
> @@ -1846,7 +1839,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			dec_mm_counter(mm, mm_counter_file(&folio->page));
>  	}
>  discard:
> -	page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> +	if (unlikely(folio_test_hugetlb(folio)))
> +		hugetlb_remove_rmap(folio);
> +	else
> +		page_remove_rmap(subpage, vma, false);
>  	if (vma->vm_flags & VM_LOCKED)
>  		mlock_drain_local();
>  	folio_put(folio);
> @@ -2199,7 +2195,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  		 */
>  	}
>  
> -	page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> +	if (unlikely(folio_test_hugetlb(folio)))
> +		hugetlb_remove_rmap(folio);
> +	else
> +		page_remove_rmap(subpage, vma, false);
>  	if (vma->vm_flags & VM_LOCKED)
>  		mlock_drain_local();
>  	folio_put(folio);