Subject: Re: [PATCH v1 15/39] mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()
Date: Mon, 18 Dec 2023 16:22:32 +0000
From: Ryan Roberts
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
References: <20231211155652.131054-1-david@redhat.com> <20231211155652.131054-16-david@redhat.com>
In-Reply-To: <20231211155652.131054-16-david@redhat.com>

On 11/12/2023 15:56, David Hildenbrand wrote:
> Let's use folio_add_anon_rmap_ptes(), batching the rmap operations.
>
> While at it, use more folio operations (but only in the code branch we're
> touching), use VM_WARN_ON_FOLIO(), and pass RMAP_EXCLUSIVE instead of
> manually setting PageAnonExclusive.
>
> We should never see non-anon pages on that branch: otherwise, the
> existing page_add_anon_rmap() call would have been flawed already.
>
> Signed-off-by: David Hildenbrand
> ---
>  mm/huge_memory.c | 23 +++++++++++++++--------
>  1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 1f5634b2f374..82ad68fe0d12 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2398,6 +2398,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		unsigned long haddr, bool freeze)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> +	struct folio *folio;
>  	struct page *page;
>  	pgtable_t pgtable;
>  	pmd_t old_pmd, _pmd;
> @@ -2493,16 +2494,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>  	} else {
>  		page = pmd_page(old_pmd);
> +		folio = page_folio(page);
>  		if (pmd_dirty(old_pmd)) {
>  			dirty = true;
> -			SetPageDirty(page);
> +			folio_set_dirty(folio);
>  		}
>  		write = pmd_write(old_pmd);
>  		young = pmd_young(old_pmd);
>  		soft_dirty = pmd_soft_dirty(old_pmd);
>  		uffd_wp = pmd_uffd_wp(old_pmd);
>
> -		VM_BUG_ON_PAGE(!page_count(page), page);
> +		VM_WARN_ON_FOLIO(!folio_ref_count(folio), folio);
> +		VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);

Is this warning really correct? File-backed memory can be PMD-mapped with
CONFIG_READ_ONLY_THP_FOR_FS, so presumably it can also need to be remapped as
ptes? Although I guess if we did have a file-backed folio here, it definitely
wouldn't be correct to call page_add_anon_rmap() /
folio_add_anon_rmap_ptes()...

>
>  		/*
>  		 * Without "freeze", we'll simply split the PMD, propagating the
> @@ -2519,11 +2522,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		 *
>  		 * See page_try_share_anon_rmap(): invalidate PMD first.
>  		 */
> -		anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
> +		anon_exclusive = PageAnonExclusive(page);
>  		if (freeze && anon_exclusive && page_try_share_anon_rmap(page))
>  			freeze = false;
> -		if (!freeze)
> -			page_ref_add(page, HPAGE_PMD_NR - 1);
> +		if (!freeze) {
> +			rmap_t rmap_flags = RMAP_NONE;
> +
> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
> +			if (anon_exclusive)
> +				rmap_flags |= RMAP_EXCLUSIVE;
> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
> +					vma, haddr, rmap_flags);
> +		}
>  	}
>
>  	/*
> @@ -2566,8 +2576,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
>  		if (write)
>  			entry = pte_mkwrite(entry, vma);
> -		if (anon_exclusive)
> -			SetPageAnonExclusive(page + i);
>  		if (!young)
>  			entry = pte_mkold(entry);
>  		/* NOTE: this may set soft-dirty too on some archs */
> @@ -2577,7 +2585,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  			entry = pte_mksoft_dirty(entry);
>  		if (uffd_wp)
>  			entry = pte_mkuffd_wp(entry);
> -		page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
>  	}
>  	VM_BUG_ON(!pte_none(ptep_get(pte)));
>  	set_pte_at(mm, addr, pte, entry);