From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
	"Matthew Wilcox (Oracle)", Hugh Dickins, Ryan Roberts,
	Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
Subject: [PATCH RFC 15/39] mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()
Date: Mon, 4 Dec 2023 15:21:22 +0100
Message-ID: <20231204142146.91437-16-david@redhat.com>
In-Reply-To: <20231204142146.91437-1-david@redhat.com>
References: <20231204142146.91437-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let's use folio_add_anon_rmap_ptes(), batching the rmap operations.

While at it, use more folio operations (but only in the code branch we're
touching), use VM_WARN_ON_FOLIO(), and pass RMAP_EXCLUSIVE instead of
manually setting PageAnonExclusive.

We should never see non-anon pages on that branch: otherwise, the existing
page_add_anon_rmap() call would have been flawed already.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cb33c6e0404cf..2c037ab3f4916 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2099,6 +2099,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long haddr, bool freeze)
 {
 	struct mm_struct *mm = vma->vm_mm;
+	struct folio *folio;
 	struct page *page;
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
@@ -2194,16 +2195,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
 		page = pmd_page(old_pmd);
+		folio = page_folio(page);
 		if (pmd_dirty(old_pmd)) {
 			dirty = true;
-			SetPageDirty(page);
+			folio_set_dirty(folio);
 		}
 		write = pmd_write(old_pmd);
 		young = pmd_young(old_pmd);
 		soft_dirty = pmd_soft_dirty(old_pmd);
 		uffd_wp = pmd_uffd_wp(old_pmd);
 
-		VM_BUG_ON_PAGE(!page_count(page), page);
+		VM_WARN_ON_FOLIO(!folio_ref_count(folio), folio);
+		VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 		/*
 		 * Without "freeze", we'll simply split the PMD, propagating the
@@ -2220,11 +2223,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		 *
 		 * See page_try_share_anon_rmap(): invalidate PMD first.
 		 */
-		anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
+		anon_exclusive = PageAnonExclusive(page);
 		if (freeze && anon_exclusive && page_try_share_anon_rmap(page))
 			freeze = false;
-		if (!freeze)
-			page_ref_add(page, HPAGE_PMD_NR - 1);
+		if (!freeze) {
+			rmap_t rmap_flags = RMAP_NONE;
+
+			folio_ref_add(folio, HPAGE_PMD_NR - 1);
+			if (anon_exclusive)
+				rmap_flags = RMAP_EXCLUSIVE;
+			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
+						 vma, haddr, rmap_flags);
+		}
 	}
 
 	/*
@@ -2267,8 +2277,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
 			if (write)
 				entry = pte_mkwrite(entry, vma);
-			if (anon_exclusive)
-				SetPageAnonExclusive(page + i);
 			if (!young)
 				entry = pte_mkold(entry);
 			/* NOTE: this may set soft-dirty too on some archs */
@@ -2278,7 +2286,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 				entry = pte_mksoft_dirty(entry);
 			if (uffd_wp)
 				entry = pte_mkuffd_wp(entry);
-			page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
 		}
 		VM_BUG_ON(!pte_none(ptep_get(pte)));
 		set_pte_at(mm, addr, pte, entry);
-- 
2.41.0
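
For readers less familiar with the rmap interface, the gain here is that the
split path now takes its HPAGE_PMD_NR - 1 extra references and establishes all
per-PTE rmaps with one folio_add_anon_rmap_ptes() call up front, instead of
calling page_add_anon_rmap() once per subpage inside the PTE-population loop
and setting PageAnonExclusive by hand. The standalone C sketch below only
models that calling-convention change: toy_folio, toy_page_add_anon_rmap() and
toy_folio_add_anon_rmap_ptes() are invented names for illustration, not the
kernel API, and the real code additionally deals with atomic mapcounts,
statistics and the actual PageAnonExclusive bits.

/*
 * Toy userspace model of batching rmap additions: adding N subpage
 * mappings one by one vs. in a single batched call reaches the same
 * end state, but the batched variant goes through the API only once.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define TOY_HPAGE_PMD_NR 8	/* pretend a PMD maps 8 small pages */

struct toy_page {
	int mapcount;
	bool anon_exclusive;
};

struct toy_folio {
	struct toy_page pages[TOY_HPAGE_PMD_NR];
	int nr_pages_mapped;	/* how many subpages are PTE-mapped */
	int rmap_calls;		/* how many rmap API calls were made */
};

/* Old scheme: one call per subpage, exclusivity set by the caller. */
static void toy_page_add_anon_rmap(struct toy_folio *folio, int i, bool exclusive)
{
	folio->pages[i].mapcount++;
	folio->pages[i].anon_exclusive = exclusive;
	folio->nr_pages_mapped++;
	folio->rmap_calls++;
}

/* New scheme: one call covering nr consecutive subpages. */
static void toy_folio_add_anon_rmap_ptes(struct toy_folio *folio, int first,
					 int nr, bool exclusive)
{
	for (int i = first; i < first + nr; i++) {
		folio->pages[i].mapcount++;
		folio->pages[i].anon_exclusive = exclusive;
	}
	folio->nr_pages_mapped += nr;
	folio->rmap_calls++;	/* a single call for the whole batch */
}

int main(void)
{
	struct toy_folio a = {0}, b = {0};

	/* Per-subpage calls, as in the old PTE-population loop. */
	for (int i = 0; i < TOY_HPAGE_PMD_NR; i++)
		toy_page_add_anon_rmap(&a, i, true);

	/* One batched call up front, as after this patch. */
	toy_folio_add_anon_rmap_ptes(&b, 0, TOY_HPAGE_PMD_NR, true);

	assert(a.nr_pages_mapped == b.nr_pages_mapped);
	for (int i = 0; i < TOY_HPAGE_PMD_NR; i++) {
		assert(a.pages[i].mapcount == b.pages[i].mapcount);
		assert(a.pages[i].anon_exclusive == b.pages[i].anon_exclusive);
	}
	printf("same end state, %d vs %d rmap calls\n", a.rmap_calls, b.rmap_calls);
	return 0;
}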