Message-ID: <8b6c9485-568c-41e4-8ef9-0ac6c33754ea@arm.com>
Date: Tue, 19 Dec 2023 08:42:16 +0000
Subject: Re: [PATCH v1 15/39] mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()
From: Ryan Roberts
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
References: <20231211155652.131054-1-david@redhat.com> <20231211155652.131054-16-david@redhat.com> <652f143e-1547-4ded-892f-1216ce689c9b@redhat.com>
In-Reply-To: <652f143e-1547-4ded-892f-1216ce689c9b@redhat.com>

On 18/12/2023 17:03, David Hildenbrand wrote:
> On 18.12.23 17:22, Ryan Roberts wrote:
>> On 11/12/2023 15:56, David Hildenbrand wrote:
>>> Let's use folio_add_anon_rmap_ptes(), batching the rmap operations.
>>>
>>> While at it, use more folio operations (but only in the code branch we're
>>> touching), use VM_WARN_ON_FOLIO(), and pass RMAP_EXCLUSIVE instead of
>>> manually setting PageAnonExclusive.
>>>
>>> We should never see non-anon pages on that branch: otherwise, the
>>> existing page_add_anon_rmap() call would have been flawed already.
>>>
>>> Signed-off-by: David Hildenbrand
>>> ---
>>>  mm/huge_memory.c | 23 +++++++++++++++--------
>>>  1 file changed, 15 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 1f5634b2f374..82ad68fe0d12 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2398,6 +2398,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>          unsigned long haddr, bool freeze)
>>>  {
>>>      struct mm_struct *mm = vma->vm_mm;
>>> +    struct folio *folio;
>>>      struct page *page;
>>>      pgtable_t pgtable;
>>>      pmd_t old_pmd, _pmd;
>>> @@ -2493,16 +2494,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>          uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>>      } else {
>>>          page = pmd_page(old_pmd);
>>> +        folio = page_folio(page);
>>>          if (pmd_dirty(old_pmd)) {
>>>              dirty = true;
>>> -            SetPageDirty(page);
>>> +            folio_set_dirty(folio);
>>>          }
>>>          write = pmd_write(old_pmd);
>>>          young = pmd_young(old_pmd);
>>>          soft_dirty = pmd_soft_dirty(old_pmd);
>>>          uffd_wp = pmd_uffd_wp(old_pmd);
>>>
>>> -        VM_BUG_ON_PAGE(!page_count(page), page);
>>> +        VM_WARN_ON_FOLIO(!folio_ref_count(folio), folio);
>>> +        VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
>>
>> Is this warning really correct? file-backed memory can be PMD-mapped with
>> CONFIG_READ_ONLY_THP_FOR_FS, so presumably it can also have the need to be
>> remapped as pte? Although I guess if we did have a file-backed folio, it
>> definitely wouldn't be correct to call page_add_anon_rmap() /
>> folio_add_anon_rmap_ptes()...
>
> Yes, see the patch description where I spell that out.

Oh god, how did I miss that... sorry!

>
> PTE-remapping a file-back folio will simply zap the PMD and refault from the
> page cache after creating a page table.

Yep, that makes sense.
>
> So this is anon-only code.
>