Message-ID: <63c9caf4-3af4-4149-b3c2-e677788cb11f@arm.com>
Date: Tue, 2 Apr 2024 14:10:09 +0100
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: Re: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
 Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li,
 Lance Yang, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song
References: <20240327144537.4165578-1-ryan.roberts@arm.com>
 <20240327144537.4165578-6-ryan.roberts@arm.com>

On 28/03/2024 08:18, Barry Song wrote:
> On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts wrote:
>>
>> Now that swap supports storing all mTHP sizes, avoid splitting large
>> folios before swap-out. This benefits performance of the swap-out path
>> by eliding split_folio_to_list(), which is expensive, and also sets us
>> up for swapping in large folios in a future series.
>>
>> If the folio is partially mapped, we continue to split it since we want
>> to avoid the extra IO overhead and storage of writing out pages
>> unnecessarily.
>>
>> Reviewed-by: David Hildenbrand
>> Reviewed-by: Barry Song
>> Signed-off-by: Ryan Roberts
>> ---
>>  mm/vmscan.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 00adaf1cb2c3..293120fe54f3 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>  					if (!can_split_folio(folio, NULL))
>>  						goto activate_locked;
>>  					/*
>> -					 * Split folios without a PMD map right
>> -					 * away. Chances are some or all of the
>> -					 * tail pages can be freed without IO.
>> +					 * Split partially mapped folios right
>> +					 * away. We can free the unmapped pages
>> +					 * without IO.
>>  					 */
>> -					if (!folio_entire_mapcount(folio) &&
>> +					if (data_race(!list_empty(
>> +						&folio->_deferred_list)) &&
>>  					    split_folio_to_list(folio,
>>  							folio_list))
>>  						goto activate_locked;
>
> Hi Ryan,
>
> Sorry for bringing up another minor issue at this late stage.

No problem - I'd rather take a bit longer and get it right, rather than
rush it and get it wrong!

> While debugging the thp counter patch v2, I noticed a discrepancy
> between THP_SWPOUT_FALLBACK and THP_SWPOUT.
>
> Should we make adjustments to the counter?

Yes, agreed; we want to be consistent here with all the other existing
THP counters, which only refer to PMD-sized THP. I'll make the change
for the next version. I guess we will eventually want equivalent
counters for per-size mTHP using the framework you are adding.

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 293120fe54f3..d7856603f689 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1241,8 +1241,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  							folio_list))
>  						goto activate_locked;
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -				count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> -				count_vm_event(THP_SWPOUT_FALLBACK);
> +				if (folio_test_pmd_mappable(folio)) {
> +					count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> +					count_vm_event(THP_SWPOUT_FALLBACK);
> +				}
>  #endif
>  				if (!add_to_swap(folio))
>  					goto activate_locked_split;
>
> Because THP_SWPOUT is only counted for PMD-mappable folios:
>
> static inline void count_swpout_vm_event(struct folio *folio)
> {
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> 	if (unlikely(folio_test_pmd_mappable(folio))) {
> 		count_memcg_folio_events(folio, THP_SWPOUT, 1);
> 		count_vm_event(THP_SWPOUT);
> 	}
> #endif
> 	count_vm_events(PSWPOUT, folio_nr_pages(folio));
> }
>
> I can provide per-order counters for this in my THP counter patch.
>
>> --
>> 2.25.1
>>
>
> Thanks
> Barry
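
---

For readers following the per-order counter discussion above, a minimal
sketch of what per-order mTHP swap-out statistics could look like. The
names (mthp_stats, count_mthp_stat, MTHP_STAT_*) and the array layout
are assumptions for illustration only, not the actual framework Barry
refers to:

#include <linux/atomic.h>
#include <linux/pgtable.h>	/* PMD_ORDER */

/*
 * Illustrative sketch only: one counter row per folio order, so a
 * swap-out fallback can be attributed to the mTHP size that failed
 * to swap out as a whole.
 */
enum mthp_stat_item {
	MTHP_STAT_SWPOUT,		/* swapped out whole, without split */
	MTHP_STAT_SWPOUT_FALLBACK,	/* fell back to splitting the folio */
	__MTHP_STAT_NR,
};

static atomic_long_t mthp_stats[PMD_ORDER + 1][__MTHP_STAT_NR];

static inline void count_mthp_stat(int order, enum mthp_stat_item item)
{
	/* Track large folios only, up to and including PMD order. */
	if (order <= 0 || order > PMD_ORDER)
		return;
	atomic_long_inc(&mthp_stats[order][item]);
}

With something along these lines in place, Barry's hunk above could
additionally record count_mthp_stat(folio_order(folio),
MTHP_STAT_SWPOUT_FALLBACK), leaving the PMD-only semantics of the
existing THP_SWPOUT_FALLBACK counter intact.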