From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song
Subject: [PATCH v6 5/6] mm: vmscan: Avoid split during shrink_folio_list()
Date: Wed, 3 Apr 2024 12:40:31 +0100
Message-Id: <20240403114032.1162100-6-ryan.roberts@arm.com>
In-Reply-To: <20240403114032.1162100-1-ryan.roberts@arm.com>
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
Now that swap supports storing all mTHP sizes, avoid splitting large
folios before swap-out. This benefits performance of the swap-out path
by eliding split_folio_to_list(), which is expensive, and also sets us
up for swapping in large folios in a future series.

If the folio is partially mapped, we continue to split it since we want
to avoid the extra IO overhead and storage of writing out pages
unnecessarily.

THP_SWPOUT and THP_SWPOUT_FALLBACK counters should continue to count
events only for PMD-mappable folios to avoid user confusion. THP_SWPOUT
already has the appropriate guard. Add a guard for THP_SWPOUT_FALLBACK.
It may be appropriate to add per-size counters in future.

Reviewed-by: David Hildenbrand
Reviewed-by: Barry Song
Signed-off-by: Ryan Roberts
---
 mm/vmscan.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00adaf1cb2c3..ffc4553c8615 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 					if (!can_split_folio(folio, NULL))
 						goto activate_locked;
 					/*
-					 * Split folios without a PMD map right
-					 * away. Chances are some or all of the
-					 * tail pages can be freed without IO.
+					 * Split partially mapped folios right
+					 * away. We can free the unmapped pages
+					 * without IO.
 					 */
-					if (!folio_entire_mapcount(folio) &&
+					if (data_race(!list_empty(
+							&folio->_deferred_list)) &&
 					    split_folio_to_list(folio,
 								folio_list))
 						goto activate_locked;
@@ -1240,8 +1241,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 								folio_list))
 					goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-				count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
-				count_vm_event(THP_SWPOUT_FALLBACK);
+				if (nr_pages >= HPAGE_PMD_NR) {
+					count_memcg_folio_events(folio,
+							THP_SWPOUT_FALLBACK, 1);
+					count_vm_event(
+							THP_SWPOUT_FALLBACK);
+				}
 #endif
 				if (!add_to_swap(folio))
 					goto activate_locked_split;
-- 
2.25.1