Date: Thu, 26 May 2022 10:12:10 +0100
From: Mel Gorman
To: Andrew Morton
Cc: "Darrick J. Wong", Dave Chinner, Jan Kara, Vlastimil Babka,
	Jesper Dangaard Brouer, Chuck Lever, Linux-NFS, Linux-MM,
	Linux-XFS, LKML
Subject: [PATCH] mm/page_alloc: Always attempt to allocate at least one page during bulk allocation
Message-ID: <20220526091210.GC3441@techsingularity.net>

Peter Pavlisko reported the following problem on kernel bugzilla 216007.

	When I try to extract an uncompressed tar archive (2.6 million
	files, 760.3 GiB in size) on a newly created (empty) XFS file
	system, after the first low tens of gigabytes extracted the
	process hangs in iowait indefinitely. One CPU core is 100%
	occupied with iowait, the other CPU core is idle (on a 2-core
	Intel Celeron G1610T).

It was bisected to c9fa563072e1 ("xfs: use alloc_pages_bulk_array() for
buffers") but XFS is only the messenger. The problem is that nothing is
waking kswapd to reclaim some pages at a time when the PCP lists cannot
be refilled until some reclaim happens. The bulk allocator checks
whether there are already some pages in the array; the original intent
was that a bulk allocation did not necessarily need all the requested
pages and that it was best to return as quickly as possible.

This was fine for the first user of the API, but both NFS and XFS
require the requested number of pages to be available before making
progress. Both could be adjusted to call the page allocator directly if
a bulk allocation fails, but that puts a burden on users of the API.
Adjust the semantics to attempt at least one allocation via
__alloc_pages() before returning, so that kswapd is woken if necessary.
It was reported via bugzilla that the patch addressed the problem and
that the tar extraction completed successfully. This may also address
bug 215975 but that has yet to be confirmed.

BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=216007
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215975
Fixes: 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")
Signed-off-by: Mel Gorman
Cc:  # v5.13+
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..5ced6cb260ed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5324,8 +5324,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
 		if (unlikely(!page)) {
-			/* Try and get at least one page */
-			if (!nr_populated)
+			/* Try and allocate at least one page */
+			if (!nr_account)
 				goto failed_irq;
 			break;
 		}