Date: Fri, 21 Apr 2023 14:12:27 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Johannes Weiner
Cc: linux-mm@kvack.org, Kaiyang Zhao, Vlastimil Babka, David Rientjes, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [RFC PATCH 08/26] mm: page_alloc: claim blocks during compaction capturing
Message-ID: <20230421131227.k2afmhb6kejdbhui@techsingularity.net>
References: <20230418191313.268131-1-hannes@cmpxchg.org> <20230418191313.268131-9-hannes@cmpxchg.org>
In-Reply-To: <20230418191313.268131-9-hannes@cmpxchg.org>

On Tue, Apr 18, 2023 at 03:12:55PM -0400, Johannes Weiner wrote:
> When capturing a whole block, update the migratetype accordingly. For
> example, a THP allocation might capture an unmovable block. If the THP
> gets split and partially freed later, the remainder should group up
> with movable allocations.
> 
> Signed-off-by: Johannes Weiner
> ---
>  mm/internal.h   |  1 +
>  mm/page_alloc.c | 42 ++++++++++++++++++++++++------------------
>  2 files changed, 25 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 024affd4e4b5..39f65a463631 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -432,6 +432,7 @@ struct compact_control {
>   */
>  struct capture_control {
>  	struct compact_control *cc;
> +	int migratetype;
>  	struct page *page;
>  };
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4d20513c83be..8e5996f8b4b4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -615,6 +615,17 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
>  			page_to_pfn(page), MIGRATETYPE_MASK);
>  }
> 
> +static void change_pageblock_range(struct page *pageblock_page,
> +					int start_order, int migratetype)
> +{
> +	int nr_pageblocks = 1 << (start_order - pageblock_order);
> +
> +	while (nr_pageblocks--) {
> +		set_pageblock_migratetype(pageblock_page, migratetype);
> +		pageblock_page += pageblock_nr_pages;
> +	}
> +}
> +
>  #ifdef CONFIG_DEBUG_VM
>  static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>  {
> @@ -962,14 +973,19 @@ compaction_capture(struct capture_control *capc, struct page *page,
>  	    is_migrate_isolate(migratetype))
>  		return false;
> 
> -	/*
> -	 * Do not let lower order allocations pollute a movable pageblock.
> -	 * This might let an unmovable request use a reclaimable pageblock
> -	 * and vice-versa but no more than normal fallback logic which can
> -	 * have trouble finding a high-order free page.
> -	 */
> -	if (order < pageblock_order && migratetype == MIGRATE_MOVABLE)
> +	if (order >= pageblock_order) {
> +		migratetype = capc->migratetype;
> +		change_pageblock_range(page, order, migratetype);
> +	} else if (migratetype == MIGRATE_MOVABLE) {
> +		/*
> +		 * Do not let lower order allocations pollute a
> +		 * movable pageblock. This might let an unmovable
> +		 * request use a reclaimable pageblock and vice-versa
> +		 * but no more than normal fallback logic which can
> +		 * have trouble finding a high-order free page.
> +		 */
>  		return false;
> +	}
> 

For capturing pageblock order or larger, why not unconditionally make the
block MOVABLE? Even if it's a zero page allocation, it would be nice to
keep the pageblock for movable pages after the split as long as possible.

-- 
Mel Gorman
SUSE Labs