Date: Thu, 25 May 2023 20:58:46 +0100
From: Matthew Wilcox
To: Khalid Aziz
Cc: akpm@linux-foundation.org, steven.sistare@oracle.com, david@redhat.com,
	ying.huang@intel.com, mgorman@techsingularity.net,
	baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Khalid Aziz
Subject: Re: [PATCH v4] mm, compaction: Skip all non-migratable pages during scan
In-Reply-To: <20230525191507.160076-1-khalid.aziz@oracle.com>
References: <20230525191507.160076-1-khalid.aziz@oracle.com>

On Thu, May 25, 2023 at 01:15:07PM -0600, Khalid Aziz wrote:
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 5a9501e0ae01..b548e05f0349 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -764,6 +764,42 @@ static bool too_many_isolated(pg_data_t *pgdat)
> 	return too_many;
> }
> 
> +/*
> + * Check if this base page should be skipped from isolation because
> + * it has extra refcounts that will prevent it from being migrated.
> + * This code is inspired by similar code in migrate_vma_check_page(),
> + * can_split_folio() and folio_migrate_mapping()
> + */
> +static inline bool page_has_extra_refs(struct page *page,
> +				       struct address_space *mapping)
> +{
> +	unsigned long extra_refs;
> +	struct folio *folio;
> +
> +	/*
> +	 * Skip this check for pages in ZONE_MOVABLE or MIGRATE_CMA
> +	 * pages that can not be long term pinned
> +	 */
> +	if (is_zone_movable_page(page) || is_migrate_cma_page(page))
> +		return false;
> +
> +	folio = page_folio(page);
> +
> +	/*
> +	 * caller holds a ref already from get_page_unless_zero()
> +	 * which is accounted for in folio_expected_refs()
> +	 */
> +	extra_refs = folio_expected_refs(mapping, folio);
> +
> +	/*
> +	 * This is an admittedly racy check but good enough to determine
> +	 * if a page is pinned and can not be migrated
> +	 */
> +	if ((folio_ref_count(folio) - extra_refs) > folio_mapcount(folio))
> +		return true;
> +	return false;
> +}
> +
> /**
>  * isolate_migratepages_block() - isolate all migrate-able pages within
>  *				  a single pageblock
> @@ -992,12 +1028,12 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> 			goto isolate_fail;

Just out of shot, we have ...

	if (unlikely(!get_page_unless_zero(page)))

This is the perfect opportunity to use folio_get_nontail_page() instead.
You get back the folio without having to cast the pointer yourself or
call page_folio().  Now you can use a folio throughout your new function,
saving a call to compound_head().

For a followup patch, everything in this loop below this point can use
the folio ... that's quite a lot of change.

> 	/*
> -	 * Migration will fail if an anonymous page is pinned in memory,
> -	 * so avoid taking lru_lock and isolating it unnecessarily in an
> -	 * admittedly racy check.
> +	 * Migration will fail if a page has extra refcounts
> +	 * from long term pinning preventing it from migrating,
> +	 * so avoid taking lru_lock and isolating it unnecessarily.
> 	 */

Isn't "long term pinning" the wrong description of the problem?  Long
term pins suggest to me FOLL_LONGTERM.  I think it's the simple short
term pins that we care about here.