Date: Wed, 9 Mar 2022 15:47:06 +0000
From: Matthew Wilcox
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Hugh Dickins,
	Linus Torvalds, David Rientjes, Shakeel Butt, John Hubbard,
	Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Yang Shi,
	"Kirill A. Shutemov", Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay,
	linux-mm@kvack.org, Alexander Potapenko
Subject: Re: [PATCH v1 10/15] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages
References: <20220308141437.144919-1-david@redhat.com> <20220308141437.144919-11-david@redhat.com>
In-Reply-To: <20220308141437.144919-11-david@redhat.com>

On Tue, Mar 08, 2022 at 03:14:32PM +0100, David Hildenbrand wrote:
> The basic question we would like to have a reliable and efficient answer
> to is: is this anonymous page exclusive to a single process or might it
> be shared?

Is this supposed to be for PAGE_SIZE pages as well, or is it only used
on pages > PAGE_SIZE?

> In an ideal world, we'd have a spare pageflag. Unfortunately, pageflags
> don't grow on trees, so we have to get a little creative for the time
> being.

This feels a little _too_ creative to me.  There's now an implicit
requirement that SL[AOU]B doesn't use the bottom two bits of
->slab_cache, which is probably OK but would need to be documented.
I have plans to get rid of PageError and PagePrivate, but those are going
to be too late for you.  I don't think mappedtodisk has meaning for anon
pages, even if they're in the swapcache.  It would need PG_has_hwpoisoned
to shift to another bit ... but almost any bit will do for has_hwpoisoned.
Or have I overlooked something?

> @@ -920,6 +976,70 @@ extern bool is_free_buddy_page(struct page *page);
>  
>  __PAGEFLAG(Isolated, isolated, PF_ANY);
>  
> +static __always_inline bool folio_test_slab(struct folio *folio)
> +{
> +	return !folio_test_anon(folio) &&
> +	       test_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL));
> +}
> +
> +static __always_inline int PageSlab(struct page *page)
> +{
> +	return !PageAnon(page) &&
> +	       test_bit(PG_slab, &PF_NO_TAIL(page, 0)->flags);
> +}

In case we do end up using this, this would be better implemented as

static __always_inline int PageSlab(struct page *page)
{
	return folio_test_slab(page_folio(page));
}

since PageAnon already has a page_folio() call embedded in it.

> +static __always_inline void __SetPageSlab(struct page *page)
> +{
> +	VM_BUG_ON_PGFLAGS(PageAnon(page), page);
> +	__set_bit(PG_slab, &PF_NO_TAIL(page, 1)->flags);
> +}

There's only one caller of __SetPageSlab() left, in kfence.  And that
code looks ... weird.

	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
		if (!i || (i % 2))
			continue;

		/* Verify we do not have a compound head page. */
		if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
			goto err;

		__SetPageSlab(&pages[i]);

I think the author probably intended WARN_ON(PageCompound(page)) because
they're actually verifying that it's not a tail page, rather than a head
page.

> +static __always_inline void __ClearPageSlab(struct page *page)
> +{
> +	VM_BUG_ON_PGFLAGS(PageAnon(page), page);
> +	__clear_bit(PG_slab, &PF_NO_TAIL(page, 1)->flags);
> +}

There are no remaining callers of __ClearPageSlab().  yay.