Date: Fri, 7 Oct 2022 19:02:35 +0100
From: Matthew Wilcox
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Roman Gushchin, Naoya Horiguchi, Miaohe Lin,
 Minchan Kim, Mel Gorman, Andrea Arcangeli, Dan Williams,
 Hugh Dickins, Muchun Song, David Hildenbrand, Andrey Konovalov,
 Marco Elver
Subject: Re: [PATCH] mm: move PG_slab flag to page_type
References: <20220919125708.276864-1-42.hyeyoo@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 07, 2022 at 10:36:56PM +0900, Hyeonggon Yoo wrote:
> > First, you say that folio_mapped() returns false for slab pages. That's
> > only true for order-0 slab pages. For larger pages,
> >
> > 	if (!folio_test_large(folio))
> > 		return atomic_read(&folio->_mapcount) >= 0;
> > 	if (atomic_read(folio_mapcount_ptr(folio)) >= 0)
> > 		return true;
> >
> > so that's going to depend what folio_mapcount_ptr() aliases with.
>
> IIUC it's true for order > 0 slab too.
>
> As slab pages are not mapped to userspace at all, neither the entire
> compound page nor the base pages are mapped to userspace.
>
> AFAIK the following are true for order > 0 slab:
> 	- (first tail page)->compound_mapcount is -1

That's the part I wasn't sure of.  I think we do set it, in
prep_compound_head().

> 	- _mapcount of the base pages is -1
>
> So: folio_mapped() and page_mapped() (if applied to the head page)
> return false for larger pages with this patch.
>
> I wrote a simple test case and checked that folio_mapped() and
> page_mapped() return false for both order-0 pages and larger pages.
> (SLAB returned true for them before.)
>
> > Second, this patch changes the behaviour of PageSlab() when applied to
> > tail pages.
>
> Although it changes the way it checks the flag, it does not change
> behavior when applied to tail pages - PageSlab() on a tail page
> returns false with or without this patch.

Really?  It seems to me that it returns true at the moment.  Look:

	__PAGEFLAG(Slab, slab, PF_NO_TAIL)

	#define PF_NO_TAIL(page, enforce) ({				\
		VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);	\
		PF_POISONED_CHECK(compound_head(page)); })

so AFAICS, PageSlab checks the Slab bit on the head page, not the tail
page.

> If PageSlab() needs to return true for tail pages too, we may make it
> check the page_type of the head page.
>
> But I'm not sure when that behavior is needed.
> Can you please share your insight on this?

There are tools like tools/vm/page-types.c which expect PageSlab to
return true for tail pages.

> > Which raises the further question of what PageBuddy(),
> > PageTable(), PageGuard() and PageIsolated() should do for multi-page
> > folios, if that is even possible.
>
> For users that use real compound pages, like slab, we can make them
> check the page_type of the head page (if needed).
>
> But for the cases David described, there isn't much we can do except
> making them use real compound pages.
>
> > Third, can we do this without that awkward __u16 thing?  Perhaps
> >
> > -#define PG_buddy	0x00000080
> > -#define PG_offline	0x00000100
> > -#define PG_table	0x00000200
> > -#define PG_guard	0x00000400
> > +#define PG_buddy	0x00010000
> > +#define PG_offline	0x00020000
> > +#define PG_table	0x00040000
> > +#define PG_guard	0x00080000
> > +#define PG_slab	0x00100000
> >
> > ... and then use wrappers in slab.c to access the bottom 16 bits?
>
> Definitely! I prefer that way and will adjust in RFC v2.
>
> Thank you for the precious feedback.

No problem.  I suggested (in an off-list email) that you consider
counting 'active' by subtraction rather than addition, because I have a
feeling that

	int active(struct slab *slab)
	{
		return ~(slab->page_type | PG_slab);
	}

would be better than

	int active(struct slab *slab)
	{
		return slab->page_type & 0xffff;
	}

at least in part because you don't have to clear the bottom 16 bits of
page_type when you clear PG_slab, and you don't have to re-set them
when you set PG_slab.