Date: Mon, 16 Jan 2023 11:59:53 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Rong Tao
Cc: cl@linux.com, sdf@google.com, yhs@fb.com, Rong Tao, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	Roman Gushchin, "open list:SLAB ALLOCATOR", open list
Subject: Re: [PATCH] mm: Functions used internally should not be put into slub_def.h

On Mon, Jan 16, 2023 at 04:50:05PM +0800, Rong Tao wrote:
> From: Rong Tao
>
> commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
> used by other subsystems") introduced 'slab_address()' and 'struct slab'
> in slab_def.h (CONFIG_SLAB) and slub_def.h (CONFIG_SLUB). When one of
> these headers is referenced from a module or from BPF code,
> 'slab_address()' and 'struct slab' are not recognized, resulting in
> incomplete-type and undefined errors (see the bcc slabratetop.py
> error [0]).
>

Hello Rong,

IMO sl*b_def.h is not intended to be used externally, and I'm not sure
it is worth a -stable backport either.

IIUC, the reason slabratetop.py relies on sl*b_def.h is to read
cachep->cache and cachep->size. I think this can be solved if you use a
tool that supports BPF Type Format (BTF)? A minimal sketch of what I
mean is at the end of this mail.

> Moving the function definitions that reference 'struct slab' and
> slab_address(), namely nearest_obj(), obj_to_index(), and
> objs_per_slab(), to the internal header file mm/slab.h solves this
> problem.
>
> [0] https://github.com/iovisor/bcc/issues/4438
>
> Signed-off-by: Rong Tao
> ---
>  include/linux/slab_def.h | 33 --------------------
>  include/linux/slub_def.h | 32 -------------------
>  mm/slab.h                | 66 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 66 insertions(+), 65 deletions(-)
>
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index 5834bad8ad78..5658b5fddf9b 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -88,37 +88,4 @@ struct kmem_cache {
>  	struct kmem_cache_node *node[MAX_NUMNODES];
>  };
>  
> -static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> -				void *x)
> -{
> -	void *object = x - (x - slab->s_mem) % cache->size;
> -	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
> -
> -	if (unlikely(object > last_object))
> -		return last_object;
> -	else
> -		return object;
> -}
> -
> -/*
> - * We want to avoid an expensive divide : (offset / cache->size)
> - *   Using the fact that size is a constant for a particular cache,
> - *   we can replace (offset / cache->size) by
> - *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
> - */
> -static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> -					const struct slab *slab, void *obj)
> -{
> -	u32 offset = (obj - slab->s_mem);
> -	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
> -}
> -
> -static inline int objs_per_slab(const struct kmem_cache *cache,
> -				const struct slab *slab)
> -{
> -	if (is_kfence_address(slab_address(slab)))
> -		return 1;
> -	return cache->num;
> -}
> -
>  #endif	/* _LINUX_SLAB_DEF_H */
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index aa0ee1678d29..660fd6b2a748 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -163,36 +163,4 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
>  
>  void *fixup_red_left(struct kmem_cache *s, void *p);
>  
> -static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> -				void *x) {
> -	void *object = x - (x - slab_address(slab)) % cache->size;
> -	void *last_object = slab_address(slab) +
> -		(slab->objects - 1) * cache->size;
> -	void *result = (unlikely(object > last_object)) ? last_object : object;
> -
> -	result = fixup_red_left(cache, result);
> -	return result;
> -}
> -
> -/* Determine object index from a given position */
> -static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> -					  void *addr, void *obj)
> -{
> -	return reciprocal_divide(kasan_reset_tag(obj) - addr,
> -				 cache->reciprocal_size);
> -}
> -
> -static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> -					const struct slab *slab, void *obj)
> -{
> -	if (is_kfence_address(obj))
> -		return 0;
> -	return __obj_to_index(cache, slab_address(slab), obj);
> -}
> -
> -static inline int objs_per_slab(const struct kmem_cache *cache,
> -				const struct slab *slab)
> -{
> -	return slab->objects;
> -}
>  #endif /* _LINUX_SLUB_DEF_H */
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 7cc432969945..38350a0efa91 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -227,10 +227,76 @@ struct kmem_cache {
>  
>  #ifdef CONFIG_SLAB
>  #include <linux/slab_def.h>
> +
> +static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> +				void *x)
> +{
> +	void *object = x - (x - slab->s_mem) % cache->size;
> +	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
> +
> +	if (unlikely(object > last_object))
> +		return last_object;
> +	else
> +		return object;
> +}
> +
> +/*
> + * We want to avoid an expensive divide : (offset / cache->size)
> + *   Using the fact that size is a constant for a particular cache,
> + *   we can replace (offset / cache->size) by
> + *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
> + */
> +static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> +					const struct slab *slab, void *obj)
> +{
> +	u32 offset = (obj - slab->s_mem);
> +	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
> +}
> +
> +static inline int objs_per_slab(const struct kmem_cache *cache,
> +				const struct slab *slab)
> +{
> +	if (is_kfence_address(slab_address(slab)))
> +		return 1;
> +	return cache->num;
> +}
>  #endif
>  
>  #ifdef CONFIG_SLUB
>  #include <linux/slub_def.h>
> +
> +static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> +				void *x) {
> +	void *object = x - (x - slab_address(slab)) % cache->size;
> +	void *last_object = slab_address(slab) +
> +		(slab->objects - 1) * cache->size;
> +	void *result = (unlikely(object > last_object)) ? last_object : object;
> +
> +	result = fixup_red_left(cache, result);
> +	return result;
> +}
> +
> +/* Determine object index from a given position */
> +static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> +					  void *addr, void *obj)
> +{
> +	return reciprocal_divide(kasan_reset_tag(obj) - addr,
> +				 cache->reciprocal_size);
> +}
> +
> +static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> +					const struct slab *slab, void *obj)
> +{
> +	if (is_kfence_address(obj))
> +		return 0;
> +	return __obj_to_index(cache, slab_address(slab), obj);
> +}
> +
> +static inline int objs_per_slab(const struct kmem_cache *cache,
> +				const struct slab *slab)
> +{
> +	return slab->objects;
> +}
>  #endif
>  
>  #include <linux/memcontrol.h>
> -- 
> 2.39.0
>
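
To make the BTF suggestion above concrete, here is a minimal CO-RE
sketch, hypothetical and untested, written against libbpf; the program
name and the choice of kprobe attach point are mine. With CO-RE, field
offsets in 'struct kmem_cache' are relocated against the running
kernel's BTF at load time, so the tool needs no kernel headers at
build time, sl*b_def.h included:

/* slab_btf_sketch.bpf.c - hypothetical CO-RE sketch */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/kmem_cache_alloc")
int BPF_KPROBE(on_kmem_cache_alloc, struct kmem_cache *cachep)
{
	char name[32];
	unsigned int size;

	/* Offsets of ->size and ->name are resolved from the kernel's
	 * BTF (/sys/kernel/btf/vmlinux), not from any header.
	 */
	size = BPF_CORE_READ(cachep, size);
	bpf_probe_read_kernel_str(name, sizeof(name),
				  BPF_CORE_READ(cachep, name));

	bpf_printk("cache %s size %u", name, size);
	return 0;
}

(vmlinux.h can be generated with
"bpftool btf dump file /sys/kernel/btf/vmlinux format c".)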
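
As an aside, for anyone puzzled by the "expensive divide" comment above
obj_to_index(): the sketch below is a standalone userspace port,
loosely based on include/linux/reciprocal_div.h (an illustration only,
not the in-tree code), showing how a precomputed reciprocal turns the
division into a multiply and two shifts. reciprocal_value() is paid
once, at cache creation time; the per-object lookup then never executes
a hardware divide:

/* reciprocal_demo.c - standalone sketch of the reciprocal-divide trick */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct reciprocal_value {
	uint32_t m;
	uint8_t sh1, sh2;
};

/* Precompute multiplier and shifts for a fixed divisor d (d >= 1). */
static struct reciprocal_value reciprocal_value(uint32_t d)
{
	struct reciprocal_value R;
	int l = (d == 1) ? 0 : 32 - __builtin_clz(d - 1); /* fls(d - 1) */
	uint64_t m = (((1ULL << 32) * ((1ULL << l) - d)) / d) + 1;

	R.m = (uint32_t)m;
	R.sh1 = l > 1 ? 1 : l;
	R.sh2 = l > 1 ? l - 1 : 0;
	return R;
}

/* Computes a / d with no divide instruction on the hot path. */
static uint32_t reciprocal_divide(uint32_t a, struct reciprocal_value R)
{
	uint32_t t = (uint32_t)(((uint64_t)a * R.m) >> 32);

	return (t + ((a - t) >> R.sh1)) >> R.sh2;
}

int main(void)
{
	uint32_t size = 192; /* e.g. object size of a kmalloc-192 cache */
	struct reciprocal_value r = reciprocal_value(size);

	/* Verify the multiply-shift result matches plain division. */
	for (uint32_t off = 0; off < 64 * size; off++)
		assert(reciprocal_divide(off, r) == off / size);

	printf("reciprocal_divide(off) == off / %u for all tested offsets\n",
	       size);
	return 0;
}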