From: Suren Baghdasaryan
Date: Thu, 1 Sep 2022 17:23:38 -0700
Subject: Re: [RFC PATCH 11/30] mm: introduce slabobj_ext to support slab object extensions
To: Roman Gushchin
References: <20220830214919.53220-1-surenb@google.com> <20220830214919.53220-12-surenb@google.com>
Cc: Andrew Morton, Kent Overstreet, Michal Hocko, Vlastimil Babka,
    Johannes Weiner, Mel Gorman, Davidlohr Bueso, Matthew Wilcox,
    "Liam R. Howlett", David Vernet, Peter Zijlstra, Juri Lelli,
    Laurent Dufour, Peter Xu, David Hildenbrand, Jens Axboe,
    mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org,
    changbin.du@intel.com, ytcoode@gmail.com, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Benjamin Segall,
    Daniel Bristot de Oliveira, Valentin Schneider, Christopher Lameter,
    Pekka Enberg, Joonsoo Kim, 42.hyeyoo@gmail.com, Alexander Potapenko,
    Marco Elver, Dmitry Vyukov, Shakeel Butt, Muchun Song, arnd@arndb.de,
    jbaron@akamai.com, David Rientjes, Minchan Kim, Kalesh Singh,
    kernel-team, linux-mm, iommu@lists.linux.dev,
    kasan-dev@googlegroups.com, io-uring@vger.kernel.org,
    linux-arch@vger.kernel.org, xen-devel@lists.xenproject.org,
    linux-bcache@vger.kernel.org, linux-modules@vger.kernel.org, LKML

On Thu, Sep 1, 2022 at 4:36 PM Roman Gushchin wrote:
>
> On Tue, Aug 30, 2022 at 02:49:00PM -0700, Suren Baghdasaryan wrote:
> > Currently slab pages can store only vectors of obj_cgroup pointers in
> > page->memcg_data. Introduce slabobj_ext structure to allow more data
> > to be stored for each slab object. Wraps obj_cgroup into slabobj_ext
> > to support current functionality while allowing to extend slabobj_ext
> > in the future.
> >
> > Note: ideally the config dependency should be turned the other way around:
> > MEMCG should depend on SLAB_OBJ_EXT and {page|slab|folio}.memcg_data would
> > be renamed to something like {page|slab|folio}.objext_data. However doing
> > this in RFC would introduce considerable churn unrelated to the overall
> > idea, so avoiding this until v1.
>
> Hi Suren!

Hi Roman,

>
> I'd say CONFIG_MEMCG_KMEM and CONFIG_YOUR_NEW_STUFF should both depend on
> SLAB_OBJ_EXT.
> CONFIG_MEMCG_KMEM depends on CONFIG_MEMCG anyway.

Yes, I agree. I wanted to mention here that the current dependency is
incorrect and should be reworked. Having both depend on SLAB_OBJ_EXT
seems like the right approach.

>
> >
> > Signed-off-by: Suren Baghdasaryan
> > ---
> > include/linux/memcontrol.h | 18 ++++--
> > init/Kconfig | 5 ++
> > mm/kfence/core.c | 2 +-
> > mm/memcontrol.c | 60 ++++++++---------
> > mm/page_owner.c | 2 +-
> > mm/slab.h | 119 +++++++++++++++++++++++++------------
> > 6 files changed, 131 insertions(+), 75 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 6257867fbf95..315399f77173 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -227,6 +227,14 @@ struct obj_cgroup {
> > };
> > };
> >
> > +/*
> > + * Extended information for slab objects stored as an array in page->memcg_data
> > + * if MEMCG_DATA_OBJEXTS is set.
> > + */
> > +struct slabobj_ext {
> > + struct obj_cgroup *objcg;
> > +} __aligned(8);
>
> Why do we need this alignment requirement?

To save space by avoiding padding; however, all members today will be
pointers, so it's meaningless and we can safely drop it.
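For what it's worth, the alignment that matters in practice comes from the
allocator rather than from the struct attribute: the vector returned by
kcalloc_node() is typically at least 8-byte aligned, which is also what
leaves the low bits of memcg_data free for the MEMCG_DATA_* flags. A
standalone sketch of that pointer-tagging idea (illustrative only; the
DATA_* names are stand-ins, this is not kernel code, and it assumes a
typical 64-bit build):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define DATA_OBJEXTS    (1UL << 0)           /* stand-in for MEMCG_DATA_OBJEXTS */
#define DATA_KMEM       (1UL << 1)           /* stand-in for MEMCG_DATA_KMEM */
#define DATA_FLAGS_MASK ((1UL << 2) - 1)     /* stand-in for MEMCG_DATA_FLAGS_MASK */

int main(void)
{
        /* The extension vector; an 8-byte-aligned allocation guarantees
         * that the low flag bits of its address are zero. */
        void *vec = aligned_alloc(8, 64);
        uintptr_t packed;

        if (!vec)
                return 1;

        packed = (uintptr_t)vec | DATA_OBJEXTS;

        assert(((uintptr_t)vec & DATA_FLAGS_MASK) == 0);    /* alignment frees the bits */
        assert(packed & DATA_OBJEXTS);                      /* flag is readable */
        assert((void *)(packed & ~DATA_FLAGS_MASK) == vec); /* pointer recovered intact */

        free(vec);
        return 0;
}

So dropping __aligned(8) from struct slabobj_ext should not affect the flag
scheme; the vector allocation keeps providing the alignment.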
>
> > +
> > /*
> > * The memory controller data structure. The memory controller controls both
> > * page cache and RSS per cgroup. We would eventually like to provide
> > @@ -363,7 +371,7 @@ extern struct mem_cgroup *root_mem_cgroup;
> >
> > enum page_memcg_data_flags {
> > /* page->memcg_data is a pointer to an objcgs vector */
> > - MEMCG_DATA_OBJCGS = (1UL << 0),
> > + MEMCG_DATA_OBJEXTS = (1UL << 0),
> > /* page has been accounted as a non-slab kernel page */
> > MEMCG_DATA_KMEM = (1UL << 1),
> > /* the next bit after the last actual flag */
> > @@ -401,7 +409,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
> > unsigned long memcg_data = folio->memcg_data;
> >
> > VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
> > - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
> > + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
> > VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
> >
> > return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > @@ -422,7 +430,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
> > unsigned long memcg_data = folio->memcg_data;
> >
> > VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
> > - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
> > + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
> > VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
> >
> > return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > @@ -517,7 +525,7 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > */
> > unsigned long memcg_data = READ_ONCE(page->memcg_data);
> >
> > - if (memcg_data & MEMCG_DATA_OBJCGS)
> > + if (memcg_data & MEMCG_DATA_OBJEXTS)
> > return NULL;
> >
> > if (memcg_data & MEMCG_DATA_KMEM) {
> > @@ -556,7 +564,7 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob
> > static inline bool folio_memcg_kmem(struct folio *folio)
> > {
> > VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
> > - VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio);
> > + VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
> > return folio->memcg_data & MEMCG_DATA_KMEM;
> > }
> >
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 532362fcfe31..82396d7a2717 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -958,6 +958,10 @@ config MEMCG
> > help
> > Provides control over the memory footprint of tasks in a cgroup.
> >
> > +config SLAB_OBJ_EXT
> > + bool
> > + depends on MEMCG
> > +
> > config MEMCG_SWAP
> > bool
> > depends on MEMCG && SWAP
> > @@ -966,6 +970,7 @@ config MEMCG_SWAP
> > config MEMCG_KMEM
> > bool
> > depends on MEMCG && !SLOB
> > + select SLAB_OBJ_EXT
> > default y
> >
> > config BLK_CGROUP
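To make the dependency rework agreed on earlier in this thread a little more
concrete, one shape it could take in v1 might look like the sketch below
(illustrative only, not part of this patch; YOUR_NEW_STUFF reuses Roman's
placeholder name for the feature this series introduces): SLAB_OBJ_EXT stops
depending on MEMCG and is instead selected by every feature that needs
per-object extensions.

# Sketch only: SLAB_OBJ_EXT stands on its own and every user selects it.
config SLAB_OBJ_EXT
        bool

config MEMCG_KMEM
        bool
        depends on MEMCG && !SLOB
        select SLAB_OBJ_EXT
        default y

# Placeholder for the new feature introduced by this series.
config YOUR_NEW_STUFF
        bool
        select SLAB_OBJ_EXT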
> > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > index c252081b11df..c0958e4a32e2 100644
> > --- a/mm/kfence/core.c
> > +++ b/mm/kfence/core.c
> > @@ -569,7 +569,7 @@ static unsigned long kfence_init_pool(void)
> > __folio_set_slab(slab_folio(slab));
> > #ifdef CONFIG_MEMCG
> > slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg |
> > - MEMCG_DATA_OBJCGS;
> > + MEMCG_DATA_OBJEXTS;
> > #endif
> > }
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index b69979c9ced5..3f407ef2f3f1 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2793,7 +2793,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
> > folio->memcg_data = (unsigned long)memcg;
> > }
> >
> > -#ifdef CONFIG_MEMCG_KMEM
> > +#ifdef CONFIG_SLAB_OBJ_EXT
> > /*
> > * The allocated objcg pointers array is not accounted directly.
> > * Moreover, it should not come from DMA buffer and is not readily
> > @@ -2801,38 +2801,20 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
> > */
> > #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
> >
> > -/*
> > - * mod_objcg_mlstate() may be called with irq enabled, so
> > - * mod_memcg_lruvec_state() should be used.
> > - */
> > -static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
> > - struct pglist_data *pgdat,
> > - enum node_stat_item idx, int nr)
> > -{
> > - struct mem_cgroup *memcg;
> > - struct lruvec *lruvec;
> > -
> > - rcu_read_lock();
> > - memcg = obj_cgroup_memcg(objcg);
> > - lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > - mod_memcg_lruvec_state(lruvec, idx, nr);
> > - rcu_read_unlock();
> > -}
> > -
> > -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > - gfp_t gfp, bool new_slab)
> > +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> > + gfp_t gfp, bool new_slab)
> > {
> > unsigned int objects = objs_per_slab(s, slab);
> > unsigned long memcg_data;
> > void *vec;
> >
> > gfp &= ~OBJCGS_CLEAR_MASK;
> > - vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
> > + vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> > slab_nid(slab));
> > if (!vec)
> > return -ENOMEM;
> >
> > - memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
> > + memcg_data = (unsigned long) vec | MEMCG_DATA_OBJEXTS;
> > if (new_slab) {
> > /*
> > * If the slab is brand new and nobody can yet access its
> > @@ -2843,7 +2825,7 @@ int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > } else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) {
> > /*
> > * If the slab is already in use, somebody can allocate and
> > - * assign obj_cgroups in parallel. In this case the existing
> > + * assign slabobj_exts in parallel. In this case the existing
> > * objcg vector should be reused.
> > */
> > kfree(vec);
> > @@ -2853,6 +2835,26 @@ int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > kmemleak_not_leak(vec);
> > return 0;
> > }
> > +#endif /* CONFIG_SLAB_OBJ_EXT */
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > +/*
> > + * mod_objcg_mlstate() may be called with irq enabled, so
> > + * mod_memcg_lruvec_state() should be used.
> > + */
> > +static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
> > + struct pglist_data *pgdat,
> > + enum node_stat_item idx, int nr)
> > +{
> > + struct mem_cgroup *memcg;
> > + struct lruvec *lruvec;
> > +
> > + rcu_read_lock();
> > + memcg = obj_cgroup_memcg(objcg);
> > + lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > + mod_memcg_lruvec_state(lruvec, idx, nr);
> > + rcu_read_unlock();
> > +}
> >
> > static __always_inline
> > struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
> > @@ -2863,18 +2865,18 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
> > * slab->memcg_data.
> > */
> > if (folio_test_slab(folio)) {
> > - struct obj_cgroup **objcgs;
> > + struct slabobj_ext *obj_exts;
> > struct slab *slab;
> > unsigned int off;
> >
> > slab = folio_slab(folio);
> > - objcgs = slab_objcgs(slab);
> > - if (!objcgs)
> > + obj_exts = slab_obj_exts(slab);
> > + if (!obj_exts)
> > return NULL;
> >
> > off = obj_to_index(slab->slab_cache, slab, p);
> > - if (objcgs[off])
> > - return obj_cgroup_memcg(objcgs[off]);
> > + if (obj_exts[off].objcg)
> > + return obj_cgroup_memcg(obj_exts[off].objcg);
> >
> > return NULL;
> > }
> > diff --git a/mm/page_owner.c b/mm/page_owner.c
> > index e4c6f3f1695b..fd4af1ad34b8 100644
> > --- a/mm/page_owner.c
> > +++ b/mm/page_owner.c
> > @@ -353,7 +353,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
> > if (!memcg_data)
> > goto out_unlock;
> >
> > - if (memcg_data & MEMCG_DATA_OBJCGS)
> > + if (memcg_data & MEMCG_DATA_OBJEXTS)
> > ret += scnprintf(kbuf + ret, count - ret,
> > "Slab cache page\n");
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 4ec82bec15ec..c767ce3f0fe2 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -422,36 +422,94 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
> > return false;
> > }
> >
> > +#ifdef CONFIG_SLAB_OBJ_EXT
> > +
> > +static inline bool is_kmem_only_obj_ext(void)
> > +{
> > #ifdef CONFIG_MEMCG_KMEM
> > + return sizeof(struct slabobj_ext) == sizeof(struct obj_cgroup *);
> > +#else
> > + return false;
> > +#endif
> > +}
> > +
> > /*
> > - * slab_objcgs - get the object cgroups vector associated with a slab
> > + * slab_obj_exts - get the pointer to the slab object extension vector
> > + * associated with a slab.
> > * @slab: a pointer to the slab struct
> > *
> > - * Returns a pointer to the object cgroups vector associated with the slab,
> > + * Returns a pointer to the object extension vector associated with the slab,
> > * or NULL if no such vector has been associated yet.
> > */
> > -static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
> > +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
> > {
> > unsigned long memcg_data = READ_ONCE(slab->memcg_data);
> >
> > - VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS),
> > + VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJEXTS),
> > slab_page(slab));
> > VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab));
> >
> > - return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > + return (struct slabobj_ext *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > }
> >
> > -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > - gfp_t gfp, bool new_slab);
> > -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> > - enum node_stat_item idx, int nr);
> > +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> > + gfp_t gfp, bool new_slab);
> >
> > -static inline void memcg_free_slab_cgroups(struct slab *slab)
> > +static inline void free_slab_obj_exts(struct slab *slab)
> > {
> > - kfree(slab_objcgs(slab));
> > + struct slabobj_ext *obj_exts;
> > +
> > + if (!memcg_kmem_enabled() && is_kmem_only_obj_ext())
> > + return;
>
> Hm, not sure I understand this. If kmem is disabled and is_kmem_only_obj_ext()
> is true, shouldn't slab->memcg_data == NULL (always)?

So, the logic was to skip freeing when the only possible objects in
slab->memcg_data are "struct obj_cgroup" and kmem is disabled.
Otherwise there are other objects stored in slab->memcg_data which
have to be freed. Did I make it more complicated than it should have
been?
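Restated as a tiny standalone sketch (not kernel code), in case it makes the
early return above easier to follow:

#include <assert.h>
#include <stdbool.h>

/* Mirrors the reasoning behind free_slab_obj_exts(): the free is skipped
 * only when the objcg pointer is the sole possible extension AND kmem
 * accounting is disabled, i.e. when nothing could ever have been stored
 * in slab->memcg_data. */
static bool vector_may_exist(bool kmem_enabled, bool kmem_is_only_ext)
{
        if (!kmem_enabled && kmem_is_only_ext)
                return false;   /* no extension vector was ever allocated */
        return true;            /* other extensions may be stored and must be freed */
}

int main(void)
{
        assert(!vector_may_exist(false, true)); /* the skipped case */
        assert(vector_may_exist(true, true));   /* kmem on: objcg vector possible */
        assert(vector_may_exist(false, false)); /* other extensions exist regardless */
        return 0;
}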
>
> > +
> > + obj_exts = slab_obj_exts(slab);
> > + kfree(obj_exts);
> > slab->memcg_data = 0;
> > }
> >
> > +static inline void prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
> > +{
> > + struct slab *slab;
> > +
> > + /* If kmem is the only extension then the vector will be created conditionally */
> > + if (is_kmem_only_obj_ext())
> > + return;
> > +
> > + slab = virt_to_slab(p);
> > + if (!slab_obj_exts(slab))
> > + WARN(alloc_slab_obj_exts(slab, s, flags, false),
> > + "%s, %s: Failed to create slab extension vector!\n",
> > + __func__, s->name);
> > +}
>
> This looks a bit cryptic: the action is wrapped into WARN() and the rest is a set
> of (semi-)static checks. Can we, please, invert it? E.g. something like:
>
> if (slab_alloc_tracking_enabled()) {
>         slab = virt_to_slab(p);
>         if (!slab_obj_exts(slab))
>                 WARN(alloc_slab_obj_exts(slab, s, flags, false),
>                      "%s, %s: Failed to create slab extension vector!\n",
>                      __func__, s->name);
> }

Yeah, this is much more readable. Thanks for the suggestion and for
reviewing the code!

>
> The rest looks good to me.
>
> Thank you!
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>