Date: Thu, 7 Dec 2023 09:43:04 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Andrey Ryabinin, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver,
	Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, cgroups@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 13/21] mm/slab: move pre/post-alloc hooks from slab.h to slub.c
Message-ID:
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
	<20231120-slab-remove-slab-v2-13-9c9c70177183@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20231120-slab-remove-slab-v2-13-9c9c70177183@suse.cz>

On Mon, Nov 20, 2023 at 07:34:24PM +0100, Vlastimil Babka wrote:
> We don't share the hooks between two slab implementations anymore so
> they can be moved away from the header. As part of the move, also move
> should_failslab() from slab_common.c as the pre_alloc hook uses it.
> This means slab.h can stop including fault-inject.h and kmemleak.h.
> Fix up some files that were depending on the includes transitively.
> 
> Reviewed-by: Kees Cook
> Signed-off-by: Vlastimil Babka
> ---
>  mm/kasan/report.c |  1 +
>  mm/memcontrol.c   |  1 +
>  mm/slab.h         | 72 -------------------------------------------------
>  mm/slab_common.c  |  8 +-----
>  mm/slub.c         | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 84 insertions(+), 79 deletions(-)
> 
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index e77facb62900..011f727bfaff 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -23,6 +23,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 947fb50eba31..8a0603517065 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -64,6 +64,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "internal.h"
>  #include
>  #include
> diff --git a/mm/slab.h b/mm/slab.h
> index 1ac3a2f8d4c0..65ebf86b3fe9 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -9,8 +9,6 @@
>  #include
>  #include
>  #include
> -#include
> -#include
>  #include
>  #include
> 
> @@ -796,76 +794,6 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>  	return s->size;
>  }
> 
> -static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> -						     struct list_lru *lru,
> -						     struct obj_cgroup **objcgp,
> -						     size_t size, gfp_t flags)
> -{
> -	flags &= gfp_allowed_mask;
> -
> -	might_alloc(flags);
> -
> -	if (should_failslab(s, flags))
> -		return NULL;
> -
> -	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
> -		return NULL;
> -
> -	return s;
> -}
> -
> -static inline void slab_post_alloc_hook(struct kmem_cache *s,
> -					struct obj_cgroup *objcg, gfp_t flags,
> -					size_t size, void **p, bool init,
> -					unsigned int orig_size)
> -{
> -	unsigned int zero_size = s->object_size;
> -	bool kasan_init = init;
> -	size_t i;
> -
> -	flags &= gfp_allowed_mask;
> -
> -	/*
> -	 * For kmalloc object, the allocated memory size(object_size) is likely
> -	 * larger than the requested size(orig_size). If redzone check is
> -	 * enabled for the extra space, don't zero it, as it will be redzoned
> -	 * soon. The redzone operation for this extra space could be seen as a
> -	 * replacement of current poisoning under certain debug option, and
> -	 * won't break other sanity checks.
> -	 */
> -	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
> -	    (s->flags & SLAB_KMALLOC))
> -		zero_size = orig_size;
> -
> -	/*
> -	 * When slub_debug is enabled, avoid memory initialization integrated
> -	 * into KASAN and instead zero out the memory via the memset below with
> -	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
> -	 * cause false-positive reports. This does not lead to a performance
> -	 * penalty on production builds, as slub_debug is not intended to be
> -	 * enabled there.
> -	 */
> -	if (__slub_debug_enabled())
> -		kasan_init = false;
> -
> -	/*
> -	 * As memory initialization might be integrated into KASAN,
> -	 * kasan_slab_alloc and initialization memset must be
> -	 * kept together to avoid discrepancies in behavior.
> -	 *
> -	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
> -	 */
> -	for (i = 0; i < size; i++) {
> -		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
> -		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
> -			memset(p[i], 0, zero_size);
> -		kmemleak_alloc_recursive(p[i], s->object_size, 1,
> -					 s->flags, flags);
> -		kmsan_slab_alloc(s, p[i], flags);
> -	}
> -
> -	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> -}
> 
>  /*
>   * The slab lists for all objects.
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 63b8411db7ce..bbc2e3f061f1 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -21,6 +21,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -1470,10 +1471,3 @@ EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
>  EXPORT_TRACEPOINT_SYMBOL(kfree);
>  EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
> 
> -int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> -{
> -	if (__should_failslab(s, gfpflags))
> -		return -ENOMEM;
> -	return 0;
> -}
> -ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
> diff --git a/mm/slub.c b/mm/slub.c
> index 979932d046fd..9eb6508152c2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -34,6 +34,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -3494,6 +3495,86 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>  						   0, sizeof(void *));
>  }
> 
> +noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> +{
> +	if (__should_failslab(s, gfpflags))
> +		return -ENOMEM;
> +	return 0;
> +}
> +ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
> +
> +static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> +						     struct list_lru *lru,
> +						     struct obj_cgroup **objcgp,
> +						     size_t size, gfp_t flags)
> +{
> +	flags &= gfp_allowed_mask;
> +
> +	might_alloc(flags);
> +
> +	if (should_failslab(s, flags))
> +		return NULL;
> +
> +	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
> +		return NULL;
> +
> +	return s;
> +}
> +
> +static inline void slab_post_alloc_hook(struct kmem_cache *s,
> +					struct obj_cgroup *objcg, gfp_t flags,
> +					size_t size, void **p, bool init,
> +					unsigned int orig_size)
> +{
> +	unsigned int zero_size = s->object_size;
> +	bool kasan_init = init;
> +	size_t i;
> +
> +	flags &= gfp_allowed_mask;
> +
> +	/*
> +	 * For kmalloc object, the allocated memory size(object_size) is likely
> +	 * larger than the requested size(orig_size). If redzone check is
> +	 * enabled for the extra space, don't zero it, as it will be redzoned
> +	 * soon. The redzone operation for this extra space could be seen as a
> +	 * replacement of current poisoning under certain debug option, and
> +	 * won't break other sanity checks.
> +	 */
> +	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
> +	    (s->flags & SLAB_KMALLOC))
> +		zero_size = orig_size;
> +
> +	/*
> +	 * When slub_debug is enabled, avoid memory initialization integrated
> +	 * into KASAN and instead zero out the memory via the memset below with
> +	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
> +	 * cause false-positive reports. This does not lead to a performance
> +	 * penalty on production builds, as slub_debug is not intended to be
> +	 * enabled there.
> +	 */
> +	if (__slub_debug_enabled())
> +		kasan_init = false;
> +
> +	/*
> +	 * As memory initialization might be integrated into KASAN,
> +	 * kasan_slab_alloc and initialization memset must be
> +	 * kept together to avoid discrepancies in behavior.
> +	 *
> +	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
> +	 */
> +	for (i = 0; i < size; i++) {
> +		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
> +		if (p[i] && init && (!kasan_init ||
> +				     !kasan_has_integrated_init()))
> +			memset(p[i], 0, zero_size);
> +		kmemleak_alloc_recursive(p[i], s->object_size, 1,
> +					 s->flags, flags);
> +		kmsan_slab_alloc(s, p[i], flags);
> +	}
> +
> +	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> +}
> +
>  /*
>   * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
>   * have the fastpath folded into their functions. So no function call
> 
> -- 

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

> 2.42.1
> 
> 
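For anyone skimming the thread without the tree checked out, here is a minimal,
self-contained userspace sketch of the bracketing pattern these hooks implement:
a pre-alloc hook that can veto the allocation (as should_failslab() does) and a
post-alloc hook that initializes each returned object. Every name below is a
hypothetical stand-in; the real kernel hooks additionally handle memcg, KASAN,
kmemleak, and KMSAN exactly as shown in the diff above.

/* hook_sketch.c - hypothetical illustration, not kernel code */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_cache {
	size_t object_size;
};

/* Pre-alloc hook: may veto the whole allocation, cf. should_failslab(). */
static struct fake_cache *pre_alloc_hook(struct fake_cache *c, bool inject_failure)
{
	if (inject_failure)
		return NULL;
	return c;
}

/* Post-alloc hook: initialize each object handed back, cf. the init memset. */
static void post_alloc_hook(struct fake_cache *c, size_t size, void **p, bool init)
{
	for (size_t i = 0; i < size; i++) {
		if (p[i] && init)
			memset(p[i], 0, c->object_size);
	}
}

/* Fastpath-style caller: the two hooks bracket the actual allocation. */
static int alloc_bulk(struct fake_cache *c, size_t size, void **p, bool init)
{
	if (!pre_alloc_hook(c, false))
		return -ENOMEM;		/* vetoed before any object exists */

	for (size_t i = 0; i < size; i++)
		p[i] = malloc(c->object_size);

	post_alloc_hook(c, size, p, init);
	return 0;
}

int main(void)
{
	struct fake_cache cache = { .object_size = 64 };
	void *objs[4] = { NULL };

	if (alloc_bulk(&cache, 4, objs, true) == 0)
		printf("4 objects of %zu bytes, zero-initialized\n",
		       cache.object_size);

	for (size_t i = 0; i < 4; i++)
		free(objs[i]);
	return 0;
}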