From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov, Jonathan Corbet,
	Andrey Konovalov
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, Feng Tang
Subject: [PATCH v6 2/4] mm/slub: only zero the requested size of buffer for kzalloc
Date: Tue, 13 Sep 2022 14:54:21 +0800
Message-Id: <20220913065423.520159-3-feng.tang@intel.com>
In-Reply-To: <20220913065423.520159-1-feng.tang@intel.com>
References: <20220913065423.520159-1-feng.tang@intel.com>

kzalloc/kmalloc will round up the request size to a fixed size (mostly a
power of 2), so the allocated memory could be larger than requested.
Currently the kzalloc family of APIs zeroes all of the allocated memory.

To detect out-of-bounds usage of the extra allocated memory, zero only
the requested part, so that a sanity check can be added for the extra
space later.

Performance-wise, the smaller zeroing length also brings shorter
execution time, as shown by test data on various server/desktop
platforms.

For kzalloc users who will call ksize() later and utilize this extra
space, please be aware that the space is no longer zeroed.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
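(Not part of the patch -- a small userspace sketch of the idea, in case
it helps review. It assumes a simplified power-of-two size-class model;
round_up_pow2(), zalloc_orig_size() and REDZONE_BYTE are made-up names
for illustration, not kernel APIs. Only the requested bytes are zeroed,
and the unrequested tail keeps a known pattern that a later sanity
check can verify.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REDZONE_BYTE 0xcc	/* filler pattern for the unrequested tail */

/* round a request up to the next power of two (simplified size class) */
static size_t round_up_pow2(size_t n)
{
	size_t s = 1;

	while (s < n)
		s <<= 1;
	return s;
}

/*
 * kzalloc-like helper: the buffer is as large as its size class, but
 * only the requested part is zeroed; the extra space is filled with a
 * redzone pattern so out-of-bounds usage can be detected later.
 */
static void *zalloc_orig_size(size_t orig_size, size_t *alloc_size)
{
	unsigned char *p;

	*alloc_size = round_up_pow2(orig_size);
	p = malloc(*alloc_size);
	if (!p)
		return NULL;
	memset(p, 0, orig_size);			/* requested part */
	memset(p + orig_size, REDZONE_BYTE,		/* extra part */
	       *alloc_size - orig_size);
	return p;
}

/* later sanity check: the redzone in the extra space must be intact */
static int redzone_intact(const void *buf, size_t orig_size, size_t alloc_size)
{
	const unsigned char *p = buf;
	size_t i;

	for (i = orig_size; i < alloc_size; i++)
		if (p[i] != REDZONE_BYTE)
			return 0;
	return 1;
}

int main(void)
{
	size_t alloc_size;
	char *buf = zalloc_orig_size(52, &alloc_size);

	if (!buf)
		return 1;
	printf("requested 52 bytes, size class gives %zu\n", alloc_size);

	buf[60] = 'x';	/* out-of-bounds use of the extra space */
	printf("redzone intact: %s\n",
	       redzone_intact(buf, 52, alloc_size) ? "yes" : "no");
	free(buf);
	return 0;
}

Running the sketch prints the rounded-up allocation size and reports the
redzone as broken after the out-of-bounds write into the extra space.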
 mm/slab.c |  7 ++++---
 mm/slab.h |  5 +++--
 mm/slub.c | 10 +++++++---
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..4594de0e3d6b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3253,7 +3253,8 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init,
+				cachep->object_size);
 	return objp;
 }
 
@@ -3506,13 +3507,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+			slab_want_init_on_alloc(flags, s), s->object_size);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index d0ef9dd44b71..3cf5adf63f48 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -730,7 +730,8 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
 	size_t i;
 
@@ -746,7 +747,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, orig_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index c8ba16b3a4db..6f823e99d8b4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3376,7 +3376,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	/*
+	 * When init equals 'true', like for kzalloc() family, only
+	 * @orig_size bytes will be zeroed instead of s->object_size
+	 */
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -3833,11 +3837,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-			slab_want_init_on_alloc(flags, s));
+			slab_want_init_on_alloc(flags, s), s->object_size);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
-- 
2.34.1