Date: Wed, 27 Jul 2022 12:20:47 +0200 (CEST)
From: Christoph Lameter
To: Feng Tang
cc: Andrew Morton, Vlastimil Babka, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Dave Hansen,
    Robin Murphy, John Garry, Kefeng Wang
Subject: Re: [PATCH v3 1/3] mm/slub: enable debugging memory wasting of kmalloc
In-Reply-To: <20220727071042.8796-2-feng.tang@intel.com>
References: <20220727071042.8796-1-feng.tang@intel.com> <20220727071042.8796-2-feng.tang@intel.com>

On Wed, 27 Jul 2022, Feng Tang wrote:

> @@ -2905,7 +2950,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
>  * already disabled (which is the case for bulk allocation).
>  */
>  static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> -			unsigned long addr, struct kmem_cache_cpu *c)
> +			unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
>  {
>  	void *freelist;
>  	struct slab *slab;
> @@ -3102,7 +3147,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  * pointer.
>  */
>  static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> -			unsigned long addr, struct kmem_cache_cpu *c)
> +			unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
>  {
>  	void *p;
>
> @@ -3115,7 +3160,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	c = slub_get_cpu_ptr(s->cpu_slab);
>  #endif
>
> -	p = ___slab_alloc(s, gfpflags, node, addr, c);
> +	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
>  #ifdef CONFIG_PREEMPT_COUNT
>  	slub_put_cpu_ptr(s->cpu_slab);

This modifies the standard slab functions and makes their execution more
expensive. Could you restrict the modifications to the kmalloc subsystem?
kmem_cache_alloc() and friends do not do any rounding up to power-of-two
sizes.

What is happening here is that you pass the kmalloc object size through
the kmem_cache_alloc functions so that the regular allocation functions'
debug code can then save the kmalloc-specific object request size. This
is active even when no debugging options are enabled. Can you avoid that?

Have kmalloc do the object allocation without passing the request size
through, and then add the original size info to the debug field later,
after execution continues in the kmalloc functions?
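
A rough sketch of that direction, for illustration only (the function
names here are hypothetical, not from the patch; kmalloc_record_orig_size()
stands in for whatever helper would write the value into the object's
tracking area):

#include <linux/slab.h>

/*
 * Sketch: the regular allocation path stays untouched and the requested
 * size is recorded afterwards, and only when the cache actually keeps
 * debug metadata around.
 */
static __always_inline void *kmalloc_track_waste(struct kmem_cache *s,
						 gfp_t flags, size_t orig_size)
{
	/* Regular allocation, no extra argument threaded through. */
	void *object = kmem_cache_alloc(s, flags);

	/* Touch debug metadata only when the cache stores it. */
	if (object && unlikely(s->flags & SLAB_STORE_USER))
		kmalloc_record_orig_size(s, object, orig_size);

	return object;
}

That way kmem_cache_alloc() and the __slab_alloc() paths would not have
to carry orig_size at all; only the kmalloc entry points and the
debug-only recording would change.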