Date: Mon, 10 Aug 2020 12:44:52 -0700 (PDT)
From: David Rientjes <rientjes@google.com>
To: Abel Wu <wuyun.wu@huawei.com>
Cc: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
    liu.xiang6@zte.com.cn, "open list:SLAB ALLOCATOR", open list
Subject: Re: [PATCH] mm/slub: remove useless kmem_cache_debug
In-Reply-To: <20200810080758.940-1-wuyun.wu@huawei.com>
References: <20200810080758.940-1-wuyun.wu@huawei.com>
User-Agent: Alpine 2.23 (DEB 453 2020-06-18)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:

> From: Abel Wu <wuyun.wu@huawei.com>
>
> The commit below is incomplete, as it didn't handle the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
> remove_full()")
>
> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index fe81773..0b021b7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  		}
>  	} else {
>  		m = M_FULL;
> -		if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> +		if (!lock) {
>  			lock = 1;
>  			/*
>  			 * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  			 */
>  			spin_lock(&n->list_lock);
>  		}
> +#endif
>  	}
>
>  	if (l != m) {

This should be functionally safe. I wonder, though, whether it would make
more sense to check only for SLAB_STORE_USER here instead of
kmem_cache_debug(), since that should be the only context in which we need
the list_lock for add_full(). It seems more explicit.
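
For illustration, that alternative might look something like the following
(an untested sketch; it leans on add_full() returning early when
SLAB_STORE_USER is not set, whereas kmem_cache_debug() checks all of
SLAB_DEBUG_FLAGS):

	} else {
		m = M_FULL;
		/*
		 * add_full() is a no-op unless SLAB_STORE_USER is set, so
		 * that should be the only case that needs to take
		 * n->list_lock on this path.
		 */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
	}

That would also avoid taking the lock for caches that have other debug
flags set but don't track full slabs.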