Subject: Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu <wuyun.wu@huawei.com>
To: David Rientjes
CC: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
    "open list:SLAB ALLOCATOR", open list
Date: Tue, 11 Aug 2020 09:29:38 +0800
Message-ID: <63ee904c-f6b7-3a00-c51d-3ff0feabc9d6@huawei.com>
References: <20200810080758.940-1-wuyun.wu@huawei.com>
Content-Type: text/plain; charset="utf-8"
On 2020/8/11 3:44, David Rientjes wrote:
> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>
>> From: Abel Wu <wuyun.wu@huawei.com>
>>
>> The commit below is incomplete, as it didn't handle the add_full() part.
>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>
>> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
>> ---
>>  mm/slub.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index fe81773..0b021b7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  		}
>>  	} else {
>>  		m = M_FULL;
>> -		if (kmem_cache_debug(s) && !lock) {
>> +#ifdef CONFIG_SLUB_DEBUG
>> +		if (!lock) {
>>  			lock = 1;
>>  			/*
>>  			 * This also ensures that the scanning of full
>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  			 */
>>  			spin_lock(&n->list_lock);
>>  		}
>> +#endif
>>  	}
>>
>>  	if (l != m) {
>
> This should be functionally safe, but I'm wondering if it would make sense to
> only check for SLAB_STORE_USER here instead of kmem_cache_debug(), since that
> should be the only context in which we need the list_lock for add_full().
> It seems more explicit.

Yes, checking for SLAB_STORE_USER here can also get rid of the noisy macros.
I will resend the patch later; a rough sketch of the reworked hunk is below.

Thanks,
	Abel
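For reference, a rough and untested sketch of what the reworked hunk could look
like, assuming the condition becomes a direct SLAB_STORE_USER test (the resent
patch may use a helper or a slightly different form):

	} else {
		m = M_FULL;
		/*
		 * add_full() only does anything when SLAB_STORE_USER is set,
		 * so that is the only case where the list_lock is needed here.
		 */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
	}

Since SLAB_STORE_USER should not end up set on a cache when CONFIG_SLUB_DEBUG
is disabled, this form would also avoid the #ifdef from the earlier version.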