Subject: Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu
To: David Rientjes
CC: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
    "open list:SLAB ALLOCATOR", open list
Date: Tue, 11 Aug 2020 09:50:10 +0800
In-Reply-To: <63ee904c-f6b7-3a00-c51d-3ff0feabc9d6@huawei.com>
References: <20200810080758.940-1-wuyun.wu@huawei.com>
 <63ee904c-f6b7-3a00-c51d-3ff0feabc9d6@huawei.com>
On 2020/8/11 9:29, Abel Wu wrote:
> On 2020/8/11 3:44, David Rientjes wrote:
>> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>>
>>> From: Abel Wu
>>>
>>> The commit below is incomplete, as it didn't handle the add_full() part.
>>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>>
>>> Signed-off-by: Abel Wu
>>> ---
>>>  mm/slub.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index fe81773..0b021b7 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  		}
>>>  	} else {
>>>  		m = M_FULL;
>>> -		if (kmem_cache_debug(s) && !lock) {
>>> +#ifdef CONFIG_SLUB_DEBUG
>>> +		if (!lock) {
>>>  			lock = 1;
>>>  			/*
>>>  			 * This also ensures that the scanning of full
>>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  			 */
>>>  			spin_lock(&n->list_lock);
>>>  		}
>>> +#endif
>>>  	}
>>>
>>>  	if (l != m) {
>>
>> This should be functionally safe. I'm wondering, however, if it would make
>> sense to only check for SLAB_STORE_USER here instead of kmem_cache_debug(),
>> since that should be the only context in which we need the list_lock for
>> add_full(). It seems more explicit.
>>
> Yes, checking for SLAB_STORE_USER here can also get rid of the noisy macros.
> I will resend the patch later.
>
> Thanks,
> Abel
>
Wait... It still needs CONFIG_SLUB_DEBUG to wrap around it, but it can avoid the
locking overhead when SLAB_STORE_USER is not set (as you said). I will keep
CONFIG_SLUB_DEBUG in my new patch.
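
For reference, a rough sketch (hypothetical, not the actual resend) of how the
reworked hunk could look: the CONFIG_SLUB_DEBUG guard stays so the whole block
compiles away on !CONFIG_SLUB_DEBUG builds, while the list_lock is only taken
when SLAB_STORE_USER is set, i.e. when add_full() will actually record the slab
on the node's full list:

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		/*
		 * Only take n->list_lock when full slabs are actually
		 * tracked (SLAB_STORE_USER); otherwise add_full() does
		 * not touch the full list and the lock is pure overhead.
		 */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
#endif
	}

The exact form of the flag test in the resent patch may differ; this only
illustrates the SLAB_STORE_USER check discussed above.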