Subject: Re: [PATCH] mm/slub: make add_full() condition more explicit
To: wuyun.wu@huawei.com, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton
Cc: liu.xiang6@zte.com.cn, "open list:SLAB ALLOCATOR", open list
References: <20200811020240.1231-1-wuyun.wu@huawei.com>
From: Vlastimil Babka
Message-ID: <3ef24214-38c7-1238-8296-88caf7f48ab6@suse.cz>
Date: Fri, 16 Oct 2020 18:58:30 +0200
In-Reply-To: <20200811020240.1231-1-wuyun.wu@huawei.com>

On 8/11/20 4:02 AM, wuyun.wu@huawei.com wrote:
> From: Abel Wu
>
> The commit below is incomplete, as it didn't handle
> the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
> remove_full()")
>
> This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(),
> since that should be the only context in which we need the list_lock for
> add_full().
>
> Signed-off-by: Abel Wu
> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f226d66408ee..df93a5a0e9a4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  		}
>  	} else {
>  		m = M_FULL;
> -		if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> +		if ((s->flags & SLAB_STORE_USER) && !lock) {
>  			lock = 1;
>  			/*
>  			 * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  			 */
>  			spin_lock(&n->list_lock);
>  		}
> +#endif
>  	}
>
>  	if (l != m) {
>

Hm, I missed this, otherwise I would have suggested the following:

-----8<-----
From 0b43c7e20c81241f4b74cdb366795fc0b94a25c9 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Fri, 16 Oct 2020 18:46:06 +0200
Subject: [PATCH] mm, slub: use kmem_cache_debug_flags() in deactivate_slab()

Commit 9cf7a1118365 ("mm/slub: make add_full() condition more explicit")
replaced an unnecessarily generic kmem_cache_debug(s) check with an
explicit check of SLAB_STORE_USER and #ifdef CONFIG_SLUB_DEBUG.

We can achieve the same specific check with the recently added
kmem_cache_debug_flags(), which removes the #ifdef and restores the
no-branch-overhead benefit of the static key check when slub debugging
is not enabled.

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 61d0d2968413..28d78238f31e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2245,8 +2245,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		}
 	} else {
 		m = M_FULL;
-#ifdef CONFIG_SLUB_DEBUG
-		if ((s->flags & SLAB_STORE_USER) && !lock) {
+		if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2255,7 +2254,6 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			 */
 			spin_lock(&n->list_lock);
 		}
-#endif
 	}

 	if (l != m) {
-- 
2.28.0
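
For context, kmem_cache_debug_flags() is the helper in mm/slub.c that the
patch above relies on; a rough sketch of its shape (the exact definition in
the tree being patched may differ slightly) is a static-key-gated flag test:

static inline bool kmem_cache_debug_flags(struct kmem_cache *s,
					  slab_flags_t flags)
{
#ifdef CONFIG_SLUB_DEBUG
	/* only meaningful for flags handled by slub_debug parsing */
	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
	/* static key: patched to a no-op branch when slub_debug is off */
	if (static_branch_unlikely(&slub_debug_enabled))
		return s->flags & flags;
#endif
	return false;
}

So kmem_cache_debug_flags(s, SLAB_STORE_USER) keeps the explicit
SLAB_STORE_USER check from Abel's patch, while the static key keeps the
deactivate_slab() fast path free of overhead when debugging is disabled.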