From: Xunlei Pang <xlpang@linux.alibaba.com>
To: Vlastimil Babka, Christoph Lameter, Wen Yang, Roman Gushchin, Pekka Enberg, Konstantin Khlebnikov, David Rientjes, Xunlei Pang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/3] mm/slub: Use percpu partial free counter
Date: Mon, 10 Aug 2020 20:17:52 +0800
Message-Id: <1597061872-58724-4-git-send-email-xlpang@linux.alibaba.com>
In-Reply-To: <1597061872-58724-1-git-send-email-xlpang@linux.alibaba.com>
References: <1597061872-58724-1-git-send-email-xlpang@linux.alibaba.com>
List-ID: <linux-kernel.vger.kernel.org>

The only concern of introducing a partial counter
is that partial_free_objs may cause atomic operation contention when concurrent __slab_free() calls hit the same slab cache. This patch changes it to a percpu counter to avoid that.

Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang
---
 mm/slab.h |  2 +-
 mm/slub.c | 38 +++++++++++++++++++++++++++++++-------
 2 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index c85e2fa..a709a70 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -616,7 +616,7 @@ struct kmem_cache_node {
 #ifdef CONFIG_SLUB
 	unsigned long nr_partial;
 	struct list_head partial;
-	atomic_long_t partial_free_objs;
+	atomic_long_t __percpu *partial_free_objs;
 	atomic_long_t partial_total_objs;
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_t nr_slabs;
diff --git a/mm/slub.c b/mm/slub.c
index 25a4421..f6fc60b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1775,11 +1775,21 @@ static void discard_slab(struct kmem_cache *s, struct page *page)
 /*
  * Management of partially allocated slabs.
  */
+static inline long get_partial_free(struct kmem_cache_node *n)
+{
+	long nr = 0;
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		nr += atomic_long_read(per_cpu_ptr(n->partial_free_objs, cpu));
+
+	return nr;
+}
 static inline void
 __update_partial_free(struct kmem_cache_node *n, long delta)
 {
-	atomic_long_add(delta, &n->partial_free_objs);
+	atomic_long_add(delta, this_cpu_ptr(n->partial_free_objs));
 }
 
 static inline void
@@ -2429,12 +2439,12 @@ static unsigned long partial_counter(struct kmem_cache_node *n,
 	unsigned long ret = 0;
 
 	if (item == PARTIAL_FREE) {
-		ret = atomic_long_read(&n->partial_free_objs);
+		ret = get_partial_free(n);
 	} else if (item == PARTIAL_TOTAL) {
 		ret = atomic_long_read(&n->partial_total_objs);
 	} else if (item == PARTIAL_INUSE) {
 		ret = atomic_long_read(&n->partial_total_objs) -
-				atomic_long_read(&n->partial_free_objs);
+				get_partial_free(n);
 		if ((long)ret < 0)
 			ret = 0;
 	}
@@ -3390,19 +3400,28 @@ static inline int calculate_order(unsigned int size)
 	return -ENOSYS;
 }
 
-static void
+static int
 init_kmem_cache_node(struct kmem_cache_node *n)
 {
+	int cpu;
+
 	n->nr_partial = 0;
 	spin_lock_init(&n->list_lock);
 	INIT_LIST_HEAD(&n->partial);
-	atomic_long_set(&n->partial_free_objs, 0);
+
+	n->partial_free_objs = alloc_percpu(atomic_long_t);
+	if (!n->partial_free_objs)
+		return -ENOMEM;
+	for_each_possible_cpu(cpu)
+		atomic_long_set(per_cpu_ptr(n->partial_free_objs, cpu), 0);
 	atomic_long_set(&n->partial_total_objs, 0);
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
 	atomic_long_set(&n->total_objects, 0);
 	INIT_LIST_HEAD(&n->full);
 #endif
+
+	return 0;
 }
 
 static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
@@ -3463,7 +3482,7 @@ static void early_kmem_cache_node_alloc(int node)
 	page->inuse = 1;
 	page->frozen = 0;
 	kmem_cache_node->node[node] = n;
-	init_kmem_cache_node(n);
+	BUG_ON(init_kmem_cache_node(n) < 0);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
 	/*
@@ -3481,6 +3500,7 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
 
 	for_each_kmem_cache_node(s, node, n) {
 		s->node[node] = NULL;
+		free_percpu(n->partial_free_objs);
 		kmem_cache_free(kmem_cache_node, n);
 	}
 }
@@ -3511,7 +3531,11 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 			return 0;
 		}
 
-		init_kmem_cache_node(n);
+		if (init_kmem_cache_node(n) < 0) {
+			free_kmem_cache_nodes(s);
+			return 0;
+		}
+
 		s->node[node] = n;
 	}
 	return 1;
-- 
1.8.3.1