From: Chengming Zhou
Date: Sun, 3 Dec 2023 18:26:20 +0800
Subject: Re: [PATCH v5 7/9] slub: Optimize deactivate_slab()
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    roman.gushchin@linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Chengming Zhou
References: <20231102032330.1036151-1-chengming.zhou@linux.dev>
    <20231102032330.1036151-8-chengming.zhou@linux.dev>

On 2023/12/3 17:23, Hyeonggon Yoo wrote:
> On Thu, Nov 2, 2023 at 12:25 PM wrote:
>>
>> From: Chengming Zhou
>>
>> Since the introduction of unfrozen slabs on the cpu partial list, we
>> don't need to synchronize the slab frozen state under the node
>> list_lock.
>>
>> The caller of deactivate_slab() and the caller of __slab_free() won't
>> manipulate the slab list concurrently.
>>
>> So we can take the node list_lock in the last stage if we really need
>> to manipulate the slab list in this path.
>>
>> Signed-off-by: Chengming Zhou
>> Reviewed-by: Vlastimil Babka
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> ---
>>  mm/slub.c | 79 ++++++++++++++++++-------------------------------
>>  1 file changed, 26 insertions(+), 53 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index bcb5b2c4e213..d137468fe4b9 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2468,10 +2468,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
>>  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>>                             void *freelist)
>>  {
>> -       enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
>>         struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>>         int free_delta = 0;
>> -       enum slab_modes mode = M_NONE;
>>         void *nextfree, *freelist_iter, *freelist_tail;
>>         int tail = DEACTIVATE_TO_HEAD;
>>         unsigned long flags = 0;
>> @@ -2509,65 +2507,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>>         /*
>>          * Stage two: Unfreeze the slab while splicing the per-cpu
>>          * freelist to the head of slab's freelist.
>> -        *
>> -        * Ensure that the slab is unfrozen while the list presence
>> -        * reflects the actual number of objects during unfreeze.
>> -        *
>> -        * We first perform cmpxchg holding lock and insert to list
>> -        * when it succeed. If there is mismatch then the slab is not
>> -        * unfrozen and number of objects in the slab may have changed.
>> -        * Then release lock and retry cmpxchg again.
>>          */
>> -redo:
>> -
>> -       old.freelist = READ_ONCE(slab->freelist);
>> -       old.counters = READ_ONCE(slab->counters);
>> -       VM_BUG_ON(!old.frozen);
>> -
>> -       /* Determine target state of the slab */
>> -       new.counters = old.counters;
>> -       if (freelist_tail) {
>> -               new.inuse -= free_delta;
>> -               set_freepointer(s, freelist_tail, old.freelist);
>> -               new.freelist = freelist;
>> -       } else
>> -               new.freelist = old.freelist;
>> -
>> -       new.frozen = 0;
>> +       do {
>> +               old.freelist = READ_ONCE(slab->freelist);
>> +               old.counters = READ_ONCE(slab->counters);
>> +               VM_BUG_ON(!old.frozen);
>> +
>> +               /* Determine target state of the slab */
>> +               new.counters = old.counters;
>> +               new.frozen = 0;
>> +               if (freelist_tail) {
>> +                       new.inuse -= free_delta;
>> +                       set_freepointer(s, freelist_tail, old.freelist);
>> +                       new.freelist = freelist;
>> +               } else {
>> +                       new.freelist = old.freelist;
>> +               }
>> +       } while (!slab_update_freelist(s, slab,
>> +                                      old.freelist, old.counters,
>> +                                      new.freelist, new.counters,
>> +                                      "unfreezing slab"));
>>
>> +       /*
>> +        * Stage three: Manipulate the slab list based on the updated state.
>> +        */
>
> deactivate_slab() might inadvertently put empty slabs into the partial
> list, like:
>
> deactivate_slab()               __slab_free()
> cmpxchg(), slab's not empty
>                                 cmpxchg(), slab's empty
>                                 and unfrozen

Hi,

Sorry, but I don't get how __slab_free() can see the slab empty here,
since the slab is not empty on the deactivate_slab() path, and it can't
be used by any CPU at that time?

Thanks for the review!

>                                 spin_lock(&n->list_lock)
>                                 (slab's empty but not
>                                 on partial list,
>
>                                 spin_unlock(&n->list_lock) and return)
> spin_lock(&n->list_lock)
> put slab into partial list
> spin_unlock(&n->list_lock)
>
> IMHO it should be fine in the real world, but I just wanted to mention
> it, as it doesn't seem to be intentional.
>
> Otherwise it looks good to me!
>
>>         if (!new.inuse && n->nr_partial >= s->min_partial) {
>> -               mode = M_FREE;
>> +               stat(s, DEACTIVATE_EMPTY);
>> +               discard_slab(s, slab);
>> +               stat(s, FREE_SLAB);
>>         } else if (new.freelist) {
>> -               mode = M_PARTIAL;
>> -               /*
>> -                * Taking the spinlock removes the possibility that
>> -                * acquire_slab() will see a slab that is frozen
>> -                */
>>                 spin_lock_irqsave(&n->list_lock, flags);
>> -       } else {
>> -               mode = M_FULL_NOLIST;
>> -       }
>> -
>> -
>> -       if (!slab_update_freelist(s, slab,
>> -                                 old.freelist, old.counters,
>> -                                 new.freelist, new.counters,
>> -                                 "unfreezing slab")) {
>> -               if (mode == M_PARTIAL)
>> -                       spin_unlock_irqrestore(&n->list_lock, flags);
>> -               goto redo;
>> -       }
>> -
>> -
>> -       if (mode == M_PARTIAL) {
>>                 add_partial(n, slab, tail);
>>                 spin_unlock_irqrestore(&n->list_lock, flags);
>>                 stat(s, tail);
>> -       } else if (mode == M_FREE) {
>> -               stat(s, DEACTIVATE_EMPTY);
>> -               discard_slab(s, slab);
>> -               stat(s, FREE_SLAB);
>> -       } else if (mode == M_FULL_NOLIST) {
>> +       } else {
>>                 stat(s, DEACTIVATE_FULL);
>>         }
>>  }
>> --
>> 2.20.1
>>
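
As an aside, the new do { } while (!slab_update_freelist(...)) in the
patch is the usual lockless read-modify-cmpxchg retry loop: snapshot the
state, compute the unfrozen target state, publish it with a single
compare-and-exchange, and retry from a fresh snapshot on failure; only
after it succeeds is n->list_lock taken. Below is a minimal standalone
C11 sketch of that shape -- illustrative only, not the kernel code:
slab_update_freelist() is really a double-word cmpxchg over the
(freelist, counters) pair, and the fake_slab type, FROZEN_BIT layout,
and unfreeze() helper here are invented for the example.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct fake_slab {
        /* Packs inuse (low 32 bits) and a frozen flag into one word. */
        _Atomic uint64_t counters;
};

#define FROZEN_BIT      (UINT64_C(1) << 32)

/* Unfreeze the slab, subtracting the objects drained from the per-cpu
 * freelist. */
static void unfreeze(struct fake_slab *slab, uint32_t free_delta)
{
        uint64_t old, new;

        old = atomic_load(&slab->counters);
        do {
                /* Determine the target state: clear frozen, fix up inuse. */
                new = (old & ~FROZEN_BIT) - free_delta;
                /*
                 * On failure, atomic_compare_exchange_weak() reloads the
                 * current value into 'old' and the loop recomputes 'new'
                 * from that fresh snapshot -- the same shape as the
                 * patch's do { } while loop (the real code also rebuilds
                 * new.freelist each iteration). Only after this succeeds
                 * would the caller take n->list_lock, as stage three does.
                 */
        } while (!atomic_compare_exchange_weak(&slab->counters, &old, new));
}

int main(void)
{
        struct fake_slab slab = { .counters = FROZEN_BIT | 5 };

        unfreeze(&slab, 2);     /* expect: frozen=0 inuse=3 */
        printf("frozen=%d inuse=%u\n",
               (int)((atomic_load(&slab.counters) & FROZEN_BIT) != 0),
               (unsigned)(atomic_load(&slab.counters) & 0xffffffffu));
        return 0;
}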