From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Tue, 5 Dec 2023 09:20:17 +0900
Subject: Re: [PATCH v5 7/9] slub: Optimize deactivate_slab()
To: Vlastimil Babka
Cc: chengming.zhou@linux.dev, cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, roman.gushchin@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Chengming Zhou
In-Reply-To: <93dcdf0c-336b-cb20-d646-7a48d872e08c@suse.cz>
References: <20231102032330.1036151-1-chengming.zhou@linux.dev> <20231102032330.1036151-8-chengming.zhou@linux.dev> <93dcdf0c-336b-cb20-d646-7a48d872e08c@suse.cz>
On Tue, Dec 5, 2023 at 2:55 AM Vlastimil Babka wrote:
>
> On 12/3/23 10:23, Hyeonggon Yoo wrote:
> > On Thu, Nov 2, 2023 at 12:25 PM wrote:
> >>
> >> From: Chengming Zhou
> >>
> >> Since the introduction of unfrozen slabs on the cpu partial list, we don't
> >> need to synchronize the slab frozen state under the node list_lock.
> >>
> >> The caller of deactivate_slab() and the caller of __slab_free() won't
> >> manipulate the slab list concurrently.
> >>
> >> So we can take the node list_lock in the last stage if we really need to
> >> manipulate the slab list in this path.
> >>
> >> Signed-off-by: Chengming Zhou
> >> Reviewed-by: Vlastimil Babka
> >> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> >> ---
> >>  mm/slub.c | 79 ++++++++++++++++++-------------------------------------
> >>  1 file changed, 26 insertions(+), 53 deletions(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index bcb5b2c4e213..d137468fe4b9 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2468,10 +2468,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
> >>  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >>                              void *freelist)
> >>  {
> >> -       enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
> >>         struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> >>         int free_delta = 0;
> >> -       enum slab_modes mode = M_NONE;
> >>         void *nextfree, *freelist_iter, *freelist_tail;
> >>         int tail = DEACTIVATE_TO_HEAD;
> >>         unsigned long flags = 0;
> >> @@ -2509,65 +2507,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >>         /*
> >>          * Stage two: Unfreeze the slab while splicing the per-cpu
> >>          * freelist to the head of slab's freelist.
> >> -        *
> >> -        * Ensure that the slab is unfrozen while the list presence
> >> -        * reflects the actual number of objects during unfreeze.
> >> -        *
> >> -        * We first perform cmpxchg holding lock and insert to list
> >> -        * when it succeed. If there is mismatch then the slab is not
> >> -        * unfrozen and number of objects in the slab may have changed.
> >> -        * Then release lock and retry cmpxchg again.
> >>          */
> >> -redo:
> >> -
> >> -       old.freelist = READ_ONCE(slab->freelist);
> >> -       old.counters = READ_ONCE(slab->counters);
> >> -       VM_BUG_ON(!old.frozen);
> >> -
> >> -       /* Determine target state of the slab */
> >> -       new.counters = old.counters;
> >> -       if (freelist_tail) {
> >> -               new.inuse -= free_delta;
> >> -               set_freepointer(s, freelist_tail, old.freelist);
> >> -               new.freelist = freelist;
> >> -       } else
> >> -               new.freelist = old.freelist;
> >> -
> >> -       new.frozen = 0;
> >> +       do {
> >> +               old.freelist = READ_ONCE(slab->freelist);
> >> +               old.counters = READ_ONCE(slab->counters);
> >> +               VM_BUG_ON(!old.frozen);
> >> +
> >> +               /* Determine target state of the slab */
> >> +               new.counters = old.counters;
> >> +               new.frozen = 0;
> >> +               if (freelist_tail) {
> >> +                       new.inuse -= free_delta;
> >> +                       set_freepointer(s, freelist_tail, old.freelist);
> >> +                       new.freelist = freelist;
> >> +               } else {
> >> +                       new.freelist = old.freelist;
> >> +               }
> >> +       } while (!slab_update_freelist(s, slab,
> >> +                               old.freelist, old.counters,
> >> +                               new.freelist, new.counters,
> >> +                               "unfreezing slab"));
> >>
> >> +       /*
> >> +        * Stage three: Manipulate the slab list based on the updated state.
> >> +        */
> >
> > deactivate_slab() might inadvertently put empty slabs into the partial list, like:
> >
> > deactivate_slab()                 __slab_free()
> > cmpxchg(), slab's not empty
> >                                   cmpxchg(), slab's empty
> >                                   and unfrozen
> >                                   spin_lock(&n->list_lock)
> >                                   (slab's empty but not
> >                                   on partial list,
> >                                   spin_unlock(&n->list_lock) and return)
> > spin_lock(&n->list_lock)
> > put slab into partial list
> > spin_unlock(&n->list_lock)
> >
> > IMHO it should be fine in the real world, but just wanted to
> > mention it as it doesn't seem to be intentional.
>
> I've noticed it too during review, but then realized it's not a new
> behavior; the same thing could happen with deactivate_slab() already
> before the series.

Ah, you are right.
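(For readers following along: the quoted hunk above replaces a goto-based retry with a do/while loop. Below is a minimal userspace sketch, not kernel code, of that retry pattern: pack the state that must change atomically into one word, compute the desired new value, and retry the compare-exchange until no concurrent writer interfered. All names here, `pack`, `unfreeze`, `FROZEN_BIT`, are invented for illustration; the kernel's slab_update_freelist() additionally covers the freelist pointer via a double-word compare-exchange.)

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical packing: low bits hold the "inuse" object count, the top
 * bit is the "frozen" flag, so both can change in one atomic update. */
#define FROZEN_BIT (1ULL << 63)

static inline uint64_t pack(uint32_t inuse, bool frozen)
{
	return (uint64_t)inuse | (frozen ? FROZEN_BIT : 0);
}

/* Clear the frozen bit and subtract free_delta objects from inuse without
 * holding any lock; returns the new inuse count so the caller can decide
 * afterwards, under the list lock, where the slab belongs ("stage three"). */
static uint32_t unfreeze(_Atomic uint64_t *counters, uint32_t free_delta)
{
	uint64_t old_w = atomic_load(counters);
	uint64_t new_w;

	do {
		/* Determine target state, mirroring the patch's loop body.
		 * On failure, atomic_compare_exchange_weak reloads old_w
		 * with the current value and we recompute. */
		new_w = pack((uint32_t)(old_w & ~FROZEN_BIT) - free_delta,
			     false);
	} while (!atomic_compare_exchange_weak(counters, &old_w, new_w));

	return (uint32_t)new_w;
}
```

The key property, as in the patch, is that the loop itself takes no lock; only the later list manipulation, when actually needed, acquires the node list_lock.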
> Free slabs on the partial list are supported, we even keep some
> intentionally as long as "n->nr_partial < s->min_partial" (and that check is
> racy too) so no need to try making this more strict.

Agreed.

> > Otherwise it looks good to me!
>
> Good enough for a reviewed-by? :)

Yes,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks!

--
Hyeonggon