Date: Wed, 3 Jan 2024 09:56:42 -0800
From: "Paul E. McKenney"
To: Uladzislau Rezki
Cc: RCU, Neeraj Upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML,
    Oleksiy Avramchenko, Frederic Weisbecker
Subject: Re: [PATCH v3 4/7] rcu: Improve handling of synchronize_rcu() users
Reply-To: paulmck@kernel.org
References: <579f86e0-e03e-4ab3-9a85-a62064bcf2a1@paulmck-laptop>
 <650554ca-17f6-4119-ab4e-42239c958c73@paulmck-laptop>
 <45a15103-0302-4e7d-b522-e17e8b8ac927@paulmck-laptop>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 03, 2024 at 06:35:20PM +0100, Uladzislau Rezki wrote:
> On Wed, Jan 03, 2024 at 06:47:30AM -0800, Paul E. McKenney wrote:
> > On Wed, Jan 03, 2024 at 02:16:00PM +0100, Uladzislau Rezki wrote:
> > > On Tue, Jan 02, 2024 at 11:25:13AM -0800, Paul E. McKenney wrote:
> > > > On Tue, Jan 02, 2024 at 01:52:26PM +0100, Uladzislau Rezki wrote:
> > > > > Hello, Paul!
> > > > >
> > > > > Sorry for the late answer, it is because of holidays :)
> > > > >
> > > > > > > > > The problem is that we are limited in the number of "wait-heads"
> > > > > > > > > which we add as a marker node for this/current grace period. If
> > > > > > > > > there are more clients and there is no wait-head available, it
> > > > > > > > > means that the system, i.e. the deferred kworker, is slow in
> > > > > > > > > processing callbacks, thus all wait-nodes are in use.
> > > > > > > > >
> > > > > > > > > That is why we need an extra grace period. Basically to repeat
> > > > > > > > > our try one more time, i.e. it might be that the current grace
> > > > > > > > > period is not able to handle users because the system is running
> > > > > > > > > really slowly, but this is rather a corner case and is not a
> > > > > > > > > problem.
> > > > > > > >
> > > > > > > > But in that case, the real issue is not the need for an extra grace
> > > > > > > > period, but rather the need for the wakeup processing to happen,
> > > > > > > > correct?  Or am I missing something subtle here?
> > > > > > > >
> > > > > > > Basically, yes. If we had a spare dummy-node we could process the
> > > > > > > users by the current GP (no need for an extra one). Why we may not
> > > > > > > have one is because, as you pointed out:
> > > > > > >
> > > > > > > - wake-up issue, i.e. wake-up time + when we are on_cpu;
> > > > > > > - slow list processing, for example due to priority. The kworker is
> > > > > > >   not given enough CPU time to make progress, thus "dummy-nodes" are
> > > > > > >   not released in time for reuse.
> > > > > > >
> > > > > > > Therefore, an extra GP is requested if there is a high flow of
> > > > > > > synchronize_rcu() users and the kworker is not able to make progress
> > > > > > > in time.
> > > > > > >
> > > > > > > For example, 60K+ parallel synchronize_rcu() users will trigger it.
> > > > > >
> > > > > > OK, but what bad thing would happen if that was moved to precede the
> > > > > > rcu_seq_start(&rcu_state.gp_seq)?  That way, the requested grace period
> > > > > > would be the same as the one that is just now starting.
> > > > > >
> > > > > > Something like this?
> > > > > >
> > > > > > 	start_new_poll = rcu_sr_normal_gp_init();
> > > > > >
> > > > > > 	/* Record GP times before starting GP, hence rcu_seq_start(). */
> > > > > > 	rcu_seq_start(&rcu_state.gp_seq);
> > > > > > 	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
> > > > > >
> > > > > I had a concern about the case when rcu_sr_normal_gp_init() handles what
> > > > > we currently have, in terms of requests. Right after that there are
> > > > > extra sync requests which invoke start_poll_synchronize_rcu(), but since
> > > > > a GP has been requested before, it will not request an extra one. So the
> > > > > "last" incoming users might not be processed.
> > > > >
> > > > > That is why I have placed the rcu_sr_normal_gp_init() after gp_seq is
> > > > > updated.
> > > > >
> > > > > I may be missing something, so please comment. Apart from that, we can
> > > > > move it as you proposed.
> > > >
> > > > Couldn't that possibility be handled by a check in rcu_gp_cleanup()?
> > > >
> > > It is controlled by the caller anyway, i.e. if a new GP is needed.
> > >
> > > I am not 100% sure it is as straightforward as it might look to handle
> > > it in the rcu_sr_normal_gp_cleanup() function. At least I see that we
> > > need to access the first element of the llist and find out whether it is
> > > a wait-dummy-head or not. If it is not, we know there are extra incoming
> > > calls.
> > >
> > > So that way requires an extra call to start_poll_synchronize_rcu().
> >
> > If this is invoked early enough in rcu_gp_cleanup(), all that needs to
> > happen is to set the need_gp flag.  Plus you can count the number of
> > requests, and snapshot that number at rcu_gp_init() time and check to
> > see if it changed at rcu_gp_cleanup() time.  Later on, this could be
> > used to reduce the number of wakeups, correct?
> >
> You mean instead of waking up the gp-kthread, just continue processing new
> users if they exist? If so, I think we can implement it as separate patches.

Agreed, this is an optimization, and thus should be a separate patch.

> > > I can add a comment about your concern and we can find the best approach
> > > later, if it is OK with you!
> >
> > I agree that this should be added via a later patch, though I have not
> > yet given up on the possibility that this patch might be simple enough
> > to be later in this same series.
> >
> Maybe there is a small misunderstanding. Please note that the
> rcu_sr_normal_gp_init() function does not request any new GP, i.e. our
> approach does not make any extra GP requests. That happens only if there is
> no dummy-wait-head available, as we discussed earlier.

The start_poll_synchronize_rcu() added by your patch 4/7 will request
an additional grace period because it is invoked after rcu_seq_start()
is called, correct?  Or am I missing something subtle here?

							Thanx, Paul
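
A minimal sketch of the ordering Paul proposes in the thread above, for
readers less familiar with the grace-period kthread's init path: the four
lines quoted in the thread are reproduced as-is, while the enclosing
function, its comments, and the trailing start_poll_synchronize_rcu()
fallback are simplified assumptions added for illustration, not the actual
kernel implementation.

	/*
	 * Illustrative only: a stripped-down view of the proposed
	 * rcu_gp_init() ordering discussed in the thread.  The real
	 * function does far more than this.
	 */
	static bool rcu_gp_init_sketch(void)
	{
		bool start_new_poll;

		/*
		 * Pick up the synchronize_rcu() users queued so far, so the
		 * grace period that is about to start is the one that serves
		 * them.
		 */
		start_new_poll = rcu_sr_normal_gp_init();

		/* Record GP times before starting GP, hence rcu_seq_start(). */
		rcu_seq_start(&rcu_state.gp_seq);
		ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);

		/*
		 * start_new_poll is true only when rcu_sr_normal_gp_init()
		 * could not grab a free wait-head (all dummy-nodes are still
		 * in use by the slow kworker).  In that rare case, request one
		 * more grace period so the unserved users get another try.
		 */
		if (start_new_poll)
			(void)start_poll_synchronize_rcu();

		return true;
	}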
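
Paul's cleanup-side suggestion, snapshotting the request count at
rcu_gp_init() time and re-checking it in rcu_gp_cleanup(), could look
roughly like the following.  The sr_req_count/sr_req_snap names and both
helpers are hypothetical, invented here purely to illustrate the idea; they
are not part of the patch series or of the kernel.

	/*
	 * Hypothetical illustration of "count the requests, snapshot at GP
	 * init, compare at GP cleanup".  None of these names exist upstream.
	 */
	static atomic_long_t sr_req_count;	/* bumped per synchronize_rcu() request */
	static long sr_req_snap;		/* value seen when the current GP started */

	/* Called from rcu_gp_init(): remember how many requests were seen. */
	static void rcu_sr_snap_requests(void)
	{
		sr_req_snap = atomic_long_read(&sr_req_count);
	}

	/*
	 * Called early in rcu_gp_cleanup(): if new requests arrived while the
	 * grace period was running, the caller would set the need_gp flag
	 * instead of relying on an extra start_poll_synchronize_rcu(); later
	 * on, the same count could also be used to skip needless wakeups.
	 */
	static bool rcu_sr_new_requests_arrived(void)
	{
		return atomic_long_read(&sr_req_count) != sr_req_snap;
	}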