From: Joel Fernandes
Date: Mon, 19 Dec 2022 19:55:59 -0500
Subject: Re: [RFC 0/2] srcu: Remove pre-flip memory barrier
To: Mathieu Desnoyers
Cc: linux-kernel@vger.kernel.org, Josh Triplett, Lai Jiangshan,
    "Paul E. McKenney", rcu@vger.kernel.org, Steven Rostedt
In-Reply-To: <659763b0-eee4-10dd-5f4a-37241173809c@efficios.com>
References: <20221218191310.130904-1-joel@joelfernandes.org>
    <589da7c9-5fb7-5f6f-db88-ca464987997e@efficios.com>
    <2da94283-4fce-9aff-ac5d-ba181fa0f008@efficios.com>
    <659763b0-eee4-10dd-5f4a-37241173809c@efficios.com>

On Sun, Dec 18, 2022 at 8:49 PM Mathieu Desnoyers wrote:
[...]
> >>>>>
> >>>>> On 2022-12-18 14:13, Joel Fernandes (Google) wrote:
> >>>>>> Hello, I believe the pre-flip memory barrier is not required. The
> >>>>>> only reason I can give to remove it, other than the possibility
> >>>>>> that it is unnecessary, is to not have extra code that does not
> >>>>>> help. However, since we are issuing a full memory barrier after the
> >>>>>> flip, I cannot say that it hurts to do it anyway.
> >>>>>>
> >>>>>> For this reason, please consider these patches as "informational"
> >>>>>> rather than a "please merge". :-) Though, feel free to consider
> >>>>>> merging if you agree!
> >>>>>>
> >>>>>> All SRCU scenarios pass with these, with 6 hours of testing.
> >>>>>
> >>>>> Hi Joel,
> >>>>>
> >>>>> Please have a look at the comments in my side-rcu implementation [1, 2].
> >>>>> It is similar to what SRCU does (per-cpu counter based grace period
> >>>>> tracking), but implemented for userspace. The comments explain why this
> >>>>> works without the memory barrier you identify as useless in SRCU.
> >>>>>
> >>>>> Following my implementation of side-rcu, I reviewed the SRCU comments
> >>>>> and identified that the barrier "/* E */" appears to be useless. I even
> >>>>> discussed this privately with Paul E. McKenney.
> >>>>>
> >>>>> My implementation and comments go further though, and skip the period
> >>>>> "flip" entirely if the first pass observes that all readers (in both
> >>>>> periods) are quiescent.
> >>>>
> >>>> Actually in SRCU, the first pass scans only 1 index, then does the
> >>>> flip, and the second pass scans the second index.
> >>>> Without doing a flip, an index cannot be scanned for forward progress
> >>>> reasons because it is still "active". So I am curious how you can skip
> >>>> the flip and still scan both indexes? I will dig more into your
> >>>> implementation to learn more.
> >>>
> >>> If we look at the SRCU read-side:
> >>>
> >>> int __srcu_read_lock(struct srcu_struct *ssp)
> >>> {
> >>>         int idx;
> >>>
> >>>         idx = READ_ONCE(ssp->srcu_idx) & 0x1;
> >>>         this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
> >>>         smp_mb(); /* B */  /* Avoid leaking the critical section. */
> >>>         return idx;
> >>> }
> >>>
> >>> If the thread is preempted for a long period of time between the load of
> >>> ssp->srcu_idx and the increment of srcu_lock_count[idx], this means the
> >>> thread can appear as a "new reader" for the idx period at any arbitrary
> >>> time in the future, independently of which period is the current one
> >>> within a future grace period.
> >>>
> >>> As a result, the grace period algorithm needs to inherently support the
> >>> fact that a "new reader" can appear in any of the two periods,
> >>> independently of the current period state.
> >>>
> >>> As a result, this means that while within period "0", we _need_ to allow
> >>> newly coming readers to appear as we scan period "0".
> >>
> >> Sure, it already does handle it, but I believe that is a corner case,
> >> not the norm.
> >>
> >>> As a result, we can simply scan both periods 0/1 for reader quiescence,
> >>> even while new readers appear within those periods.
> >>
> >> I think this is a bit dangerous. Yes, there is the preemption thing you
> >> mentioned above, but that is bounded since you can only have a fixed
> >> number of tasks that underwent that preemption, and it is quite rare in
> >> the sense that each reader would have to get preempted just after
> >> sampling idx but before incrementing the lock count.
> >>
> >> However, if we scan while new readers appear (outside of the above
> >> preemption problem), we can have counter wrap causing a false match
> >> much quicker. The scan loop is:
> >>
> >> check_readers(idx) {
> >>         count_all_unlocks(idx);
> >>         smp_mb();
> >>         count_all_locks(idx);
> >>         bool done = (locks == unlocks);
> >>         if (done) {
> >>                 // readers are done, end scan for this idx.
> >>         } else {
> >>                 // try again later
> >>         }
> >> }
> >>
> >> So if check_readers() got preempted just after the smp_mb(), then you
> >> can have lots of tasks enter and exit the read-side critical section
> >> and increment the locks count. Eventually locks == unlocks will happen,
> >> and it is screwed. Sure, this is also theoretical, but that issue can
> >> be made "worse" by deliberately scanning active readers, especially
> >> when such readers can also nest arbitrarily.
> >>
> >>> As a result, flipping between periods 0/1 is just relevant for forward
> >>> progress, not for correctness.
> >>
> >> Sure, agreed, forward progress.
> >
> > Adding to the last statement: "But also correctness as described above".
>
> Exactly how many entries/exits of the read-side critical section while
> the grace period is preempted do you need to trigger this?

It depends on how many readers are active during the preemption of the
scan code. Say the preemption happened after the per-CPU unlock counts
were totalled. Then AFAICS, if there are N active readers which need the
grace period to wait, you need (2^(number of counter bits) - N)
lock+unlock pairs to happen.

> On a 64-bit system, where 64-bit counters are used, AFAIU this needs to
> be exactly 2^64 read-side critical sections.

Yes, but what about 32-bit systems?
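To make the 32-bit concern concrete, here is a minimal userspace sketch
(not SRCU's actual code; the counter names and the exact preemption point
are hypothetical) of how a stale unlock total plus a wrapped 32-bit lock
total can produce a false "readers are done" match:

/*
 * Hypothetical illustration only -- not SRCU's implementation.
 * The scan latches the unlock total, is preempted, and 2^32 - N
 * complete lock+unlock pairs elapse before it samples the lock
 * total, so the comparison "matches" even though N readers are
 * still inside their critical sections.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t unlocks_sampled = 100;   /* latched before preemption   */
        uint32_t locks = 100 + 3;         /* N = 3 readers still active  */

        /* While the scan sleeps: 2^32 - 3 lock+unlock pairs complete.  */
        locks += UINT32_MAX - 2;          /* UINT32_MAX - 2 == 2^32 - 3  */

        /* Scan resumes and compares against the stale unlock total.    */
        printf("locks == unlocks? %s\n",
               locks == unlocks_sampled ? "yes (false match)" : "no");
        return 0;
}

With 64-bit counters the same wrap needs 2^64 - N pairs, which is why the
question above is mostly about 32-bit systems.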
> There are other synchronization algorithms such as seqlocks which are
> quite happy with much less protection against overflow (using a 32-bit
> counter even on 64-bit architectures).

The seqlock is an interesting point.

> For practical purposes, I suspect this issue is really just theoretical.

I have to ask: what is the benefit of avoiding a flip and scanning
active readers? Is the issue about grace period delay or performance? If
so, it might be worth prototyping that approach and measuring it with
rcutorture/rcuscale. If there is significant benefit over the current
approach, then IMO it is worth exploring.

> Or am I missing your point?

No, I think you are not. Let me know if I missed something.

Thanks,

 - Joel

> >
> >
> > thanks,
> >
> >  - Joel
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
>