Date: Tue, 13 Oct 2020 01:20:08 +0200
From: Frederic Weisbecker
To: "Joel Fernandes (Google)"
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Josh Triplett,
	Lai Jiangshan, Madhuparna Bhowmik, Mathieu Desnoyers,
	neeraj.iitr10@gmail.com, "Paul E. McKenney", rcu@vger.kernel.org,
	Steven Rostedt, "Uladzislau Rezki (Sony)"
Subject: Re: [PATCH v6 2/4] rcu/segcblist: Add counters to segcblist datastructure
Message-ID: <20201012232008.GA47577@lothringen>
References: <20200923152211.2403352-1-joel@joelfernandes.org>
	<20200923152211.2403352-3-joel@joelfernandes.org>
In-Reply-To: <20200923152211.2403352-3-joel@joelfernandes.org>

On Wed, Sep 23, 2020 at 11:22:09AM -0400, Joel Fernandes (Google) wrote:
> +/* Return number of callbacks in a segment of the segmented callback list. */
> +static void rcu_segcblist_add_seglen(struct rcu_segcblist *rsclp, int seg, long v)
> +{
> +#ifdef CONFIG_RCU_NOCB_CPU
> +	smp_mb__before_atomic(); /* Up to the caller! */
> +	atomic_long_add(v, &rsclp->seglen[seg]);
> +	smp_mb__after_atomic(); /* Up to the caller! */
> +#else
> +	smp_mb(); /* Up to the caller! */
> +	WRITE_ONCE(rsclp->seglen[seg], rsclp->seglen[seg] + v);
> +	smp_mb(); /* Up to the caller! */
> +#endif
> +}

I know that these "Up to the caller!" comments come from the existing len
functions, but perhaps we should explain a bit more what they order against
and what they pair with. Also, why do we need a barrier both before _and_
after the update? And finally, do we have the same ordering requirements as
for the unsegmented len field?

> +
> +/* Move from's segment length to to's segment. */
> +static void rcu_segcblist_move_seglen(struct rcu_segcblist *rsclp, int from, int to)
> +{
> +	long len;
> +
> +	if (from == to)
> +		return;
> +
> +	len = rcu_segcblist_get_seglen(rsclp, from);
> +	if (!len)
> +		return;
> +
> +	rcu_segcblist_add_seglen(rsclp, to, len);
> +	rcu_segcblist_set_seglen(rsclp, from, 0);
> +}
> +

[...]
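For what it's worth, here's a userspace C11 sketch of the barrier pattern in
question, with atomic_thread_fence() standing in for the kernel barriers
(hypothetical names, not the actual kernel code):

```c
#include <stdatomic.h>

#define RCU_CBLIST_NSEGS 4

/* Hypothetical userspace analog of rcu_segcblist_add_seglen():
 * a full fence on each side of a relaxed counter update, mirroring
 * the smp_mb__before_atomic()/smp_mb__after_atomic() pairing. */
static atomic_long seglen[RCU_CBLIST_NSEGS];

static void add_seglen(int seg, long v)
{
	atomic_thread_fence(memory_order_seq_cst);	/* ~ smp_mb__before_atomic() */
	atomic_fetch_add_explicit(&seglen[seg], v, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* ~ smp_mb__after_atomic() */
}
```

With a fence on both sides, the update is ordered against both the caller's
earlier and later accesses, whatever the caller's requirements turn out to be.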
> @@ -245,6 +283,7 @@ void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
>  			   struct rcu_head *rhp)
>  {
>  	rcu_segcblist_inc_len(rsclp);
> +	rcu_segcblist_inc_seglen(rsclp, RCU_NEXT_TAIL);
>  	smp_mb(); /* Ensure counts are updated before callback is enqueued. */

Since inc_len, and now inc_seglen as well, have two full barriers surrounding
the add, we can probably spare the above smp_mb()?

>  	rhp->next = NULL;
>  	WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
> @@ -274,27 +313,13 @@ bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp,
>  	for (i = RCU_NEXT_TAIL; i > RCU_DONE_TAIL; i--)
>  		if (rsclp->tails[i] != rsclp->tails[i - 1])
>  			break;
> +	rcu_segcblist_inc_seglen(rsclp, i);
>  	WRITE_ONCE(*rsclp->tails[i], rhp);
>  	for (; i <= RCU_NEXT_TAIL; i++)
>  		WRITE_ONCE(rsclp->tails[i], &rhp->next);
>  	return true;
>  }
> 
> @@ -403,6 +437,7 @@ void rcu_segcblist_advance(struct rcu_segcblist *rsclp, unsigned long seq)
>  		if (ULONG_CMP_LT(seq, rsclp->gp_seq[i]))
>  			break;
>  		WRITE_ONCE(rsclp->tails[RCU_DONE_TAIL], rsclp->tails[i]);
> +		rcu_segcblist_move_seglen(rsclp, i, RCU_DONE_TAIL);

Do we still need the same number of full barriers (those contained in add(),
called from move()) here? It's called in the reverse of the usual order
(write the queue, then the len). If I trust the comment in
rcu_segcblist_enqueue(), the point of the barrier is to make the length
visible before the new callback, for the sake of rcu_barrier() (although that
concerns len and not seglen). But here above, the unsegmented length doesn't
change. I could understand a write barrier between add_seglen(x, i) and
set_seglen(0, RCU_DONE_TAIL), but I couldn't find a matching pair either.

>  	}
> 
>  	/* If no callbacks moved, nothing more need be done. */
> @@ -423,6 +458,7 @@ void rcu_segcblist_advance(struct rcu_segcblist *rsclp, unsigned long seq)
>  		if (rsclp->tails[j] == rsclp->tails[RCU_NEXT_TAIL])
>  			break;  /* No more callbacks. */
>  		WRITE_ONCE(rsclp->tails[j], rsclp->tails[i]);
> +		rcu_segcblist_move_seglen(rsclp, i, j);

Same question here (feel free to reply "same answer" :o)

Thanks!
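P.S. The enqueue-side ordering that the comment in rcu_segcblist_enqueue()
describes, as a userspace C11 sketch (hypothetical names, not the kernel
code): the count update must be globally visible before the callback it
accounts for.

```c
#include <stdatomic.h>

/* Hypothetical analog of the rcu_segcblist_enqueue() ordering:
 * update the count, full fence, then publish the callback. A reader
 * that observes the published callback also observes the updated
 * count -- the property the rcu_barrier() comment relies on for len. */
static atomic_long len;
static _Atomic(void *) head;

static void enqueue(void *rhp)
{
	atomic_fetch_add_explicit(&len, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* ~ smp_mb() */
	atomic_store_explicit(&head, rhp, memory_order_relaxed);
}
```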