2020-11-03 14:28:14

by Joel Fernandes

Subject: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

With earlier patches, the negative counting of the unsegmented list
cannot be used to adjust the segmented one. To fix this, sample the
unsegmented length in advance, and use it after CB execution to adjust
the segmented list's length.

Reviewed-by: Frederic Weisbecker <[email protected]>
Suggested-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Joel Fernandes (Google) <[email protected]>
---
kernel/rcu/srcutree.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 0f23d20d485a..79b7081143a7 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1160,6 +1160,7 @@ static void srcu_advance_state(struct srcu_struct *ssp)
*/
static void srcu_invoke_callbacks(struct work_struct *work)
{
+ long len;
bool more;
struct rcu_cblist ready_cbs;
struct rcu_head *rhp;
@@ -1182,6 +1183,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
/* We are on the job! Extract and invoke ready callbacks. */
sdp->srcu_cblist_invoking = true;
rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs);
+ len = ready_cbs.len;
spin_unlock_irq_rcu_node(sdp);
rhp = rcu_cblist_dequeue(&ready_cbs);
for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
@@ -1190,13 +1192,14 @@ static void srcu_invoke_callbacks(struct work_struct *work)
rhp->func(rhp);
local_bh_enable();
}
+ WARN_ON_ONCE(ready_cbs.len);

/*
* Update counts, accelerate new callbacks, and if needed,
* schedule another round of callback invocation.
*/
spin_lock_irq_rcu_node(sdp);
- rcu_segcblist_insert_count(&sdp->srcu_cblist, &ready_cbs);
+ rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
rcu_seq_snap(&ssp->srcu_gp_seq));
sdp->srcu_cblist_invoking = false;
--
2.29.1.341.ge80a0c044ae-goog
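
For readers following the accounting change: with the earlier patches in the
series, the ready list's count is drained to zero as the callbacks are dequeued
(which is what the WARN_ON_ONCE() above checks), so the length has to be
sampled before invocation and subtracted from the segmented list afterward.
Below is a minimal standalone sketch of that pattern; the types and helpers
are simplified stand-ins for illustration, not the kernel's actual rcu_cblist
or rcu_segcblist implementation.

/* Standalone illustration; simplified stand-ins, not kernel code. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct cb {
	struct cb *next;
	void (*func)(struct cb *cb);
};

struct cblist {                 /* unsegmented "ready" list */
	struct cb *head;
	long len;
};

struct seglist {                /* stand-in for the segmented list */
	struct cb *head;
	long len;
};

/* Move all ready callbacks to @ready; @seg keeps its count for now. */
static void extract_done_cbs(struct seglist *seg, struct cblist *ready)
{
	ready->head = seg->head;
	ready->len = seg->len;
	seg->head = NULL;
}

/* Dequeue one callback, decrementing the ready list's count. */
static struct cb *cblist_dequeue(struct cblist *ready)
{
	struct cb *cb = ready->head;

	if (!cb)
		return NULL;
	ready->head = cb->next;
	ready->len--;
	return cb;
}

static void noop_cb(struct cb *cb)
{
	free(cb);
}

int main(void)
{
	struct seglist seg = { NULL, 0 };
	struct cblist ready = { NULL, 0 };
	struct cb *cb;
	long len;
	int i;

	for (i = 0; i < 3; i++) {       /* enqueue three callbacks */
		cb = malloc(sizeof(*cb));
		cb->func = noop_cb;
		cb->next = seg.head;
		seg.head = cb;
		seg.len++;
	}

	extract_done_cbs(&seg, &ready);
	len = ready.len;                /* sample BEFORE invocation */

	while ((cb = cblist_dequeue(&ready)) != NULL)
		cb->func(cb);

	assert(ready.len == 0);         /* mirrors the WARN_ON_ONCE() */
	seg.len -= len;                 /* as rcu_segcblist_add_len(..., -len) */
	printf("remaining callbacks: %ld\n", seg.len);  /* prints 0 */
	return 0;
}

In the kernel itself the final adjustment is done under the sdp lock via
rcu_segcblist_add_len(), as in the diff above.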


2020-11-03 14:51:11

by Frederic Weisbecker

Subject: Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> With earlier patches, the negative counting of the unsegmented list
> cannot be used to adjust the segmented one. To fix this, sample the
> unsegmented length in advance, and use it after CB execution to adjust
> the segmented list's length.
>
> Reviewed-by: Frederic Weisbecker <[email protected]>
> Suggested-by: Frederic Weisbecker <[email protected]>
> Signed-off-by: Joel Fernandes (Google) <[email protected]>

This breaks bisection; you need to either fix up the previous patch
by folding this diff into it, or better yet, expand what you did
in "rcu/tree: Make rcu_do_batch count how many callbacks were executed"
to also handle srcu before introducing the segcb count.

Thanks.

2020-11-03 15:00:24

by Joel Fernandes

Subject: Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> > With earlier patches, the negative counting of the unsegmented list
> > cannot be used to adjust the segmented one. To fix this, sample the
> > unsegmented length in advance, and use it after CB execution to adjust
> > the segmented list's length.
> >
> > Reviewed-by: Frederic Weisbecker <[email protected]>
> > Suggested-by: Frederic Weisbecker <[email protected]>
> > Signed-off-by: Joel Fernandes (Google) <[email protected]>
>
> This breaks bisection; you need to either fix up the previous patch
> by folding this diff into it, or better yet, expand what you did
> in "rcu/tree: Make rcu_do_batch count how many callbacks were executed"
> to also handle srcu before introducing the segcb count.

Right, the latter is better. Let us do that.

thanks,

- Joel

2020-11-03 15:09:40

by Joel Fernandes

Subject: Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> > With earlier patches, the negative counting of the unsegmented list
> > cannot be used to adjust the segmented one. To fix this, sample the
> > unsegmented length in advance, and use it after CB execution to adjust
> > the segmented list's length.
> >
> > Reviewed-by: Frederic Weisbecker <[email protected]>
> > Suggested-by: Frederic Weisbecker <[email protected]>
> > Signed-off-by: Joel Fernandes (Google) <[email protected]>
>
> This breaks bisection; you need to either fix up the previous patch
> by folding this diff into it, or better yet, expand what you did
> in "rcu/tree: Make rcu_do_batch count how many callbacks were executed"
> to also handle srcu before introducing the segcb count.

Since doing the latter is a lot more tedious and I want to get to reviewing
others' RCU patches today :), I just squashed the suggestion into the
counters patch to fix bisection:
https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/commit/?h=rcu/segcb-counts&id=595e3a65eeef109cb8fcbfcc114fd3ea2064b873

Hope that's Ok. Also, so that I don't have to resend everything, here is the
final branch if Paul wants to take it:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (branch rcu/segcb-counts)

Thank you for your time, Frederic!

- Joel





2020-11-03 15:20:14

by Paul E. McKenney

Subject: Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

On Tue, Nov 03, 2020 at 10:07:38AM -0500, Joel Fernandes wrote:
> On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote:
> > On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> > > With earlier patches, the negative counting of the unsegmented list
> > > cannot be used to adjust the segmented one. To fix this, sample the
> > > unsegmented length in advance, and use it after CB execution to adjust
> > > the segmented list's length.
> > >
> > > Reviewed-by: Frederic Weisbecker <[email protected]>
> > > Suggested-by: Frederic Weisbecker <[email protected]>
> > > Signed-off-by: Joel Fernandes (Google) <[email protected]>
> >
> > This breaks bisection; you need to either fix up the previous patch
> > by folding this diff into it, or better yet, expand what you did
> > in "rcu/tree: Make rcu_do_batch count how many callbacks were executed"
> > to also handle srcu before introducing the segcb count.
>
> Since doing the latter is a lot more tedious and I want to get to reviewing
> others' RCU patches today :), I just squashed the suggestion into the
> counters patch to fix bisection:
> https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/commit/?h=rcu/segcb-counts&id=595e3a65eeef109cb8fcbfcc114fd3ea2064b873
>
> Hope that's Ok. Also, so that I don't have to resend everything, here is the
> final branch if Paul wants to take it:
> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (branch rcu/segcb-counts)

Either way, it sounds like it is time for me to review this series.

Thank you both very much!!!

Thanx, Paul

2020-11-03 15:22:33

by Frederic Weisbecker

Subject: Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

On Tue, Nov 03, 2020 at 10:07:38AM -0500, Joel Fernandes wrote:
> On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote:
> > On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> > > With earlier patches, the negative counting of the unsegmented list
> > > cannot be used to adjust the segmented one. To fix this, sample the
> > > unsegmented length in advance, and use it after CB execution to adjust
> > > the segmented list's length.
> > >
> > > Reviewed-by: Frederic Weisbecker <[email protected]>
> > > Suggested-by: Frederic Weisbecker <[email protected]>
> > > Signed-off-by: Joel Fernandes (Google) <[email protected]>
> >
> > This breaks bisection; you need to either fix up the previous patch
> > by folding this diff into it, or better yet, expand what you did
> > in "rcu/tree: Make rcu_do_batch count how many callbacks were executed"
> > to also handle srcu before introducing the segcb count.
>
> Since doing the latter is a lot more tedious and I want to get to reviewing
> others' RCU patches today :), I just squashed the suggestion into the
> counters patch to fix bisection:
> https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/commit/?h=rcu/segcb-counts&id=595e3a65eeef109cb8fcbfcc114fd3ea2064b873
>
> Hope that's Ok.

Works for me.

Thanks!

2020-11-04 13:40:17

by Neeraj Upadhyay

Subject: Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment



On 11/3/2020 7:55 PM, Joel Fernandes (Google) wrote:
> With earlier patches, the negative counting of the unsegmented list
> cannot be used to adjust the segmented one. To fix this, sample the
> unsegmented length in advance, and use it after CB execution to adjust
> the segmented list's length.
>
> Reviewed-by: Frederic Weisbecker <[email protected]>
> Suggested-by: Frederic Weisbecker <[email protected]>
> Signed-off-by: Joel Fernandes (Google) <[email protected]>
> ---

Reviewed-by: Neeraj Upadhyay <[email protected]>

> kernel/rcu/srcutree.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 0f23d20d485a..79b7081143a7 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -1160,6 +1160,7 @@ static void srcu_advance_state(struct srcu_struct *ssp)
> */
> static void srcu_invoke_callbacks(struct work_struct *work)
> {
> + long len;
> bool more;
> struct rcu_cblist ready_cbs;
> struct rcu_head *rhp;
> @@ -1182,6 +1183,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
> /* We are on the job! Extract and invoke ready callbacks. */
> sdp->srcu_cblist_invoking = true;
> rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs);
> + len = ready_cbs.len;
> spin_unlock_irq_rcu_node(sdp);
> rhp = rcu_cblist_dequeue(&ready_cbs);
> for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
> @@ -1190,13 +1192,14 @@ static void srcu_invoke_callbacks(struct work_struct *work)
> rhp->func(rhp);
> local_bh_enable();
> }
> + WARN_ON_ONCE(ready_cbs.len);
>
> /*
> * Update counts, accelerate new callbacks, and if needed,
> * schedule another round of callback invocation.
> */
> spin_lock_irq_rcu_node(sdp);
> - rcu_segcblist_insert_count(&sdp->srcu_cblist, &ready_cbs);
> + rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
> (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
> rcu_seq_snap(&ssp->srcu_gp_seq));
> sdp->srcu_cblist_invoking = false;
>

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of the Code Aurora Forum, hosted by The Linux Foundation