From: "Paul E. McKenney" <[email protected]>
The current code will scan the entirety of each per-CPU list of exiting
tasks in ->rtp_exit_list with interrupts disabled. This is normally just
fine, because each CPU typically won't have very many tasks in this state.
However, if a large number of tasks block late in do_exit(), these lists
could be arbitrarily long. Low probability, perhaps, but it really
could happen.
This commit therefore occasionally re-enables interrupts while traversing
these lists, inserting a dummy element to hold the current place in the
list. In kernels built with CONFIG_PREEMPT_RT=y, this re-enabling happens
after each list element is processed, otherwise every one-to-two jiffies.
Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Sebastian Siewior <[email protected]>
Cc: Anna-Maria Behnsen <[email protected]>
Cc: Steven Rostedt <[email protected]>
Signed-off-by: Boqun Feng <[email protected]>
---
kernel/rcu/tasks.h | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 4dc355b2ac22..866743e0796f 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -971,13 +971,32 @@ static void rcu_tasks_postscan(struct list_head *hop)
 	 */
 
 	for_each_possible_cpu(cpu) {
+		unsigned long j = jiffies + 1;
 		struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rcu_tasks.rtpcpu, cpu);
 		struct task_struct *t;
+		struct task_struct *t1;
+		struct list_head tmp;
 
 		raw_spin_lock_irq_rcu_node(rtpcp);
-		list_for_each_entry(t, &rtpcp->rtp_exit_list, rcu_tasks_exit_list)
+		list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
 			if (list_empty(&t->rcu_tasks_holdout_list))
 				rcu_tasks_pertask(t, hop);
+
+			// RT kernels need frequent pauses, otherwise
+			// pause at least once per pair of jiffies.
+			if (!IS_ENABLED(CONFIG_PREEMPT_RT) && time_before(jiffies, j))
+				continue;
+
+			// Keep our place in the list while pausing.
+			// Nothing else traverses this list, so adding a
+			// bare list_head is OK.
+			list_add(&tmp, &t->rcu_tasks_exit_list);
+			raw_spin_unlock_irq_rcu_node(rtpcp);
+			cond_resched(); // For CONFIG_PREEMPT=n kernels
+			raw_spin_lock_irq_rcu_node(rtpcp);
+			list_del(&tmp);
+			j = jiffies + 1;
+		}
 		raw_spin_unlock_irq_rcu_node(rtpcp);
 	}
 
--
2.43.0
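
To see the place-holder trick in isolation, here is a minimal,
self-contained userspace sketch of the same pattern: a pthread mutex
stands in for the irq-disabled rtp lock, and the list helpers and the
item/scan/cursor names are invented for the sketch rather than taken
from the kernel.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *at)
{
	n->next = at->next;
	n->prev = at;
	at->next->prev = n;
	at->next = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

struct item { int id; struct list_head node; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct list_head head;

/* Visit every item, dropping the lock after each one.  A bare cursor
 * node pins our place, so concurrent code may unlink any item (even
 * the one just visited) without stranding the iteration. */
static void scan(void)
{
	struct list_head cursor, *p;

	pthread_mutex_lock(&lock);
	for (p = head.next; p != &head; ) {
		struct item *it = (struct item *)((char *)p - offsetof(struct item, node));

		printf("visiting %d\n", it->id);

		list_add(&cursor, p);		/* hold position after *it */
		pthread_mutex_unlock(&lock);	/* others may add/remove here */
		pthread_mutex_lock(&lock);
		p = cursor.next;		/* revalidate the successor */
		list_del(&cursor);
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	struct item a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };

	list_init(&head);
	list_add(&a.node, &head);
	list_add(&b.node, &a.node);
	list_add(&c.node, &b.node);
	scan();
	return 0;
}

Note that the successor is re-read from cursor.next after the lock is
retaken; as the review below shows, skipping that revalidation is
exactly where the pitfall lies.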
On Fri, Feb 16, 2024 at 05:27:41PM -0800, Boqun Feng wrote:
> From: "Paul E. McKenney" <[email protected]>
>
> [...]
>
> @@ -971,13 +971,32 @@ static void rcu_tasks_postscan(struct list_head *hop)
> */
>
> for_each_possible_cpu(cpu) {
> + unsigned long j = jiffies + 1;
> struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rcu_tasks.rtpcpu, cpu);
> struct task_struct *t;
> + struct task_struct *t1;
> + struct list_head tmp;
>
> raw_spin_lock_irq_rcu_node(rtpcp);
> - list_for_each_entry(t, &rtpcp->rtp_exit_list, rcu_tasks_exit_list)
> + list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
> if (list_empty(&t->rcu_tasks_holdout_list))
> rcu_tasks_pertask(t, hop);
> +
> + // RT kernels need frequent pauses, otherwise
> + // pause at least once per pair of jiffies.
> + if (!IS_ENABLED(CONFIG_PREEMPT_RT) && time_before(jiffies, j))
> + continue;
> +
> + // Keep our place in the list while pausing.
> + // Nothing else traverses this list, so adding a
> + // bare list_head is OK.
> + list_add(&tmp, &t->rcu_tasks_exit_list);
I'm a bit confused about what this does...
> + raw_spin_unlock_irq_rcu_node(rtpcp);
> + cond_resched(); // For CONFIG_PREEMPT=n kernels
> + raw_spin_lock_irq_rcu_node(rtpcp);
> + list_del(&tmp);
Isn't there a risk that t is reaped by then? If it was not observed on_rq
while calling rcu_tasks_pertask() then there is no get_task_struct.
And what about t1? Can't it be reaped as well?
Thanks.
On Thu, Feb 22, 2024 at 06:48:47PM +0100, Frederic Weisbecker wrote:
> On Fri, Feb 16, 2024 at 05:27:41PM -0800, Boqun Feng wrote:
> > From: "Paul E. McKenney" <[email protected]>
> >
> > [...]
> >
> > @@ -971,13 +971,32 @@ static void rcu_tasks_postscan(struct list_head *hop)
> > */
> >
> > for_each_possible_cpu(cpu) {
> > + unsigned long j = jiffies + 1;
> > struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rcu_tasks.rtpcpu, cpu);
> > struct task_struct *t;
> > + struct task_struct *t1;
> > + struct list_head tmp;
> >
> > raw_spin_lock_irq_rcu_node(rtpcp);
> > - list_for_each_entry(t, &rtpcp->rtp_exit_list, rcu_tasks_exit_list)
> > + list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
> > if (list_empty(&t->rcu_tasks_holdout_list))
> > rcu_tasks_pertask(t, hop);
> > +
> > + // RT kernels need frequent pauses, otherwise
> > + // pause at least once per pair of jiffies.
> > + if (!IS_ENABLED(CONFIG_PREEMPT_RT) && time_before(jiffies, j))
> > + continue;
> > +
> > + // Keep our place in the list while pausing.
> > + // Nothing else traverses this list, so adding a
> > + // bare list_head is OK.
> > + list_add(&tmp, &t->rcu_tasks_exit_list);
>
> I'm a bit confused about what this does...
>
> > + raw_spin_unlock_irq_rcu_node(rtpcp);
> > + cond_resched(); // For CONFIG_PREEMPT=n kernels
> > + raw_spin_lock_irq_rcu_node(rtpcp);
> > + list_del(&tmp);
>
> Isn't there a risk that t is reaped by then? If it was not observed on_rq
> while calling rcu_tasks_pertask() then there is no get_task_struct.
That is OK, courtesy of the _safe in list_for_each_entry_safe().
> And what about t1? Can't it be reaped as well?
It can, and that is a problem, good catch!
My current thought is to add this before the list_del(), which is
admittedly a bit crude:
t1 = tmp.next;
Is there a better way?
Thanx, Paul
On Thu, Feb 22, 2024 at 12:52:24PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 22, 2024 at 06:48:47PM +0100, Frederic Weisbecker wrote:
> > [...]
> >
> > > + raw_spin_unlock_irq_rcu_node(rtpcp);
> > > + cond_resched(); // For CONFIG_PREEMPT=n kernels
> > > + raw_spin_lock_irq_rcu_node(rtpcp);
> > > + list_del(&tmp);
> >
> > Isn't there a risk that t is reaped by then? If it was not observed on_rq
> > while calling rcu_tasks_pertask() then there is no get_task_struct.
>
> That is OK, courtesy of the _safe in list_for_each_entry_safe().
>
> > And what about t1? Can't it be reaped as well?
>
> It can, and that is a problem, good catch!
>
> My current thought is to add this before the list_del(), which is
> admittedly a bit crude:
>
> t1 = tmp.next;
OK, OK... ;-)
t1 = list_entry(tmp.next, struct task_struct, rcu_tasks_exit_list);
Is there still a better way?
Thanx, Paul
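
Folded into the loop, the paused section would then read roughly as
follows (a sketch of the suggestion above, not necessarily the final
committed form):

			// Keep our place in the list while pausing.
			// Nothing else traverses this list, so adding a
			// bare list_head is OK.
			list_add(&tmp, &t->rcu_tasks_exit_list);
			raw_spin_unlock_irq_rcu_node(rtpcp);
			cond_resched(); // For CONFIG_PREEMPT=n kernels
			raw_spin_lock_irq_rcu_node(rtpcp);
			// The old t1 may have been reaped while the lock was
			// dropped, but tmp pinned our place, so re-fetch the
			// successor before discarding the cursor.
			t1 = list_entry(tmp.next, struct task_struct, rcu_tasks_exit_list);
			list_del(&tmp);
			j = jiffies + 1;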
On Thu, Feb 22, 2024 at 02:56:46PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 22, 2024 at 12:52:24PM -0800, Paul E. McKenney wrote:
> > > > [...]
> > > >
> > > > + // Keep our place in the list while pausing.
> > > > + // Nothing else traverses this list, so adding a
> > > > + // bare list_head is OK.
> > > > + list_add(&tmp, &t->rcu_tasks_exit_list);
> > >
> > > I'm a bit confused about what this does...
Oh, OK, now I see what you're doing! My fear was that t goes away but
doesn't remove itself, and then the list_del() crashes. But no: tmp
places itself after t, and if t exits, it removes itself before tmp,
so that's fine.
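
Pictorially:

	before the pause:   ... <-> t <-> tmp <-> next <-> ...
	t exits, list_del_init(&t->rcu_tasks_exit_list):
	                    ... <-> tmp <-> next <-> ...

Nothing but this scan ever touches tmp, so list_del(&tmp) and the read
of tmp.next stay safe no matter which tasks disappeared during the
pause.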
> > >
> > > > + raw_spin_unlock_irq_rcu_node(rtpcp);
> > > > + cond_resched(); // For CONFIG_PREEMPT=n kernels
> > > > + raw_spin_lock_irq_rcu_node(rtpcp);
> > > > + list_del(&tmp);
> > >
> > > Isn't there a risk that t is reaped by then? If it was not observed on_rq
> > > while calling rcu_tasks_pertask() then there is no get_task_struct.
> >
> > That is OK, courtesy of the _safe in list_for_each_entry_safe().
> >
> > > And what about t1? Can't it be reaped as well?
> >
> > It can, and that is a problem, good catch!
> >
> > My current thought is to add this before the list_del(), which is
> > admittedly a bit crude:
> >
> > t1 = tmp.next;
>
> OK, OK... ;-)
>
> t1 = list_entry(tmp.next, struct task_struct, rcu_tasks_exit_list);
>
> Is there still a better way?
That should work.
An (untested) alternative that fiddles a bit less with list internals could look like this:
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 866743e0796f..0ff2b554f5b5 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -973,12 +973,13 @@ static void rcu_tasks_postscan(struct list_head *hop)
 	for_each_possible_cpu(cpu) {
 		unsigned long j = jiffies + 1;
 		struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rcu_tasks.rtpcpu, cpu);
-		struct task_struct *t;
-		struct task_struct *t1;
-		struct list_head tmp;
 
 		raw_spin_lock_irq_rcu_node(rtpcp);
-		list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
+		while (!list_empty(&rtpcp->rtp_exit_list)) {
+			struct task_struct *t;
+			t = list_first_entry(&rtpcp->rtp_exit_list, typeof(*t), rcu_tasks_exit_list);
+			list_del_init(&t->rcu_tasks_exit_list);
+
 			if (list_empty(&t->rcu_tasks_holdout_list))
 				rcu_tasks_pertask(t, hop);
 
@@ -987,14 +988,9 @@ static void rcu_tasks_postscan(struct list_head *hop)
 			if (!IS_ENABLED(CONFIG_PREEMPT_RT) && time_before(jiffies, j))
 				continue;
 
-			// Keep our place in the list while pausing.
-			// Nothing else traverses this list, so adding a
-			// bare list_head is OK.
-			list_add(&tmp, &t->rcu_tasks_exit_list);
 			raw_spin_unlock_irq_rcu_node(rtpcp);
 			cond_resched(); // For CONFIG_PREEMPT=n kernels
 			raw_spin_lock_irq_rcu_node(rtpcp);
-			list_del(&tmp);
 			j = jiffies + 1;
 		}
 		raw_spin_unlock_irq_rcu_node(rtpcp);
@@ -1219,7 +1215,6 @@ void exit_tasks_rcu_stop(void)
 	struct rcu_tasks_percpu *rtpcp;
 	struct task_struct *t = current;
 
-	WARN_ON_ONCE(list_empty(&t->rcu_tasks_exit_list));
 	rtpcp = per_cpu_ptr(rcu_tasks.rtpcpu, t->rcu_tasks_exit_cpu);
 	raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
 	list_del_init(&t->rcu_tasks_exit_list);
On Fri, Feb 23, 2024 at 01:17:14PM +0100, Frederic Weisbecker wrote:
> [...]
>
> raw_spin_lock_irq_rcu_node(rtpcp);
> - list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
> + while (!list_empty(&rtpcp->rtp_exit_list)) {
> + struct task_struct *t;
> + t = list_first_entry(&rtpcp->rtp_exit_list, typeof(*t), rcu_tasks_exit_list);
> + list_del_init(&t->rcu_tasks_exit_list);
Oh no! The task has to stay in the list for subsequent grace periods! Please
forget that suggestion... Yours looks good!
Thanks.
On Fri, Feb 23, 2024 at 04:14:49PM +0100, Frederic Weisbecker wrote:
> On Fri, Feb 23, 2024 at 01:17:14PM +0100, Frederic Weisbecker wrote:
> > [...]
> >
> > raw_spin_lock_irq_rcu_node(rtpcp);
> > - list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
> > + while (!list_empty(&rtpcp->rtp_exit_list)) {
> > + struct task_struct *t;
> > + t = list_first_entry(&rtpcp->rtp_exit_list, typeof(*t), rcu_tasks_exit_list);
> > + list_del_init(&t->rcu_tasks_exit_list);
>
> Oh no! The task has to stay in the list for subsequent grace periods! Please
> forget that suggestion... Yours looks good!
You had me going for a bit, and I do know that feeling! ;-)
Thanx, Paul