2014-02-19 18:16:51

by Mike Galbraith

Subject: [PATCH] softirq: stable v3.1[23] (others?) have screaming tasklet disease - ksoftirqd[random] eats 100% CPU

I'm seeing ksoftirqd chewing 100% CPU on one or more CPUs in both 3.12
and 3.13, as below in a 40 core (+smt) box. It should look very
familiar to CCs, especially Ingo.

Below, tasklet is disabled by ioat2_free_chan_resources, and what I
presume was systemd-udevd-1050 starts screaming when it meets same,
until debug patchlet turns tracing off. Once the box was up such that I
could login, 1050 was long gone, and ksoftirqd had taken over.

systemd-udevd-976 [016] .... 27.467534: ioat_init_channel: tasklet_disable_nosync ffff880465b8bee8
systemd-udevd-976 [016] .... 27.467649: ioat2_alloc_chan_resources: tasklet_enable ffff880465b8bee8
<idle>-0 [072] ..s. 27.467659: tasklet_action: ENTER struct tasklet_struct *list: ffff880465b8bee8
<idle>-0 [072] .Ns. 27.467667: tasklet_action: LOOP struct tasklet_struct *t = list: ffff880465b8bee8
<idle>-0 [072] .Ns. 27.467673: tasklet_action: LOOP processed ffff880465b8bee8
systemd-udevd-976 [016] .... 27.467679: ioat2_free_chan_resources: tasklet_disable_nosync ffff880465b8bee8
systemd-udevd-1034 [000] .Ns. 27.467917: tasklet_action: ENTER struct tasklet_struct *list: ffff880465b8bee8
systemd-udevd-1034 [000] .Ns. 27.467918: tasklet_action: LOOP struct tasklet_struct *t = list: ffff880465b8bee8
<...>-1050 [000] ..s. 27.468203: tasklet_action: ENTER struct tasklet_struct *list: ffff880465b8bee8
<...>-1050 [000] ..s. 27.468204: tasklet_action: LOOP struct tasklet_struct *t = list: ffff880465b8bee8
<...>-1050 [000] ..s. 27.468204: tasklet_action: ENTER struct tasklet_struct *list: ffff880465b8bee8
<...>-1050 [000] ..s. 27.468205: tasklet_action: LOOP struct tasklet_struct *t = list: ffff880465b8bee8
... much more of the same: no processing, see tasklet disabled, raise softirq - wash rinse repeat
<...>-1050 [000] ..s. 27.469561: tasklet_action: ENTER struct tasklet_struct *list: ffff880465b8bee8
<...>-1050 [000] ..s. 27.469562: tasklet_action: LOOP struct tasklet_struct *t = list: ffff880465b8bee8
<...>-1050 [000] ..s. 27.469563: tasklet_action: LOOP tasklet disabled ffff880465b8bee8 - It's dead Jim
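
The debug patchlet itself wasn't posted; judging by the trace format it was presumably mainline tasklet_action() instrumented with trace_printk(), roughly like the reconstruction below (the scream counter, its threshold, and the tracing_off() placement are guesses, not the actual patchlet):

/* Guessed reconstruction of the unposted debug patchlet: mainline 3.12
 * tasklet_action() plus trace_printk() calls matching the trace above,
 * and tracing_off() to stop the flood once a disabled tasklet is seen
 * looping. trace_printk()/tracing_off() are real ftrace APIs; the
 * scream counter and threshold are speculation.
 */
static void tasklet_action(struct softirq_action *a)
{
	static int screams;
	struct tasklet_struct *list;

	local_irq_disable();
	list = __this_cpu_read(tasklet_vec.head);
	__this_cpu_write(tasklet_vec.head, NULL);
	__this_cpu_write(tasklet_vec.tail, &__get_cpu_var(tasklet_vec).head);
	local_irq_enable();

	trace_printk("ENTER struct tasklet_struct *list: %p\n", list);

	while (list) {
		struct tasklet_struct *t = list;

		list = list->next;
		trace_printk("LOOP struct tasklet_struct *t = list: %p\n", t);

		if (tasklet_trylock(t)) {
			if (!atomic_read(&t->count)) {
				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
					BUG();
				t->func(t->data);
				trace_printk("LOOP processed %p\n", t);
				tasklet_unlock(t);
				continue;
			}
			/* Disabled: will be requeued below, forever. */
			if (++screams > 1000) {
				trace_printk("LOOP tasklet disabled %p - It's dead Jim\n", t);
				tracing_off();
			}
			tasklet_unlock(t);
		}

		local_irq_disable();
		t->next = NULL;
		*__this_cpu_read(tasklet_vec.tail) = t;
		__this_cpu_write(tasklet_vec.tail, &(t->next));
		__raise_softirq_irqoff(TASKLET_SOFTIRQ);
		local_irq_enable();
	}
}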

Hm, he says, now where have I seen text describing that trace? Right,
RT, and the below fixes screaming NOPREEMPT kernels.

Taken from 3.12-rt, and applied to screaming 3.12.11-virgin

Subject: tasklet: Prevent tasklets from going into infinite spin in RT
From: Ingo Molnar <[email protected]>
Date: Tue Nov 29 20:18:22 2011 -0500

When CONFIG_PREEMPT_RT_FULL is enabled, tasklets run as threads,
and spinlocks turn into mutexes. But this can cause issues with
tasks disabling tasklets. A tasklet runs under ksoftirqd, and
if a tasklet is disabled with tasklet_disable(), the tasklet's
count is increased. When a tasklet runs, it checks this counter
and, if it is set, adds itself back on the softirq queue and
returns.

The problem arises in RT because ksoftirqd will see that a softirq
is ready to run (the tasklet softirq just re-armed itself), and will
not sleep, but instead run the softirqs again. The tasklet softirq
will still see that the count is non-zero, will not execute
the tasklet, and will requeue itself on the softirq again, which will
cause ksoftirqd to run it again and again and again.

It gets worse because ksoftirqd runs as a real-time thread.
If it preempted the task that disabled tasklets, and that task
has migration disabled, or can't run for other reasons, the tasklet
softirq will never run because the count will never be zero, and
ksoftirqd will go into an infinite loop. As ksoftirqd is an RT task,
this becomes a big problem.

This is a hack of a solution: have tasklet_disable() stop tasklets,
and when a tasklet runs, instead of requeueing it on the softirq,
delay it. When tasklet_enable() is called and tasklets are
waiting, tasklet_enable() kicks them to continue. This
prevents the lockup caused by ksoftirqd going into an infinite loop.

[ [email protected]: ported to 3.0-rt ]

Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>

---
include/linux/interrupt.h | 39 ++++----
kernel/softirq.c | 208 +++++++++++++++++++++++++++++++++-------------
2 files changed, 170 insertions(+), 77 deletions(-)

Index: linux-2.6/include/linux/interrupt.h
===================================================================
--- linux-2.6.orig/include/linux/interrupt.h
+++ linux-2.6/include/linux/interrupt.h
@@ -421,8 +421,9 @@ extern void __send_remote_softirq(struct
to be executed on some cpu at least once after this.
* If the tasklet is already scheduled, but its execution is still not
started, it will be executed only once.
- * If this tasklet is already running on another CPU (or schedule is called
- from tasklet itself), it is rescheduled for later.
+ * If this tasklet is already running on another CPU, it is rescheduled
+ for later.
+ * Schedule must not be called from the tasklet itself (a lockup occurs)
* Tasklet is strictly serialized wrt itself, but not
wrt another tasklets. If client needs some intertask synchronization,
he makes it with spinlocks.
@@ -447,27 +448,36 @@ struct tasklet_struct name = { NULL, 0,
enum
{
TASKLET_STATE_SCHED, /* Tasklet is scheduled for execution */
- TASKLET_STATE_RUN /* Tasklet is running (SMP only) */
+ TASKLET_STATE_RUN, /* Tasklet is running (SMP only) */
+ TASKLET_STATE_PENDING /* Tasklet is pending */
};

-#ifdef CONFIG_SMP
+#define TASKLET_STATEF_SCHED (1 << TASKLET_STATE_SCHED)
+#define TASKLET_STATEF_RUN (1 << TASKLET_STATE_RUN)
+#define TASKLET_STATEF_PENDING (1 << TASKLET_STATE_PENDING)
+
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
static inline int tasklet_trylock(struct tasklet_struct *t)
{
return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
}

+static inline int tasklet_tryunlock(struct tasklet_struct *t)
+{
+ return cmpxchg(&t->state, TASKLET_STATEF_RUN, 0) == TASKLET_STATEF_RUN;
+}
+
static inline void tasklet_unlock(struct tasklet_struct *t)
{
smp_mb__before_clear_bit();
clear_bit(TASKLET_STATE_RUN, &(t)->state);
}

-static inline void tasklet_unlock_wait(struct tasklet_struct *t)
-{
- while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
-}
+extern void tasklet_unlock_wait(struct tasklet_struct *t);
+
#else
#define tasklet_trylock(t) 1
+#define tasklet_tryunlock(t) 1
#define tasklet_unlock_wait(t) do { } while (0)
#define tasklet_unlock(t) do { } while (0)
#endif
@@ -516,17 +526,8 @@ static inline void tasklet_disable(struc
smp_mb();
}

-static inline void tasklet_enable(struct tasklet_struct *t)
-{
- smp_mb__before_atomic_dec();
- atomic_dec(&t->count);
-}
-
-static inline void tasklet_hi_enable(struct tasklet_struct *t)
-{
- smp_mb__before_atomic_dec();
- atomic_dec(&t->count);
-}
+extern void tasklet_enable(struct tasklet_struct *t);
+extern void tasklet_hi_enable(struct tasklet_struct *t);

extern void tasklet_kill(struct tasklet_struct *t);
extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
Index: linux-2.6/kernel/softirq.c
===================================================================
--- linux-2.6.orig/kernel/softirq.c
+++ linux-2.6/kernel/softirq.c
@@ -21,6 +21,7 @@
#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/rcupdate.h>
+#include <linux/delay.h>
#include <linux/ftrace.h>
#include <linux/smp.h>
#include <linux/smpboot.h>
@@ -429,15 +430,45 @@ struct tasklet_head
static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);

+static void inline
+__tasklet_common_schedule(struct tasklet_struct *t, struct tasklet_head *head, unsigned int nr)
+{
+ if (tasklet_trylock(t)) {
+again:
+ /* We may have been preempted before tasklet_trylock
+ * and __tasklet_action may have already run.
+ * So double check the sched bit while the tasklet
+ * is locked before adding it to the list.
+ */
+ if (test_bit(TASKLET_STATE_SCHED, &t->state)) {
+ t->next = NULL;
+ *head->tail = t;
+ head->tail = &(t->next);
+ raise_softirq_irqoff(nr);
+ tasklet_unlock(t);
+ } else {
+ /* This is subtle. If we hit the corner case above,
+ * it is possible that we get preempted right here,
+ * and another task has successfully called
+ * tasklet_schedule(), then this function, and
+ * failed on the trylock. Thus we must be sure,
+ * before releasing the tasklet lock, that the
+ * SCHED_BIT is clear. Otherwise the tasklet
+ * may get its SCHED_BIT set, but not be added to the
+ * list.
+ */
+ if (!tasklet_tryunlock(t))
+ goto again;
+ }
+ }
+}
+
void __tasklet_schedule(struct tasklet_struct *t)
{
unsigned long flags;

local_irq_save(flags);
- t->next = NULL;
- *__this_cpu_read(tasklet_vec.tail) = t;
- __this_cpu_write(tasklet_vec.tail, &(t->next));
- raise_softirq_irqoff(TASKLET_SOFTIRQ);
+ __tasklet_common_schedule(t, &__get_cpu_var(tasklet_vec), TASKLET_SOFTIRQ);
local_irq_restore(flags);
}

@@ -448,10 +479,7 @@ void __tasklet_hi_schedule(struct taskle
unsigned long flags;

local_irq_save(flags);
- t->next = NULL;
- *__this_cpu_read(tasklet_hi_vec.tail) = t;
- __this_cpu_write(tasklet_hi_vec.tail, &(t->next));
- raise_softirq_irqoff(HI_SOFTIRQ);
+ __tasklet_common_schedule(t, &__get_cpu_var(tasklet_hi_vec), HI_SOFTIRQ);
local_irq_restore(flags);
}

@@ -459,50 +487,119 @@ EXPORT_SYMBOL(__tasklet_hi_schedule);

void __tasklet_hi_schedule_first(struct tasklet_struct *t)
{
- BUG_ON(!irqs_disabled());
-
- t->next = __this_cpu_read(tasklet_hi_vec.head);
- __this_cpu_write(tasklet_hi_vec.head, t);
- __raise_softirq_irqoff(HI_SOFTIRQ);
+ __tasklet_hi_schedule(t);
}

EXPORT_SYMBOL(__tasklet_hi_schedule_first);

-static void tasklet_action(struct softirq_action *a)
+void tasklet_enable(struct tasklet_struct *t)
{
- struct tasklet_struct *list;
+ if (!atomic_dec_and_test(&t->count))
+ return;
+ if (test_and_clear_bit(TASKLET_STATE_PENDING, &t->state))
+ tasklet_schedule(t);
+}

- local_irq_disable();
- list = __this_cpu_read(tasklet_vec.head);
- __this_cpu_write(tasklet_vec.head, NULL);
- __this_cpu_write(tasklet_vec.tail, &__get_cpu_var(tasklet_vec).head);
- local_irq_enable();
+EXPORT_SYMBOL(tasklet_enable);
+
+void tasklet_hi_enable(struct tasklet_struct *t)
+{
+ if (!atomic_dec_and_test(&t->count))
+ return;
+ if (test_and_clear_bit(TASKLET_STATE_PENDING, &t->state))
+ tasklet_hi_schedule(t);
+}
+
+EXPORT_SYMBOL(tasklet_hi_enable);
+
+static void
+__tasklet_action(struct softirq_action *a, struct tasklet_struct *list)
+{
+ int loops = 1000000;

while (list) {
struct tasklet_struct *t = list;

list = list->next;

- if (tasklet_trylock(t)) {
- if (!atomic_read(&t->count)) {
- if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
- BUG();
- t->func(t->data);
- tasklet_unlock(t);
- continue;
- }
- tasklet_unlock(t);
+ /*
+ * Should always succeed - after a tasklet got on the
+ * list (after getting the SCHED bit set from 0 to 1),
+ * nothing but the tasklet softirq it got queued to can
+ * lock it:
+ */
+ if (!tasklet_trylock(t)) {
+ WARN_ON(1);
+ continue;
}

- local_irq_disable();
t->next = NULL;
- *__this_cpu_read(tasklet_vec.tail) = t;
- __this_cpu_write(tasklet_vec.tail, &(t->next));
- __raise_softirq_irqoff(TASKLET_SOFTIRQ);
- local_irq_enable();
+
+ /*
+ * If we cannot handle the tasklet because it's disabled,
+ * mark it as pending. tasklet_enable() will later
+ * re-schedule the tasklet.
+ */
+ if (unlikely(atomic_read(&t->count))) {
+out_disabled:
+ /* implicit unlock: */
+ wmb();
+ t->state = TASKLET_STATEF_PENDING;
+ continue;
+ }
+
+ /*
+ * After this point on the tasklet might be rescheduled
+ * on another CPU, but it can only be added to another
+ * CPU's tasklet list if we unlock the tasklet (which we
+ * don't do yet).
+ */
+ if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
+ WARN_ON(1);
+
+again:
+ t->func(t->data);
+
+ /*
+ * Try to unlock the tasklet. We must use cmpxchg, because
+ * another CPU might have scheduled or disabled the tasklet.
+ * We only allow the STATE_RUN -> 0 transition here.
+ */
+ while (!tasklet_tryunlock(t)) {
+ /*
+ * If it got disabled meanwhile, bail out:
+ */
+ if (atomic_read(&t->count))
+ goto out_disabled;
+ /*
+ * If it got scheduled meanwhile, re-execute
+ * the tasklet function:
+ */
+ if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
+ goto again;
+ if (!--loops) {
+ printk("hm, tasklet state: %08lx\n", t->state);
+ WARN_ON(1);
+ tasklet_unlock(t);
+ break;
+ }
+ }
}
}

+static void tasklet_action(struct softirq_action *a)
+{
+ struct tasklet_struct *list;
+
+ local_irq_disable();
+ list = __get_cpu_var(tasklet_vec).head;
+ __get_cpu_var(tasklet_vec).head = NULL;
+ __get_cpu_var(tasklet_vec).tail = &__get_cpu_var(tasklet_vec).head;
+ local_irq_enable();
+
+ __tasklet_action(a, list);
+}
+
static void tasklet_hi_action(struct softirq_action *a)
{
struct tasklet_struct *list;
@@ -513,29 +610,7 @@ static void tasklet_hi_action(struct sof
__this_cpu_write(tasklet_hi_vec.tail, &__get_cpu_var(tasklet_hi_vec).head);
local_irq_enable();

- while (list) {
- struct tasklet_struct *t = list;
-
- list = list->next;
-
- if (tasklet_trylock(t)) {
- if (!atomic_read(&t->count)) {
- if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
- BUG();
- t->func(t->data);
- tasklet_unlock(t);
- continue;
- }
- tasklet_unlock(t);
- }
-
- local_irq_disable();
- t->next = NULL;
- *__this_cpu_read(tasklet_hi_vec.tail) = t;
- __this_cpu_write(tasklet_hi_vec.tail, &(t->next));
- __raise_softirq_irqoff(HI_SOFTIRQ);
- local_irq_enable();
- }
+ __tasklet_action(a, list);
}


@@ -558,7 +633,7 @@ void tasklet_kill(struct tasklet_struct

while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
do {
- yield();
+ msleep(1);
} while (test_bit(TASKLET_STATE_SCHED, &t->state));
}
tasklet_unlock_wait(t);
@@ -762,6 +837,23 @@ void __init softirq_init(void)
open_softirq(HI_SOFTIRQ, tasklet_hi_action);
}

+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+void tasklet_unlock_wait(struct tasklet_struct *t)
+{
+ while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
+ /*
+ * Hack for now to avoid this busy-loop:
+ */
+#ifdef CONFIG_PREEMPT_RT_FULL
+ msleep(1);
+#else
+ barrier();
+#endif
+ }
+}
+EXPORT_SYMBOL(tasklet_unlock_wait);
+#endif
+
static int ksoftirqd_should_run(unsigned int cpu)
{
return local_softirq_pending();


2014-02-20 00:00:12

by Thomas Gleixner

Subject: Re: [PATCH] softirq: stable v3.1[23] (others?) have screaming tasklet disease - ksoftirqd[random] eats 100% CPU

On Wed, 19 Feb 2014, Mike Galbraith wrote:
> I'm seeing ksoftirqd chewing 100% CPU on one or more CPUs in both 3.12
> and 3.13, as below in a 40 core (+smt) box. It should look very
> familiar to CCs, especially Ingo.
>
> Below, tasklet is disabled by ioat2_free_chan_resources, and what I
> presume was systemd-udevd-1050 starts screaming when it meets same,
> until debug patchlet turns tracing off. Once the box was up such that I
> could login, 1050 was long gone, and ksoftirqd had taken over.
>
> [trace snipped]
>
> Hm, he says, now where have I seen text describing that trace? Right,
> RT, and the below fixes screaming NOPREEMPT kernels.
>
> Taken from 3.12-rt, and applied to screaming 3.12.11-virgin

Indeed. That's a very similar issue, just for different reasons. The RT
case is special, as the mainline usage side of tasklets does not expect
the preemption scenario.

But this one is clearly a driver issue.

The window where you can bring a machine into that state is infinitely
large. Let's look at the tasklet_schedule --> softirq sequence:

tasklet_schedule(t)
	set_bit(TASKLET_STATE_SCHED, &t->state);
	queue_tasklet_on_cpu_list(t);
	raise_softirq();

softirq()
	splice_tasklet_cpu_list(cpu_list, list);
	while (list) {
		t = list;
		list = t->next;
		/* Sets the TASKLET_STATE_RUN bit ! */
		if (tasklet_trylock(t)) {
			if (!atomic_read(&t->count)) { <-----
				clear_bit(TASKLET_STATE_SCHED, &t->state);
				t->func();
				/* Clears the TASKLET_STATE_RUN bit */
				tasklet_unlock();
				continue;
			}
			tasklet_unlock();
			queue_tasklet_on_cpu_list(t);
			raise_softirq();
		}
	}

So up to the atomic_read() in the softirq, all calls to tasklet_disable(),
even if issued eons before that point, are going to put the softirq
into an infinite loop when the tasklet is scheduled.

Even if we put a check for the disabled state into
tasklet_schedule(), there would still be the window between the schedule
and the actual softirq handling. And we can't even add that check,
because it would break "sane" use sites of tasklet_disable.

tasklet_disable/enable is only meant to prevent execution of the
tasklet temporarily, i.e. over a very short code sequence.
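
For contrast, a sane use site looks something like this (an editorial
sketch; the mydev names are hypothetical):

	/* Block the tasklet only across a brief, bounded update of
	 * shared state; tasklet_disable() also waits for a running
	 * instance to finish first.
	 */
	tasklet_disable(&dev->cleanup_tasklet);
	dev->ring_head = dev->ring_tail = 0;
	tasklet_enable(&dev->cleanup_tasklet);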

The usage of tasklet_disable() in teardown scenarios is completely
broken. The only way to do that is to have a proper serialization of
the teardown versus the interrupt which schedules the tasklet:

/*
 * First step.
 */
disable_interrupt_at_device_or_irq_line_level();

/*
 * This makes sure that even a spurious interrupt which
 * arrives _AFTER_ the synchronize_irq() cannot schedule
 * the tasklet anymore.
 */
tell_interrupt_to_not_schedule_tasklet();

/* Make sure that no interrupt is on the fly */
synchronize_irq();

/*
 * Kill the tasklet, which also waits for an already
 * scheduled one to complete.
 */
tasklet_kill();
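
Fleshed out as a concrete sketch (editorial illustration only - every
mydev_* name below is hypothetical, standing in for driver-specific code):

static irqreturn_t mydev_irq(int irq, void *data)
{
	struct mydev *dev = data;

	mydev_ack_irq(dev);	/* hypothetical device ack */
	/* Once the shutdown flag is set, never schedule the tasklet. */
	if (!test_bit(MYDEV_SHUTDOWN, &dev->flags))
		tasklet_schedule(&dev->cleanup_tasklet);
	return IRQ_HANDLED;
}

static void mydev_teardown(struct mydev *dev)
{
	/* First step: no new interrupts from the device. */
	mydev_mask_device_interrupts(dev);

	/*
	 * Even a spurious interrupt which arrives _AFTER_ the
	 * synchronize_irq() below cannot schedule the tasklet anymore.
	 */
	set_bit(MYDEV_SHUTDOWN, &dev->flags);

	/* Make sure that no interrupt is on the fly. */
	synchronize_irq(dev->irq);

	/* Kill the tasklet; also waits for an already scheduled one. */
	tasklet_kill(&dev->cleanup_tasklet);
}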

I tried to find something like that in the ioat code but I failed
miserably.

Instead of that, it uses tasklet_disable/enable for the setup/teardown,
which is completely buggered and obviously written by people who have
no clue about the tasklet semantics at all.

What's worse is that at the point where this code was written it was
already well known that tasklets are a steaming pile of crap and
should die.

I know why and how the RT patch works around that issue, but do we
really want to make it simpler to (ab)use and introduce new users of
tasklets instead of getting rid of them? Definitely NOT!

Seriously, people who still use tasklets without being aware of their
subtle issues and without an extremely good reason to use them at all
should use a wire cutter or some other appropriate tool to render
their keyboard unusable and get a job in a bakery where they can eat
the mess they produce themselves.

Thanks,

tglx

2014-02-20 00:10:12

by Dan Williams

Subject: Re: [PATCH] softirq: stable v3.1[23] (others?) have screaming tasklet disease - ksoftirqd[random] eats 100% CPU

On Wed, Feb 19, 2014 at 4:00 PM, Thomas Gleixner <[email protected]> wrote:
> [...]
>
> Instead of that, it uses tasklet_disable/enable for the setup/teardown,
> which is completely buggered and obviously written by people who have
> no clue about the tasklet semantics at all.

Yup, I carried forward that broken usage of tasklet_disable() when I
took over the driver, and until recent reports was blissfully
ignorant.

Working on a fix that can go to -stable.

--
Dan

2014-02-20 04:38:13

by Mike Galbraith

Subject: Re: [PATCH] softirq: stable v3.1[23] (others?) have screaming tasklet disease - ksoftirqd[random] eats 100% CPU

On Thu, 2014-02-20 at 01:00 +0100, Thomas Gleixner wrote:
> On Wed, 19 Feb 2014, Mike Galbraith wrote:
> > [...]
>
> Indeed. That's a very similar issue, just for different reasons. The RT
> case is special, as the mainline usage side of tasklets does not expect
> the preemption scenario.
>
> But this one is clearly a driver issue.

rapidio::tsi721_free_chan_resources() appears to do the same. Joy.
