The current implementation of idle_cpu only considers tasks that might be in the
CPU's runqueue. If there's nothing in the specified CPU's runqueue, it will
return 1. But if the CPU is doing work in the softirq context, it is wrong for
idle_cpu to return 1. This patch makes it return 0.
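For reference, idle_cpu currently looks roughly like this (a paraphrase of
kernel/sched/core.c around this version -- the exact body may differ slightly):

int idle_cpu(int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	/* A task other than the idle task is current. */
	if (rq->curr != rq->idle)
		return 0;

	/* Runnable tasks are queued. */
	if (rq->nr_running)
		return 0;

#ifdef CONFIG_SMP
	/* Remote wake-ups are pending against this runqueue. */
	if (!llist_empty(&rq->wake_list))
		return 0;
#endif

	/*
	 * Nothing is in the runqueue, so the CPU is reported idle even
	 * if it is currently busy in hardirq or softirq context.
	 */
	return 1;
}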
I observed this to be a problem with a device driver kicking a kthread by
executing wake_up from softirq context. The Completely Fair Scheduler's
select_task_rq_fair was looking for an "idle sibling" of the CPU executing it by
calling select_idle_sibling, passing the executing CPU as the 'target'
parameter. The first thing that select_idle_sibling does is to check whether the
'target' CPU is idle, using idle_cpu, and to return that CPU if so. Despite the
executing CPU being busy in softirq context, idle_cpu was returning 1, meaning
that the scheduler would consistently try to run the kthread on the same CPU as
the kick came from. Given that the softirq work was on-going, this led to a
multi-millisecond delay before the scheduler eventually realised it should
migrate the kthread to a different CPU.
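In outline, the relevant part of the wake-up path is (paraphrasing
kernel/sched/fair.c rather than quoting it exactly):

static int select_idle_sibling(struct task_struct *p, int target)
{
	/*
	 * If the target CPU claims to be idle, place the task there
	 * immediately. This check succeeds even while the target is
	 * busy serving softirqs, because idle_cpu only inspects the
	 * runqueue.
	 */
	if (idle_cpu(target))
		return target;

	/* ... otherwise scan the cache-sharing domain for an idle CPU ... */
}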
A solution to this problem would be to make idle_cpu return 0 when the CPU is
running in softirq context. I haven't got a patch for that because I couldn't
find an easy way of querying whether an arbitrary CPU is doing this. (Perhaps I
should look at the per-CPU softirq_work_list[]...?)
Instead, the following patch is a partial solution, only handling the case when
the currently-executing CPU is in softirq context. This was sufficient to solve
the problem I observed.
Signed-off-by: Jonathan Davies <[email protected]>
---
kernel/sched/core.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7bc599d..4ee58c4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3169,6 +3169,10 @@ int idle_cpu(int cpu)
 		return 0;
 #endif
 
+	/* When the current CPU is in softirq context, count it as non-idle */
+	if (cpu == smp_processor_id() && in_softirq())
+		return 0;
+
 	return 1;
 }
--
1.9.1
On Fri, Jul 18, 2014 at 01:59:06PM +0100, Jonathan Davies wrote:
> The current implementation of idle_cpu only considers tasks that might be in the
> CPU's runqueue. If there's nothing in the specified CPU's runqueue, it will
> return 1. But if the CPU is doing work in the softirq context, it is wrong for
> idle_cpu to return 1. This patch makes it return 0.
>
> I observed this to be a problem with a device driver kicking a kthread by
> executing wake_up from softirq context. The Completely Fair Scheduler's
> select_task_rq_fair was looking for an "idle sibling" of the CPU executing it by
> calling select_idle_sibling, passing the executing CPU as the 'target'
> parameter. The first thing that select_idle_sibling does is to check whether the
> 'target' CPU is idle, using idle_cpu, and to return that CPU if so. Despite the
> executing CPU being busy in softirq context, idle_cpu was returning 1, meaning
> that the scheduler would consistently try to run the kthread on the same CPU as
> the kick came from. Given that the softirq work was on-going, this led to a
> multi-millisecond delay before the scheduler eventually realised it should
> migrate the kthread to a different CPU.
If your softirq takes _that_ long it's broken anyhow.
> A solution to this problem would be to make idle_cpu return 0 when the CPU is
> running in softirq context. I haven't got a patch for that because I couldn't
> find an easy way of querying whether an arbitrary CPU is doing this. (Perhaps I
> should look at the per-CPU softirq_work_list[]...?)
in_serving_softirq()?
> Instead, the following patch is a partial solution, only handling the case when
> the currently-executing CPU is in softirq context. This was sufficient to solve
> the problem I observed.
NAK, IRQ and SoftIRQ are outside of what the scheduler can control, so
for its purpose the CPU is indeed idle.
On 18/07/14 15:08, Peter Zijlstra wrote:
> On Fri, Jul 18, 2014 at 01:59:06PM +0100, Jonathan Davies wrote:
>> The current implementation of idle_cpu only considers tasks that might be in the
>> CPU's runqueue. If there's nothing in the specified CPU's runqueue, it will
>> return 1. But if the CPU is doing work in the softirq context, it is wrong for
>> idle_cpu to return 1. This patch makes it return 0.
>>
>> I observed this to be a problem with a device driver kicking a kthread by
>> executing wake_up from softirq context. The Completely Fair Scheduler's
>> select_task_rq_fair was looking for an "idle sibling" of the CPU executing it by
>> calling select_idle_sibling, passing the executing CPU as the 'target'
>> parameter. The first thing that select_idle_sibling does is to check whether the
>> 'target' CPU is idle, using idle_cpu, and to return that CPU if so. Despite the
>> executing CPU being busy in softirq context, idle_cpu was returning 1, meaning
>> that the scheduler would consistently try to run the kthread on the same CPU as
>> the kick came from. Given that the softirq work was on-going, this led to a
>> multi-millisecond delay before the scheduler eventually realised it should
>> migrate the kthread to a different CPU.
>
> If your softirq takes _that_ long it's broken anyhow.
Modern NICs can sustain 40 Gb/s of traffic. For network device drivers
that use NAPI, polling is done in softirq context. At this data-rate,
the per-packet processing overhead means that a lot of CPU time is
spent in softirq.
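To illustrate, a NAPI driver's poll callback runs entirely in NET_RX softirq
context. Schematically (a generic sketch -- the mydrv_* names are invented,
only the NAPI calls are real):

/* Invoked by net_rx_action() from the NET_RX_SOFTIRQ handler. */
static int mydrv_napi_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_ring *ring = container_of(napi, struct mydrv_ring, napi);
	int work_done = 0;

	/* At high packet rates this loop dominates the CPU's time. */
	while (work_done < budget && mydrv_rx_pending(ring)) {
		mydrv_process_one_rx(ring);
		work_done++;
	}

	/*
	 * Ring drained within budget: leave polling mode and re-enable
	 * the device's rx interrupt.
	 */
	if (work_done < budget) {
		napi_complete(napi);
		mydrv_enable_rx_irq(ring);
	}

	return work_done;
}

Under sustained load, work_done hits the budget on every pass, so the softirq
keeps rescheduling the poll and the CPU stays in softirq context more or less
continuously.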
(CCing Dave and Eric for their thoughts about long-running softirq due
to NAPI. The example I gave above was of xen-netback sending data to
another virtual interface at a high rate.)
>> A solution to this problem would be to make idle_cpu return 0 when the CPU is
>> running in softirq context. I haven't got a patch for that because I couldn't
>> find an easy way of querying whether an arbitrary CPU is doing this. (Perhaps I
>> should look at the per-CPU softirq_work_list[]...?)
>
> in_serving_softirq()?
That's probably more appropriate, but only tells us about the currently
executing CPU, rather than what other CPUs are doing.
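For reference, the two differ only in which preempt_count bits they test
(these are the definitions from include/linux/preempt_mask.h in this era):

#define softirq_count()		(preempt_count() & SOFTIRQ_MASK)
#define in_softirq()		(softirq_count())
#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)

So in_softirq() is also true in sections that have merely disabled bottom
halves with local_bh_disable(), whereas in_serving_softirq() is true only
while a softirq handler is actually running. Either way, both read the local
preempt_count, so neither helps when querying a remote CPU.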
>> Instead, the following patch is a partial solution, only handling the case when
>> the currently-executing CPU is in softirq context. This was sufficient to solve
>> the problem I observed.
>
> NAK, IRQ and SoftIRQ are outside of what the scheduler can control, so
> for its purpose the CPU is indeed idle.
The scheduler can't control those things, but surely it wants to make
the best possible placement for the things it can control? So it seems
odd to me that it would ignore relevant information about the resources
it can use. As I observed, it leads to pathological behaviour, and is
easily fixed.
Jonathan
On Mon, Jul 21, 2014 at 05:56:36PM +0100, Jonathan Davies wrote:
> >If your softirq takes _that_ long it's broken anyhow.
>
> Modern NICs can sustain 40 Gb/s of traffic. For network device drivers that
> use NAPI, polling is done in softirq context. At this data-rate, the
> per-packet processing overhead means that a lot of CPU time is spent
> in softirq.
>
> (CCing Dave and Eric for their thoughts about long-running softirq due to
> NAPI. The example I gave above was of xen-netback sending data to another
> virtual interface at a high rate.)
So things more or less assume that softirq handling (as run off the tail
of hardirqs) does not take longer than a tick. Otherwise things start to
pile up and you get all kinds of nasty. Not to mention you get into
horrid latencies etc.
How hard would you scream if people ran multi-tick hard interrupts? Why
do you then think it's OK to do effectively the same thing?
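The softirq code itself encodes that assumption: the tail-of-hardirq pass in
__do_softirq() is bounded, and anything left over is punted to the ksoftirqd
thread, which the scheduler *can* see and place like any other task
(constants from kernel/softirq.c, approximately this era):

/*
 * __do_softirq() restarts its processing loop at most
 * MAX_SOFTIRQ_RESTART times and for roughly at most 2ms; beyond
 * that, still-pending softirqs are deferred to ksoftirqd.
 */
#define MAX_SOFTIRQ_TIME	msecs_to_jiffies(2)
#define MAX_SOFTIRQ_RESTART	10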
> >>Instead, the following patch is a partial solution, only handling the case when
> >>the currently-executing CPU is in softirq context. This was sufficient to solve
> >>the problem I observed.
> >
> >NAK, IRQ and SoftIRQ are outside of what the scheduler can control, so
> >for its purpose the CPU is indeed idle.
>
> The scheduler can't control those things, but surely it wants to make the
> best possible placement for the things it can control? So it seems odd to me
> that it would ignore relevant information about the resources it can use. As
> I observed, it leads to pathological behaviour, and is easily fixed.
We already lower the compute capacity due to irq/softirq overhead; if we
don't correctly handle that then we need to fix that. But as far as the
scheduler is concerned that cpu is _IDLE_. We didn't put anything on,
and therefore there's not anything on, end of story.
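Concretely, with CONFIG_IRQ_TIME_ACCOUNTING the time spent in hardirq and
softirq is folded into the runqueue's rt-average, which in turn shrinks the
capacity the load balancer attributes to that CPU. Simplified from
update_rq_clock_task() in kernel/sched/core.c:

	/*
	 * irq_delta is the hardirq+softirq time observed since the last
	 * clock update; accumulating it into rq->rt_avg lowers this
	 * CPU's apparent capacity for load balancing.
	 */
	if (irq_delta + steal)
		sched_rt_avg_update(rq, irq_delta + steal);

So a CPU that spends most of its time in softirq already presents very little
capacity, even though idle_cpu reports it idle.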
Hi Jonathan
On Tue, Jul 22, 2014 at 12:56 AM, Jonathan Davies
<[email protected]> wrote:
>
>
> On 18/07/14 15:08, Peter Zijlstra wrote:
>>
>> On Fri, Jul 18, 2014 at 01:59:06PM +0100, Jonathan Davies wrote:
>>>
>>> The current implementation of idle_cpu only considers tasks that might be in the
>>> CPU's runqueue. If there's nothing in the specified CPU's runqueue, it will
>>> return 1. But if the CPU is doing work in the softirq context, it is wrong for
>>> idle_cpu to return 1. This patch makes it return 0.
>>>
>>> I observed this to be a problem with a device driver kicking a kthread by
>>> executing wake_up from softirq context. The Completely Fair Scheduler's
>>> select_task_rq_fair was looking for an "idle sibling" of the CPU executing it by
>>> calling select_idle_sibling, passing the executing CPU as the 'target'
>>> parameter. The first thing that select_idle_sibling does is to check whether the
>>> 'target' CPU is idle, using idle_cpu, and to return that CPU if so. Despite the
>>> executing CPU being busy in softirq context, idle_cpu was returning 1, meaning
>>> that the scheduler would consistently try to run the kthread on the same CPU as
>>> the kick came from. Given that the softirq work was on-going, this led to a
>>> multi-millisecond delay before the scheduler eventually realised it should
>>> migrate the kthread to a different CPU.
>>
>>
>> If your softirq takes _that_ long it's broken anyhow.
>
>
> Modern NICs can sustain 40 Gb/s of traffic. For network device drivers that
> use NAPI, polling is done in softirq context. At this data-rate, the
> per-packet processing overhead means that a lot of CPU time is spent
> in softirq.
>
Perhaps fcoe_percpu_receive_thread (linux/drivers/scsi/fcoe/fcoe.c)
can help you.
Hillf