2010-02-18 05:28:59

by Anton Blanchard

Subject: NO_HZ migration of TCP ack timers


Hi,

We have a networking workload on a large ppc64 box that is spending a lot
of its time in mod_timer(). One backtrace looks like:

 83.25%  [k] ._spin_lock_irqsave
         |
         |--99.62%-- .lock_timer_base
         |           .mod_timer
         |           .sk_reset_timer
         |           |
         |           |--84.77%-- .tcp_send_delayed_ack
         |           |           .__tcp_ack_snd_check
         |           |           .tcp_rcv_established
         |           |           .tcp_v4_do_rcv

         |           |--12.72%-- .tcp_ack
         |           |           .tcp_rcv_established
         |           |           .tcp_v4_do_rcv

So it's mod_timer being called from the TCP ack timer code. It looks like
commit eea08f32adb3f97553d49a4f79a119833036000a (timers: Logic to move non
pinned timers) is causing it, in particular:

#if defined(CONFIG_NO_HZ) && defined(CONFIG_SMP)
	if (!pinned && get_sysctl_timer_migration() && idle_cpu(cpu)) {
		int preferred_cpu = get_nohz_load_balancer();

		if (preferred_cpu >= 0)
			cpu = preferred_cpu;
	}
#endif

and:

echo 0 > /proc/sys/kernel/timer_migration

makes the problem go away.

I think the problem is that the CPU is most likely to be idle when an rx
networking interrupt comes in. It seems like the wrong thing to do to migrate
any ack timers off the CPU taking the interrupt, and with enough networks we
train-wreck, transferring everyone's ack timers to the nohz load balancer CPU.

What should we do? Should we use mod_timer_pinned here? Or is this an issue
other areas might see (e.g. the block layer), and should we instead avoid
migrating timers created from interrupt context?
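
For concreteness, the mod_timer_pinned option would be a one line change in
sk_reset_timer() (a sketch only, and it would pin every socket timer, which
may be broader than we actually want):

void sk_reset_timer(struct sock *sk, struct timer_list *timer,
		    unsigned long expires)
{
	/* mod_timer_pinned() keeps the timer on the current CPU instead of
	 * letting the NO_HZ logic migrate it to the nohz load balancer CPU. */
	if (!mod_timer_pinned(timer, expires))
		sock_hold(sk);
}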

Anton


2010-02-18 08:08:44

by Andi Kleen

Subject: Re: NO_HZ migration of TCP ack timers

Anton Blanchard <[email protected]> writes:

> echo 0 > /proc/sys/kernel/timer_migration
>
> makes the problem go away.
>
> I think the problem is that the CPU is most likely to be idle when an rx
> networking interrupt comes in. It seems like the wrong thing to do to migrate
> any ack timers off the CPU taking the interrupt, and with enough networks we
> train-wreck, transferring everyone's ack timers to the nohz load balancer CPU.

If the nohz balancer CPU is otherwise idle, shouldn't it have enough
cycles to handle acks for everyone? Is the problem the cache line
transfer time?

But yes if it's non idle the migration might need to spread out
to more CPUs.

>
> What should we do? Should we use mod_timer_pinned here? Or is this an issue

Sounds like something that should be controlled by the cpufreq governor's
idle predictor? Only migrate if predicted idle time is long enough.
It's essentially the same problem as deciding how deeply idle to put
a CPU. Heavy measures only pay off if the expected time is long enough.
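
As a rough illustration, the migration check could be gated on that prediction.
This is a hypothetical sketch; cpuidle_predicted_idle_us() and the threshold
are made-up names for whatever the idle governor could export:

#if defined(CONFIG_NO_HZ) && defined(CONFIG_SMP)
	if (!pinned && get_sysctl_timer_migration() && idle_cpu(cpu)) {
		int preferred_cpu = get_nohz_load_balancer();

		/* Only pay the migration cost when this CPU is expected to
		 * stay idle long enough for the move to be worthwhile. */
		if (preferred_cpu >= 0 &&
		    cpuidle_predicted_idle_us(cpu) > TIMER_MIGRATION_MIN_IDLE_US)
			cpu = preferred_cpu;
	}
#endif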

> other areas might see (e.g. the block layer), and should we instead avoid
> migrating timers created from interrupt context?

-Andi

--
[email protected] -- Speaking for myself only.

2010-02-18 09:56:56

by Anton Blanchard

Subject: Re: NO_HZ migration of TCP ack timers


Hi Andi,

> If the nohz balancer CPU is otherwise idle, shouldn't it have enough
> cycles to handle acks for everyone? Is the problem the cache line
> transfer time?

Yeah, I think the timer spinlock on the nohz balancer cpu ends up being a
global lock for every other cpu trying to migrate their ack timers to it.
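
For reference, lock_timer_base() (kernel/timer.c) looks roughly like this.
Every mod_timer() first has to take the lock of the per-cpu base the timer is
currently queued on, and once the timer has been migrated that is the balancer
cpu's base rather than our own:

static struct tvec_base *lock_timer_base(struct timer_list *timer,
					 unsigned long *flags)
{
	struct tvec_base *base;

	for (;;) {
		struct tvec_base *prelock_base = timer->base;
		base = tbase_get_base(prelock_base);
		if (likely(base != NULL)) {
			spin_lock_irqsave(&base->lock, *flags);
			if (likely(prelock_base == timer->base))
				return base;
			/* The timer has migrated to another CPU */
			spin_unlock_irqrestore(&base->lock, *flags);
		}
		cpu_relax();
	}
}

So once enough delayed ack timers have been moved, every cpu re-arming one of
them spins on that single base->lock, which is the .lock_timer_base hit in the
original profile.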

> Sounds like something that should be controlled by the cpufreq governor's
> idle predictor? Only migrate if predicted idle time is long enough.
> It's essentially the same problem as deciding how deeply idle to put
> a CPU. Heavy measures only pay off if the expected time is long enough.

Interesting idea. It seems like we do need a better understanding of
how idle a cpu is, not just that it happens to be idle when mod_timer is called.

Anton

2010-02-18 10:08:58

by Andi Kleen

Subject: Re: NO_HZ migration of TCP ack timers

On Thu, Feb 18, 2010 at 08:55:30PM +1100, Anton Blanchard wrote:
>
> Hi Andi,
>
> > If the nohz balancer CPU is otherwise idle, shouldn't it have enough
> > cycles to handle acks for everyone? Is the problem the cache line
> > transfer time?
>
> Yeah, I think the timer spinlock on the nohz balancer cpu ends up being a
> global lock for every other cpu trying to migrate their ack timers to it.

And they do that often for short idle periods?

For longer idle periods that shouldn't be too bad.

-Andi

--
[email protected] -- Speaking for myself only.

2010-02-18 10:33:11

by Arun R Bharadwaj

Subject: Re: NO_HZ migration of TCP ack timers

* Andi Kleen <[email protected]> [2010-02-18 09:08:35]:

> Anton Blanchard <[email protected]> writes:
>
> > echo 0 > /proc/sys/kernel/timer_migration
> >
> > makes the problem go away.
> >
> > I think the problem is that the CPU is most likely to be idle when an rx
> > networking interrupt comes in. It seems like the wrong thing to do to migrate
> > any ack timers off the CPU taking the interrupt, and with enough networks we
> > train-wreck, transferring everyone's ack timers to the nohz load balancer CPU.
>
> If the nohz balancer CPU is otherwise idle, shouldn't it have enough
> cycles to handle acks for everyone? Is the problem the cache line
> transfer time?
>
> But yes if it's non idle the migration might need to spread out
> to more CPUs.
>
> >
> > What should we do? Should we use mod_timer_pinned here? Or is this an issue
>
> Sounds like something that should be controlled by the cpufreq governor's
> idle predictor? Only migrate if predicted idle time is long enough.
> It's essentially the same problem as deciding how deeply idle to put
> a CPU. Heavy measures only pay off if the expected time is long enough.
>

The cpuidle infrastructure has statistics about the idle times for
all the cpus. Maybe we can look at using this infrastructure to decide
whether to migrate timers or not?

arun

> > other areas might see (e.g. the block layer), and should we instead avoid
> > migrating timers created from interrupt context?
>
> -Andi
>
> --
> [email protected] -- Speaking for myself only.

2010-02-18 16:03:40

by Andi Kleen

Subject: Re: NO_HZ migration of TCP ack timers

> > > What should we do? Should we use mod_timer_pinned here? Or is this an issue
> >
> > Sounds like something that should be controlled by the cpufreq governor's
> > idle predictor? Only migrate if predicted idle time is long enough.
> > It's essentially the same problem as deciding how deeply idle to put
> > a CPU. Heavy measures only pay off if the expected time is long enough.
> >
>
> The cpuidle infrastructure has statistics about the idle times for
> all the cpus. Maybe we can look at using this infrastructure to decide
> whether to migrate timers or not?

Yes, sorry, I really meant cpuidle when I wrote cpufreq.
That's what I was suggesting too.

But if the problem is lock contention on the target CPU, that would
still not completely solve it, just make it less frequent depending
on the idle pattern.

-Andi

--
[email protected] -- Speaking for myself only.

2010-02-26 12:26:20

by David Miller

Subject: Re: NO_HZ migration of TCP ack timers

From: Anton Blanchard <[email protected]>
Date: Thu, 18 Feb 2010 16:28:20 +1100

> I think the problem is that the CPU is most likely to be idle when an rx
> networking interrupt comes in. It seems like the wrong thing to do to migrate
> any ack timers off the CPU taking the interrupt, and with enough networks we
> train-wreck, transferring everyone's ack timers to the nohz load balancer CPU.

This migration goes against the very design of all of the TCP timers
currently in the tree.

For TCP, even when the timer is no longer needed, we don't cancel the
timer. We do this in order to avoid touching the timer for the cancel
from a cpu other than the one the timer was scheduled on.

The timer therefore is always accessed, cache hot, locally to the cpu
where it was scheduled.
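
For reference, this is roughly what that design looks like in the tree
(trimmed, not verbatim). "Cancelling" the delayed ack timer in
inet_csk_clear_xmit_timer() only clears the pending flags, and the handler
bails out when nothing is pending, so the timer is never touched from another
cpu:

static inline void inet_csk_clear_xmit_timer(struct sock *sk, const int what)
{
	struct inet_connection_sock *icsk = inet_csk(sk);

	if (what == ICSK_TIME_DACK) {
		/* Flags only; with INET_CSK_CLEAR_TIMERS undefined the timer
		 * itself stays queued, so there is no cross-cpu del_timer(). */
		icsk->icsk_ack.blocked = icsk->icsk_ack.pending = 0;
	}
	/* ... retransmit/probe cases trimmed ... */
}

static void tcp_delack_timer(unsigned long data)
{
	struct sock *sk = (struct sock *)data;
	struct inet_connection_sock *icsk = inet_csk(sk);

	/* ... locking and sock_owned_by_user() handling trimmed ... */

	/* A timer that fires with nothing pending simply bails out, on the
	 * cpu it was armed on, keeping everything cache hot there. */
	if (!(icsk->icsk_ack.pending & ICSK_ACK_TIMER))
		goto out;
	/* ... */
}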