2007-08-11 10:44:35

by Al Boldi

Subject: Re: CFS review

Roman Zippel wrote:
> On Fri, 10 Aug 2007, Ingo Molnar wrote:
> > achieve that. It probably wont make a real difference, but it's really
> > easy for you to send and it's still very useful when one tries to
> > eliminate possibilities and when one wants to concentrate on the
> > remaining possibilities alone.
>
> The thing I'm afraid about CFS is its possible unpredictability, which
> would make it hard to reproduce problems and we may end up with users with
> unexplainable weird problems. That's the main reason I'm trying so hard to
> push for a design discussion.
>
> Just to give an idea here are two more examples of irregular behaviour,
> which are hopefully easier to reproduce.
>
> 1. Two simple busy loops, one of them is reniced to 15, according to my
> calculations the reniced task should get about 3.4% (1/(1.25^15+1)), but I
> get this:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 4433 roman 20 0 1532 300 244 R 99.2 0.2 5:05.51 l
> 4434 roman 35 15 1532 72 16 R 0.7 0.1 0:10.62 l
>
> OTOH upto nice level 12 I get what I expect.
>
> 2. If I start 20 busy loops, initially I see in top that every task gets
> 5% and time increments equally (as it should):
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 4492 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4491 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4490 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4489 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4488 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4487 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4486 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4485 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4484 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4483 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4482 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4481 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4480 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4479 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4478 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4477 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4476 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4475 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4474 roman 20 0 1532 68 16 R 5.0 0.1 0:02.86 l
> 4473 roman 20 0 1532 296 244 R 5.0 0.2 0:02.86 l
>
> But if I renice all of them to -15, the time every task gets is rather
> random:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 4492 roman 5 -15 1532 68 16 R 1.0 0.1 0:07.95 l
> 4491 roman 5 -15 1532 68 16 R 4.3 0.1 0:07.62 l
> 4490 roman 5 -15 1532 68 16 R 3.3 0.1 0:07.50 l
> 4489 roman 5 -15 1532 68 16 R 7.6 0.1 0:07.80 l
> 4488 roman 5 -15 1532 68 16 R 9.6 0.1 0:08.31 l
> 4487 roman 5 -15 1532 68 16 R 3.3 0.1 0:07.59 l
> 4486 roman 5 -15 1532 68 16 R 6.6 0.1 0:07.08 l
> 4485 roman 5 -15 1532 68 16 R 10.0 0.1 0:07.31 l
> 4484 roman 5 -15 1532 68 16 R 8.0 0.1 0:07.30 l
> 4483 roman 5 -15 1532 68 16 R 7.0 0.1 0:07.34 l
> 4482 roman 5 -15 1532 68 16 R 1.0 0.1 0:05.84 l
> 4481 roman 5 -15 1532 68 16 R 1.0 0.1 0:07.16 l
> 4480 roman 5 -15 1532 68 16 R 3.3 0.1 0:07.00 l
> 4479 roman 5 -15 1532 68 16 R 1.0 0.1 0:06.66 l
> 4478 roman 5 -15 1532 68 16 R 8.6 0.1 0:06.96 l
> 4477 roman 5 -15 1532 68 16 R 8.6 0.1 0:07.63 l
> 4476 roman 5 -15 1532 68 16 R 9.6 0.1 0:07.38 l
> 4475 roman 5 -15 1532 68 16 R 1.3 0.1 0:07.09 l
> 4474 roman 5 -15 1532 68 16 R 2.3 0.1 0:07.97 l
> 4473 roman 5 -15 1532 296 244 R 1.0 0.2 0:07.73 l

That's because granularity increases when decreasing nice, and results in
larger timeslices, which affects smoothness negatively. chew.c easily shows
this problem with 2 background cpu-hogs at the same nice-level.

pid 908, prio 0, out for 8 ms, ran for 4 ms, load 37%
pid 908, prio 0, out for 8 ms, ran for 4 ms, load 37%
pid 908, prio 0, out for 8 ms, ran for 2 ms, load 26%
pid 908, prio 0, out for 8 ms, ran for 4 ms, load 38%
pid 908, prio 0, out for 2 ms, ran for 1 ms, load 47%

pid 908, prio -5, out for 23 ms, ran for 3 ms, load 14%
pid 908, prio -5, out for 17 ms, ran for 9 ms, load 35%
pid 908, prio -5, out for 18 ms, ran for 6 ms, load 27%
pid 908, prio -5, out for 20 ms, ran for 10 ms, load 34%
pid 908, prio -5, out for 9 ms, ran for 3 ms, load 30%

pid 908, prio -10, out for 69 ms, ran for 8 ms, load 11%
pid 908, prio -10, out for 35 ms, ran for 19 ms, load 36%
pid 908, prio -10, out for 58 ms, ran for 20 ms, load 26%
pid 908, prio -10, out for 34 ms, ran for 17 ms, load 34%
pid 908, prio -10, out for 58 ms, ran for 23 ms, load 28%

pid 908, prio -15, out for 164 ms, ran for 20 ms, load 11%
pid 908, prio -15, out for 21 ms, ran for 11 ms, load 36%
pid 908, prio -15, out for 21 ms, ran for 12 ms, load 37%
pid 908, prio -15, out for 115 ms, ran for 14 ms, load 11%
pid 908, prio -15, out for 27 ms, ran for 22 ms, load 45%
pid 908, prio -15, out for 125 ms, ran for 33 ms, load 21%
pid 908, prio -15, out for 54 ms, ran for 16 ms, load 22%
pid 908, prio -15, out for 34 ms, ran for 33 ms, load 49%
pid 908, prio -15, out for 94 ms, ran for 15 ms, load 14%
pid 908, prio -15, out for 29 ms, ran for 21 ms, load 42%
pid 908, prio -15, out for 108 ms, ran for 20 ms, load 15%
pid 908, prio -15, out for 44 ms, ran for 20 ms, load 31%
pid 908, prio -15, out for 34 ms, ran for 110 ms, load 76%
pid 908, prio -15, out for 132 ms, ran for 21 ms, load 14%
pid 908, prio -15, out for 42 ms, ran for 39 ms, load 48%
pid 908, prio -15, out for 57 ms, ran for 124 ms, load 68%
pid 908, prio -15, out for 44 ms, ran for 17 ms, load 28%

It looks like the larger the granularity, the more unpredictable it gets,
which probably means that this unpredictability exists even at smaller
granularity but is only exposed with larger ones.
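
For reference, this is roughly what such a chew-style probe boils down to (a
simplified sketch, not the original chew.c): spin on the clock and treat any
jump bigger than a threshold as time spent scheduled out:

#include <stdio.h>
#include <sys/time.h>

static unsigned long long now_usec(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000000ULL + tv.tv_usec;
}

int main(void)
{
	unsigned long long start, last, t;
	const unsigned long long thresh = 1000;		/* 1 msec */

	start = last = now_usec();
	for (;;) {
		t = now_usec();
		if (t - last > thresh) {
			/* we were scheduled out for (t - last),
			   and ran for (last - start) before that */
			printf("out for %llu ms, ran for %llu ms, load %llu%%\n",
				(t - last) / 1000, (last - start) / 1000,
				(last - start) * 100 / (t - start));
			start = t;
		}
		last = t;
	}
	return 0;
}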


Thanks!

--
Al


2007-08-12 04:18:01

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> That's because granularity increases when decreasing nice, and results
> in larger timeslices, which affects smoothness negatively. chew.c
> easily shows this problem with 2 background cpu-hogs at the same
> nice-level.
>
> pid 908, prio 0, out for 8 ms, ran for 4 ms, load 37%
> pid 908, prio 0, out for 8 ms, ran for 4 ms, load 37%
> pid 908, prio 0, out for 8 ms, ran for 2 ms, load 26%
> pid 908, prio 0, out for 8 ms, ran for 4 ms, load 38%
> pid 908, prio 0, out for 2 ms, ran for 1 ms, load 47%
>
> pid 908, prio -5, out for 23 ms, ran for 3 ms, load 14%
> pid 908, prio -5, out for 17 ms, ran for 9 ms, load 35%

yeah. Incidentally, i refined this last week and those nice-level
granularity changes went into the upstream scheduler code a few days
ago:

commit 7cff8cf61cac15fa29a1ca802826d2bcbca66152
Author: Ingo Molnar <[email protected]>
Date: Thu Aug 9 11:16:52 2007 +0200

sched: refine negative nice level granularity

refine the granularity of negative nice level tasks: let them
reschedule more often to offset the effect of them consuming
their wait_runtime proportionately slower. (This makes nice-0
task scheduling smoother in the presence of negatively
reniced tasks.)

Signed-off-by: Ingo Molnar <[email protected]>

so could you please re-check chew jitter behavior with the latest
kernel? (i've attached the standalone patch below, it will apply cleanly
to rc2 too.)

when testing this, you might also want to try chew-max:

http://redhat.com/~mingo/cfs-scheduler/tools/chew-max.c

i added a few trivial enhancements to chew.c: it tracks the maximum
latency, latency fluctuations (noise of scheduling) and allows it to be
run for a fixed amount of time.

NOTE: if you run chew from any indirect terminal (xterm, ssh, etc.) it's
best to capture/report chew numbers like this:

./chew-max 60 > chew.log

otherwise the indirect scheduling activities of the chew printout will
disturb the numbers.

> It looks like the larger the granularity, the more unpredictable it
> gets, which probably means that this unpredictability exists even at
> smaller granularity but is only exposed with larger ones.

this should only affect non-default nice levels. Note that 99.9%+ of all
userspace Linux CPU time is spent on default nice level 0, and that is
what controls the design. So the approach was always to first get nice-0
right, and then to adjust the non-default nice level behavior too,
carefully, without hurting nice-0 - to refine all the workloads where
people (have to) use positive or negative nice levels. In any case,
please keep re-testing this so that we can adjust it.

Ingo

--------------------->
commit 7cff8cf61cac15fa29a1ca802826d2bcbca66152
Author: Ingo Molnar <[email protected]>
Date: Thu Aug 9 11:16:52 2007 +0200

sched: refine negative nice level granularity

refine the granularity of negative nice level tasks: let them
reschedule more often to offset the effect of them consuming
their wait_runtime proportionately slower. (This makes nice-0
task scheduling smoother in the presence of negatively
reniced tasks.)

Signed-off-by: Ingo Molnar <[email protected]>

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 7a632c5..e91db32 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -222,21 +222,25 @@ niced_granularity(struct sched_entity *curr, unsigned long granularity)
{
u64 tmp;

+ if (likely(curr->load.weight == NICE_0_LOAD))
+ return granularity;
/*
- * Negative nice levels get the same granularity as nice-0:
+ * Positive nice levels get the same granularity as nice-0:
*/
- if (likely(curr->load.weight >= NICE_0_LOAD))
- return granularity;
+ if (likely(curr->load.weight < NICE_0_LOAD)) {
+ tmp = curr->load.weight * (u64)granularity;
+ return (long) (tmp >> NICE_0_SHIFT);
+ }
/*
- * Positive nice level tasks get linearly finer
+ * Negative nice level tasks get linearly finer
* granularity:
*/
- tmp = curr->load.weight * (u64)granularity;
+ tmp = curr->load.inv_weight * (u64)granularity;

/*
* It will always fit into 'long':
*/
- return (long) (tmp >> NICE_0_SHIFT);
+ return (long) (tmp >> WMULT_SHIFT);
}

static inline void

2007-08-12 15:30:17

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > That's because granularity increases when decreasing nice, and results
> > in larger timeslices, which affects smoothness negatively. chew.c
> > easily shows this problem with 2 background cpu-hogs at the same
> > nice-level.
> >
> > pid 908, prio 0, out for 8 ms, ran for 4 ms, load 37%
> > pid 908, prio 0, out for 8 ms, ran for 4 ms, load 37%
> > pid 908, prio 0, out for 8 ms, ran for 2 ms, load 26%
> > pid 908, prio 0, out for 8 ms, ran for 4 ms, load 38%
> > pid 908, prio 0, out for 2 ms, ran for 1 ms, load 47%
> >
> > pid 908, prio -5, out for 23 ms, ran for 3 ms, load 14%
> > pid 908, prio -5, out for 17 ms, ran for 9 ms, load 35%
>
> yeah. Incidentally, i refined this last week and those nice-level
> granularity changes went into the upstream scheduler code a few days
> ago:
>
> commit 7cff8cf61cac15fa29a1ca802826d2bcbca66152
> Author: Ingo Molnar <[email protected]>
> Date: Thu Aug 9 11:16:52 2007 +0200
>
> sched: refine negative nice level granularity
>
> refine the granularity of negative nice level tasks: let them
> reschedule more often to offset the effect of them consuming
> their wait_runtime proportionately slower. (This makes nice-0
> task scheduling smoother in the presence of negatively
> reniced tasks.)
>
> Signed-off-by: Ingo Molnar <[email protected]>
>
> so could you please re-check chew jitter behavior with the latest
> kernel? (i've attached the standalone patch below, it will apply cleanly
> to rc2 too.)

That fixes it, but by reducing granularity ctx is up 4-fold.

Mind you, it does have an enormous effect on responsiveness, as negative nice
with small granularity can't hijack the system any more.

> when testing this, you might also want to try chew-max:
>
> http://redhat.com/~mingo/cfs-scheduler/tools/chew-max.c
>
> i added a few trivial enhancements to chew.c: it tracks the maximum
> latency, latency fluctuations (noise of scheduling) and allows it to be
> run for a fixed amount of time.

Looks great. Thanks!

> NOTE: if you run chew from any indirect terminal (xterm, ssh, etc.) it's
> best to capture/report chew numbers like this:
>
> ./chew-max 60 > chew.log
>
> otherwise the indirect scheduling activities of the chew printout will
> disturb the numbers.

Correct; that's why I always boot into /bin/sh to get a clean run. But
redirecting output to a file is also a good idea, provided that this file
lives on something like tmpfs, otherwise you'll get flush-out jitter.

> > It looks like the larger the granularity, the more unpredictable it
> > gets, which probably means that this unpredictability exists even at
> > smaller granularity but is only exposed with larger ones.
>
> this should only affect non-default nice levels. Note that 99.9%+ of all
> userspace Linux CPU time is spent on default nice level 0, and that is
> what controls the design. So the approach was always to first get nice-0
> right, and then to adjust the non-default nice level behavior too,
> carefully, without hurting nice-0 - to refine all the workloads where
> people (have to) use positive or negative nice levels. In any case,
> please keep re-testing this so that we can adjust it.

The thing is, this unpredictability seems to exist even at nice level 0, but
the smaller granularity covers it all up. It occasionally exhibits itself
as hick-ups during transient heavy workload flux. But it's not easily
reproducible.


Thanks!

--
Al

2007-08-12 15:53:31

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > so could you please re-check chew jitter behavior with the latest
> > kernel? (i've attached the standalone patch below, it will apply
> > cleanly to rc2 too.)
>
> That fixes it, but by reducing granularity ctx is up 4-fold.

ok, great! (the context-switch rate is obviously up.)

> Mind you, it does have an enormous effect on responsiveness, as
> negative nice with small granularity can't hijack the system any more.

ok. i'm glad you like the result :-) This makes reniced X (or any
reniced app) more usable.

> The thing is, this unpredictability seems to exist even at nice level
> 0, but the smaller granularity covers it all up. It occasionally
> exhibits itself as hick-ups during transient heavy workload flux. But
> it's not easily reproducible.

In general, "hickups" can be due to many, many reasons. If a task got
indeed delayed by scheduling jitter that is provable, even if the
behavior is hard to reproduce, by enabling CONFIG_SCHED_DEBUG=y and
CONFIG_SCHEDSTATS=y in your kernel. First clear all the stats:

for N in /proc/*/task/*/sched; do echo 0 > $N; done

then wait for the 'hickup' to happen, and once it happens capture the
system state (after the hickup) via this script:

http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh

and tell me which specific task exhibited that 'hickup' and send me the
debug output. Also, could you try the patch below as well? Thanks,

Ingo

-------------------------------->
Subject: sched: fix sleeper bonus
From: Ingo Molnar <[email protected]>

Peter Zijlstra noticed that the sleeper bonus deduction code
was not properly rate-limited: a task that scheduled more
frequently would get a disproportionately large deduction.
So limit the deduction to delta_exec.

Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched_fair.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -75,7 +75,7 @@ enum {

unsigned int sysctl_sched_features __read_mostly =
SCHED_FEAT_FAIR_SLEEPERS *1 |
- SCHED_FEAT_SLEEPER_AVG *1 |
+ SCHED_FEAT_SLEEPER_AVG *0 |
SCHED_FEAT_SLEEPER_LOAD_AVG *1 |
SCHED_FEAT_PRECISE_CPU_LOAD *1 |
SCHED_FEAT_START_DEBIT *1 |
@@ -304,11 +304,9 @@ __update_curr(struct cfs_rq *cfs_rq, str
delta_mine = calc_delta_mine(delta_exec, curr->load.weight, lw);

if (cfs_rq->sleeper_bonus > sysctl_sched_granularity) {
- delta = calc_delta_mine(cfs_rq->sleeper_bonus,
- curr->load.weight, lw);
- if (unlikely(delta > cfs_rq->sleeper_bonus))
- delta = cfs_rq->sleeper_bonus;
-
+ delta = min(cfs_rq->sleeper_bonus, (u64)delta_exec);
+ delta = calc_delta_mine(delta, curr->load.weight, lw);
+ delta = min((u64)delta, cfs_rq->sleeper_bonus);
cfs_rq->sleeper_bonus -= delta;
delta_mine -= delta;
}
@@ -521,6 +519,8 @@ static void __enqueue_sleeper(struct cfs
* Track the amount of bonus we've given to sleepers:
*/
cfs_rq->sleeper_bonus += delta_fair;
+ if (unlikely(cfs_rq->sleeper_bonus > sysctl_sched_runtime_limit))
+ cfs_rq->sleeper_bonus = sysctl_sched_runtime_limit;

schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
}

2007-08-12 19:43:47

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > The thing is, this unpredictability seems to exist even at nice level
> > 0, but the smaller granularity covers it all up. It occasionally
> > exhibits itself as hick-ups during transient heavy workload flux. But
> > it's not easily reproducible.
>
> In general, "hickups" can be due to many, many reasons. If a task got
> indeed delayed by scheduling jitter that is provable, even if the
> behavior is hard to reproduce, by enabling CONFIG_SCHED_DEBUG=y and
> CONFIG_SCHEDSTATS=y in your kernel. First clear all the stats:
>
> for N in /proc/*/task/*/sched; do echo 0 > $N; done
>
> then wait for the 'hickup' to happen, and once it happens capture the
> system state (after the hickup) via this script:
>
> http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh
>
> and tell me which specific task exhibited that 'hickup' and send me the
> debug output.

Ok.

> Also, could you try the patch below as well? Thanks,

Looks ok, but I'm not sure which workload this is supposed to improve.

There is one workload that still isn't performing well; it's a web-server
workload that spawns 1K+ client procs. It can be emulated by using this:

for i in `seq 1 to 3333`; do ping 10.1 -A > /dev/null & done

The problem is that consecutive runs don't give consistent results and
sometimes stalls. You may want to try that.


Thanks!

--
Al

2007-08-21 10:59:19

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> There is one workload that still isn't performing well; it's a
> web-server workload that spawns 1K+ client procs. It can be emulated
> by using this:
>
> for i in `seq 1 to 3333`; do ping 10.1 -A > /dev/null & done

on bash i did this as:

for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done

and this quickly creates a monster-runqueue with tons of ping tasks
pending. (i replaced 10.1 with the IP of another box on the same LAN as
the testbox) Is this what should happen?

> The problem is that consecutive runs don't give consistent results and
> sometimes stalls. You may want to try that.

well, there's a natural saturation point after a few hundred tasks
(depending on your CPU's speed), at which point there's no idle time
left. From that point on things get slower progressively (and the
ability of the shell to start new ping tasks is impacted as well), but
that's expected on an overloaded system, isnt it?

Ingo

2007-08-21 22:29:06

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > There is one workload that still isn't performing well; it's a
> > web-server workload that spawns 1K+ client procs. It can be emulated
> > by using this:
> >
> > for i in `seq 1 to 3333`; do ping 10.1 -A > /dev/null & done
>
> on bash i did this as:
>
> for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done
>
> and this quickly creates a monster-runqueue with tons of ping tasks
> pending. (i replaced 10.1 with the IP of another box on the same LAN as
> the testbox) Is this what should happen?

Yes, sometimes they start pending and sometimes they run immediately.

> > The problem is that consecutive runs don't give consistent results and
> > sometimes stalls. You may want to try that.
>
> well, there's a natural saturation point after a few hundred tasks
> (depending on your CPU's speed), at which point there's no idle time
> left. From that point on things get slower progressively (and the
> ability of the shell to start new ping tasks is impacted as well), but
> that's expected on an overloaded system, isnt it?

Of course, things should get slower with higher load, but it should be
consistent without stalls.

To see this problem, make sure you boot into /bin/sh with the normal VGA
console (ie. not fb-console). Then try each loop a few times to show
different behaviour; loops like:

# for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done

# for ((i=0; i<3333; i++)); do nice -99 ping 10.1 -A > /dev/null & done

# { for ((i=0; i<3333; i++)); do
ping 10.1 -A > /dev/null &
done } > /dev/null 2>&1

Especially the last one sometimes causes a complete console lock-up, while
the other two sometimes stall then surge periodically.

BTW, I am also wondering how one might test threading behaviour wrt
startup and sync-on-exit with the parent thread. This may not show any
problems with a small number of threads, but how does it scale with 1K+?
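
Something along these lines is what I mean (a rough hypothetical sketch, not
an existing tool) - time how long it takes to create and then join 1K+
threads that each do a little work, and see whether the numbers stay
consistent as the count grows:

/* build: gcc -O2 -o thread-startup thread-startup.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static void *worker(void *arg)
{
	volatile unsigned long i;

	for (i = 0; i < (unsigned long)arg; i++)
		;				/* tiny amount of work */
	return NULL;
}

int main(int argc, char **argv)
{
	int i, nr = argc > 1 ? atoi(argv[1]) : 1024;
	pthread_t *tid = malloc(nr * sizeof(*tid));
	struct timeval t0, t1;

	gettimeofday(&t0, NULL);
	for (i = 0; i < nr; i++)		/* startup */
		pthread_create(&tid[i], NULL, worker, (void *)100000UL);
	for (i = 0; i < nr; i++)		/* sync-on-exit with the parent */
		pthread_join(tid[i], NULL);
	gettimeofday(&t1, NULL);

	printf("%d threads created+joined in %ld ms\n", nr,
		(t1.tv_sec - t0.tv_sec) * 1000 +
		(t1.tv_usec - t0.tv_usec) / 1000);
	return 0;
}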


Thanks!

--
Al

2007-08-24 13:45:28

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > > The problem is that consecutive runs don't give consistent results
> > > and sometimes stalls. You may want to try that.
> >
> > well, there's a natural saturation point after a few hundred tasks
> > (depending on your CPU's speed), at which point there's no idle time
> > left. From that point on things get slower progressively (and the
> > ability of the shell to start new ping tasks is impacted as well),
> > but that's expected on an overloaded system, isnt it?
>
> Of course, things should get slower with higher load, but it should be
> consistent without stalls.
>
> To see this problem, make sure you boot into /bin/sh with the normal
> VGA console (ie. not fb-console). Then try each loop a few times to
> show different behaviour; loops like:
>
> # for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done
>
> # for ((i=0; i<3333; i++)); do nice -99 ping 10.1 -A > /dev/null & done
>
> # { for ((i=0; i<3333; i++)); do
> ping 10.1 -A > /dev/null &
> done } > /dev/null 2>&1
>
> Especially the last one sometimes causes a complete console lock-up,
> while the other two sometimes stall then surge periodically.

ok. I think i might finally have found the bug causing this. Could you
try the fix below, does your webserver thread-startup test work any
better?

Ingo

--------------------------->
Subject: sched: fix startup penalty calculation
From: Ingo Molnar <[email protected]>

fix task startup penalty miscalculation: sysctl_sched_granularity is
unsigned int and wait_runtime is long so we first have to convert it
to long before turning it negative ...

Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched_fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -1048,7 +1048,7 @@ static void task_new_fair(struct rq *rq,
* -granularity/2, so initialize the task with that:
*/
if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
- p->se.wait_runtime = -(sysctl_sched_granularity / 2);
+ p->se.wait_runtime = -((long)sysctl_sched_granularity / 2);

__enqueue_entity(cfs_rq, se);
}
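
For illustration only (plain userspace C, not part of the patch), the
signedness trap described above boils down to this:

#include <stdio.h>

int main(void)
{
	unsigned int granularity = 10000000;	/* say, 10 msecs in ns */
	long buggy = -(granularity / 2);	/* unsigned negation wraps */
	long fixed = -((long)granularity / 2);	/* what the patch does */

	/* on a 64-bit box this prints 4289967296 and -5000000 */
	printf("buggy: %ld, fixed: %ld\n", buggy, fixed);
	return 0;
}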

2007-08-25 22:29:43

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > > > The problem is that consecutive runs don't give consistent results
> > > > and sometimes stalls. You may want to try that.
> > >
> > > well, there's a natural saturation point after a few hundred tasks
> > > (depending on your CPU's speed), at which point there's no idle time
> > > left. From that point on things get slower progressively (and the
> > > ability of the shell to start new ping tasks is impacted as well),
> > > but that's expected on an overloaded system, isnt it?
> >
> > Of course, things should get slower with higher load, but it should be
> > consistent without stalls.
> >
> > To see this problem, make sure you boot into /bin/sh with the normal
> > VGA console (ie. not fb-console). Then try each loop a few times to
> > show different behaviour; loops like:
> >
> > # for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done
> >
> > # for ((i=0; i<3333; i++)); do nice -99 ping 10.1 -A > /dev/null & done
> >
> > # { for ((i=0; i<3333; i++)); do
> > ping 10.1 -A > /dev/null &
> > done } > /dev/null 2>&1
> >
> > Especially the last one sometimes causes a complete console lock-up,
> > while the other two sometimes stall then surge periodically.
>
> ok. I think i might finally have found the bug causing this. Could you
> try the fix below, does your webserver thread-startup test work any
> better?

It seems to help somewhat, but the problem is still visible. Even v20.3 on
2.6.22.5 didn't help.

It does look related to ia-boosting, so I turned off __update_curr like Roman
mentioned, which had an enormous smoothing effect, but then nice levels
completely break down and lockup the system.

There is another way to show the problem visually under X (vesa-driver):
start 3 gears simultaneously; after laying them out side by side, they need
some settling time before smoothing out. Without __update_curr it's
absolutely smooth from the start.


Thanks!

--
Al

2007-08-25 23:15:48

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > ok. I think i might finally have found the bug causing this. Could
> > you try the fix below, does your webserver thread-startup test work
> > any better?
>
> It seems to help somewhat, but the problem is still visible. Even
> v20.3 on 2.6.22.5 didn't help.
>
> It does look related to ia-boosting, so I turned off __update_curr
> like Roman mentioned, which had an enormous smoothing effect, but then
> nice levels completely break down and lockup the system.

you can turn sleeper-fairness off via:

echo 28 > /proc/sys/kernel/sched_features

another thing to try would be:

echo 12 > /proc/sys/kernel/sched_features

(that's the new-task penalty turned off.)
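
(For reference - assuming the feature bits are still laid out as in the
sysctl_sched_features initializer quoted earlier in this thread, those
numbers decode as:)

enum {
	SCHED_FEAT_FAIR_SLEEPERS	= 1,
	SCHED_FEAT_SLEEPER_AVG		= 2,
	SCHED_FEAT_SLEEPER_LOAD_AVG	= 4,
	SCHED_FEAT_PRECISE_CPU_LOAD	= 8,
	SCHED_FEAT_START_DEBIT		= 16,
	SCHED_FEAT_SKIP_INITIAL		= 32,
};

/* 28 = 4+8+16: the sleeper-fairness bits cleared		*/
/* 12 = 4+8:    start-debit (the new-task penalty) cleared too	*/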

Another thing to try would be to edit this:

if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
p->se.wait_runtime = -(sched_granularity(cfs_rq) / 2);

to:

if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
p->se.wait_runtime = -(sched_granularity(cfs_rq));

and could you also check 20.4 on 2.6.22.5 perhaps, or very latest -git?
(Peter has experienced smaller spikes with that.)

Ingo

2007-08-26 16:28:19

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > > ok. I think i might finally have found the bug causing this. Could
> > > you try the fix below, does your webserver thread-startup test work
> > > any better?
> >
> > It seems to help somewhat, but the problem is still visible. Even
> > v20.3 on 2.6.22.5 didn't help.
> >
> > It does look related to ia-boosting, so I turned off __update_curr
> > like Roman mentioned, which had an enormous smoothing effect, but then
> > nice levels completely break down and lockup the system.
>
> you can turn sleeper-fairness off via:
>
> echo 28 > /proc/sys/kernel/sched_features
>
> another thing to try would be:
>
> echo 12 > /proc/sys/kernel/sched_features
>
> (that's the new-task penalty turned off.)
>
> Another thing to try would be to edit this:
>
> if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
> p->se.wait_runtime = -(sched_granularity(cfs_rq) / 2);
>
> to:
>
> if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
> p->se.wait_runtime = -(sched_granularity(cfs_rq));
>
> and could you also check 20.4 on 2.6.22.5 perhaps, or very latest -git?
> (Peter has experienced smaller spikes with that.)

Ok, I tried all your suggestions, but nothing works as smooth as removing
__update_curr.

Does the problem show on your machine with the 3x gears under X-vesa test?


Thanks!

--
Al

2007-08-26 16:39:30

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > and could you also check 20.4 on 2.6.22.5 perhaps, or very latest
> > -git? (Peter has experienced smaller spikes with that.)
>
> Ok, I tried all your suggestions, but nothing works as smooth as
> removing __update_curr.

could you send the exact patch that shows what you did? And could you
also please describe exactly which aspect of the workload you call
'smooth'. Could it be made quantitative somehow?

Ingo

2007-08-27 04:08:09

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > > and could you also check 20.4 on 2.6.22.5 perhaps, or very latest
> > > -git? (Peter has experienced smaller spikes with that.)
> >
> > Ok, I tried all your suggestions, but nothing works as smooth as
> > removing __update_curr.
>
> could you send the exact patch that shows what you did?

On 2.6.22.5-v20.3 (not v20.4):

340- curr->delta_exec += delta_exec;
341-
342- if (unlikely(curr->delta_exec > sysctl_sched_stat_granularity)) {
343:// __update_curr(cfs_rq, curr);
344- curr->delta_exec = 0;
345- }
346- curr->exec_start = rq_of(cfs_rq)->clock;

> And could you
> also please describe exactly which aspect of the workload you call
> 'smooth'. Could it be made quantitative somehow?

The 3x gears test shows the startup problem in a really noticeable way. With
v20.4 they start up surging and stalling periodically for about 10sec, then
they are smooth. With v20.3 + the above patch they start up completely smooth.


Thanks!

--
Al

2007-08-27 10:53:41

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > could you send the exact patch that shows what you did?
>
> On 2.6.22.5-v20.3 (not v20.4):
>
> 340- curr->delta_exec += delta_exec;
> 341-
> 342- if (unlikely(curr->delta_exec > sysctl_sched_stat_granularity)) {
> 343:// __update_curr(cfs_rq, curr);
> 344- curr->delta_exec = 0;
> 345- }
> 346- curr->exec_start = rq_of(cfs_rq)->clock;

ouch - this produces a really broken scheduler - with this we dont do
any run-time accounting (!).

Could you try the patch below instead, does this make 3x glxgears smooth
again? (if yes, could you send me your Signed-off-by line as well.)

Ingo

------------------------>
Subject: sched: make the scheduler converge to the ideal latency
From: Ingo Molnar <[email protected]>

de-HZ-ification of the granularity defaults unearthed a pre-existing
property of CFS: while it correctly converges to the granularity goal,
it does not prevent run-time fluctuations in the range of [-gran ...
+gran].

With the increase of the granularity due to the removal of HZ
dependencies, this becomes visible in chew-max output (with 5 tasks
running):

out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
out: 27 . 27. 32 | flu: 0 . 0 | ran: 17 . 13 | per: 44 . 40
out: 27 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 36 . 40
out: 29 . 27. 32 | flu: 2 . 0 | ran: 17 . 13 | per: 46 . 40
out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
out: 29 . 27. 32 | flu: 0 . 0 | ran: 18 . 13 | per: 47 . 40
out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40

average slice is the ideal 13 msecs and the period is picture-perfect 40
msecs. But the 'ran' field fluctuates around 13.33 msecs and there's no
mechanism in CFS to keep that from happening: it's a perfectly valid
solution that CFS finds.

the solution is to add a granularity/preemption rule that knows about
the "target latency", which makes tasks that run longer than the ideal
latency run a bit less. The simplest approach is to simply decrease the
preemption granularity when a task overruns its ideal latency. For this
we have to track how much the task executed since its last preemption.

( this adds a new field to task_struct, but we can eliminate that
overhead in 2.6.24 by putting all the scheduler timestamps into an
anonymous union. )

with this change in place, chew-max output is fluctuation-less all
around:

out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 1 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 1 | ran: 13 . 13 | per: 41 . 40

this patch has no impact on any fastpath or on any globally observable
scheduling property. (unless you have sharp enough eyes to see
millisecond-level ruckles in glxgears smoothness :-)

Also, with this mechanism in place the formula for adaptive granularity
can be simplified down to the obvious "granularity = latency/nr_running"
calculation.
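
( as a rough worked example, with purely illustrative numbers: with
  sysctl_sched_latency at 20 msecs and 5 runnable tasks, the simplified
  rule gives gran = 20/5 = 4 msecs, where the old formula gave
  20/5 - 20/5/5 = 3.2 msecs; and once the current task has run more than
  those 4 msecs since its last preemption point, the granularity is
  treated as 0, so it gets preempted at the next opportunity. )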

Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
---
include/linux/sched.h | 1 +
kernel/sched_fair.c | 43 ++++++++++++++-----------------------------
2 files changed, 15 insertions(+), 29 deletions(-)

Index: linux/include/linux/sched.h
===================================================================
--- linux.orig/include/linux/sched.h
+++ linux/include/linux/sched.h
@@ -904,6 +904,7 @@ struct sched_entity {

u64 exec_start;
u64 sum_exec_runtime;
+ u64 prev_sum_exec_runtime;
u64 wait_start_fair;
u64 sleep_start_fair;

Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -225,30 +225,6 @@ static struct sched_entity *__pick_next_
* Calculate the preemption granularity needed to schedule every
* runnable task once per sysctl_sched_latency amount of time.
* (down to a sensible low limit on granularity)
- *
- * For example, if there are 2 tasks running and latency is 10 msecs,
- * we switch tasks every 5 msecs. If we have 3 tasks running, we have
- * to switch tasks every 3.33 msecs to get a 10 msecs observed latency
- * for each task. We do finer and finer scheduling up to until we
- * reach the minimum granularity value.
- *
- * To achieve this we use the following dynamic-granularity rule:
- *
- * gran = lat/nr - lat/nr/nr
- *
- * This comes out of the following equations:
- *
- * kA1 + gran = kB1
- * kB2 + gran = kA2
- * kA2 = kA1
- * kB2 = kB1 - d + d/nr
- * lat = d * nr
- *
- * Where 'k' is key, 'A' is task A (waiting), 'B' is task B (running),
- * '1' is start of time, '2' is end of time, 'd' is delay between
- * 1 and 2 (during which task B was running), 'nr' is number of tasks
- * running, 'lat' is the the period of each task. ('lat' is the
- * sched_latency that we aim for.)
*/
static long
sched_granularity(struct cfs_rq *cfs_rq)
@@ -257,7 +233,7 @@ sched_granularity(struct cfs_rq *cfs_rq)
unsigned int nr = cfs_rq->nr_running;

if (nr > 1) {
- gran = gran/nr - gran/nr/nr;
+ gran = gran/nr;
gran = max(gran, sysctl_sched_min_granularity);
}

@@ -668,7 +644,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
/*
* Preempt the current task with a newly woken task if needed:
*/
-static void
+static int
__check_preempt_curr_fair(struct cfs_rq *cfs_rq, struct sched_entity *se,
struct sched_entity *curr, unsigned long granularity)
{
@@ -679,8 +655,11 @@ __check_preempt_curr_fair(struct cfs_rq
* preempt the current task unless the best task has
* a larger than sched_granularity fairness advantage:
*/
- if (__delta > niced_granularity(curr, granularity))
+ if (__delta > niced_granularity(curr, granularity)) {
resched_task(rq_of(cfs_rq)->curr);
+ return 1;
+ }
+ return 0;
}

static inline void
@@ -725,6 +704,7 @@ static void put_prev_entity(struct cfs_r

static void entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
+ unsigned long gran, delta_exec;
struct sched_entity *next;

/*
@@ -741,8 +721,13 @@ static void entity_tick(struct cfs_rq *c
if (next == curr)
return;

- __check_preempt_curr_fair(cfs_rq, next, curr,
- sched_granularity(cfs_rq));
+ gran = sched_granularity(cfs_rq);
+ delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
+ if (delta_exec > gran)
+ gran = 0;
+
+ if (__check_preempt_curr_fair(cfs_rq, next, curr, gran))
+ curr->prev_sum_exec_runtime = curr->sum_exec_runtime;
}

/**************************************************

2007-08-27 14:47:56

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > > could you send the exact patch that shows what you did?
> >
> > On 2.6.22.5-v20.3 (not v20.4):
> >
> > 340- curr->delta_exec += delta_exec;
> > 341-
> > 342- if (unlikely(curr->delta_exec > sysctl_sched_stat_granularity))
> > { 343:// __update_curr(cfs_rq, curr);
> > 344- curr->delta_exec = 0;
> > 345- }
> > 346- curr->exec_start = rq_of(cfs_rq)->clock;
>
> ouch - this produces a really broken scheduler - with this we dont do
> any run-time accounting (!).

Of course it's broken, and it's not meant as a fix, but this change allows
you to see the amount of overhead as well as any miscalculations
__update_curr incurs.

In terms of overhead, __update_curr incurs a ~3x slowdown, and in terms of
run-time accounting it exhibits a ~10sec task-startup miscalculation.

> Could you try the patch below instead, does this make 3x glxgears smooth
> again? (if yes, could you send me your Signed-off-by line as well.)

The task-startup stalling is still there for ~10sec.

Can you see the problem on your machine?


Thanks!

--
Al

2007-08-27 20:41:46

by Ingo Molnar

Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > Could you try the patch below instead, does this make 3x glxgears
> > smooth again? (if yes, could you send me your Signed-off-by line as
> > well.)
>
> The task-startup stalling is still there for ~10sec.
>
> Can you see the problem on your machine?

nope (i have no framebuffer setup) - but i can see some chew-max
latencies that occur when new tasks are started up. I _think_ it's
probably the same problem as yours.

could you try the patch below (which is the combo patch of my current
queue), ontop of head 50c46637aa? This makes chew-max behave better
during task mass-startup here.

Ingo

----------------->
Index: linux/include/linux/sched.h
===================================================================
--- linux.orig/include/linux/sched.h
+++ linux/include/linux/sched.h
@@ -904,6 +904,7 @@ struct sched_entity {

u64 exec_start;
u64 sum_exec_runtime;
+ u64 prev_sum_exec_runtime;
u64 wait_start_fair;
u64 sleep_start_fair;

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -1587,6 +1587,7 @@ static void __sched_fork(struct task_str
p->se.wait_start_fair = 0;
p->se.exec_start = 0;
p->se.sum_exec_runtime = 0;
+ p->se.prev_sum_exec_runtime = 0;
p->se.delta_exec = 0;
p->se.delta_fair_run = 0;
p->se.delta_fair_sleep = 0;
Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -82,12 +82,12 @@ enum {
};

unsigned int sysctl_sched_features __read_mostly =
- SCHED_FEAT_FAIR_SLEEPERS *1 |
+ SCHED_FEAT_FAIR_SLEEPERS *0 |
SCHED_FEAT_SLEEPER_AVG *0 |
SCHED_FEAT_SLEEPER_LOAD_AVG *1 |
SCHED_FEAT_PRECISE_CPU_LOAD *1 |
- SCHED_FEAT_START_DEBIT *1 |
- SCHED_FEAT_SKIP_INITIAL *0;
+ SCHED_FEAT_START_DEBIT *0 |
+ SCHED_FEAT_SKIP_INITIAL *1;

extern struct sched_class fair_sched_class;

@@ -225,39 +225,15 @@ static struct sched_entity *__pick_next_
* Calculate the preemption granularity needed to schedule every
* runnable task once per sysctl_sched_latency amount of time.
* (down to a sensible low limit on granularity)
- *
- * For example, if there are 2 tasks running and latency is 10 msecs,
- * we switch tasks every 5 msecs. If we have 3 tasks running, we have
- * to switch tasks every 3.33 msecs to get a 10 msecs observed latency
- * for each task. We do finer and finer scheduling up to until we
- * reach the minimum granularity value.
- *
- * To achieve this we use the following dynamic-granularity rule:
- *
- * gran = lat/nr - lat/nr/nr
- *
- * This comes out of the following equations:
- *
- * kA1 + gran = kB1
- * kB2 + gran = kA2
- * kA2 = kA1
- * kB2 = kB1 - d + d/nr
- * lat = d * nr
- *
- * Where 'k' is key, 'A' is task A (waiting), 'B' is task B (running),
- * '1' is start of time, '2' is end of time, 'd' is delay between
- * 1 and 2 (during which task B was running), 'nr' is number of tasks
- * running, 'lat' is the the period of each task. ('lat' is the
- * sched_latency that we aim for.)
*/
-static long
+static unsigned long
sched_granularity(struct cfs_rq *cfs_rq)
{
unsigned int gran = sysctl_sched_latency;
unsigned int nr = cfs_rq->nr_running;

if (nr > 1) {
- gran = gran/nr - gran/nr/nr;
+ gran = gran/nr;
gran = max(gran, sysctl_sched_min_granularity);
}

@@ -489,6 +465,9 @@ update_stats_wait_end(struct cfs_rq *cfs
{
unsigned long delta_fair;

+ if (unlikely(!se->wait_start_fair))
+ return;
+
delta_fair = (unsigned long)min((u64)(2*sysctl_sched_runtime_limit),
(u64)(cfs_rq->fair_clock - se->wait_start_fair));

@@ -668,7 +647,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
/*
* Preempt the current task with a newly woken task if needed:
*/
-static void
+static int
__check_preempt_curr_fair(struct cfs_rq *cfs_rq, struct sched_entity *se,
struct sched_entity *curr, unsigned long granularity)
{
@@ -679,8 +658,11 @@ __check_preempt_curr_fair(struct cfs_rq
* preempt the current task unless the best task has
* a larger than sched_granularity fairness advantage:
*/
- if (__delta > niced_granularity(curr, granularity))
+ if (__delta > niced_granularity(curr, granularity)) {
resched_task(rq_of(cfs_rq)->curr);
+ return 1;
+ }
+ return 0;
}

static inline void
@@ -725,6 +707,7 @@ static void put_prev_entity(struct cfs_r

static void entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
+ unsigned long gran, delta_exec;
struct sched_entity *next;

/*
@@ -741,8 +724,13 @@ static void entity_tick(struct cfs_rq *c
if (next == curr)
return;

- __check_preempt_curr_fair(cfs_rq, next, curr,
- sched_granularity(cfs_rq));
+ gran = sched_granularity(cfs_rq);
+ delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
+ if (delta_exec > gran)
+ gran = 0;
+
+ if (__check_preempt_curr_fair(cfs_rq, next, curr, gran))
+ curr->prev_sum_exec_runtime = curr->sum_exec_runtime;
}

/**************************************************
@@ -1080,29 +1068,27 @@ static void task_new_fair(struct rq *rq,

sched_info_queued(p);

+ update_curr(cfs_rq);
update_stats_enqueue(cfs_rq, se);
- /*
- * Child runs first: we let it run before the parent
- * until it reschedules once. We set up the key so that
- * it will preempt the parent:
- */
- p->se.fair_key = current->se.fair_key -
- niced_granularity(&rq->curr->se, sched_granularity(cfs_rq)) - 1;
+
/*
* The first wait is dominated by the child-runs-first logic,
* so do not credit it with that waiting time yet:
*/
if (sysctl_sched_features & SCHED_FEAT_SKIP_INITIAL)
- p->se.wait_start_fair = 0;
+ se->wait_start_fair = 0;

/*
* The statistical average of wait_runtime is about
* -granularity/2, so initialize the task with that:
*/
- if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
- p->se.wait_runtime = -(sched_granularity(cfs_rq) / 2);
+ if (sysctl_sched_features & SCHED_FEAT_START_DEBIT) {
+ se->wait_runtime = -(sched_granularity(cfs_rq)/2);
+ schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
+ }

__enqueue_entity(cfs_rq, se);
+ resched_task(current);
}

#ifdef CONFIG_FAIR_GROUP_SCHED

2007-08-28 04:39:17

by Al Boldi

Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > > Could you try the patch below instead, does this make 3x glxgears
> > > smooth again? (if yes, could you send me your Signed-off-by line as
> > > well.)
> >
> > The task-startup stalling is still there for ~10sec.
> >
> > Can you see the problem on your machine?
>
> nope (i have no framebuffer setup)

No need for framebuffer. All you need is X using the X.org vesa-driver.
Then start gears like this:

# gears & gears & gears &

Then lay them out side by side to see the periodic stallings for ~10sec.

> - but i can see some chew-max
> latencies that occur when new tasks are started up. I _think_ it's
> probably the same problem as yours.

chew-max is great, but it's so sensitive that it exposes every scheduling
glitch, which hides the startup glitch among all the other glitches it
reports. For example, it fluctuates all over the place using this:

# for ((i=0;i<9;i++)); do chew-max 60 > /dev/shm/chew$i.log & done

Also, chew-max locks up when disabling __update_curr, which means that the
workload of chew-max is different from either the ping-startup loop or the
gears. You really should try the gears test by any means, as the problem is
really pronounced there.

> could you try the patch below (which is the combo patch of my current
> queue), ontop of head 50c46637aa? This makes chew-max behave better
> during task mass-startup here.

Still no improvement.


Thanks!

--
Al

2007-08-28 05:06:15

by Linus Torvalds

Subject: Re: CFS review



On Tue, 28 Aug 2007, Al Boldi wrote:
>
> No need for framebuffer. All you need is X using the X.org vesa-driver.
> Then start gears like this:
>
> # gears & gears & gears &
>
> Then lay them out side by side to see the periodic stallings for ~10sec.

I don't think this is a good test.

Why?

If you're not using direct rendering, what you have is the X server doing
all the rendering, which in turn means that what you are testing is quite
possibly not so much about the *kernel* scheduling, but about *X-server*
scheduling!

I'm sure the kernel scheduler has an impact, but what's more likely to be
going on is that you're seeing effects that are indirect, and not
necessarily at all even "good".

For example, if the X server is the scheduling point, it's entirely
possible that it ends up showing effects that are more due to the queueing
of the X command stream than due to the scheduler - and that those
stalls are simply due to *that*.

One thing to try is to run the X connection in synchronous mode, which
minimizes queueing issues. I don't know if gears has a flag to turn on
synchronous X messaging, though. Many X programs take the "[+-]sync" flag
to turn on synchronous mode, iirc.

Linus

2007-08-28 05:24:42

by Al Boldi

Subject: Re: CFS review

Linus Torvalds wrote:
> On Tue, 28 Aug 2007, Al Boldi wrote:
> > No need for framebuffer. All you need is X using the X.org vesa-driver.
> > Then start gears like this:
> >
> > # gears & gears & gears &
> >
> > Then lay them out side by side to see the periodic stallings for ~10sec.
>
> I don't think this is a good test.
>
> Why?
>
> If you're not using direct rendering, what you have is the X server doing
> all the rendering, which in turn means that what you are testing is quite
> possibly not so much about the *kernel* scheduling, but about *X-server*
> scheduling!
>
> I'm sure the kernel scheduler has an impact, but what's more likely to be
> going on is that you're seeing effects that are indirect, and not
> necessarily at all even "good".
>
> For example, if the X server is the scheduling point, it's entirely
> possible that it ends up showing effects that are more due to the queueing
> of the X command stream than due to the scheduler - and that those
> stalls are simply due to *that*.
>
> One thing to try is to run the X connection in synchronous mode, which
> minimizes queueing issues. I don't know if gears has a flag to turn on
> synchronous X messaging, though. Many X programs take the "[+-]sync" flag
> to turn on synchronous mode, iirc.

I like your analysis, but how do you explain that these stalls vanish when
__update_curr is disabled?


Thanks!

--
Al

2007-08-28 07:28:33

by Mike Galbraith

Subject: Re: CFS review

On Tue, 2007-08-28 at 08:23 +0300, Al Boldi wrote:
> Linus Torvalds wrote:
> > On Tue, 28 Aug 2007, Al Boldi wrote:
> > > No need for framebuffer. All you need is X using the X.org vesa-driver.
> > > Then start gears like this:
> > >
> > > # gears & gears & gears &
> > >
> > > Then lay them out side by side to see the periodic stallings for ~10sec.
> >
> > I don't think this is a good test.
> >
> > Why?
> >
> > If you're not using direct rendering, what you have is the X server doing
> > all the rendering, which in turn means that what you are testing is quite
> > possibly not so much about the *kernel* scheduling, but about *X-server*
> > scheduling!
> >
> > I'm sure the kernel scheduler has an impact, but what's more likely to be
> > going on is that you're seeing effects that are indirect, and not
> > necessarily at all even "good".
> >
> > For example, if the X server is the scheduling point, it's entirely
> > possible that it ends up showing effects that are more due to the queueing
> > of the X command stream than due to the scheduler - and that those
> > stalls are simply due to *that*.
> >
> > One thing to try is to run the X connection in synchronous mode, which
> > minimizes queueing issues. I don't know if gears has a flag to turn on
> > synchronous X messaging, though. Many X programs take the "[+-]sync" flag
> > to turn on synchronous mode, iirc.
>
> I like your analysis, but how do you explain that these stalls vanish when
> __update_curr is disabled?

When you disable __update_curr(), you're utterly destroying the
scheduler. There may well be a scheduler connection, but disabling
__update_curr() doesn't tell you anything meaningful. Basically, you're
letting all tasks run uninterrupted for just as long as they please
(which is why busy loops lock your box solid as a rock). I'd suggest
gathering some sched_debug stats or something... shoot, _anything_ but
what you did :)

-Mike

2007-08-28 07:36:44

by Ingo Molnar

Subject: Re: CFS review


* Mike Galbraith <[email protected]> wrote:

> > I like your analysis, but how do you explain that these stalls
> > vanish when __update_curr is disabled?
>
> When you disable __update_curr(), you're utterly destroying the
> scheduler. There may well be a scheduler connection, but disabling
> __update_curr() doesn't tell you anything meaningful. Basically, you're
> letting all tasks run uninterrupted for just as long as they please
> (which is why busy loops lock your box solid as a rock). I'd suggest
> gathering some sched_debug stats or something... [...]

the output of the following would be nice:

http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh

captured while the gears are running.

Ingo

2007-08-28 07:43:52

by Xavier Bestel

Subject: Re: CFS review

On Tue, 2007-08-28 at 07:37 +0300, Al Boldi wrote:
> start gears like this:
>
> # gears & gears & gears &
>
> Then lay them out side by side to see the periodic stallings for
> ~10sec.

Are you sure they are stalled ? What you may have is simple gears
running at a multiple of your screen refresh rate, so they only appear
stalled.

Plus, as said Linus, you're not really testing the kernel scheduler.
gears is really bad benchmark, it should die.

Xav


2007-08-28 08:02:49

by Ingo Molnar

Subject: Re: CFS review


* Xavier Bestel <[email protected]> wrote:

> Are you sure they are stalled ? What you may have is simple gears
> running at a multiple of your screen refresh rate, so they only appear
> stalled.
>
> Plus, as said Linus, you're not really testing the kernel scheduler.
> gears is really bad benchmark, it should die.

i like glxgears as long as it runs on _real_ 3D hardware, because there
it has minimal interaction with X and so it's an excellent visual test
about consistency of scheduling. You can immediately see (literally)
scheduling hickups down to a millisecond range (!). In this sense, if
done and interpreted carefully, glxgears gives more feedback than many
audio tests. (audio latency problems are audible, but on most sound hw
it takes quite a bit of latency to produce an xrun.) So basically
glxgears is the "early warning system" that tells us about the potential
for xruns earlier than an xrun would happen for real.

[ of course you can also run all the other tools to get numeric results,
but glxgears is nice in that it gives immediate visual feedback. ]

but i agree that on a non-accelerated X setup glxgears is not really
meaningful. It can have similar "spam the X server" effects as xperf.

Ingo

2007-08-28 16:34:49

by Linus Torvalds

Subject: Re: CFS review



On Tue, 28 Aug 2007, Al Boldi wrote:
>
> I like your analysis, but how do you explain that these stalls vanish when
> __update_curr is disabled?

It's entirely possible that what happens is that the X scheduling is just
a slightly unstable system - which effectively would turn a small
scheduling difference into a *huge* visible difference.

And the "small scheduling difference" might be as simple as "if the
process slept for a while, we give it a bit more CPU time". And then you
get into some unbalanced setup where the X scheduler makes it sleep even
more, because it fills its buffers.

Or something. I can easily see two schedulers that are trying to
*individually* be "fair", fighting it out in a way where the end result is
not very good.

I do suspect it's probably a very interesting load, so I hope Ingo looks
more at it, but I also suspect it's more than just the kernel scheduler.

Linus

2007-08-28 16:45:45

by Arjan van de Ven

Subject: Re: CFS review

On Tue, 28 Aug 2007 09:34:03 -0700 (PDT)
Linus Torvalds <[email protected]> wrote:

>
>
> On Tue, 28 Aug 2007, Al Boldi wrote:
> >
> > I like your analysis, but how do you explain that these stalls
> > vanish when __update_curr is disabled?
>
> It's entirely possible that what happens is that the X scheduling is
> just a slightly unstable system - which effectively would turn a
> small scheduling difference into a *huge* visible difference.

one thing that happens if you remove __update_curr is the following
pattern (since no apps will get preempted involuntarily)

app 1 submits a full frame worth of 3D stuff to X
app 1 then sleeps/waits for that to complete
X gets to run, has 1 full frame to render, does this
X now waits for more input
app 2 now gets to run and submits a full frame
app 2 then sleeps again
X gets to run again to process and complete
X goes to sleep
app 3 gets to run and submits a full frame
app 3 then sleeps
X runs
X sleeps
app 1 gets to submit a frame

etc etc

so without preemption happening, you can get "perfect" behavior, just
because everything is perfectly doing 1 thing at a time cooperatively.
once you start doing timeslices and enforcing limits on them, this
"perfect pattern" will break down (remember this is all software
rendering in the problem being described), and whatever you get won't
be as perfect as this.

2007-08-28 16:46:11

by Ingo Molnar

Subject: Re: CFS review


* Linus Torvalds <[email protected]> wrote:

> On Tue, 28 Aug 2007, Al Boldi wrote:
> >
> > I like your analysis, but how do you explain that these stalls
> > vanish when __update_curr is disabled?
>
> It's entirely possible that what happens is that the X scheduling is
> just a slightly unstable system - which effectively would turn a small
> scheduling difference into a *huge* visible difference.

i think it's because disabling __update_curr() in essence removes the
ability of scheduler to preempt tasks - that hack in essence results in
a non-scheduler. Hence the gears + X pair of tasks becomes a synchronous
pair of tasks in essence - and thus gears cannot "overload" X.

Normally gears + X is an asynchronous pair of tasks, with gears (or
xperf, or devel versions of firefox, etc.) not being throttled at all
and thus being able to overload/spam the X server with requests. (And we
generally want to _reward_ asynchronity and want to allow tasks to
overlap each other and we want each task to go as fast and as parallel
as it can.)

Eventually X's built-in "bad, abusive client" throttling code kicks in,
which, AFAIK, is pretty crude and might yield such artifacts. But ...
it would be nice for an X person to confirm - and in any case i'll try
Al's workload - i thought i had a reproducer but i barked up the wrong
tree :-) My laptop doesnt run with the vesa driver, so i have no easy
reproducer for now.

( also, it would be nice if Al could try rc4 plus my latest scheduler
tree as well - just on the odd chance that something got fixed
meanwhile. In particular Mike's sleeper-bonus-limit fix could be
related. )

Ingo

2007-08-28 19:40:41

by Willy Tarreau

[permalink] [raw]
Subject: Re: CFS review

On Tue, Aug 28, 2007 at 10:02:18AM +0200, Ingo Molnar wrote:
>
> * Xavier Bestel <[email protected]> wrote:
>
> > Are you sure they are stalled? What you may have is simply gears
> > running at a multiple of your screen refresh rate, so they only appear
> > stalled.
> >
> > Plus, as Linus said, you're not really testing the kernel scheduler.
> > gears is a really bad benchmark, it should die.
>
> i like glxgears as long as it runs on _real_ 3D hardware, because there
> it has minimal interaction with X and so it's an excellent visual test
> of the consistency of scheduling. You can immediately see (literally)
> scheduling hiccups down to the millisecond range (!). In this sense, if
> done and interpreted carefully, glxgears gives more feedback than many
> audio tests. (audio latency problems are audible, but on most sound hw
> it takes quite a bit of latency to produce an xrun.) So basically
> glxgears is the "early warning system" that tells us about the potential
> for xruns earlier than an xrun would happen for real.
>
> [ of course you can also run all the other tools to get numeric results,
> but glxgears is nice in that it gives immediate visual feedback. ]

Al could also test ocbench, which gives visual feedback without stressing
the X server: http://linux.1wt.eu/sched/

I packaged it exactly for this problem and it has already helped. It only
touches X after each loop, so if you run it with a large run time, X is
hardly solicited at all.

Willy

2007-08-28 19:56:21

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Willy Tarreau <[email protected]> wrote:

> On Tue, Aug 28, 2007 at 10:02:18AM +0200, Ingo Molnar wrote:
> >
> > * Xavier Bestel <[email protected]> wrote:
> >
> > > Are you sure they are stalled? What you may have is simply gears
> > > running at a multiple of your screen refresh rate, so they only appear
> > > stalled.
> > >
> > > Plus, as Linus said, you're not really testing the kernel scheduler.
> > > gears is a really bad benchmark, it should die.
> >
> > i like glxgears as long as it runs on _real_ 3D hardware, because there
> > it has minimal interaction with X and so it's an excellent visual test
> > of the consistency of scheduling. You can immediately see (literally)
> > scheduling hiccups down to the millisecond range (!). In this sense, if
> > done and interpreted carefully, glxgears gives more feedback than many
> > audio tests. (audio latency problems are audible, but on most sound hw
> > it takes quite a bit of latency to produce an xrun.) So basically
> > glxgears is the "early warning system" that tells us about the potential
> > for xruns earlier than an xrun would happen for real.
> >
> > [ of course you can also run all the other tools to get numeric results,
> > but glxgears is nice in that it gives immediate visual feedback. ]
>
> Al could also test ocbench, which gives visual feedback without
> stressing the X server: http://linux.1wt.eu/sched/
>
> I packaged it exactly for this problem and it has already helped. It
> only touches X after each loop, so if you run it with a large run time,
> X is hardly solicited at all.

yeah, and ocbench is one of my favorite cross-task-fairness tests - i
don't release a CFS patch without checking it with ocbench first :-)

Ingo

2007-08-28 20:47:05

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: CFS review

On Mon, 27 Aug 2007 22:05:37 PDT, Linus Torvalds said:
>
>
> On Tue, 28 Aug 2007, Al Boldi wrote:
> >
> > No need for framebuffer. All you need is X using the X.org vesa-driver.
> > Then start gears like this:
> >
> > # gears & gears & gears &
> >
> > Then lay them out side by side to see the periodic stallings for ~10sec.
>
> I don't think this is a good test.
>
> Why?
>
> If you're not using direct rendering, what you have is the X server doing
> all the rendering, which in turn means that what you are testing is quite
> possibly not so much about the *kernel* scheduling, but about *X-server*
> scheduling!

I wonder - can people who are doing this as a test please specify whether
they're using an older X that has the libX11 code or the newer libxcb code?
That may have a similar impact as well.

(libxcb is pretty new - it landed in Fedora Rawhide just about a month ago,
after Fedora 7 shipped. Not sure what other distros have it now...)
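For example, one rough way to check which client transport a given binary
ends up pulling in (the glxgears path below is only an example) is:

  # list the X client libraries the binary links against, directly or indirectly
  ldd /usr/bin/glxgears | grep -E 'libX11|libxcb'

If libxcb shows up in the output, the newer xcb-based stack is involved.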


2007-08-29 03:38:01

by Bill Davidsen

[permalink] [raw]
Subject: Re: CFS review

Al Boldi wrote:
> Ingo Molnar wrote:
>> * Al Boldi <[email protected]> wrote:
>>>>> The problem is that consecutive runs don't give consistent results
>>>>> and sometimes stall. You may want to try that.
>>>> well, there's a natural saturation point after a few hundred tasks
>>>> (depending on your CPU's speed), at which point there's no idle time
>>>> left. From that point on things get slower progressively (and the
>>>> ability of the shell to start new ping tasks is impacted as well),
>>>> but that's expected on an overloaded system, isn't it?
>>> Of course, things should get slower with higher load, but it should be
>>> consistent without stalls.
>>>
>>> To see this problem, make sure you boot into /bin/sh with the normal
>>> VGA console (ie. not fb-console). Then try each loop a few times to
>>> show different behaviour; loops like:
>>>
>>> # for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done
>>>
>>> # for ((i=0; i<3333; i++)); do nice -99 ping 10.1 -A > /dev/null & done
>>>
>>> # { for ((i=0; i<3333; i++)); do
>>> ping 10.1 -A > /dev/null &
>>> done } > /dev/null 2>&1
>>>
>>> Especially the last one sometimes causes a complete console lock-up,
>>> while the other two sometimes stall then surge periodically.
>> ok. I think i might finally have found the bug causing this. Could you
>> try the fix below, does your webserver thread-startup test work any
>> better?
>
> It seems to help somewhat, but the problem is still visible. Even v20.3 on
> 2.6.22.5 didn't help.
>
> It does look related to ia-boosting, so I turned off __update_curr like Roman
> mentioned, which had an enormous smoothing effect, but then nice levels
> completely break down and lock up the system.
>
> There is another way to show the problem visually under X (vesa-driver), by
> starting 3 gears simultaneously, which after laying them out side-by-side
> need some settling time before smoothing out. Without __update_curr it's
> absolutely smooth from the start.

I posted a LOT of stuff using the glitch1 script, and finally found a
set of tuning values which make the test script run smooth. See back
posts, I don't have them here.

--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot

2007-08-29 03:42:50

by Bill Davidsen

[permalink] [raw]
Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
>
>>> ok. I think i might finally have found the bug causing this. Could
>>> you try the fix below, does your webserver thread-startup test work
>>> any better?
>> It seems to help somewhat, but the problem is still visible. Even
>> v20.3 on 2.6.22.5 didn't help.
>>
>> It does look related to ia-boosting, so I turned off __update_curr
>> like Roman mentioned, which had an enormous smoothing effect, but then
>> nice levels completely break down and lock up the system.
>
> you can turn sleeper-fairness off via:
>
> echo 28 > /proc/sys/kernel/sched_features
>
> another thing to try would be:
>
> echo 12 > /proc/sys/kernel/sched_features

14, and drop the granularity to 500000.
>
> (that's the new-task penalty turned off.)
>
> Another thing to try would be to edit this:
>
> if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
> p->se.wait_runtime = -(sched_granularity(cfs_rq) / 2);
>
> to:
>
> if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
> p->se.wait_runtime = -sched_granularity(cfs_rq);
>
> and could you also check 20.4 on 2.6.22.5 perhaps, or very latest -git?
> (Peter has experienced smaller spikes with that.)
>
> Ingo


--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
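When experimenting with the sched_features values suggested above, it is
worth recording the current mask first so it can be restored afterwards;
a minimal sketch (the meaning of the individual bits is specific to this
CFS version):

  # save the current feature mask, try one of the suggested values, restore later
  old=$(cat /proc/sys/kernel/sched_features)
  echo 28 > /proc/sys/kernel/sched_features
  # ... run the test ...
  echo $old > /proc/sys/kernel/sched_features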

2007-08-29 03:46:12

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Bill Davidsen <[email protected]> wrote:

> > There is another way to show the problem visually under X
> > (vesa-driver), by starting 3 gears simultaneously, which after
> > laying them out side-by-side need some settling time before
> > smoothing out. Without __update_curr it's absolutely smooth from
> > the start.
>
> I posted a LOT of stuff using the glitch1 script, and finally found a
> set of tuning values which make the test script run smooth. See back
> posts, I don't have them here.

but you have real 3D hw and DRI enabled, correct? In that case X uses up
almost no CPU time and glxgears does most of the processing. That is
quite different from the above software-rendering case, where X uses up
most of the CPU time.
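A quick way to confirm which of the two cases a given setup is in (assuming
the mesa glxinfo utility is installed) is:

  # prints "direct rendering: Yes" when DRI is actually in use
  glxinfo | grep -i "direct rendering"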

Ingo

2007-08-29 04:18:48

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> No need for framebuffer. All you need is X using the X.org
> vesa-driver. Then start gears like this:
>
> # gears & gears & gears &
>
> Then lay them out side by side to see the periodic stallings for
> ~10sec.

i just tried something similar (by adding Option "NoDRI" to xorg.conf)
and i'm wondering how it can be smooth on vesa-driver at all. I tested
it on a Core2Duo box and software rendering manages to do about 3 frames
per second. (although glxgears itself thinks it does ~600 fps) If i
start 3 glxgears then they do ~1 frame per second each. This is on
Fedora 7 with xorg-x11-server-Xorg-1.3.0.0-9.fc7 and
xorg-x11-drv-i810-2.0.0-4.fc7.

Ingo

2007-08-29 04:20:34

by Al Boldi

[permalink] [raw]
Subject: Re: CFS review

Ingo Molnar wrote:
> * Linus Torvalds <[email protected]> wrote:
> > On Tue, 28 Aug 2007, Al Boldi wrote:
> > > I like your analysis, but how do you explain that these stalls
> > > vanish when __update_curr is disabled?
> >
> > It's entirely possible that what happens is that the X scheduling is
> > just a slightly unstable system - which effectively would turn a small
> > scheduling difference into a *huge* visible difference.
>
> i think it's because disabling __update_curr() in essence removes the
> ability of the scheduler to preempt tasks - that hack in essence results
> in a non-scheduler. Hence the gears + X pair of tasks becomes a
> synchronous pair of tasks - and thus gears cannot "overload" X.

I have narrowed it down a bit to add_wait_runtime.

Patch 2.6.22.5-v20.4 like this:

346- * the two values are equal)
347- * [Note: delta_mine - delta_exec is negative]:
348- */
349:// add_wait_runtime(cfs_rq, curr, delta_mine - delta_exec);
350-}
351-
352-static void update_curr(struct cfs_rq *cfs_rq)

With add_wait_runtime disabled, the stalls are gone. The scheduler is still
usable with this change, but it does not constitute a fix.

Now, even with this hack, uneven nice levels between X and gears cause the
stalls to return, so make sure both X and gears run at the same nice level
when testing.
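Something along these lines keeps them aligned for testing (the process
names are only illustrative - the X server may show up as X or Xorg):

  # put the X server and all gears instances at the same default nice level
  renice -n 0 -p $(pidof Xorg) $(pidof gears)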

Again, the whole point of this workload is to expose scheduler glitches
regardless of whether X is broken or not, and my hunch is that this problem
looks suspiciously like an ia-boosting bug. What's important to note is
that by adjusting the scheduler we can effect a correction in behaviour,
which suggests the problem should be fixable.

It's probably a good idea to look further into add_wait_runtime.


Thanks!

--
Al

2007-08-29 04:29:33

by Keith Packard

[permalink] [raw]
Subject: Re: CFS review

On Wed, 2007-08-29 at 06:18 +0200, Ingo Molnar wrote:

> > Then lay them out side by side to see the periodic stallings for
> > ~10sec.

The X scheduling code isn't really designed to handle software GL well;
the requests can be very expensive to execute, and yet are specified as
atomic operations (sigh).

> i just tried something similar (by adding Option "NoDRI" to xorg.conf)
> and i'm wondering how it can be smooth on vesa-driver at all. I tested
> it on a Core2Duo box and software rendering manages to do about 3 frames
> per second. (although glxgears itself thinks it does ~600 fps) If i
> start 3 glxgears then they do ~1 frame per second each. This is on
> Fedora 7 with xorg-x11-server-Xorg-1.3.0.0-9.fc7 and
> xorg-x11-drv-i810-2.0.0-4.fc7.

Are you attempting to measure the visible updates by eye? Or are you
using some other metric?

In any case, attempting to measure anything using glxgears is a bad
idea; it's not representative of *any* real applications. And then using
software GL on top of that...

What was the question again?

--
[email protected]


2007-08-29 04:40:59

by Mike Galbraith

[permalink] [raw]
Subject: Re: CFS review

On Wed, 2007-08-29 at 06:18 +0200, Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
>
> > No need for framebuffer. All you need is X using the X.org
> > vesa-driver. Then start gears like this:
> >
> > # gears & gears & gears &
> >
> > Then lay them out side by side to see the periodic stallings for
> > ~10sec.
>
> i just tried something similar (by adding Option "NoDRI" to xorg.conf)
> and i'm wondering how it can be smooth on vesa-driver at all. I tested
> it on a Core2Duo box and software rendering manages to do about 3 frames
> per second. (although glxgears itself thinks it does ~600 fps) If i
> start 3 glxgears then they do ~1 frame per second each. This is on
> Fedora 7 with xorg-x11-server-Xorg-1.3.0.0-9.fc7 and
> xorg-x11-drv-i810-2.0.0-4.fc7.

At least you can run the darn test... the third instance of glxgears
here means saying bye-bye to the GUI instantly.

-Mike

2007-08-29 04:46:54

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Keith Packard <[email protected]> wrote:

> > > Then lay them out side by side to see the periodic stallings for
> > > ~10sec.
>
> The X scheduling code isn't really designed to handle software GL
> well; the requests can be very expensive to execute, and yet are
> specified as atomic operations (sigh).
[...]

> Are you attempting to measure the visible updates by eye? Or are you
> using some other metric?
>
> In any case, attempting to measure anything using glxgears is a bad
> idea; it's not representative of *any* real applications. And then
> using software GL on top of that...
>
> What was the question again?

ok, i finally managed to reproduce the "artifact" myself on an older
box. It goes like this: start up X with the vesa driver (or with NoDRI)
to force software rendering. Then start up a couple of glxgears
instances. Those glxgears instances update in a very "chunky",
"stuttering" way - each glxgears instance runs/stops/runs/stops at a
rate of a about once per second, and this was reported to me as a
potential CPU scheduler regression.

at a quick glance this is not a CPU scheduler thing: X uses up 99% of
CPU time, all the glxgears tasks (i needed 8 parallel instances to see
the stalls) are using up the remaining 1% of CPU time. The ordering
of the requests from the glxgears tasks is X's choice - and for a
pathological overload situation like this we cannot blame X at all for
not producing a completely smooth output. (although Xorg could perhaps
try to schedule such requests more smoothly, in a more fine-grained way?)
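One way to watch that CPU split while the test is running (process names
may differ between setups) is something like:

  # a single batch-mode top iteration, filtered to the X server and the gears tasks
  top -b -n 1 | grep -E 'Xorg|glxgears'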

i've attached below a timestamped strace of one of the glxgears
instances that shows such a 'stall':

1188356998.440155 select(4, [3], [3], NULL, NULL) = 1 (out [3]) <1.173680>

the select (waiting for X) took 1.17 seconds.
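For reference, a per-syscall timing trace in this format can be captured
with plain strace; 3644 is the pid of the glxgears instance traced below:

  # -ttt: timestamps as seconds since the epoch, -T: time spent inside each syscall
  strace -ttt -T -p 3644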

Ingo

-------------------->
Process 3644 attached - interrupt to quit
1188356997.810351 select(4, [3], [3], NULL, NULL) = 1 (out [3]) <0.594074>
1188356998.404580 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000115>
1188356998.404769 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.404880 gettimeofday({1188356998, 404893}, {4294967176, 0}) = 0 <0.000006>
1188356998.404923 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.405054 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.405116 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000073>
1188356998.405221 gettimeofday({1188356998, 405231}, {4294967176, 0}) = 0 <0.000006>
1188356998.405258 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.405394 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.405461 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.405582 gettimeofday({1188356998, 405593}, {4294967176, 0}) = 0 <0.000006>
1188356998.405620 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.405656 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000108>
1188356998.405818 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.405856 gettimeofday({1188356998, 405866}, {4294967176, 0}) = 0 <0.000006>
1188356998.405993 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.406032 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.406092 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000107>
1188356998.406232 gettimeofday({1188356998, 406242}, {4294967176, 0}) = 0 <0.000006>
1188356998.406269 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.406305 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000065>
1188356998.406423 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000118>
1188356998.406573 gettimeofday({1188356998, 406583}, {4294967176, 0}) = 0 <0.000006>
1188356998.406610 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.406646 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000125>
1188356998.406824 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.406863 gettimeofday({1188356998, 406873}, {4294967176, 0}) = 0 <0.000007>
1188356998.406900 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.407069 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.407131 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.407169 gettimeofday({1188356998, 407179}, {4294967176, 0}) = 0 <0.000007>
1188356998.407206 ioctl(3, FIONREAD, [0]) = 0 <0.000139>
1188356998.407376 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.407440 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.407478 gettimeofday({1188356998, 407487}, {4294967176, 0}) = 0 <0.000006>
1188356998.407649 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.407687 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.407747 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.407785 gettimeofday({1188356998, 407795}, {4294967176, 0}) = 0 <0.000140>
1188356998.407957 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.407993 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.408052 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.408224 gettimeofday({1188356998, 408236}, {4294967176, 0}) = 0 <0.000007>
1188356998.408263 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.408322 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000113>
1188356998.408490 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.408528 gettimeofday({1188356998, 408537}, {4294967176, 0}) = 0 <0.000006>
1188356998.408565 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356998.408735 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.408796 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.408834 gettimeofday({1188356998, 408843}, {4294967176, 0}) = 0 <0.000005>
1188356998.408870 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.408905 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000174>
1188356998.409132 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.409170 gettimeofday({1188356998, 409178}, {4294967176, 0}) = 0 <0.000005>
1188356998.409205 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.409240 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.409469 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.409508 gettimeofday({1188356998, 409517}, {4294967176, 0}) = 0 <0.000005>
1188356998.409544 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.409579 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.409638 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.409675 gettimeofday({1188356998, 409684}, {4294967176, 0}) = 0 <0.000005>
1188356998.409711 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.409747 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.409805 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000308>
1188356998.410145 gettimeofday({1188356998, 410154}, {4294967176, 0}) = 0 <0.000006>
1188356998.410181 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.410215 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.410274 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.410311 gettimeofday({1188356998, 410320}, {4294967176, 0}) = 0 <0.000006>
1188356998.410347 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.410381 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000260>
1188356998.410697 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.410735 gettimeofday({1188356998, 410743}, {4294967176, 0}) = 0 <0.000006>
1188356998.410771 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.410805 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.410864 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.410902 gettimeofday({1188356998, 410910}, {4294967176, 0}) = 0 <0.000005>
1188356998.410937 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.410972 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.411030 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.411068 gettimeofday({1188356998, 411077}, {4294967176, 0}) = 0 <0.000005>
1188356998.411104 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.411139 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.411198 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.411235 gettimeofday({1188356998, 411244}, {4294967176, 0}) = 0 <0.000005>
1188356998.411271 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.411306 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.411377 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000010>
1188356998.412017 gettimeofday({1188356998, 412027}, {4294967176, 0}) = 0 <0.000006>
1188356998.412055 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.412089 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.412148 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.412185 gettimeofday({1188356998, 412194}, {4294967176, 0}) = 0 <0.000006>
1188356998.412221 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.412255 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.412313 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.412350 gettimeofday({1188356998, 412359}, {4294967176, 0}) = 0 <0.000006>
1188356998.412385 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
1188356998.412776 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.412837 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.412874 gettimeofday({1188356998, 412883}, {4294967176, 0}) = 0 <0.000006>
1188356998.412910 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.412944 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.413003 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.413040 gettimeofday({1188356998, 413049}, {4294967176, 0}) = 0 <0.000006>
1188356998.413076 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.413110 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.413169 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.413206 gettimeofday({1188356998, 413214}, {4294967176, 0}) = 0 <0.000006>
1188356998.413241 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.413276 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.413334 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.413371 gettimeofday({1188356998, 413380}, {4294967176, 0}) = 0 <0.000005>
1188356998.413924 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.413961 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.414020 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.414057 gettimeofday({1188356998, 414066}, {4294967176, 0}) = 0 <0.000006>
1188356998.414093 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.414127 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.414186 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.414223 gettimeofday({1188356998, 414232}, {4294967176, 0}) = 0 <0.000006>
1188356998.414259 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.414293 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.414351 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000006>
1188356998.414388 gettimeofday({1188356998, 414397}, {4294967176, 0}) = 0 <0.000082>
1188356998.414503 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.414538 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.414601 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.414638 gettimeofday({1188356998, 414647}, {4294967176, 0}) = 0 <0.000005>
1188356998.414674 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.414709 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.414768 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.414818 gettimeofday({1188356998, 414827}, {4294967176, 0}) = 0 <0.000006>
1188356998.414854 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.414889 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.414948 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.414986 gettimeofday({1188356998, 414995}, {4294967176, 0}) = 0 <0.000005>
1188356998.415022 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.415057 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000010>
1188356998.415118 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.415156 gettimeofday({1188356998, 415165}, {4294967176, 0}) = 0 <0.000005>
1188356998.415192 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.415227 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.415287 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.415325 gettimeofday({1188356998, 415334}, {4294967176, 0}) = 0 <0.000006>
1188356998.415362 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.415397 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.001116>
1188356998.416602 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356998.416707 gettimeofday({1188356998, 416717}, {4294967176, 0}) = 0 <0.000038>
1188356998.416778 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.416813 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.416872 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.416911 gettimeofday({1188356998, 416919}, {4294967176, 0}) = 0 <0.000006>
1188356998.416946 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.416981 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.417039 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.417076 gettimeofday({1188356998, 417085}, {4294967176, 0}) = 0 <0.000006>
1188356998.417111 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.417146 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.417204 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.417241 gettimeofday({1188356998, 417250}, {4294967176, 0}) = 0 <0.000006>
1188356998.417277 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.417311 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.417370 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.417953 gettimeofday({1188356998, 417963}, {4294967176, 0}) = 0 <0.000037>
1188356998.418056 ioctl(3, FIONREAD, [0]) = 0 <0.000037>
1188356998.418158 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.418218 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.418255 gettimeofday({1188356998, 418263}, {4294967176, 0}) = 0 <0.000006>
1188356998.418290 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.418325 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.418384 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000206>
1188356998.418656 gettimeofday({1188356998, 418666}, {4294967176, 0}) = 0 <0.000037>
1188356998.418759 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356998.418829 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.418887 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.418924 gettimeofday({1188356998, 418945}, {4294967176, 0}) = 0 <0.000006>
1188356998.418972 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.419007 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.419065 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.419103 gettimeofday({1188356998, 419111}, {4294967176, 0}) = 0 <0.000006>
1188356998.419138 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.419172 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.419231 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.419268 gettimeofday({1188356998, 419276}, {4294967176, 0}) = 0 <0.000005>
1188356998.419303 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.419338 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.419396 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000481>
1188356998.419944 gettimeofday({1188356998, 419954}, {4294967176, 0}) = 0 <0.000038>
1188356998.420048 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356998.420118 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.420176 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.420213 gettimeofday({1188356998, 420222}, {4294967176, 0}) = 0 <0.000005>
1188356998.420249 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.420283 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.420342 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.420379 gettimeofday({1188356998, 420387}, {4294967176, 0}) = 0 <0.000006>
1188356998.420682 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356998.420786 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356998.420880 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.420917 gettimeofday({1188356998, 420926}, {4294967176, 0}) = 0 <0.000006>
1188356998.420952 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.420987 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.421046 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.421083 gettimeofday({1188356998, 421091}, {4294967176, 0}) = 0 <0.000006>
1188356998.421118 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.421152 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.421212 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.421249 gettimeofday({1188356998, 421258}, {4294967176, 0}) = 0 <0.000005>
1188356998.421285 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.421319 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.421378 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000009>
1188356998.421881 gettimeofday({1188356998, 421891}, {4294967176, 0}) = 0 <0.000038>
1188356998.421984 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356998.422054 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.422112 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.422149 gettimeofday({1188356998, 422157}, {4294967176, 0}) = 0 <0.000005>
1188356998.422184 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.422218 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.422277 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.422314 gettimeofday({1188356998, 422323}, {4294967176, 0}) = 0 <0.000006>
1188356998.422350 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.422395 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000320>
1188356998.422804 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000039>
1188356998.422909 gettimeofday({1188356998, 422919}, {4294967176, 0}) = 0 <0.000005>
1188356998.422946 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.422981 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.423039 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.423077 gettimeofday({1188356998, 423085}, {4294967176, 0}) = 0 <0.000006>
1188356998.423112 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.423146 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.423204 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.423241 gettimeofday({1188356998, 423250}, {4294967176, 0}) = 0 <0.000006>
1188356998.423277 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.423312 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.423370 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.423831 gettimeofday({1188356998, 423841}, {4294967176, 0}) = 0 <0.000038>
1188356998.423936 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356998.424040 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.424100 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.424137 gettimeofday({1188356998, 424146}, {4294967176, 0}) = 0 <0.000005>
1188356998.424173 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.424207 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.424265 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.424302 gettimeofday({1188356998, 424310}, {4294967176, 0}) = 0 <0.000006>
1188356998.424339 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.424374 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000311>
1188356998.424775 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356998.424880 gettimeofday({1188356998, 424891}, {4294967176, 0}) = 0 <0.000006>
1188356998.424918 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.424952 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.425011 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.425048 gettimeofday({1188356998, 425056}, {4294967176, 0}) = 0 <0.000005>
1188356998.425083 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.425117 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.425175 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.425212 gettimeofday({1188356998, 425221}, {4294967176, 0}) = 0 <0.000005>
1188356998.425248 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.425282 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.425341 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.425379 gettimeofday({1188356998, 425388}, {4294967176, 0}) = 0 <0.000005>
1188356998.425930 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356998.426036 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.426096 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.426134 gettimeofday({1188356998, 426142}, {4294967176, 0}) = 0 <0.000006>
1188356998.426169 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.426203 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.426274 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.426312 gettimeofday({1188356998, 426320}, {4294967176, 0}) = 0 <0.000006>
1188356998.426347 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.426382 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000304>
1188356998.426775 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000041>
1188356998.426882 gettimeofday({1188356998, 426892}, {4294967176, 0}) = 0 <0.000006>
1188356998.426919 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.426954 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.427013 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.427050 gettimeofday({1188356998, 427059}, {4294967176, 0}) = 0 <0.000006>
1188356998.427086 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.427120 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.427179 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.427216 gettimeofday({1188356998, 427225}, {4294967176, 0}) = 0 <0.000006>
1188356998.427252 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.427286 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.427344 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.427381 gettimeofday({1188356998, 427390}, {4294967176, 0}) = 0 <0.000006>
1188356998.427867 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356998.427972 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356998.428067 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.428104 gettimeofday({1188356998, 428113}, {4294967176, 0}) = 0 <0.000005>
1188356998.428140 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.428175 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.428233 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.428270 gettimeofday({1188356998, 428279}, {4294967176, 0}) = 0 <0.000006>
1188356998.428305 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.428340 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.428401 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000322>
1188356998.428788 gettimeofday({1188356998, 428798}, {4294967176, 0}) = 0 <0.000039>
1188356998.428894 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.428930 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.428989 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.429027 gettimeofday({1188356998, 429035}, {4294967176, 0}) = 0 <0.000006>
1188356998.429062 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.429096 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.429155 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.429192 gettimeofday({1188356998, 429201}, {4294967176, 0}) = 0 <0.000005>
1188356998.429228 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.429262 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.429321 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000006>
1188356998.429357 gettimeofday({1188356998, 429366}, {4294967176, 0}) = 0 <0.000005>
1188356998.429393 ioctl(3, FIONREAD, [0]) = 0 <0.000436>
1188356998.429896 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000040>
1188356998.430023 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.430074 gettimeofday({1188356998, 430083}, {4294967176, 0}) = 0 <0.000005>
1188356998.430110 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.430145 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.430204 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.430242 gettimeofday({1188356998, 430250}, {4294967176, 0}) = 0 <0.000006>
1188356998.430277 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.430311 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.430370 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.430723 gettimeofday({1188356998, 430733}, {4294967176, 0}) = 0 <0.000039>
1188356998.430828 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356998.430899 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.430958 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.430995 gettimeofday({1188356998, 431004}, {4294967176, 0}) = 0 <0.000005>
1188356998.431031 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.431065 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.431124 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.431161 gettimeofday({1188356998, 431169}, {4294967176, 0}) = 0 <0.000006>
1188356998.431196 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.431230 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.431289 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.431326 gettimeofday({1188356998, 431334}, {4294967176, 0}) = 0 <0.000006>
1188356998.431361 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.431395 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000456>
1188356998.431940 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000039>
1188356998.432046 gettimeofday({1188356998, 432056}, {4294967176, 0}) = 0 <0.000005>
1188356998.432083 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.432117 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.432176 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.432213 gettimeofday({1188356998, 432222}, {4294967176, 0}) = 0 <0.000005>
1188356998.432249 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.432283 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.432342 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.432379 gettimeofday({1188356998, 432387}, {4294967176, 0}) = 0 <0.000006>
1188356998.432751 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356998.432855 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.432915 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.432952 gettimeofday({1188356998, 432961}, {4294967176, 0}) = 0 <0.000005>
1188356998.432988 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.433022 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.433081 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.433118 gettimeofday({1188356998, 433127}, {4294967176, 0}) = 0 <0.000006>
1188356998.433154 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.433188 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.433247 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.433283 gettimeofday({1188356998, 433292}, {4294967176, 0}) = 0 <0.000005>
1188356998.433330 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.433365 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000441>
1188356998.433896 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356998.434002 gettimeofday({1188356998, 434012}, {4294967176, 0}) = 0 <0.000038>
1188356998.434073 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.434108 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.434167 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.434204 gettimeofday({1188356998, 434213}, {4294967176, 0}) = 0 <0.000005>
1188356998.434240 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.434274 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.434332 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356998.434370 gettimeofday({1188356998, 434379}, {4294967176, 0}) = 0 <0.000006>
1188356998.434408 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.434443 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.434503 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.434540 gettimeofday({1188356998, 434549}, {4294967176, 0}) = 0 <0.000006>
1188356998.434575 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.434610 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.434668 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.434705 gettimeofday({1188356998, 434714}, {4294967176, 0}) = 0 <0.000005>
1188356998.434741 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.434775 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.434835 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.434871 gettimeofday({1188356998, 434880}, {4294967176, 0}) = 0 <0.000005>
1188356998.434907 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.434941 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.434999 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.435036 gettimeofday({1188356998, 435045}, {4294967176, 0}) = 0 <0.000006>
1188356998.435071 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.435106 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.435164 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.435201 gettimeofday({1188356998, 435209}, {4294967176, 0}) = 0 <0.000006>
1188356998.435237 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.435271 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.435329 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.435365 gettimeofday({1188356998, 435374}, {4294967176, 0}) = 0 <0.000006>
1188356998.435403 ioctl(3, FIONREAD, [0]) = 0 <0.001120>
1188356998.436587 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356998.436715 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000039>
1188356998.436821 gettimeofday({1188356998, 436831}, {4294967176, 0}) = 0 <0.000005>
1188356998.436857 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.436892 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.436951 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.436988 gettimeofday({1188356998, 436996}, {4294967176, 0}) = 0 <0.000006>
1188356998.437023 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.437069 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.437129 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.437166 gettimeofday({1188356998, 437175}, {4294967176, 0}) = 0 <0.000006>
1188356998.437202 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.437237 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.437295 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.437332 gettimeofday({1188356998, 437341}, {4294967176, 0}) = 0 <0.000005>
1188356998.437368 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.437404 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000498>
1188356998.437988 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356998.438094 gettimeofday({1188356998, 438104}, {4294967176, 0}) = 0 <0.000038>
1188356998.438198 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.438234 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.438293 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.438330 gettimeofday({1188356998, 438339}, {4294967176, 0}) = 0 <0.000006>
1188356998.438365 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.438403 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000206>
1188356998.438694 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356998.438800 gettimeofday({1188356998, 438810}, {4294967176, 0}) = 0 <0.000038>
1188356998.438871 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.438906 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356998.438966 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.439003 gettimeofday({1188356998, 439012}, {4294967176, 0}) = 0 <0.000005>
1188356998.439039 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.439073 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356998.439132 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.439169 gettimeofday({1188356998, 439178}, {4294967176, 0}) = 0 <0.000005>
1188356998.439205 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.439239 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.439297 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.439334 gettimeofday({1188356998, 439343}, {4294967176, 0}) = 0 <0.000006>
1188356998.439370 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.439406 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.439465 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.439503 gettimeofday({1188356998, 439511}, {4294967176, 0}) = 0 <0.000006>
1188356998.439538 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.439573 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.439631 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.439669 gettimeofday({1188356998, 439677}, {4294967176, 0}) = 0 <0.000005>
1188356998.439704 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.439738 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.439797 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.439834 gettimeofday({1188356998, 439842}, {4294967176, 0}) = 0 <0.000006>
1188356998.439869 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356998.439904 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356998.439974 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356998.440011 gettimeofday({1188356998, 440020}, {4294967176, 0}) = 0 <0.000006>
1188356998.440047 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356998.440081 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = -1 EAGAIN (Resource temporarily unavailable) <0.000007>
1188356998.440155 select(4, [3], [3], NULL, NULL) = 1 (out [3]) <1.173680>
1188356999.614013 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000047>
1188356999.614125 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000059>
1188356999.614218 gettimeofday({1188356999, 614228}, {4294967176, 0}) = 0 <0.000006>
1188356999.614326 ioctl(3, FIONREAD, [0]) = 0 <0.000008>
1188356999.614371 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000075>
1188356999.614505 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.614611 gettimeofday({1188356999, 614622}, {4294967176, 0}) = 0 <0.000006>
1188356999.614650 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.614770 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.614831 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000074>
1188356999.614938 gettimeofday({1188356999, 614948}, {4294967176, 0}) = 0 <0.000007>
1188356999.614975 ioctl(3, FIONREAD, [0]) = 0 <0.000088>
1188356999.615095 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.615155 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000090>
1188356999.615278 gettimeofday({1188356999, 615287}, {4294967176, 0}) = 0 <0.000006>
1188356999.615315 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.615407 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.615565 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.615606 gettimeofday({1188356999, 615615}, {4294967176, 0}) = 0 <0.000006>
1188356999.615643 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.615798 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.615859 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.615897 gettimeofday({1188356999, 615907}, {4294967176, 0}) = 0 <0.000006>
1188356999.616053 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.616090 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.616150 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000109>
1188356999.616292 gettimeofday({1188356999, 616301}, {4294967176, 0}) = 0 <0.000006>
1188356999.616329 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.616365 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000126>
1188356999.616548 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.616587 gettimeofday({1188356999, 616596}, {4294967176, 0}) = 0 <0.000006>
1188356999.616624 ioctl(3, FIONREAD, [0]) = 0 <0.000124>
1188356999.616779 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356999.616840 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.616878 gettimeofday({1188356999, 616888}, {4294967176, 0}) = 0 <0.000122>
1188356999.617034 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.617070 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.617130 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000125>
1188356999.617288 gettimeofday({1188356999, 617297}, {4294967176, 0}) = 0 <0.000006>
1188356999.617348 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.617385 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000129>
1188356999.617571 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.617610 gettimeofday({1188356999, 617619}, {4294967176, 0}) = 0 <0.000005>
1188356999.617748 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.617785 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.617844 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.617881 gettimeofday({1188356999, 617890}, {4294967176, 0}) = 0 <0.000005>
1188356999.617917 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.617952 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.618011 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.618305 gettimeofday({1188356999, 618315}, {4294967176, 0}) = 0 <0.000006>
1188356999.618343 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.618378 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000110>
1188356999.618543 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.618581 gettimeofday({1188356999, 618590}, {4294967176, 0}) = 0 <0.000005>
1188356999.618617 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.618652 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.618710 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.618748 gettimeofday({1188356999, 618756}, {4294967176, 0}) = 0 <0.000006>
1188356999.618783 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.618818 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000010>
1188356999.618878 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.618916 gettimeofday({1188356999, 618925}, {4294967176, 0}) = 0 <0.000005>
1188356999.618952 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.618987 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.619046 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.619083 gettimeofday({1188356999, 619092}, {4294967176, 0}) = 0 <0.000005>
1188356999.619119 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.619154 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.619792 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.619831 gettimeofday({1188356999, 619840}, {4294967176, 0}) = 0 <0.000006>
1188356999.619867 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.619902 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.619961 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.619998 gettimeofday({1188356999, 620007}, {4294967176, 0}) = 0 <0.000005>
1188356999.620034 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.620068 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.620127 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.620164 gettimeofday({1188356999, 620173}, {4294967176, 0}) = 0 <0.000005>
1188356999.620200 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.620235 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.620293 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.620330 gettimeofday({1188356999, 620339}, {4294967176, 0}) = 0 <0.000006>
1188356999.620366 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.620959 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.621020 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.621057 gettimeofday({1188356999, 621066}, {4294967176, 0}) = 0 <0.000005>
1188356999.621093 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.621128 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.621187 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.621224 gettimeofday({1188356999, 621233}, {4294967176, 0}) = 0 <0.000006>
1188356999.621259 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.621294 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.621352 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.621390 gettimeofday({1188356999, 621401}, {4294967176, 0}) = 0 <0.000008>
1188356999.621429 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.621464 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.621522 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.621559 gettimeofday({1188356999, 621568}, {4294967176, 0}) = 0 <0.000005>
1188356999.621595 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.621630 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.621688 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.621726 gettimeofday({1188356999, 621735}, {4294967176, 0}) = 0 <0.000005>
1188356999.621762 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.621797 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.621856 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.621894 gettimeofday({1188356999, 621903}, {4294967176, 0}) = 0 <0.000006>
1188356999.621930 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.621964 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.622023 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.622061 gettimeofday({1188356999, 622070}, {4294967176, 0}) = 0 <0.000006>
1188356999.622097 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.622132 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.622191 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.622230 gettimeofday({1188356999, 622239}, {4294967176, 0}) = 0 <0.000006>
1188356999.622267 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.622301 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.622360 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.622400 gettimeofday({1188356999, 622410}, {4294967176, 0}) = 0 <0.001199>
1188356999.623665 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356999.623768 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.623828 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.623866 gettimeofday({1188356999, 623875}, {4294967176, 0}) = 0 <0.000006>
1188356999.623902 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.623936 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.623995 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.624032 gettimeofday({1188356999, 624041}, {4294967176, 0}) = 0 <0.000005>
1188356999.624068 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.624103 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.624174 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.624212 gettimeofday({1188356999, 624221}, {4294967176, 0}) = 0 <0.000006>
1188356999.624248 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.624282 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.624344 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.624381 gettimeofday({1188356999, 624390}, {4294967176, 0}) = 0 <0.000005>
1188356999.624987 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356999.625091 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356999.625185 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.625222 gettimeofday({1188356999, 625231}, {4294967176, 0}) = 0 <0.000006>
1188356999.625258 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.625292 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.625351 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.625388 gettimeofday({1188356999, 625397}, {4294967176, 0}) = 0 <0.000261>
1188356999.625714 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356999.625818 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.625879 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.625916 gettimeofday({1188356999, 625925}, {4294967176, 0}) = 0 <0.000005>
1188356999.625952 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.625987 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.626045 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.626082 gettimeofday({1188356999, 626091}, {4294967176, 0}) = 0 <0.000005>
1188356999.626118 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.626153 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.626211 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.626249 gettimeofday({1188356999, 626257}, {4294967176, 0}) = 0 <0.000005>
1188356999.626284 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.626318 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.626377 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000010>
1188356999.626913 gettimeofday({1188356999, 626923}, {4294967176, 0}) = 0 <0.000038>
1188356999.627018 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356999.627088 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.627147 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.627185 gettimeofday({1188356999, 627193}, {4294967176, 0}) = 0 <0.000005>
1188356999.627220 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.627254 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.627313 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.627350 gettimeofday({1188356999, 627359}, {4294967176, 0}) = 0 <0.000005>
1188356999.627386 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
1188356999.627733 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000043>
1188356999.627861 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000006>
1188356999.627901 gettimeofday({1188356999, 627910}, {4294967176, 0}) = 0 <0.000006>
1188356999.627936 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.627971 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000010>
1188356999.628031 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.628081 gettimeofday({1188356999, 628090}, {4294967176, 0}) = 0 <0.000006>
1188356999.628117 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.628152 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.628211 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.628248 gettimeofday({1188356999, 628256}, {4294967176, 0}) = 0 <0.000006>
1188356999.628283 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.628318 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.628376 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000009>
1188356999.628864 gettimeofday({1188356999, 628875}, {4294967176, 0}) = 0 <0.000039>
1188356999.628970 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356999.629040 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.629099 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.629136 gettimeofday({1188356999, 629145}, {4294967176, 0}) = 0 <0.000005>
1188356999.629172 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.629207 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.629265 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.629302 gettimeofday({1188356999, 629311}, {4294967176, 0}) = 0 <0.000005>
1188356999.629338 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.629373 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000323>
1188356999.629784 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.629891 gettimeofday({1188356999, 629901}, {4294967176, 0}) = 0 <0.000005>
1188356999.629928 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.629963 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.630022 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.630059 gettimeofday({1188356999, 630068}, {4294967176, 0}) = 0 <0.000005>
1188356999.630095 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.630130 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.630188 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.630225 gettimeofday({1188356999, 630234}, {4294967176, 0}) = 0 <0.000005>
1188356999.630261 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.630295 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.630354 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.630391 gettimeofday({1188356999, 630403}, {4294967176, 0}) = 0 <0.000441>
1188356999.630897 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356999.631000 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.631060 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.631097 gettimeofday({1188356999, 631106}, {4294967176, 0}) = 0 <0.000006>
1188356999.631133 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.631167 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.631226 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.631263 gettimeofday({1188356999, 631272}, {4294967176, 0}) = 0 <0.000006>
1188356999.631299 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.631333 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.631392 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000339>
1188356999.631799 gettimeofday({1188356999, 631809}, {4294967176, 0}) = 0 <0.000037>
1188356999.631916 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.631953 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.632012 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.632050 gettimeofday({1188356999, 632058}, {4294967176, 0}) = 0 <0.000006>
1188356999.632085 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.632120 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.632178 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.632215 gettimeofday({1188356999, 632224}, {4294967176, 0}) = 0 <0.000005>
1188356999.632251 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.632285 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.632344 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.632381 gettimeofday({1188356999, 632390}, {4294967176, 0}) = 0 <0.000006>
1188356999.632836 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356999.632939 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356999.633033 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.633071 gettimeofday({1188356999, 633079}, {4294967176, 0}) = 0 <0.000006>
1188356999.633106 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.633141 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.633199 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.633237 gettimeofday({1188356999, 633245}, {4294967176, 0}) = 0 <0.000006>
1188356999.633273 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.633307 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.633365 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.633405 gettimeofday({1188356999, 633414}, {4294967176, 0}) = 0 <0.000356>
1188356999.633826 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356999.633930 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.633989 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.634026 gettimeofday({1188356999, 634035}, {4294967176, 0}) = 0 <0.000006>
1188356999.634062 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.634096 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.634155 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.634192 gettimeofday({1188356999, 634200}, {4294967176, 0}) = 0 <0.000005>
1188356999.634227 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.634261 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.634319 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.634356 gettimeofday({1188356999, 634365}, {4294967176, 0}) = 0 <0.000005>
1188356999.634392 ioctl(3, FIONREAD, [0]) = 0 <0.000405>
1188356999.634863 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356999.634990 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.635029 gettimeofday({1188356999, 635038}, {4294967176, 0}) = 0 <0.000005>
1188356999.635065 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.635099 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.635158 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.635195 gettimeofday({1188356999, 635204}, {4294967176, 0}) = 0 <0.000006>
1188356999.635230 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.635265 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.635337 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.635374 gettimeofday({1188356999, 635383}, {4294967176, 0}) = 0 <0.000006>
1188356999.635781 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356999.635885 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356999.635980 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.636017 gettimeofday({1188356999, 636026}, {4294967176, 0}) = 0 <0.000006>
1188356999.636053 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.636087 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.636146 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.636183 gettimeofday({1188356999, 636192}, {4294967176, 0}) = 0 <0.000005>
1188356999.636219 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.636253 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.636312 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.636349 gettimeofday({1188356999, 636358}, {4294967176, 0}) = 0 <0.000005>
1188356999.636385 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
1188356999.636828 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000041>
1188356999.636955 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000006>
1188356999.636994 gettimeofday({1188356999, 637003}, {4294967176, 0}) = 0 <0.000006>
1188356999.637030 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.637065 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.637123 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.637161 gettimeofday({1188356999, 637169}, {4294967176, 0}) = 0 <0.000006>
1188356999.637196 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.637232 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.637291 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.637328 gettimeofday({1188356999, 637337}, {4294967176, 0}) = 0 <0.000006>
1188356999.637364 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
1188356999.637402 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000414>
1188356999.637902 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.637976 gettimeofday({1188356999, 637985}, {4294967176, 0}) = 0 <0.000006>
1188356999.638012 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.638047 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.638106 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.638143 gettimeofday({1188356999, 638152}, {4294967176, 0}) = 0 <0.000006>
1188356999.638179 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.638213 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.638272 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.638308 gettimeofday({1188356999, 638317}, {4294967176, 0}) = 0 <0.000005>
1188356999.638344 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.638378 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000396>
1188356999.638863 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.638969 gettimeofday({1188356999, 638979}, {4294967176, 0}) = 0 <0.000005>
1188356999.639006 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.639041 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356999.639112 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.639150 gettimeofday({1188356999, 639159}, {4294967176, 0}) = 0 <0.000006>
1188356999.639186 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.639220 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.639279 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.639316 gettimeofday({1188356999, 639325}, {4294967176, 0}) = 0 <0.000006>
1188356999.639352 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.639386 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000373>
1188356999.639848 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.639955 gettimeofday({1188356999, 639965}, {4294967176, 0}) = 0 <0.000005>
1188356999.639992 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.640027 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.640085 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.640123 gettimeofday({1188356999, 640131}, {4294967176, 0}) = 0 <0.000006>
1188356999.640159 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.640193 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.640251 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.640288 gettimeofday({1188356999, 640297}, {4294967176, 0}) = 0 <0.000006>
1188356999.640324 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.640358 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000011>
1188356999.640802 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000041>
1188356999.640910 gettimeofday({1188356999, 640920}, {4294967176, 0}) = 0 <0.000038>
1188356999.640982 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.641017 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.641076 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.641113 gettimeofday({1188356999, 641122}, {4294967176, 0}) = 0 <0.000005>
1188356999.641149 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.641183 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.641242 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.641279 gettimeofday({1188356999, 641287}, {4294967176, 0}) = 0 <0.000006>
1188356999.641315 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.641349 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.641775 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.641882 gettimeofday({1188356999, 641892}, {4294967176, 0}) = 0 <0.000038>
1188356999.641954 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.641988 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.642047 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.642085 gettimeofday({1188356999, 642093}, {4294967176, 0}) = 0 <0.000006>
1188356999.642121 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.642155 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.642213 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.642250 gettimeofday({1188356999, 642259}, {4294967176, 0}) = 0 <0.000005>
1188356999.642286 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.642320 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.642379 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000388>
1188356999.642849 gettimeofday({1188356999, 642859}, {4294967176, 0}) = 0 <0.000038>
1188356999.642953 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.642989 select(4, [3], NULL, NULL, {0, 0}) = 0 (Timeout) <0.000008>
1188356999.643030 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.643090 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.643127 gettimeofday({1188356999, 643136}, {4294967176, 0}) = 0 <0.000006>
1188356999.643163 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.643198 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.643256 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.643294 gettimeofday({1188356999, 643303}, {4294967176, 0}) = 0 <0.000006>
1188356999.643330 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.643364 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000374>
1188356999.643827 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.643933 gettimeofday({1188356999, 643943}, {4294967176, 0}) = 0 <0.000038>
1188356999.644005 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.644040 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.644099 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.644137 gettimeofday({1188356999, 644145}, {4294967176, 0}) = 0 <0.000006>
1188356999.644172 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.644207 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.644266 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.644303 gettimeofday({1188356999, 644311}, {4294967176, 0}) = 0 <0.000006>
1188356999.644339 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.644373 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000357>
1188356999.644819 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.644926 gettimeofday({1188356999, 644936}, {4294967176, 0}) = 0 <0.000005>
1188356999.644963 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.644998 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.645057 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.645095 gettimeofday({1188356999, 645103}, {4294967176, 0}) = 0 <0.000005>
1188356999.645130 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.645165 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.645225 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.645262 gettimeofday({1188356999, 645271}, {4294967176, 0}) = 0 <0.000006>
1188356999.645298 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.645332 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.645391 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000407>
1188356999.645865 gettimeofday({1188356999, 645875}, {4294967176, 0}) = 0 <0.000038>
1188356999.645969 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.646005 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.646065 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.646102 gettimeofday({1188356999, 646111}, {4294967176, 0}) = 0 <0.000006>
1188356999.646138 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.646172 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.646231 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.646281 gettimeofday({1188356999, 646290}, {4294967176, 0}) = 0 <0.000005>
1188356999.646317 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.646352 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.646763 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.646870 gettimeofday({1188356999, 646880}, {4294967176, 0}) = 0 <0.000039>
1188356999.646941 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.646976 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.647036 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.647073 gettimeofday({1188356999, 647082}, {4294967176, 0}) = 0 <0.000005>
1188356999.647109 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.647143 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.647202 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.647240 gettimeofday({1188356999, 647248}, {4294967176, 0}) = 0 <0.000006>
1188356999.647276 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.647310 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000009>
1188356999.647372 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.647812 gettimeofday({1188356999, 647823}, {4294967176, 0}) = 0 <0.000039>
1188356999.647917 ioctl(3, FIONREAD, [0]) = 0 <0.000039>
1188356999.647988 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.648047 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.648084 gettimeofday({1188356999, 648093}, {4294967176, 0}) = 0 <0.000005>
1188356999.648120 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.648155 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.648213 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.648251 gettimeofday({1188356999, 648260}, {4294967176, 0}) = 0 <0.000006>
1188356999.648287 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.648321 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.648380 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000010>
1188356999.648420 gettimeofday({1188356999, 648429}, {4294967176, 0}) = 0 <0.000005>
1188356999.648456 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.648491 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.648549 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.648586 gettimeofday({1188356999, 648595}, {4294967176, 0}) = 0 <0.000005>
1188356999.648622 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.648656 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.648715 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.648752 gettimeofday({1188356999, 648761}, {4294967176, 0}) = 0 <0.000005>
1188356999.648788 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.648823 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000007>
1188356999.648881 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000008>
1188356999.648919 gettimeofday({1188356999, 648928}, {4294967176, 0}) = 0 <0.000006>
1188356999.648955 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.648990 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.649048 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.649086 gettimeofday({1188356999, 649094}, {4294967176, 0}) = 0 <0.000005>
1188356999.649134 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
1188356999.649170 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.649229 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000007>
1188356999.649266 gettimeofday({1188356999, 649275}, {4294967176, 0}) = 0 <0.000006>
1188356999.649302 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
1188356999.649336 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = 240 <0.000008>
1188356999.649394 write(3, "\217\v\3\0\1\0\0\0\2\0 \2", 12) = 12 <0.000040>
1188356999.650639 gettimeofday({1188356999, 650649}, {4294967176, 0}) = 0 <0.000038>
1188356999.650743 ioctl(3, FIONREAD, [0]) = 0 <0.000038>
1188356999.650846 writev(3, [{"\217\1<\0\1\0\0\0", 8}, {"\10\0\177\0\0A\0\0\4\0\270\0\24\0\272\0\0\0\240A\0\0\200"..., 232}], 2) = -1 EAGAIN (Resource temporarily unavailable) <0.000006>
1188356999.650912 select(4, [3], [3], NULL, NULL <unfinished ...>
Process 3644 detached

2007-08-29 04:53:38

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> I have narrowed it down a bit to add_wait_runtime.

the scheduler is a red herring here. Could you "strace -ttt -TTT" one of
the glxgears instances (and send us the cfs-debug-info.sh output, with
CONFIG_SCHED_DEBUG=y and CONFIG_SCHEDSTATS=y as requested before) so
that we can have a closer look?
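
e.g. something like this (just a sketch - any one of the glxgears pids
will do, and the output filename is arbitrary):

strace -ttt -TTT -o gears.strace -p $(pidof glxgears | cut -d' ' -f1)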

i reproduced something similar and there the stall is caused by 1+
second select() delays on the X client<->server socket. The scheduler
stats agree with that:

se.sleep_max : 2194711437
se.block_max : 0
se.exec_max : 977446
se.wait_max : 1912321

the scheduler itself had a worst-case scheduling delay of 1.9
milliseconds for that glxgears instance (which is perfectly good - in
fact - excellent interactivity) - but the task had a maximum sleep time
of 2.19 seconds. So the 'glitch' was not caused by the scheduler.
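
(these numbers are nanoseconds, straight out of /proc/<pid>/sched on a
CONFIG_SCHED_DEBUG=y kernel. a quick sketch to pull them out yourself,
picking the first glxgears pid as an example:)

# reset the stats, let the workload run for a bit, then read the maxima
echo 0 > /proc/$(pidof glxgears | cut -d' ' -f1)/sched
sleep 10
grep -E 'sleep_max|block_max|exec_max|wait_max' \
    /proc/$(pidof glxgears | cut -d' ' -f1)/sched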

Ingo

2007-08-29 05:59:37

by Al Boldi

[permalink] [raw]
Subject: Re: CFS review

Ingo Molnar wrote:
> * Al Boldi <[email protected]> wrote:
> > I have narrowed it down a bit to add_wait_runtime.
>
> the scheduler is a red herring here. Could you "strace -ttt -TTT" one of
> the glxgears instances (and send us the cfs-debug-info.sh output, with
> CONFIG_SCHED_DEBUG=y and CONFIG_SCHEDSTATS=y as requested before) so
> that we can have a closer look?
>
> i reproduced something similar and there the stall is caused by 1+
> second select() delays on the X client<->server socket. The scheduler
> stats agree with that:
>
> se.sleep_max : 2194711437
> se.block_max : 0
> se.exec_max : 977446
> se.wait_max : 1912321
>
> the scheduler itself had a worst-case scheduling delay of 1.9
> milliseconds for that glxgears instance (which is perfectly good - in
> fact - excellent interactivity) - but the task had a maximum sleep time
> of 2.19 seconds. So the 'glitch' was not caused by the scheduler.

2.19sec is probably the time you need to lay them out side by side. You see,
gears sleeps when it is covered by another window, so once you lay them out
it starts running, and that's when they start to stutter for about 10sec.
After that they should run smoothly, because they used up all the sleep
bonus.

If you like, I can send you my straces, but they are kind of big, and
you need to strace each gear separately, as stracing itself changes the
workload balance.
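
Something like this attaches to all of them at once (a sketch only, the
output filenames are arbitrary):

for p in $(pidof gears); do     # or 'pidof glxgears', whichever you run
    strace -ttt -TTT -o gears.$p.strace -p $p &
done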

Let's first make sure what we are looking for:
1. start # gears & gears & gears &
2. lay them out side by side, don't worry about sleep times yet.
3. now they start stuttering for about 10sec
4. now they run out of sleep bonuses and smooth out

If this is the sequence you get on your machine, then try disabling
add_wait_runtime to see the difference.


Thanks!

--
Al

2007-08-29 06:43:30

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Al Boldi <[email protected]> wrote:

> > se.sleep_max : 2194711437
> > se.block_max : 0
> > se.exec_max : 977446
> > se.wait_max : 1912321
> >
> > the scheduler itself had a worst-case scheduling delay of 1.9
> > milliseconds for that glxgears instance (which is perfectly good - in
> > fact - excellent interactivity) - but the task had a maximum sleep time
> > of 2.19 seconds. So the 'glitch' was not caused by the scheduler.
>
> 2.19sec is probably the time you need to lay them out side by side.
> [...]

nope, i cleared the stats after i laid the glxgears out, via:

for N in /proc/*/sched; do echo 0 > $N; done

and i did the strace (which showed a 1+ second latency) while the
glxgears instance was not manipulated in any way.

> [...] You see, gears sleeps when it is covered by another window,
> [...]

none of the gear windows in my test were overlaid...

> [...] so once you lay them out it starts running, and that's when they
> start to stutter for about 10sec. After that they should run
> smoothly, because they used up all the sleep bonus.

that's plain wrong - at least in the test i've reproduced. In any case,
if that were the case it would be visible in the stats. So please
send me your cfs-debug-info.sh output captured while the test is running
(with a CONFIG_SCHEDSTATS=y and CONFIG_SCHED_DEBUG=y kernel) - you can
download it from:

http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh

for best data, execute this before running it:

for N in /proc/*/sched; do echo 0 > $N; done

> If you like, I can send you my straces, but they are kind of big
> though, and you need to strace each gear, as stracing itself changes
> the workload balance.

sure, send them along or upload them somewhere - but more importantly,
please send the cfs-debug-info.sh output.
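
i.e. roughly this (a sketch - i'm assuming the script simply writes to
stdout, so redirect it wherever you like):

wget http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh
chmod +x cfs-debug-info.sh
for N in /proc/*/sched; do echo 0 > $N; done
./cfs-debug-info.sh > cfs-debug-info.txt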

Ingo

2007-08-29 07:57:48

by Keith Packard

[permalink] [raw]
Subject: Re: CFS review

On Wed, 2007-08-29 at 06:46 +0200, Ingo Molnar wrote:

> ok, i finally managed to reproduce the "artifact" myself on an older
> box. It goes like this: start up X with the vesa driver (or with NoDRI)
> to force software rendering. Then start up a couple of glxgears
> instances. Those glxgears instances update in a very "chunky",
> "stuttering" way - each glxgears instance runs/stops/runs/stops at a
> rate of a about once per second, and this was reported to me as a
> potential CPU scheduler regression.

Hmm. I can't even run two copies of glxgears on software GL code today;
it's broken in every X server I have available. Someone broke it a while
ago, but no-one noticed. However, this shouldn't be GLX related as the
software rasterizer is no different from any other rendering code.

Testing with my smart-scheduler case (many copies of 'plaid') shows that
at least with git master, things are working as designed. When GLX is
working again, I'll try that as well.

> at a quick glance this is not a CPU scheduler thing: X uses up 99% of
> CPU time, all the glxgears tasks (i needed 8 parallel instances to see
> the stallings) are using up the remaining 1% of CPU time. The ordering
> of the requests from the glxgears tasks is X's choice - and for a
> pathological overload situation like this we cannot blame X at all for
> not producing a completely smooth output. (although Xorg could perhaps
> try to schedule such requests more smoothly, in a more fine-grained way?)

It does. It should switch between clients every 20ms; that's why X spends
so much time asking the kernel for the current time.

Make sure the X server isn't running with the smart scheduler disabled;
that will cause precisely the symptoms you're seeing here. In the normal
upstream sources, you'd have to use '-dumbSched' as an X server command
line option.
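
A quick way to check (sketch only, and the server process may show up
as X, Xorg or XFree86 depending on your setup):

tr '\0' ' ' < /proc/$(pidof X)/cmdline; echo    # look for '-dumbSched'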

The old 'scheduler' would run an entire X client's input buffer dry
before looking for requests from another client. Because glxgears
requests are small but time consuming, this can cause very long delays
between client switching.

--
[email protected]



2007-08-29 08:04:49

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Keith Packard <[email protected]> wrote:

> Make sure the X server isn't running with the smart scheduler
> disabled; that will cause precisely the symptoms you're seeing here.
> > In the normal upstream sources, you'd have to use '-dumbSched' as an X
> server command line option.
>
> The old 'scheduler' would run an entire X client's input buffer dry
> before looking for requests from another client. Because glxgears
> requests are small but time consuming, this can cause very long delays
> between client switching.

on the old box where i've reproduced this i've got an ancient X version:

neptune:~> X -version

X Window System Version 6.8.2
Release Date: 9 February 2005
X Protocol Version 11, Revision 0, Release 6.8.2
Build Operating System: Linux 2.6.9-22.ELsmp i686 [ELF]

is that old enough to not have the smart X scheduler?

on newer systems i dont see correctly updated glxgears output (probably
the GLX bug you mentioned) so i cannot reproduce the bug.

Al, could you send us your 'X -version' output?

Ingo

2007-08-29 08:58:10

by Al Boldi

[permalink] [raw]
Subject: Re: CFS review

Ingo Molnar wrote:
> * Keith Packard <[email protected]> wrote:
> > Make sure the X server isn't running with the smart scheduler
> > disabled; that will cause precisely the symptoms you're seeing here.
> > In the normal upstream sources, you'd have to use '-dumbSched' as an X
> > server command line option.
> >
> > The old 'scheduler' would run an entire X client's input buffer dry
> > before looking for requests from another client. Because glxgears
> > requests are small but time consuming, this can cause very long delays
> > between client switching.
>
> on the old box where i've reproduced this i've got an ancient X version:
>
> neptune:~> X -version
>
> X Window System Version 6.8.2
> Release Date: 9 February 2005
> X Protocol Version 11, Revision 0, Release 6.8.2
> Build Operating System: Linux 2.6.9-22.ELsmp i686 [ELF]
>
> is that old enough to not have the smart X scheduler?
>
> on newer systems i dont see correctly updated glxgears output (probably
> the GLX bug you mentioned) so i cannot reproduce the bug.
>
> Al, could you send us your 'X -version' output?

This is the one I have been talking about:

XFree86 Version 4.3.0
Release Date: 27 February 2003
X Protocol Version 11, Revision 0, Release 6.6
Build Operating System: Linux 2.4.21-0.13mdksmp i686 [ELF]


I also tried the gears test just now on this:

X Window System Version 6.8.1
Release Date: 17 September 2004
X Protocol Version 11, Revision 0, Release 6.8.1
Build Operating System: Linux 2.6.9-1.860_ELsmp i686 [ELF]

but it completely locks up. Disabling add_wait_runtime seems to fix it.


Thanks!

--
Al

2007-08-29 13:11:24

by Bill Davidsen

[permalink] [raw]
Subject: Re: CFS review

Ingo Molnar wrote:
> * Bill Davidsen <[email protected]> wrote:
>
>
>>> There is another way to show the problem visually under X
>>> (vesa-driver), by starting 3 gears simultaneously, which after
>>> laying them out side-by-side need some settling time before
>>> smoothing out. Without __update_curr it's absolutely smooth from
>>> the start.
>>>
>> I posted a LOT of stuff using the glitch1 script, and finally found a
>> set of tuning values which make the test script run smooth. See back
>> posts, I don't have them here.
>>
>
> but you have real 3D hw and DRI enabled, correct? In that case X uses up
> almost no CPU time and glxgears makes most of the processing. That is
> quite different from the above software-rendering case, where X spends
> most of the CPU time.
>

No, my test machine for that is a compile server, and it uses the
built-in motherboard graphics, which are very limited. It is not in any
sense a graphics powerhouse; it is used to build custom kernels and
applications, and for testing of kvm and xen. I grabbed it because it
had the only Core2 CPU I could reboot to try new kernel versions and
"from cold boot" testing. I discovered the graphics smoothness issue by
having several windows open on compiles, and developed the glitch1
script as a way to reproduce it.

The settings I used, features=14 and granularity=500000, also improve
smoothness on other machines for other uses, but they do seem to hurt
performance for compiles, video processing, etc., so they are not
optimal for general use. I regard the existence of these tuning knobs
as one of the real strengths of CFS: when you change the tuning, it has
a visible effect.
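
For reference, applying those is just two writes (a sketch - the exact
/proc/sys/kernel names depend on which CFS snapshot you are running, so
check what your kernel actually exposes):

echo 14     > /proc/sys/kernel/sched_features
echo 500000 > /proc/sys/kernel/sched_granularity_ns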

--
bill davidsen <[email protected]>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979

2007-08-29 15:58:25

by Keith Packard

[permalink] [raw]
Subject: Re: CFS review

On Wed, 2007-08-29 at 10:04 +0200, Ingo Molnar wrote:

> is that old enough to not have the smart X scheduler?

The smart scheduler went into the server in like 2000. I don't think
you've got any systems that old. XFree86 4.1 or 4.2, I can't remember
which.

> (probably
> the GLX bug you mentioned) so i cannot reproduce the bug.

With X server 1.3, I'm getting consistent crashes with two glxgears
instances running. So, if you're getting any output, it's better than my
situation.

--
[email protected]



2007-08-29 20:02:30

by Rene Herman

[permalink] [raw]
Subject: Re: CFS review

On 08/29/2007 05:57 PM, Keith Packard wrote:

> With X server 1.3, I'm getting consistent crashes with two glxgears
> instances running. So, if you're getting any output, it's better than my
> situation.

Before people focus on software rendering too much -- also with 1.3.0 (and
a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly crummy using
hardware rendering. While I can move the glxgears window itself, the actual
spinning wheels stay in the upper-left corner of the screen and the movement
leaves a non-repainting trace on the screen. Running a second instance of
glxgears in addition seems to make both instances unkillable -- and when
I just now forcefully killed X in this situation (the spinning wheels were
covering the upper left corner of all my desktops) I got the below.

Kernel is 2.6.22.5-cfs-v20.5, schedule() is in the traces (but that may be
expected anyway).

BUG: unable to handle kernel NULL pointer dereference at virtual address 00000010
printing eip:
c10ff416
*pde = 00000000
Oops: 0000 [#1]
PREEMPT
Modules linked in: nfsd exportfs lockd nfs_acl sunrpc nls_iso8859_1 nls_cp437 vfat fat
nls_base
CPU: 0
EIP: 0060:[<c10ff416>] Not tainted VLI
EFLAGS: 00210246 (2.6.22.5-cfs-v20.5-local #5)
EIP is at mga_dma_buffers+0x189/0x2e3
eax: 00000000 ebx: efd07200 ecx: 00000001 edx: efc32c00
esi: 00000000 edi: c12756cc ebp: dfea44c0 esp: dddaaec0
ds: 007b es: 007b fs: 0000 gs: 0033 ss: 0068
Process glxgears (pid: 1775, ti=dddaa000 task=e9daca60 task.ti=dddaa000)
Stack: efc32c00 00000000 00000004 e4c3bd20 c10fa54b e4c3bd20 efc32c00 00000000
00000004 00000000 00000000 00000000 00000000 00000001 00010000 bfbdb8bc
bfbdb8b8 00000000 c10ff28d 00000029 c12756cc dfea44c0 c10f87fc bfbdb844
Call Trace:
[<c10fa54b>] drm_lock+0x255/0x2de
[<c10ff28d>] mga_dma_buffers+0x0/0x2e3
[<c10f87fc>] drm_ioctl+0x142/0x18a
[<c1005973>] do_IRQ+0x97/0xb0
[<c10f86ba>] drm_ioctl+0x0/0x18a
[<c10f86ba>] drm_ioctl+0x0/0x18a
[<c105b0d7>] do_ioctl+0x87/0x9f
[<c105b32c>] vfs_ioctl+0x23d/0x250
[<c11b533e>] schedule+0x2d0/0x2e6
[<c105b372>] sys_ioctl+0x33/0x4d
[<c1003d1e>] syscall_call+0x7/0xb
=======================
Code: 9a 08 03 00 00 8b 73 30 74 14 c7 44 24 04 28 76 1c c1 c7 04 24 49 51 23 c1 e8 b0 74
f1 ff 8b 83 d8 00 00 00 83 3d 1c 47 30 c1 00 <8b> 40 10 8b a8 58 1e 00 00 8b 43 28 8b b8
64 01 00 00 74 32 8b
EIP: [<c10ff416>] mga_dma_buffers+0x189/0x2e3 SS:ESP 0068:dddaaec0
BUG: unable to handle kernel NULL pointer dereference at virtual address 00000010
printing eip:
c10ff416
*pde = 00000000
Oops: 0000 [#2]
PREEMPT
Modules linked in: nfsd exportfs lockd nfs_acl sunrpc nls_iso8859_1 nls_cp437 vfat fat
nls_base
CPU: 0
EIP: 0060:[<c10ff416>] Not tainted VLI
EFLAGS: 00210246 (2.6.22.5-cfs-v20.5-local #5)
EIP is at mga_dma_buffers+0x189/0x2e3
eax: 00000000 ebx: efd07200 ecx: 00000001 edx: efc32c00
esi: 00000000 edi: c12756cc ebp: dfea4780 esp: e0552ec0
ds: 007b es: 007b fs: 0000 gs: 0033 ss: 0068
Process glxgears (pid: 1776, ti=e0552000 task=c19ec000 task.ti=e0552000)
Stack: efc32c00 00000000 00000003 efc64b40 c10fa54b efc64b40 efc32c00 00000000
00000003 00000000 00000000 00000000 00000000 00000001 00010000 bf8dbdcc
bf8dbdc8 00000000 c10ff28d 00000029 c12756cc dfea4780 c10f87fc bf8dbd54
Call Trace:
[<c10fa54b>] drm_lock+0x255/0x2de
[<c10ff28d>] mga_dma_buffers+0x0/0x2e3
[<c10f87fc>] drm_ioctl+0x142/0x18a
[<c11b53f6>] preempt_schedule+0x4e/0x5a
[<c10f86ba>] drm_ioctl+0x0/0x18a
[<c10f86ba>] drm_ioctl+0x0/0x18a
[<c105b0d7>] do_ioctl+0x87/0x9f
[<c105b32c>] vfs_ioctl+0x23d/0x250
[<c11b52a9>] schedule+0x23b/0x2e6
[<c11b533e>] schedule+0x2d0/0x2e6
[<c105b372>] sys_ioctl+0x33/0x4d
[<c1003d1e>] syscall_call+0x7/0xb
=======================
Code: 9a 08 03 00 00 8b 73 30 74 14 c7 44 24 04 28 76 1c c1 c7 04 24 49 51 23 c1 e8 b0 74
f1 ff 8b 83 d8 00 00 00 83 3d 1c 47 30 c1 00 <8b> 40 10 8b a8 58 1e 00 00 8b 43 28 8b b8
64 01 00 00 74 32 8b
EIP: [<c10ff416>] mga_dma_buffers+0x189/0x2e3 SS:ESP 0068:e0552ec0
[drm:drm_release] *ERROR* Device busy: 2 0

Rene.

2007-08-30 07:11:31

by Rene Herman

[permalink] [raw]
Subject: Re: CFS review

On 08/29/2007 09:56 PM, Rene Herman wrote:

Realised the BUGs may mean the kernel DRM people could want to be in CC...

> On 08/29/2007 05:57 PM, Keith Packard wrote:
>
>> With X server 1.3, I'm getting consistent crashes with two glxgear
>> instances running. So, if you're getting any output, it's better than my
>> situation.
>
> Before people focus on software rendering too much -- also with 1.3.0
> (and a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly
> crummy using hardware rendering. While I can move the glxgears window
> itself, the actual spinning wheels stay in the upper-left corner of the
> screen and the movement leaves a non-repainting trace on the screen.
> Running a second instance of glxgears in addition seems to make both
> instances unkillable -- and when I just now forcefully killed X in this
> situation (the spinning wheels were covering the upper left corner of all
> my desktops) I got the below.
>
> Kernel is 2.6.22.5-cfs-v20.5, schedule() is in the traces (but that may be
> expected anyway).
>
> BUG: unable to handle kernel NULL pointer dereference at virtual address
> 00000010
> printing eip:
> c10ff416
> *pde = 00000000
> Oops: 0000 [#1]
> PREEMPT
> Modules linked in: nfsd exportfs lockd nfs_acl sunrpc nls_iso8859_1
> nls_cp437 vfat fat nls_base
> CPU: 0
> EIP: 0060:[<c10ff416>] Not tainted VLI
> EFLAGS: 00210246 (2.6.22.5-cfs-v20.5-local #5)
> EIP is at mga_dma_buffers+0x189/0x2e3
> eax: 00000000 ebx: efd07200 ecx: 00000001 edx: efc32c00
> esi: 00000000 edi: c12756cc ebp: dfea44c0 esp: dddaaec0
> ds: 007b es: 007b fs: 0000 gs: 0033 ss: 0068
> Process glxgears (pid: 1775, ti=dddaa000 task=e9daca60 task.ti=dddaa000)
> Stack: efc32c00 00000000 00000004 e4c3bd20 c10fa54b e4c3bd20 efc32c00
> 00000000
> 00000004 00000000 00000000 00000000 00000000 00000001 00010000
> bfbdb8bc
> bfbdb8b8 00000000 c10ff28d 00000029 c12756cc dfea44c0 c10f87fc
> bfbdb844
> Call Trace:
> [<c10fa54b>] drm_lock+0x255/0x2de
> [<c10ff28d>] mga_dma_buffers+0x0/0x2e3
> [<c10f87fc>] drm_ioctl+0x142/0x18a
> [<c1005973>] do_IRQ+0x97/0xb0
> [<c10f86ba>] drm_ioctl+0x0/0x18a
> [<c10f86ba>] drm_ioctl+0x0/0x18a
> [<c105b0d7>] do_ioctl+0x87/0x9f
> [<c105b32c>] vfs_ioctl+0x23d/0x250
> [<c11b533e>] schedule+0x2d0/0x2e6
> [<c105b372>] sys_ioctl+0x33/0x4d
> [<c1003d1e>] syscall_call+0x7/0xb
> =======================
> Code: 9a 08 03 00 00 8b 73 30 74 14 c7 44 24 04 28 76 1c c1 c7 04 24 49
> 51 23 c1 e8 b0 74 f1 ff 8b 83 d8 00 00 00 83 3d 1c 47 30 c1 00 <8b> 40
> 10 8b a8 58 1e 00 00 8b 43 28 8b b8 64 01 00 00 74 32 8b
> EIP: [<c10ff416>] mga_dma_buffers+0x189/0x2e3 SS:ESP 0068:dddaaec0
> BUG: unable to handle kernel NULL pointer dereference at virtual address
> 00000010
> printing eip:
> c10ff416
> *pde = 00000000
> Oops: 0000 [#2]
> PREEMPT
> Modules linked in: nfsd exportfs lockd nfs_acl sunrpc nls_iso8859_1
> nls_cp437 vfat fat nls_base
> CPU: 0
> EIP: 0060:[<c10ff416>] Not tainted VLI
> EFLAGS: 00210246 (2.6.22.5-cfs-v20.5-local #5)
> EIP is at mga_dma_buffers+0x189/0x2e3
> eax: 00000000 ebx: efd07200 ecx: 00000001 edx: efc32c00
> esi: 00000000 edi: c12756cc ebp: dfea4780 esp: e0552ec0
> ds: 007b es: 007b fs: 0000 gs: 0033 ss: 0068
> Process glxgears (pid: 1776, ti=e0552000 task=c19ec000 task.ti=e0552000)
> Stack: efc32c00 00000000 00000003 efc64b40 c10fa54b efc64b40 efc32c00
> 00000000
> 00000003 00000000 00000000 00000000 00000000 00000001 00010000
> bf8dbdcc
> bf8dbdc8 00000000 c10ff28d 00000029 c12756cc dfea4780 c10f87fc
> bf8dbd54
> Call Trace:
> [<c10fa54b>] drm_lock+0x255/0x2de
> [<c10ff28d>] mga_dma_buffers+0x0/0x2e3
> [<c10f87fc>] drm_ioctl+0x142/0x18a
> [<c11b53f6>] preempt_schedule+0x4e/0x5a
> [<c10f86ba>] drm_ioctl+0x0/0x18a
> [<c10f86ba>] drm_ioctl+0x0/0x18a
> [<c105b0d7>] do_ioctl+0x87/0x9f
> [<c105b32c>] vfs_ioctl+0x23d/0x250
> [<c11b52a9>] schedule+0x23b/0x2e6
> [<c11b533e>] schedule+0x2d0/0x2e6
> [<c105b372>] sys_ioctl+0x33/0x4d
> [<c1003d1e>] syscall_call+0x7/0xb
> =======================
> Code: 9a 08 03 00 00 8b 73 30 74 14 c7 44 24 04 28 76 1c c1 c7 04 24 49
> 51 23 c1 e8 b0 74 f1 ff 8b 83 d8 00 00 00 83 3d 1c 47 30 c1 00 <8b> 40
> 10 8b a8 58 1e 00 00 8b 43 28 8b b8 64 01 00 00 74 32 8b
> EIP: [<c10ff416>] mga_dma_buffers+0x189/0x2e3 SS:ESP 0068:e0552ec0
> [drm:drm_release] *ERROR* Device busy: 2 0

Rene.

2007-08-30 07:20:44

by Ingo Molnar

[permalink] [raw]
Subject: Re: CFS review


* Rene Herman <[email protected]> wrote:

> Realised the BUGs may mean the kernel DRM people could want to be in CC...

and note that the schedule() call in there is not part of the crash
backtrace:

> >Call Trace:
> > [<c10fa54b>] drm_lock+0x255/0x2de
> > [<c10ff28d>] mga_dma_buffers+0x0/0x2e3
> > [<c10f87fc>] drm_ioctl+0x142/0x18a
> > [<c1005973>] do_IRQ+0x97/0xb0
> > [<c10f86ba>] drm_ioctl+0x0/0x18a
> > [<c10f86ba>] drm_ioctl+0x0/0x18a
> > [<c105b0d7>] do_ioctl+0x87/0x9f
> > [<c105b32c>] vfs_ioctl+0x23d/0x250
> > [<c11b533e>] schedule+0x2d0/0x2e6
> > [<c105b372>] sys_ioctl+0x33/0x4d
> > [<c1003d1e>] syscall_call+0x7/0xb

it just happened to be on the kernel stack. Nor is the do_IRQ() entry
real. Both are frequent functions that were executed recently; that's
why their addresses were still in the stack frame.
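
For illustration, here is a minimal user-space sketch (purely illustrative,
not the kernel's actual unwinder) of how such leftover text addresses end up
in a printed trace: without reliable frame pointers, the backtrace code
essentially scans the raw stack and prints every word that falls inside the
kernel text range, whether or not it is a live return address. The text-range
constants and stack contents below are made up for the example.

/*
 * Illustrative sketch only: scan every word on a stack and print anything
 * that looks like a kernel text address. Leftover return addresses from
 * functions that ran earlier on the same stack (schedule(), do_IRQ(), ...)
 * get printed even though they are not part of the live call chain.
 */
#include <stdio.h>

#define TEXT_START 0xc1000000UL   /* illustrative kernel text range */
#define TEXT_END   0xc1200000UL

static void naive_backtrace(const unsigned long *stack, int nwords)
{
    for (int i = 0; i < nwords; i++)
        if (stack[i] >= TEXT_START && stack[i] < TEXT_END)
            printf(" [<%08lx>] (possibly stale)\n", stack[i]);
}

int main(void)
{
    /* Fake stack contents: a mix of data words and stale text addresses. */
    const unsigned long stack[] = {
        0xefc32c00, 0x00000004,
        0xc10fa54b,              /* live return address            */
        0xc11b533e,              /* stale: executed earlier         */
        0xc1005973,              /* stale: executed earlier         */
    };
    naive_backtrace(stack, sizeof stack / sizeof stack[0]);
    return 0;
}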

Ingo

2007-08-30 16:06:58

by Chuck Ebbert

[permalink] [raw]
Subject: Re: CFS review

On 08/29/2007 03:56 PM, Rene Herman wrote:
>
> Before people focus on software rendering too much -- also with 1.3.0 (and
> a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly crummy
> using
> hardware rendering. While I can move the glxgears window itself, the actual
> spinning wheels stay in the upper-left corner of the screen and the
> movement
> leaves a non-repainting trace on the screen. Running a second instance of
> glxgears in addition seems to make both instances unkillable -- and when
> I just now forcefully killed X in this situation (the spinning wheels were
> covering the upper left corner of all my desktops) I got the below.
>
> Kernel is 2.6.22.5-cfs-v20.5, schedule() is in the traces (but that may be
> expected anyway).
>

And this doesn't happen at all with the stock scheduler? (Just confirming,
in case you didn't compare.)

> BUG: unable to handle kernel NULL pointer dereference at virtual address
> 00000010
> printing eip:
> c10ff416
> *pde = 00000000
> Oops: 0000 [#1]
> PREEMPT

Try it without preempt?

> Modules linked in: nfsd exportfs lockd nfs_acl sunrpc nls_iso8859_1
> nls_cp437 vfat fat nls_base
> CPU: 0
> EIP: 0060:[<c10ff416>] Not tainted VLI
> EFLAGS: 00210246 (2.6.22.5-cfs-v20.5-local #5)
> EIP is at mga_dma_buffers+0x189/0x2e3
> eax: 00000000 ebx: efd07200 ecx: 00000001 edx: efc32c00
> esi: 00000000 edi: c12756cc ebp: dfea44c0 esp: dddaaec0
> ds: 007b es: 007b fs: 0000 gs: 0033 ss: 0068
> Process glxgears (pid: 1775, ti=dddaa000 task=e9daca60 task.ti=dddaa000)
> Stack: efc32c00 00000000 00000004 e4c3bd20 c10fa54b e4c3bd20 efc32c00
> 00000000
> 00000004 00000000 00000000 00000000 00000000 00000001 00010000
> bfbdb8bc
> bfbdb8b8 00000000 c10ff28d 00000029 c12756cc dfea44c0 c10f87fc
> bfbdb844
> Call Trace:
> [<c10fa54b>] drm_lock+0x255/0x2de
> [<c10ff28d>] mga_dma_buffers+0x0/0x2e3
> [<c10f87fc>] drm_ioctl+0x142/0x18a
> [<c1005973>] do_IRQ+0x97/0xb0
> [<c10f86ba>] drm_ioctl+0x0/0x18a
> [<c10f86ba>] drm_ioctl+0x0/0x18a
> [<c105b0d7>] do_ioctl+0x87/0x9f
> [<c105b32c>] vfs_ioctl+0x23d/0x250
> [<c11b533e>] schedule+0x2d0/0x2e6
> [<c105b372>] sys_ioctl+0x33/0x4d
> [<c1003d1e>] syscall_call+0x7/0xb
> =======================
> Code: 9a 08 03 00 00 8b 73 30 74 14 c7 44 24 04 28 76 1c c1 c7 04 24 49
> 51 23 c1 e8 b0 74 f1 ff 8b 83 d8 00 00 00 83 3d 1c 47 30 c1 00 <8b> 40
> 10 8b a8 58 1e 00 00 8b 43 28 8b b8 64 01 00 00 74 32 8b
> EIP: [<c10ff416>] mga_dma_buffers+0x189/0x2e3 SS:ESP 0068:dddaaec0

dev->dev_private->mmio is NULL when trying to access mmio.handle
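
For reference, a tiny sketch (the struct below is an assumed stand-in that
only mimics a 32-bit drm map layout; it is not the real definition) of why a
NULL map pointer puts the fault at virtual address 0x00000010, the address
reported in the oops:

/* Illustrative only: if 'handle' sits 0x10 bytes into the map structure,
 * then following a NULL map pointer to reach it faults at virtual address
 * 0x00000010 -- the address reported above. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct fake_drm_map {      /* field widths chosen to mimic a 32-bit layout */
    uint32_t offset;       /* 0x00 */
    uint32_t size;         /* 0x04 */
    int32_t  type;         /* 0x08 */
    int32_t  flags;        /* 0x0c */
    void    *handle;       /* 0x10 <- the faulting offset */
};

int main(void)
{
    printf("offsetof(handle) = %#zx\n", offsetof(struct fake_drm_map, handle));
    return 0;
}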

2007-08-30 16:54:40

by Rene Herman

[permalink] [raw]
Subject: Re: CFS review

On 08/30/2007 06:06 PM, Chuck Ebbert wrote:

> On 08/29/2007 03:56 PM, Rene Herman wrote:

>> Before people focus on software rendering too much -- also with 1.3.0
>> (and a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly
>> crummy using hardware rendering. While I can move the glxgears window
>> itself, the actual spinning wheels stay in the upper-left corner of the
>> screen and the movement leaves a non-repainting trace on the screen.
>> Running a second instance of glxgears in addition seems to make both
>> instances unkillable -- and when I just now forcefully killed X in this
>> situation (the spinning wheels were covering the upper left corner of
>> all my desktops) I got the below.
>>
>> Kernel is 2.6.22.5-cfs-v20.5, schedule() is in the traces (but that may
>> be expected anyway).

> And this doesn't happen at all with the stock scheduler? (Just confirming,
> in case you didn't compare.)

I didn't compare -- no doubt it will happen with the stock scheduler too. I
know the title of this thread is "CFS review" but it turned into Keith
Packard noticing glxgears being broken on recent-ish X.org. The start of the
thread was about things being broken using _software_ rendering though, so I
thought it might be useful to report that glxgears is also quite broken
using hardware rendering, on my setup at least.

>> BUG: unable to handle kernel NULL pointer dereference at virtual address
>> 00000010
>> printing eip:
>> c10ff416
>> *pde = 00000000
>> Oops: 0000 [#1]
>> PREEMPT
>
> Try it without preempt?

If you're asking in an "I'll go debug the DRM" way, I'll go dig a bit later
(please say) but if you are only interested in the thread due to CFS, note
that I'm aware it's not likely to have anything to do with CFS.

It's not reproducible for you? (full description of the bug above).

Rene.

2007-08-31 06:56:20

by Tilman Sauerbeck

[permalink] [raw]
Subject: Re: CFS review

Rene Herman [2007-08-30 09:05]:
> On 08/29/2007 09:56 PM, Rene Herman wrote:
>
> Realised the BUGs may mean the kernel DRM people could want to be in CC...
>
> > On 08/29/2007 05:57 PM, Keith Packard wrote:
> >
> >> With X server 1.3, I'm getting consistent crashes with two glxgear
> >> instances running. So, if you're getting any output, it's better than my
> >> situation.
> >
> > Before people focus on software rendering too much -- also with 1.3.0
> > (and a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly
> > crummy using hardware rendering. While I can move the glxgears window
> > itself, the actual spinning wheels stay in the upper-left corner of the
> > screen and the movement leaves a non-repainting trace on the screen.

This sounds like you're running an older version of Mesa.
The bugfix went into Mesa 6.3 and 7.0.

> > Running a second instance of glxgears in addition seems to make both
> > instances unkillable -- and when I just now forcefully killed X in this
> > situation (the spinning wheels were covering the upper left corner of all
> > my desktops) I got the below.

Running two instances of glxgears and killing them works for me, too.

I'm using xorg-server 1.3.0.0, Mesa 7.0.1 with the latest DRM bits from
http://gitweb.freedesktop.org/?p=mesa/drm.git;a=summary

I'm not running CFS though, but I guess the oops wasn't related to that.

Regards,
Tilman

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?



2007-08-31 10:50:26

by Rene Herman

[permalink] [raw]
Subject: DRM and/or X trouble (was Re: CFS review)

On 08/31/2007 08:46 AM, Tilman Sauerbeck wrote:

>> On 08/29/2007 09:56 PM, Rene Herman wrote:

>>>> With X server 1.3, I'm getting consistent crashes with two glxgear
>>>> instances running. So, if you're getting any output, it's better than my
>>>> situation.
>>> Before people focus on software rendering too much -- also with 1.3.0
>>> (and a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly
>>> crummy using hardware rendering. While I can move the glxgears window
>>> itself, the actual spinning wheels stay in the upper-left corner of the
>>> screen and the movement leaves a non-repainting trace on the screen.
>
> This sounds like you're running an older version of Mesa.
> The bugfix went into Mesa 6.3 and 7.0.

I have Mesa 6.5.2 it seems (slackware-12.0 standard):

OpenGL renderer string: Mesa DRI G400 20061030 AGP 2x x86/MMX+/3DNow!+/SSE
OpenGL version string: 1.2 Mesa 6.5.2

The bit of the problem sketched above -- the gears just sitting there in the
upper left corner of the screen and not moving alongside their window -- is
fully reproducible. The bit below ... :

>>> Running a second instance of glxgears in addition seems to make both
>>> instances unkillable -- and when I just now forcefully killed X in this
>>> situation (the spinning wheels were covering the upper left corner of all
>>> my desktops) I got the below.

[ two kernel BUGs ]

... isn't. This seems to (again) have been a race of sorts that I hit by
accident, since I haven't reproduced it yet. Had the same type of "raciness"
trouble with keyboard behaviour in this version of X earlier.

> Running two instances of glxgears and killing them works for me, too.
>
> I'm using xorg-server 1.3.0.0, Mesa 7.0.1 with the latest DRM bits from
> http://gitweb.freedesktop.org/?p=mesa/drm.git;a=summary

For me, everything is standard slackware-12.0 (X.org 1.3.0) and the kernel 2.6.22 DRM.

> I'm not running CFS though, but I guess the oops wasn't related to that.

I've noticed before that the Matrox driver seems to get little
attention/testing, so maybe that's just it. A G550 is of course, in graphics
time, a Model T by now. I'm rather decidedly not a graphics person so I
don't care a lot, but every time I try to do something fashionable (run
Google Earth for example) I notice things are horribly, horribly broken.

X bugs I do not find very interesting (there are just too many) and the
kernel bugs require more time to reproduce than I have available. If the
BUGs as posted aren't enough for a diagnosis, please consider the report
withdrawn.

Rene.

2007-08-31 14:42:53

by Satyam Sharma

[permalink] [raw]
Subject: Re: DRM and/or X trouble

[ Trimmed Cc: list, dropped sched folk, retained DRM. ]


On Fri, 31 Aug 2007, Rene Herman wrote:

> On 08/31/2007 08:46 AM, Tilman Sauerbeck wrote:
>
> > > On 08/29/2007 09:56 PM, Rene Herman wrote:
>
> > > > > With X server 1.3, I'm getting consistent crashes with two glxgear
> > > > > instances running. So, if you're getting any output, it's better than
> > > > > my
> > > > > situation.
> > > > Before people focus on software rendering too much -- also with 1.3.0
> > > > (and a Matrox Millenium G550 AGP, 32M) glxgears also works decidedly
> > > > crummy using hardware rendering. While I can move the glxgears window
> > > > itself, the actual spinning wheels stay in the upper-left corner of the
> > > > screen and the movement leaves a non-repainting trace on the screen.
> >
> > This sounds like you're running an older version of Mesa.
> > The bugfix went into Mesa 6.3 and 7.0.
>
> I have Mesa 6.5.2 it seems (slackware-12.0 standard):
>
> OpenGL renderer string: Mesa DRI G400 20061030 AGP 2x x86/MMX+/3DNow!+/SSE
> OpenGL version string: 1.2 Mesa 6.5.2
>
> The bit of the problem sketched above -- the gears just sitting there in the
> upper left corner of the screen and not moving alongside their window is fully
> reproducible. The bit below ... :
>
> > > > Running a second instance of glxgears in addition seems to make both
> > > > instances unkillable -- and when I just now forcefully killed X in this
> > > > situation (the spinning wheels were covering the upper left corner of
> > > > all
> > > > my desktops) I got the below.
>
> [ two kernel BUGs ]
>
> ... isn't. This seems to (again) have been a race of sorts that I hit by
> accident, since I haven't reproduced it yet. Had the same type of "raciness"
> trouble with keyboard behaviour in this version of X earlier.

Dave (Airlie), this is an oops first reported at:
http://lkml.org/lkml/2007/8/30/9

mga_freelist_get() is inlined at its only call site, mga_dma_get_buffers(),
which is in turn inlined at its only call site in mga_dma_buffers(). This
oops was hit ...

static struct drm_buf *mga_freelist_get(struct drm_device * dev)
{
...
head = MGA_READ(MGA_PRIMADDRESS); <=== ... HERE.
...
}

MGA_READ() is DRM_READ32(), and dev_priv->mmio was found to be NULL when
trying to access dev_priv->mmio->handle as shown above.
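
A minimal user-space sketch of that chain, assuming DRM_READ32() amounts to
a read through map->handle; the fake_* names are made up for illustration
and the 0x1e58 offset is simply the one visible in the oops disassembly --
this is a sketch of the failure mode, not a proposed patch:

/* Sketch only: models MGA_READ(MGA_PRIMADDRESS) as a plain read through
 * mmio->handle. When dev_priv->mmio is still NULL (the situation in the
 * oops), the kernel code dereferences a NULL pointer; the guard below is
 * what is effectively missing there. */
#include <stdio.h>
#include <stdint.h>

#define FAKE_PRIMADDRESS 0x1e58     /* register offset seen in the oops disassembly */

struct fake_map {
    void *handle;                   /* ioremapped MMIO base in the real driver */
};

static uint32_t fake_mga_read(const struct fake_map *mmio, unsigned long reg)
{
    if (!mmio || !mmio->handle) {   /* in the kernel this is where it oopses */
        fprintf(stderr, "mmio map not initialized\n");
        return 0;
    }
    return *(volatile const uint32_t *)((const uint8_t *)mmio->handle + reg);
}

int main(void)
{
    /* Simulates the crashing case: the map pointer is still NULL. */
    (void)fake_mga_read(NULL, FAKE_PRIMADDRESS);
    return 0;
}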


> > Running two instances of glxgears and killing them works for me, too.
> >
> > I'm using xorg-server 1.3.0.0, Mesa 7.0.1 with the latest DRM bits from
> > http://gitweb.freedesktop.org/?p=mesa/drm.git;a=summary
>
> For me, everything standard slackware-12.0 (X.org 1.3.0) and kernel 2.6.22
> DRM.
>
> > I'm not running CFS though, but I guess the oops wasn't related to that.
>
> I've noticed before the Matrox driver seems to get little attention/testing so
> maybe that's just it. A G550 is of course, in graphics time, a Model T by now.
> I'm rather decidedly not a graphics person so I don't care a lot but every
> time I try to do something fashionable (run Google Earth for example) I notice
> things are horribly, horribly broken.
>
> X bugs I do not find very interesting (there are just too many) and the kernel
> bugs require more time to reproduce than I have available. If the BUGs
> as posted aren't enough for a diagnosis, please consider the report withdrawn.

As you already know by now, this oops isn't a scheduler bug at all, but more
likely a race in the DRM itself (which could possibly have been exposed by
some aspect of CFS). So it makes sense to "withdraw" this as a CFS-related
bug report, but definitely not as a DRM-related bug report.


Satyam