2020-03-04 21:40:29

by Xi Wang

Subject: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

The main purpose of the kernel watchdog is to test whether the
scheduler can still schedule tasks on a cpu. In order to reduce the
latency from periodically invoking the watchdog reset in thread
context, we can simply touch the watchdog from pick_next_task in the
scheduler. Compared to actually resetting the watchdog from the cpu
stop / migration threads, we lose coverage of two steps: the migration
thread actually getting picked, and the actual context switch to the
migration thread. Both steps are heavily protected by kernel locks and
unlikely to fail silently. Thus the change would provide the same
level of protection with less overhead.

The new way vs the old way of touching the watchdog is configurable
via:

/proc/sys/kernel/watchdog_touch_in_thread_interval

The value means:
0: Always touch watchdog from pick_next_task
1: Always touch watchdog from migration thread
N (N>0): Touch watchdog from migration thread once in every N
invocations, and touch watchdog from pick_next_task for
other invocations.
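
As a simplified sketch (using "interval" and "skipped" as shorthand
for sysctl_watchdog_touch_in_thread_interval and the per-cpu
num_watchdog_wakeup_skipped below), the per-wakeup decision in
watchdog_timer_fn() is roughly:

	if (interval == 0 || skipped + 1 < interval) {
		skipped = interval ? skipped + 1 : 0;
		/* touch from pick_next_task via resched */
		resched_for_watchdog();
	} else {
		skipped = 0;
		/* existing path: stop_one_cpu_nowait(softlockup_fn, ...) */
	}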

Suggested-by: Paul Turner <[email protected]>
Signed-off-by: Xi Wang <[email protected]>
---
kernel/sched/core.c | 36 ++++++++++++++++++++++++++++++++++--
kernel/sysctl.c | 11 ++++++++++-
kernel/watchdog.c | 39 ++++++++++++++++++++++++++++++++++-----
3 files changed, 78 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1a9983da4408..9d8e00760d1c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3898,6 +3898,27 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
schedstat_inc(this_rq()->sched_count);
}

+#ifdef CONFIG_SOFTLOCKUP_DETECTOR
+
+DEFINE_PER_CPU(bool, sched_should_touch_watchdog);
+
+void touch_watchdog_from_sched(void);
+
+/* Helper called by watchdog code */
+void resched_for_watchdog(void)
+{
+ unsigned long flags;
+ struct rq *rq = this_rq();
+
+ this_cpu_write(sched_should_touch_watchdog, true);
+ raw_spin_lock_irqsave(&rq->lock, flags);
+ /* Trigger resched for code in pick_next_task to touch watchdog */
+ resched_curr(rq);
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+}
+
+#endif /* CONFIG_SOFTLOCKUP_DETECTOR */
+
/*
* Pick up the highest-prio task:
*/
@@ -3927,7 +3948,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
p = pick_next_task_idle(rq);
}

- return p;
+ goto out;
}

restart:
@@ -3951,11 +3972,22 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
for_each_class(class) {
p = class->pick_next_task(rq);
if (p)
- return p;
+ goto out;
}

/* The idle class should always have a runnable task: */
BUG();
+
+out:
+
+#ifdef CONFIG_SOFTLOCKUP_DETECTOR
+ if (this_cpu_read(sched_should_touch_watchdog)) {
+ touch_watchdog_from_sched();
+ this_cpu_write(sched_should_touch_watchdog, false);
+ }
+#endif
+
+ return p;
}

/*
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index ad5b88a53c5a..adb4b11fbccb 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -118,6 +118,9 @@ extern unsigned int sysctl_nr_open_min, sysctl_nr_open_max;
#ifndef CONFIG_MMU
extern int sysctl_nr_trim_pages;
#endif
+#ifdef CONFIG_SOFTLOCKUP_DETECTOR
+extern unsigned int sysctl_watchdog_touch_in_thread_interval;
+#endif

/* Constants used for minimum and maximum */
#ifdef CONFIG_LOCKUP_DETECTOR
@@ -961,6 +964,13 @@ static struct ctl_table kern_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
+ {
+ .procname = "watchdog_touch_in_thread_interval",
+ .data = &sysctl_watchdog_touch_in_thread_interval,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
#ifdef CONFIG_SMP
{
.procname = "softlockup_all_cpu_backtrace",
@@ -996,7 +1006,6 @@ static struct ctl_table kern_table[] = {
#endif /* CONFIG_SMP */
#endif
#endif
-
#if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_X86)
{
.procname = "unknown_nmi_panic",
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index b6b1f54a7837..f9138c29db48 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -49,6 +49,16 @@ static struct cpumask watchdog_allowed_mask __read_mostly;
struct cpumask watchdog_cpumask __read_mostly;
unsigned long *watchdog_cpumask_bits = cpumask_bits(&watchdog_cpumask);

+#ifdef CONFIG_SOFTLOCKUP_DETECTOR
+/*
+ * 0: Always touch watchdog from pick_next_task
+ * 1: Always touch watchdog from migration thread
+ * N (N>0): Touch watchdog from migration thread once in every N invocations,
+ * and touch watchdog from pick_next_task for other invocations.
+ */
+unsigned int sysctl_watchdog_touch_in_thread_interval = 10;
+#endif
+
#ifdef CONFIG_HARDLOCKUP_DETECTOR
/*
* Should we panic when a soft-lockup or hard-lockup occurs:
@@ -356,6 +366,9 @@ static int softlockup_fn(void *data)
return 0;
}

+static DEFINE_PER_CPU(unsigned int, num_watchdog_wakeup_skipped);
+void resched_for_watchdog(void);
+
/* watchdog kicker functions */
static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
{
@@ -371,11 +384,20 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
watchdog_interrupt_count();

/* kick the softlockup detector */
- if (completion_done(this_cpu_ptr(&softlockup_completion))) {
- reinit_completion(this_cpu_ptr(&softlockup_completion));
- stop_one_cpu_nowait(smp_processor_id(),
- softlockup_fn, NULL,
- this_cpu_ptr(&softlockup_stop_work));
+ if ((!sysctl_watchdog_touch_in_thread_interval ||
+ sysctl_watchdog_touch_in_thread_interval > this_cpu_read(num_watchdog_wakeup_skipped) + 1)) {
+ this_cpu_write(num_watchdog_wakeup_skipped, sysctl_watchdog_touch_in_thread_interval ?
+ this_cpu_read(num_watchdog_wakeup_skipped) + 1 : 0);
+ /* touch watchdog from pick_next_task */
+ resched_for_watchdog();
+ } else {
+ this_cpu_write(num_watchdog_wakeup_skipped, 0);
+ if (completion_done(this_cpu_ptr(&softlockup_completion))) {
+ reinit_completion(this_cpu_ptr(&softlockup_completion));
+ stop_one_cpu_nowait(smp_processor_id(),
+ softlockup_fn, NULL,
+ this_cpu_ptr(&softlockup_stop_work));
+ }
}

/* .. and repeat */
@@ -526,6 +548,13 @@ static int softlockup_start_fn(void *data)
return 0;
}

+
+/* Similar to watchdog thread function but called from pick_next_task */
+void touch_watchdog_from_sched(void)
+{
+ __touch_watchdog();
+}
+
static void softlockup_start_all(void)
{
int cpu;
--
2.25.1.481.gfbce0eb801-goog


2020-03-05 03:12:09

by Steven Rostedt

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Wed, 4 Mar 2020 13:39:41 -0800
Xi Wang <[email protected]> wrote:

> The main purpose of kernel watchdog is to test whether scheduler can
> still schedule tasks on a cpu. In order to reduce latency from
> periodically invoking watchdog reset in thread context, we can simply
> touch watchdog from pick_next_task in scheduler. Compared to actually
> resetting watchdog from cpu stop / migration threads, we lose coverage
> on: a migration thread actually get picked and we actually context
> switch to the migration thread. Both steps are heavily protected by
> kernel locks and unlikely to silently fail. Thus the change would
> provide the same level of protection with less overhead.

Have any measurements showing the drop in overhead?

>
> The new way vs the old way to touch the watchdogs is configurable
> from:
>
> /proc/sys/kernel/watchdog_touch_in_thread_interval
>
> The value means:
> 0: Always touch watchdog from pick_next_task
> 1: Always touch watchdog from migration thread
> N (N>0): Touch watchdog from migration thread once in every N
> invocations, and touch watchdog from pick_next_task for
> other invocations.
>
> Suggested-by: Paul Turner <[email protected]>
> Signed-off-by: Xi Wang <[email protected]>
> ---
> kernel/sched/core.c | 36 ++++++++++++++++++++++++++++++++++--
> kernel/sysctl.c | 11 ++++++++++-
> kernel/watchdog.c | 39 ++++++++++++++++++++++++++++++++++-----
> 3 files changed, 78 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1a9983da4408..9d8e00760d1c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3898,6 +3898,27 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
> schedstat_inc(this_rq()->sched_count);
> }
>
> +#ifdef CONFIG_SOFTLOCKUP_DETECTOR
> +
> +DEFINE_PER_CPU(bool, sched_should_touch_watchdog);
> +
> +void touch_watchdog_from_sched(void);
> +
> +/* Helper called by watchdog code */
> +void resched_for_watchdog(void)
> +{
> + unsigned long flags;
> + struct rq *rq = this_rq();
> +
> + this_cpu_write(sched_should_touch_watchdog, true);

Perhaps we should have a preempt_disable, otherwise it is possible
to get preempted here.

-- Steve

> + raw_spin_lock_irqsave(&rq->lock, flags);
> + /* Trigger resched for code in pick_next_task to touch watchdog */
> + resched_curr(rq);
> + raw_spin_unlock_irqrestore(&rq->lock, flags);
> +}
> +
> +#endif /* CONFIG_SOFTLOCKUP_DETECTOR */
> +
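
A minimal sketch of the variant being suggested here (hypothetical;
whether it is actually needed depends on the calling context, which is
discussed later in the thread):

	void resched_for_watchdog(void)
	{
		unsigned long flags;
		struct rq *rq;

		/* keep this_rq() and the per-cpu flag on the same cpu */
		preempt_disable();
		rq = this_rq();
		this_cpu_write(sched_should_touch_watchdog, true);
		raw_spin_lock_irqsave(&rq->lock, flags);
		/* trigger a resched so pick_next_task() touches the watchdog */
		resched_curr(rq);
		raw_spin_unlock_irqrestore(&rq->lock, flags);
		preempt_enable();
	}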

2020-03-05 07:58:24

by Peter Zijlstra

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Wed, Mar 04, 2020 at 01:39:41PM -0800, Xi Wang wrote:
> The main purpose of kernel watchdog is to test whether scheduler can
> still schedule tasks on a cpu. In order to reduce latency from
> periodically invoking watchdog reset in thread context, we can simply
> touch watchdog from pick_next_task in scheduler. Compared to actually
> resetting watchdog from cpu stop / migration threads, we lose coverage
> on: a migration thread actually get picked and we actually context
> switch to the migration thread. Both steps are heavily protected by
> kernel locks and unlikely to silently fail. Thus the change would
> provide the same level of protection with less overhead.
>
> The new way vs the old way to touch the watchdogs is configurable
> from:
>
> /proc/sys/kernel/watchdog_touch_in_thread_interval
>
> The value means:
> 0: Always touch watchdog from pick_next_task
> 1: Always touch watchdog from migration thread
> N (N>0): Touch watchdog from migration thread once in every N
> invocations, and touch watchdog from pick_next_task for
> other invocations.
>

This is configurable madness. What are we really trying to do here?

2020-03-05 18:08:10

by Thomas Gleixner

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

Peter Zijlstra <[email protected]> writes:

> On Wed, Mar 04, 2020 at 01:39:41PM -0800, Xi Wang wrote:
>> The main purpose of kernel watchdog is to test whether scheduler can
>> still schedule tasks on a cpu. In order to reduce latency from
>> periodically invoking watchdog reset in thread context, we can simply
>> touch watchdog from pick_next_task in scheduler. Compared to actually
>> resetting watchdog from cpu stop / migration threads, we lose coverage
>> on: a migration thread actually get picked and we actually context
>> switch to the migration thread. Both steps are heavily protected by
>> kernel locks and unlikely to silently fail. Thus the change would
>> provide the same level of protection with less overhead.
>>
>> The new way vs the old way to touch the watchdogs is configurable
>> from:
>>
>> /proc/sys/kernel/watchdog_touch_in_thread_interval
>>
>> The value means:
>> 0: Always touch watchdog from pick_next_task
>> 1: Always touch watchdog from migration thread
>> N (N>0): Touch watchdog from migration thread once in every N
>> invocations, and touch watchdog from pick_next_task for
>> other invocations.
>>
>
> This is configurable madness. What are we really trying to do here?

Create yet another knob which will be advertised in random web blogs to
solve all problems of the world and some more. Like the one which got
silently turned into a NOOP ~10 years ago :)


2020-03-05 21:43:26

by Xi Wang

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

Measuring jitter from a userspace busy loop showed that a 4us peak was
flattened out of the histogram (on Cascade Lake). So the effect is
likely a reduction of overhead/jitter by about 4us.

The code in resched_for_watchdog should be ok, since it is always
called from the watchdog hrtimer function?

Why support the option to alternate between thread context and touch
in sched: it might be a little risky to completely switch to the
touch-in-sched method. Touching in sched 9 out of 10 times still
captures most of the latency benefit. I can remove the option or
change it to on/off if desired.
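
As a rough worked example, assuming the default watchdog_thresh of 10:
the softlockup hrtimer period is 10 * 2 / 5 = 4s, so with the default
interval of 10 the thread-based (stop work) touch still runs about
once every 40s per cpu, while the other nine touches in between come
from pick_next_task.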

Advertising the knob on random blogs: Maybe I should create a blog :)

-Xi


On Thu, Mar 5, 2020 at 10:07 AM Thomas Gleixner <[email protected]> wrote:
>
> Peter Zijlstra <[email protected]> writes:
>
> > On Wed, Mar 04, 2020 at 01:39:41PM -0800, Xi Wang wrote:
> >> The main purpose of kernel watchdog is to test whether scheduler can
> >> still schedule tasks on a cpu. In order to reduce latency from
> >> periodically invoking watchdog reset in thread context, we can simply
> >> touch watchdog from pick_next_task in scheduler. Compared to actually
> >> resetting watchdog from cpu stop / migration threads, we lose coverage
> >> on: a migration thread actually get picked and we actually context
> >> switch to the migration thread. Both steps are heavily protected by
> >> kernel locks and unlikely to silently fail. Thus the change would
> >> provide the same level of protection with less overhead.
> >>
> >> The new way vs the old way to touch the watchdogs is configurable
> >> from:
> >>
> >> /proc/sys/kernel/watchdog_touch_in_thread_interval
> >>
> >> The value means:
> >> 0: Always touch watchdog from pick_next_task
> >> 1: Always touch watchdog from migration thread
> >> N (N>0): Touch watchdog from migration thread once in every N
> >> invocations, and touch watchdog from pick_next_task for
> >> other invocations.
> >>
> >
> > This is configurable madness. What are we really trying to do here?
>
> Create yet another knob which will be advertised in random web blogs to
> solve all problems of the world and some more. Like the one which got
> silently turned into a NOOP ~10 years ago :)
>
>

2020-03-05 22:10:07

by Paul Turner

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Thu, Mar 5, 2020 at 10:07 AM Thomas Gleixner <[email protected]> wrote:
>
> Peter Zijlstra <[email protected]> writes:
>
> > On Wed, Mar 04, 2020 at 01:39:41PM -0800, Xi Wang wrote:
> >> The main purpose of kernel watchdog is to test whether scheduler can
> >> still schedule tasks on a cpu. In order to reduce latency from
> >> periodically invoking watchdog reset in thread context, we can simply
> >> touch watchdog from pick_next_task in scheduler. Compared to actually
> >> resetting watchdog from cpu stop / migration threads, we lose coverage
> >> on: a migration thread actually get picked and we actually context
> >> switch to the migration thread. Both steps are heavily protected by
> >> kernel locks and unlikely to silently fail. Thus the change would
> >> provide the same level of protection with less overhead.
> >>
> >> The new way vs the old way to touch the watchdogs is configurable
> >> from:
> >>
> >> /proc/sys/kernel/watchdog_touch_in_thread_interval
> >>
> >> The value means:
> >> 0: Always touch watchdog from pick_next_task
> >> 1: Always touch watchdog from migration thread
> >> N (N>0): Touch watchdog from migration thread once in every N
> >> invocations, and touch watchdog from pick_next_task for
> >> other invocations.
> >>
> >
> > This is configurable madness. What are we really trying to do here?
>
> Create yet another knob which will be advertised in random web blogs to
> solve all problems of the world and some more. Like the one which got
> silently turned into a NOOP ~10 years ago :)
>

The knob can obviously be removed, it's vestigial and reflects caution
from when we were implementing / rolling things over to it. We have
default values that we know work at scale. I don't think this actually
needs or wants to be tunable beyond on or off (and even that could be
strictly compile or boot time only).
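
If it were reduced to a boot-time switch, that could look roughly like
the following (hypothetical sketch, parameter name invented):

	static bool watchdog_touch_in_sched __ro_after_init = true;

	static int __init setup_watchdog_touch_in_sched(char *str)
	{
		/* accept "watchdog_touch_in_sched=0/1/on/off" */
		return kstrtobool(str, &watchdog_touch_in_sched) == 0;
	}
	__setup("watchdog_touch_in_sched=", setup_watchdog_touch_in_sched);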

2020-03-05 22:13:04

by Paul Turner

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Wed, Mar 4, 2020 at 11:57 PM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Mar 04, 2020 at 01:39:41PM -0800, Xi Wang wrote:
> > The main purpose of kernel watchdog is to test whether scheduler can
> > still schedule tasks on a cpu. In order to reduce latency from
> > periodically invoking watchdog reset in thread context, we can simply
> > touch watchdog from pick_next_task in scheduler. Compared to actually
> > resetting watchdog from cpu stop / migration threads, we lose coverage
> > on: a migration thread actually get picked and we actually context
> > switch to the migration thread. Both steps are heavily protected by
> > kernel locks and unlikely to silently fail. Thus the change would
> > provide the same level of protection with less overhead.
> >
> > The new way vs the old way to touch the watchdogs is configurable
> > from:
> >
> > /proc/sys/kernel/watchdog_touch_in_thread_interval
> >
> > The value means:
> > 0: Always touch watchdog from pick_next_task
> > 1: Always touch watchdog from migration thread
> > N (N>0): Touch watchdog from migration thread once in every N
> > invocations, and touch watchdog from pick_next_task for
> > other invocations.
> >
>
> This is configurable madness. What are we really trying to do here?

See reply to Thomas, no config is actually required here. Focusing on
the intended outcome:

The goal is to improve jitter since we're constantly periodically
preempting other classes to run the watchdog. Even on a single CPU
this is measurable as jitter in the us range. But, what increases the
motivation is this disruption has been recently magnified by CPU
"gifts" which require evicting the whole core when one of the siblings
schedules one of these watchdog threads.

The majority outcome being asserted here is that we could actually
exercise pick_next_task if required -- there are other potential
things this will catch, but they are much more braindead generally
speaking (e.g. a bug in pick_next_task itself).

2020-03-06 08:29:52

by Peter Zijlstra

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code



A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

2020-03-06 08:42:57

by Peter Zijlstra

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Thu, Mar 05, 2020 at 02:11:49PM -0800, Paul Turner wrote:
> The goal is to improve jitter since we're constantly periodically
> preempting other classes to run the watchdog. Even on a single CPU
> this is measurable as jitter in the us range. But, what increases the
> motivation is this disruption has been recently magnified by CPU
> "gifts" which require evicting the whole core when one of the siblings
> schedules one of these watchdog threads.
>
> The majority outcome being asserted here is that we could actually
> exercise pick_next_task if required -- there are other potential
> things this will catch, but they are much more braindead generally
> speaking (e.g. a bug in pick_next_task itself).

I still utterly hate what the patch does though; there is no way I'll
have watchdog code hook in the scheduler like this. That's just asking
for trouble.

Why isn't it sufficient to sample the existing context switch counters
from the watchdog? And why can't we fix that?

2020-03-06 22:35:59

by Xi Wang

Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Fri, Mar 6, 2020 at 12:40 AM Peter Zijlstra <[email protected]> wrote:
>
> On Thu, Mar 05, 2020 at 02:11:49PM -0800, Paul Turner wrote:
> > The goal is to improve jitter since we're constantly periodically
> > preempting other classes to run the watchdog. Even on a single CPU
> > this is measurable as jitter in the us range. But, what increases the
> > motivation is this disruption has been recently magnified by CPU
> > "gifts" which require evicting the whole core when one of the siblings
> > schedules one of these watchdog threads.
> >
> > The majority outcome being asserted here is that we could actually
> > exercise pick_next_task if required -- there are other potential
> > things this will catch, but they are much more braindead generally
> > speaking (e.g. a bug in pick_next_task itself).
>
> I still utterly hate what the patch does though; there is no way I'll
> have watchdog code hook in the scheduler like this. That's just asking
> for trouble.
>
> Why isn't it sufficient to sample the existing context switch counters
> from the watchdog? And why can't we fix that?

We could go into pick_next_task and repick the same task. There won't
be a context switch, but we still want to hold off the watchdog. I
assume such a counter would also need to be per cpu and updated inside
the rq lock. There doesn't seem to be an existing one that fits this
purpose.
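
For concreteness, the kind of new per-cpu counter implied here might
look roughly like this (hypothetical sketch; the nr_picks increment
would live at the end of pick_next_task(), under the rq lock):

	DEFINE_PER_CPU(unsigned long, nr_picks);

	/* watchdog side: did pick_next_task() run since the last check? */
	static DEFINE_PER_CPU(unsigned long, last_nr_picks);

	static bool cpu_made_progress(void)
	{
		unsigned long now = this_cpu_read(nr_picks);

		if (now == this_cpu_read(last_nr_picks))
			return false;
		this_cpu_write(last_nr_picks, now);
		return true;
	}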

2020-03-10 08:44:34

by Chen, Rong A

Subject: [sched] db8e976e4a: will-it-scale.per_process_ops 15.8% improvement

Greeting,

FYI, we noticed a 15.8% improvement of will-it-scale.per_process_ops due to commit:


commit: db8e976e4a08f1f194a3503f88dec1319f9ee34f ("[PATCH] sched: watchdog: Touch kernel watchdog in sched code")
url: https://github.com/0day-ci/linux/commits/Xi-Wang/sched-watchdog-Touch-kernel-watchdog-in-sched-code/20200305-062335


in testcase: will-it-scale
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with following parameters:

nr_task: 100%
mode: process
test: mmap1
cpufreq_governor: performance
ucode: 0x11

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale





Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-20191114.cgz/lkp-knm01/mmap1/will-it-scale/0x11

commit:
6f2bc932d8 ("Merge branch 'core/objtool'")
db8e976e4a ("sched: watchdog: Touch kernel watchdog in sched code")

6f2bc932d8ff72b1 db8e976e4a08f1f194a3503f88d
---------------- ---------------------------
       fail:runs    %reproduction    fail:runs
             2:4             -50%           :4   dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
              :4              50%          2:4   dmesg.WARNING:at_ip_perf_event_mmap_output/0x
         %stddev          %change      %stddev
1426 +15.8% 1651 will-it-scale.per_process_ops
411035 +15.7% 475727 will-it-scale.workload
11380 ± 2% +2.6% 11674 boot-time.idle
2001 -6.9% 1862 ± 2% vmstat.system.cs
1385167 ± 2% +14.2% 1582207 ± 2% meminfo.DirectMap4k
11741 ± 4% +9.1% 12805 meminfo.max_used_kB
112158 +1.6% 113946 proc-vmstat.nr_anon_pages
41945 ± 2% -2.7% 40804 proc-vmstat.nr_shmem
606.55 ± 10% -28.5% 433.40 ± 11% sched_debug.cfs_rq:/.util_avg.min
186488 ± 5% +39.7% 260513 ± 14% sched_debug.cpu.max_idle_balance_cost.stddev
130.50 -23.1% 100.35 ± 4% sched_debug.cpu.ttwu_count.min
124.95 ± 2% -23.5% 95.55 ± 4% sched_debug.cpu.ttwu_local.min
9.733e+09 +2.8% 1e+10 perf-stat.i.branch-instructions
1.09 ± 2% +0.1 1.18 perf-stat.i.branch-miss-rate%
1.029e+08 ± 3% +12.7% 1.16e+08 perf-stat.i.branch-misses
1989 -6.3% 1863 perf-stat.i.context-switches
11.26 -2.6% 10.98 perf-stat.i.cpi
61009464 ± 2% +9.1% 66590176 perf-stat.i.iTLB-load-misses
3.973e+10 +2.8% 4.084e+10 perf-stat.i.iTLB-loads
3.968e+10 +2.8% 4.08e+10 perf-stat.i.instructions
653.29 ± 2% -5.9% 614.52 perf-stat.i.instructions-per-iTLB-miss
0.09 +2.4% 0.09 perf-stat.i.ipc
1.05 ± 2% +0.1 1.15 perf-stat.overall.branch-miss-rate%
11.29 -2.5% 11.00 perf-stat.overall.cpi
0.15 ± 2% +0.0 0.16 perf-stat.overall.iTLB-load-miss-rate%
649.22 ± 2% -5.9% 610.81 perf-stat.overall.instructions-per-iTLB-miss
0.09 +2.6% 0.09 perf-stat.overall.ipc
29737649 -11.6% 26299776 perf-stat.overall.path-length
9.723e+09 +2.5% 9.971e+09 perf-stat.ps.branch-instructions
1.023e+08 ± 3% +12.5% 1.15e+08 perf-stat.ps.branch-misses
1926 -7.3% 1786 ± 2% perf-stat.ps.context-switches
61101086 ± 3% +9.0% 66601356 perf-stat.ps.iTLB-load-misses
3.967e+10 +2.6% 4.069e+10 perf-stat.ps.iTLB-loads
3.964e+10 +2.6% 4.067e+10 perf-stat.ps.instructions
1.222e+13 +2.4% 1.251e+13 perf-stat.total.instructions
20.96 ± 59% -21.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
20.95 ± 59% -21.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.50 ± 58% -10.5 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.49 ± 58% -10.5 0.00 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.20 ± 59% -10.2 0.00 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.20 ± 59% -10.2 0.00 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.85 ± 17% +0.3 1.12 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
0.92 ± 17% +0.3 1.21 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
1.18 ± 17% +0.4 1.55 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
38.17 ± 15% +10.3 48.45 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
38.18 ± 15% +10.3 48.47 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
38.45 ± 15% +10.4 48.86 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
38.46 ± 15% +10.4 48.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
38.66 ± 15% +10.5 49.12 perf-profile.calltrace.cycles-pp.mmap64
39.32 ± 15% +10.6 49.88 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.34 ± 15% +10.6 49.91 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.63 ± 15% +10.7 50.31 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.64 ± 15% +10.7 50.33 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
39.82 ± 15% +10.7 50.54 perf-profile.calltrace.cycles-pp.munmap
0.90 ± 6% -0.1 0.78 perf-profile.children.cycles-pp.tick_sched_handle
0.94 ± 6% -0.1 0.83 perf-profile.children.cycles-pp.tick_sched_timer
0.88 ± 6% -0.1 0.77 perf-profile.children.cycles-pp.update_process_times
0.69 ± 7% -0.1 0.60 perf-profile.children.cycles-pp.scheduler_tick
0.47 ± 2% -0.0 0.43 perf-profile.children.cycles-pp.task_tick_fair
0.14 ± 5% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.update_curr
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.remove_vma
0.14 ± 6% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.15 ± 14% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.perf_iterate_sb
1.12 ± 4% +0.0 1.17 perf-profile.children.cycles-pp.unmap_page_range
0.00 +0.1 0.05 perf-profile.children.cycles-pp.perf_event_mmap_output
1.17 ± 4% +0.1 1.22 perf-profile.children.cycles-pp.unmap_vmas
0.00 +0.1 0.09 ± 13% perf-profile.children.cycles-pp.userfaultfd_unmap_complete
0.27 ± 8% +0.1 0.41 perf-profile.children.cycles-pp.perf_event_mmap
38.67 ± 15% +10.5 49.12 perf-profile.children.cycles-pp.mmap64
39.84 ± 15% +10.7 50.56 perf-profile.children.cycles-pp.munmap
0.08 ± 5% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.update_curr
0.12 ± 4% +0.0 0.15 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.52 ± 3% +0.0 0.55 perf-profile.self.cycles-pp.___might_sleep
0.09 ± 14% +0.0 0.14 ± 11% perf-profile.self.cycles-pp.do_syscall_64
0.00 +0.1 0.05 perf-profile.self.cycles-pp.perf_event_mmap_output
0.00 +0.1 0.09 ± 14% perf-profile.self.cycles-pp.userfaultfd_unmap_complete
0.14 ± 16% +0.1 0.23 perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.07 ± 6% +0.1 0.17 ± 3% perf-profile.self.cycles-pp.perf_event_mmap
136047 -10.3% 122085 softirqs.CPU0.TIMER
142844 ± 7% -12.9% 124359 softirqs.CPU1.TIMER
135746 -9.4% 123045 softirqs.CPU10.TIMER
52171 ± 13% +30.0% 67833 ± 11% softirqs.CPU105.RCU
43570 ± 12% +32.7% 57800 ± 6% softirqs.CPU109.RCU
135354 -9.8% 122137 softirqs.CPU11.TIMER
61775 ± 6% -15.1% 52429 ± 2% softirqs.CPU112.RCU
65779 ± 7% -24.4% 49713 ± 11% softirqs.CPU116.RCU
67826 ± 7% -11.6% 59956 ± 6% softirqs.CPU119.RCU
136890 -11.2% 121517 softirqs.CPU12.TIMER
67497 ± 10% -26.7% 49495 ± 8% softirqs.CPU126.RCU
66978 ± 8% -14.4% 57348 softirqs.CPU127.RCU
68007 ± 8% -19.1% 55010 ± 4% softirqs.CPU128.RCU
73177 ± 20% -23.7% 55863 ± 5% softirqs.CPU129.RCU
137065 -10.8% 122305 softirqs.CPU13.TIMER
45451 ± 13% +36.0% 61809 ± 9% softirqs.CPU134.RCU
42557 ± 15% +40.1% 59639 ± 3% softirqs.CPU135.RCU
135866 ± 2% -10.3% 121860 softirqs.CPU14.TIMER
62809 ± 9% -17.9% 51590 ± 17% softirqs.CPU142.RCU
134061 -10.4% 120146 softirqs.CPU145.TIMER
47655 ± 10% +36.4% 65020 ± 11% softirqs.CPU146.RCU
46689 ± 14% +43.6% 67028 ± 22% softirqs.CPU147.RCU
132777 -9.0% 120864 softirqs.CPU147.TIMER
58663 ± 17% -21.4% 46097 ± 8% softirqs.CPU148.RCU
135185 -10.3% 121233 softirqs.CPU15.TIMER
45147 ± 18% +31.8% 59524 ± 8% softirqs.CPU150.RCU
43131 ± 10% +30.8% 56431 ± 12% softirqs.CPU151.RCU
45028 ± 14% +29.6% 58361 ± 8% softirqs.CPU156.RCU
66885 ± 19% -28.2% 48046 ± 9% softirqs.CPU158.RCU
54669 ± 11% +15.7% 63279 ± 5% softirqs.CPU16.RCU
135175 -10.2% 121330 softirqs.CPU16.TIMER
65572 ± 12% -23.8% 49957 ± 13% softirqs.CPU164.RCU
44791 ± 16% +38.3% 61959 ± 9% softirqs.CPU169.RCU
135558 -10.6% 121159 softirqs.CPU17.TIMER
133452 -9.2% 121202 softirqs.CPU170.TIMER
133532 -9.3% 121141 softirqs.CPU171.TIMER
47392 ± 11% +24.7% 59117 ± 8% softirqs.CPU173.RCU
133403 -9.0% 121345 softirqs.CPU174.TIMER
135163 -9.9% 121727 softirqs.CPU18.TIMER
135425 -10.3% 121530 softirqs.CPU19.TIMER
42257 ± 12% +42.4% 60184 ± 4% softirqs.CPU195.RCU
53072 ± 7% +20.6% 64002 ± 4% softirqs.CPU199.RCU
135802 -10.5% 121598 softirqs.CPU2.TIMER
135522 -10.3% 121496 softirqs.CPU20.TIMER
69873 ± 8% -23.1% 53716 ± 15% softirqs.CPU206.RCU
66999 ± 6% -26.7% 49080 ± 11% softirqs.CPU207.RCU
70738 ± 5% -18.3% 57793 ± 5% softirqs.CPU208.RCU
73865 ± 20% -26.2% 54482 ± 7% softirqs.CPU209.RCU
135015 -9.9% 121582 softirqs.CPU21.TIMER
46523 ± 13% +31.6% 61230 ± 8% softirqs.CPU211.RCU
133727 -9.8% 120658 softirqs.CPU214.TIMER
52296 ± 15% +25.3% 65529 ± 11% softirqs.CPU215.RCU
43894 ± 10% +37.8% 60485 ± 8% softirqs.CPU217.RCU
73420 ± 21% -27.1% 53490 ± 6% softirqs.CPU219.RCU
135360 -10.0% 121811 softirqs.CPU22.TIMER
132373 -9.1% 120338 softirqs.CPU220.TIMER
133390 -9.3% 121016 softirqs.CPU222.TIMER
55865 ± 4% -9.0% 50834 ± 6% softirqs.CPU225.RCU
136998 -11.4% 121423 softirqs.CPU23.TIMER
54255 ± 5% -14.3% 46491 ± 9% softirqs.CPU232.RCU
133106 -9.4% 120627 softirqs.CPU232.TIMER
44670 ± 12% +18.6% 52956 ± 10% softirqs.CPU234.RCU
52856 ± 10% +17.3% 62004 ± 4% softirqs.CPU238.RCU
48377 ± 11% +23.2% 59609 ± 5% softirqs.CPU239.RCU
135035 -10.1% 121361 softirqs.CPU24.TIMER
77763 ± 21% -35.8% 49895 ± 13% softirqs.CPU243.RCU
63845 ± 11% -19.5% 51400 ± 13% softirqs.CPU249.RCU
135168 -10.2% 121439 softirqs.CPU25.TIMER
138499 ± 2% -12.5% 121152 softirqs.CPU26.TIMER
132357 -9.2% 120182 softirqs.CPU264.TIMER
132170 -9.3% 119886 softirqs.CPU266.TIMER
42781 ± 12% +20.9% 51738 ± 3% softirqs.CPU267.RCU
133062 -9.7% 120174 softirqs.CPU267.TIMER
132712 -10.1% 119303 softirqs.CPU268.TIMER
131632 -9.0% 119730 softirqs.CPU269.TIMER
135175 -10.7% 120773 softirqs.CPU27.TIMER
133004 -9.9% 119900 softirqs.CPU270.TIMER
133270 -10.3% 119510 softirqs.CPU271.TIMER
132916 -10.0% 119643 softirqs.CPU272.TIMER
133082 -10.0% 119776 softirqs.CPU274.TIMER
133786 -10.6% 119578 softirqs.CPU275.TIMER
132309 -9.6% 119672 softirqs.CPU276.TIMER
132662 -9.9% 119494 softirqs.CPU278.TIMER
134886 -9.4% 122225 softirqs.CPU28.TIMER
132521 -9.6% 119769 softirqs.CPU280.TIMER
134159 ± 2% -10.4% 120251 softirqs.CPU282.TIMER
133004 -9.3% 120668 softirqs.CPU283.TIMER
132418 -9.9% 119268 softirqs.CPU284.TIMER
131960 -9.5% 119474 softirqs.CPU285.TIMER
132179 ± 2% -9.6% 119440 softirqs.CPU286.TIMER
134987 -10.2% 121232 softirqs.CPU29.TIMER
134984 -10.0% 121436 softirqs.CPU3.TIMER
134885 -9.3% 122324 ± 2% softirqs.CPU30.TIMER
135262 -10.4% 121151 softirqs.CPU31.TIMER
134900 -10.0% 121406 softirqs.CPU32.TIMER
135067 -10.3% 121138 softirqs.CPU33.TIMER
135436 -10.5% 121240 softirqs.CPU34.TIMER
134709 -9.5% 121965 softirqs.CPU35.TIMER
134664 -10.1% 121053 softirqs.CPU36.TIMER
134921 -10.0% 121492 softirqs.CPU38.TIMER
55252 ± 5% -20.1% 44140 ± 10% softirqs.CPU39.RCU
137516 ± 3% -11.8% 121343 softirqs.CPU39.TIMER
134929 -10.0% 121470 softirqs.CPU40.TIMER
138809 ± 6% -12.2% 121885 softirqs.CPU41.TIMER
134871 -10.1% 121298 softirqs.CPU43.TIMER
134766 -9.6% 121772 softirqs.CPU44.TIMER
149033 ± 12% -18.5% 121486 softirqs.CPU45.TIMER
134119 -9.7% 121157 softirqs.CPU46.TIMER
134773 -10.0% 121245 softirqs.CPU47.TIMER
138113 ± 5% -12.3% 121156 softirqs.CPU48.TIMER
133896 -9.9% 120615 softirqs.CPU49.TIMER
135352 -10.5% 121195 softirqs.CPU5.TIMER
134644 -9.9% 121354 softirqs.CPU50.TIMER
134956 -9.6% 122021 ± 2% softirqs.CPU51.TIMER
137890 ± 4% -12.1% 121232 softirqs.CPU52.TIMER
143732 ± 6% -13.4% 124469 ± 4% softirqs.CPU53.TIMER
134524 -9.5% 121793 softirqs.CPU54.TIMER
134248 -9.7% 121285 softirqs.CPU56.TIMER
133592 -9.2% 121293 softirqs.CPU57.TIMER
52449 ± 6% +11.9% 58710 ± 3% softirqs.CPU58.RCU
134725 -10.0% 121290 softirqs.CPU58.TIMER
134055 -9.6% 121131 softirqs.CPU59.TIMER
135888 -10.6% 121534 softirqs.CPU6.TIMER
133889 -9.3% 121489 softirqs.CPU60.TIMER
134347 -9.6% 121505 softirqs.CPU61.TIMER
134305 -9.3% 121753 softirqs.CPU62.TIMER
134645 -9.5% 121901 ± 2% softirqs.CPU63.TIMER
134588 -9.6% 121707 softirqs.CPU64.TIMER
134167 -9.9% 120874 softirqs.CPU65.TIMER
134594 -9.4% 121950 softirqs.CPU66.TIMER
133648 -9.2% 121319 softirqs.CPU68.TIMER
63238 ± 6% -12.5% 55351 ± 2% softirqs.CPU69.RCU
134627 -10.0% 121216 softirqs.CPU69.TIMER
135395 -10.4% 121298 softirqs.CPU7.TIMER
142231 ± 9% -14.6% 121503 softirqs.CPU70.TIMER
64567 ± 11% -25.3% 48263 ± 9% softirqs.CPU73.RCU
69013 ± 38% -30.1% 48221 ± 14% softirqs.CPU76.RCU
136051 ± 3% -10.6% 121631 softirqs.CPU78.TIMER
135859 ± 2% -10.6% 121477 softirqs.CPU8.TIMER
45091 ± 11% +34.3% 60576 ± 2% softirqs.CPU84.RCU
134229 -8.8% 122430 ± 2% softirqs.CPU84.TIMER
134922 -9.6% 121932 softirqs.CPU85.TIMER
72255 ± 10% -21.2% 56910 ± 6% softirqs.CPU86.RCU
67986 ± 5% -20.4% 54106 ± 8% softirqs.CPU87.RCU
134602 -9.9% 121326 softirqs.CPU9.TIMER
45407 ± 14% +35.4% 61490 ± 3% softirqs.CPU94.RCU
352236 ± 4% +8.9% 383520 ± 4% interrupts.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU1.CAL:Function_call_interrupts
698.00 ± 23% +71.2% 1195 ± 11% interrupts.CPU1.RES:Rescheduling_interrupts
1223 ± 4% +9.0% 1333 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU100.CAL:Function_call_interrupts
1221 ± 4% +9.1% 1332 ± 4% interrupts.CPU101.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU102.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1329 ± 4% interrupts.CPU103.CAL:Function_call_interrupts
1217 ± 3% +9.2% 1329 ± 4% interrupts.CPU104.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1329 ± 4% interrupts.CPU105.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU106.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1329 ± 4% interrupts.CPU107.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1328 ± 3% interrupts.CPU108.CAL:Function_call_interrupts
1221 ± 4% +8.7% 1328 ± 3% interrupts.CPU109.CAL:Function_call_interrupts
1224 ± 3% +8.9% 1333 ± 4% interrupts.CPU11.CAL:Function_call_interrupts
1221 ± 4% +8.7% 1327 ± 4% interrupts.CPU110.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU111.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU112.CAL:Function_call_interrupts
1222 ± 4% +8.7% 1327 ± 4% interrupts.CPU114.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1329 ± 4% interrupts.CPU115.CAL:Function_call_interrupts
1222 ± 4% +8.7% 1328 ± 3% interrupts.CPU116.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1330 ± 4% interrupts.CPU117.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1330 ± 4% interrupts.CPU118.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1331 ± 4% interrupts.CPU119.CAL:Function_call_interrupts
1223 ± 3% +9.1% 1334 ± 4% interrupts.CPU12.CAL:Function_call_interrupts
1222 ± 4% +8.7% 1328 ± 4% interrupts.CPU120.CAL:Function_call_interrupts
1223 ± 4% +8.2% 1323 ± 3% interrupts.CPU121.CAL:Function_call_interrupts
1223 ± 4% +8.6% 1328 ± 3% interrupts.CPU122.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1330 ± 4% interrupts.CPU123.CAL:Function_call_interrupts
1222 ± 4% +8.6% 1327 ± 3% interrupts.CPU124.CAL:Function_call_interrupts
6560 ± 24% -27.9% 4729 ± 34% interrupts.CPU124.NMI:Non-maskable_interrupts
6560 ± 24% -27.9% 4729 ± 34% interrupts.CPU124.PMI:Performance_monitoring_interrupts
1222 ± 4% +8.7% 1329 ± 4% interrupts.CPU126.CAL:Function_call_interrupts
1225 ± 4% +8.2% 1325 ± 4% interrupts.CPU127.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1330 ± 3% interrupts.CPU128.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1331 ± 4% interrupts.CPU129.CAL:Function_call_interrupts
1221 ± 4% +9.2% 1334 ± 4% interrupts.CPU13.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1330 ± 4% interrupts.CPU130.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 4% interrupts.CPU131.CAL:Function_call_interrupts
1220 ± 3% +9.0% 1330 ± 4% interrupts.CPU132.CAL:Function_call_interrupts
117.75 ± 41% -68.6% 37.00 ± 4% interrupts.CPU132.RES:Rescheduling_interrupts
1222 ± 3% +8.9% 1331 ± 3% interrupts.CPU133.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1330 ± 4% interrupts.CPU134.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1331 ± 3% interrupts.CPU135.CAL:Function_call_interrupts
1222 ± 4% +8.8% 1330 ± 4% interrupts.CPU136.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1331 ± 4% interrupts.CPU137.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1330 ± 4% interrupts.CPU138.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1330 ± 3% interrupts.CPU139.CAL:Function_call_interrupts
1225 ± 4% +8.9% 1334 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1330 ± 4% interrupts.CPU140.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1331 ± 4% interrupts.CPU141.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 3% interrupts.CPU142.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 4% interrupts.CPU143.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU144.CAL:Function_call_interrupts
1233 ± 3% +8.0% 1331 ± 3% interrupts.CPU145.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 3% interrupts.CPU146.CAL:Function_call_interrupts
7510 -37.1% 4725 ± 34% interrupts.CPU146.NMI:Non-maskable_interrupts
7510 -37.1% 4725 ± 34% interrupts.CPU146.PMI:Performance_monitoring_interrupts
1224 ± 4% +8.7% 1330 ± 3% interrupts.CPU147.CAL:Function_call_interrupts
1220 ± 3% +9.2% 1332 ± 3% interrupts.CPU148.CAL:Function_call_interrupts
1220 ± 3% +9.0% 1329 ± 3% interrupts.CPU149.CAL:Function_call_interrupts
1225 ± 4% +8.9% 1334 ± 4% interrupts.CPU15.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1330 ± 4% interrupts.CPU150.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU151.CAL:Function_call_interrupts
84.50 ±148% -96.2% 3.25 ± 13% interrupts.CPU151.TLB:TLB_shootdowns
1222 ± 4% +8.9% 1331 ± 4% interrupts.CPU152.CAL:Function_call_interrupts
1078 ± 24% +23.5% 1332 ± 3% interrupts.CPU153.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1330 ± 4% interrupts.CPU154.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU155.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1329 ± 4% interrupts.CPU157.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 3% interrupts.CPU158.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 3% interrupts.CPU159.CAL:Function_call_interrupts
1225 ± 4% +9.0% 1335 ± 4% interrupts.CPU16.CAL:Function_call_interrupts
4722 ± 34% +40.2% 6622 ± 24% interrupts.CPU16.NMI:Non-maskable_interrupts
4722 ± 34% +40.2% 6622 ± 24% interrupts.CPU16.PMI:Performance_monitoring_interrupts
1223 ± 4% +8.7% 1330 ± 4% interrupts.CPU160.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 3% interrupts.CPU161.CAL:Function_call_interrupts
4640 ± 33% +22.5% 5683 ± 33% interrupts.CPU161.NMI:Non-maskable_interrupts
4640 ± 33% +22.5% 5683 ± 33% interrupts.CPU161.PMI:Performance_monitoring_interrupts
1223 ± 4% +8.9% 1331 ± 3% interrupts.CPU162.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU163.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU164.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1330 ± 4% interrupts.CPU165.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1332 ± 3% interrupts.CPU166.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 3% interrupts.CPU167.CAL:Function_call_interrupts
1223 ± 4% +9.1% 1333 ± 3% interrupts.CPU168.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU169.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU17.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU170.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1332 ± 3% interrupts.CPU171.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU172.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU173.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1332 ± 3% interrupts.CPU174.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1332 ± 4% interrupts.CPU175.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU176.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU177.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU178.CAL:Function_call_interrupts
5616 ± 32% -32.9% 3766 interrupts.CPU178.NMI:Non-maskable_interrupts
5616 ± 32% -32.9% 3766 interrupts.CPU178.PMI:Performance_monitoring_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU179.CAL:Function_call_interrupts
1225 ± 4% +8.9% 1333 ± 4% interrupts.CPU18.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1332 ± 3% interrupts.CPU180.CAL:Function_call_interrupts
1224 ± 4% +8.7% 1331 ± 3% interrupts.CPU181.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU182.CAL:Function_call_interrupts
6558 ± 24% -27.9% 4727 ± 34% interrupts.CPU182.NMI:Non-maskable_interrupts
6558 ± 24% -27.9% 4727 ± 34% interrupts.CPU182.PMI:Performance_monitoring_interrupts
1224 ± 4% +9.0% 1334 ± 3% interrupts.CPU183.CAL:Function_call_interrupts
9.75 ± 27% +756.4% 83.50 ±139% interrupts.CPU183.RES:Rescheduling_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU184.CAL:Function_call_interrupts
6549 ± 24% -42.5% 3765 interrupts.CPU184.NMI:Non-maskable_interrupts
6549 ± 24% -42.5% 3765 interrupts.CPU184.PMI:Performance_monitoring_interrupts
1224 ± 4% +8.9% 1333 ± 3% interrupts.CPU185.CAL:Function_call_interrupts
1223 ± 4% +8.6% 1328 ± 4% interrupts.CPU186.CAL:Function_call_interrupts
5664 ± 33% -33.5% 3767 interrupts.CPU186.NMI:Non-maskable_interrupts
5664 ± 33% -33.5% 3767 interrupts.CPU186.PMI:Performance_monitoring_interrupts
1224 ± 4% +8.8% 1332 ± 3% interrupts.CPU187.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1332 ± 3% interrupts.CPU188.CAL:Function_call_interrupts
1225 ± 4% +8.7% 1331 ± 3% interrupts.CPU189.CAL:Function_call_interrupts
6544 ± 24% -42.2% 3783 interrupts.CPU189.NMI:Non-maskable_interrupts
6544 ± 24% -42.2% 3783 interrupts.CPU189.PMI:Performance_monitoring_interrupts
1224 ± 4% +9.0% 1334 ± 3% interrupts.CPU19.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU190.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 3% interrupts.CPU191.CAL:Function_call_interrupts
1223 ± 4% +8.5% 1327 ± 3% interrupts.CPU192.CAL:Function_call_interrupts
18.75 ± 5% +222.7% 60.50 ± 54% interrupts.CPU192.RES:Rescheduling_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU193.CAL:Function_call_interrupts
3779 +75.1% 6618 ± 24% interrupts.CPU193.NMI:Non-maskable_interrupts
3779 +75.1% 6618 ± 24% interrupts.CPU193.PMI:Performance_monitoring_interrupts
1079 ± 24% +23.3% 1331 ± 3% interrupts.CPU194.CAL:Function_call_interrupts
1224 ± 4% +8.4% 1327 ± 3% interrupts.CPU195.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU196.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU197.CAL:Function_call_interrupts
14.25 ± 56% +731.6% 118.50 ±103% interrupts.CPU197.RES:Rescheduling_interrupts
1223 ± 4% +8.6% 1328 ± 4% interrupts.CPU198.CAL:Function_call_interrupts
1224 ± 4% +8.7% 1330 ± 4% interrupts.CPU199.CAL:Function_call_interrupts
1226 ± 3% +8.7% 1333 ± 4% interrupts.CPU2.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU200.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU201.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU202.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU203.CAL:Function_call_interrupts
1221 ± 3% +9.0% 1330 ± 4% interrupts.CPU204.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU205.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU206.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 3% interrupts.CPU207.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU208.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU209.CAL:Function_call_interrupts
1224 ± 4% +9.0% 1333 ± 4% interrupts.CPU21.CAL:Function_call_interrupts
297.25 ± 5% +36.4% 405.50 ± 20% interrupts.CPU21.RES:Rescheduling_interrupts
1224 ± 4% +9.0% 1334 ± 4% interrupts.CPU210.CAL:Function_call_interrupts
1224 ± 4% +9.0% 1333 ± 4% interrupts.CPU211.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU212.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU213.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 3% interrupts.CPU214.CAL:Function_call_interrupts
1224 ± 4% +9.0% 1334 ± 4% interrupts.CPU215.CAL:Function_call_interrupts
6447 ± 24% -28.0% 4640 ± 34% interrupts.CPU215.NMI:Non-maskable_interrupts
6447 ± 24% -28.0% 4640 ± 34% interrupts.CPU215.PMI:Performance_monitoring_interrupts
1223 ± 4% +9.0% 1334 ± 3% interrupts.CPU216.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU217.CAL:Function_call_interrupts
1223 ± 4% +9.1% 1334 ± 4% interrupts.CPU218.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 3% interrupts.CPU219.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
711.75 ± 23% -31.3% 489.00 ± 10% interrupts.CPU22.RES:Rescheduling_interrupts
1222 ± 4% +9.1% 1333 ± 3% interrupts.CPU220.CAL:Function_call_interrupts
1221 ± 3% +9.6% 1338 ± 3% interrupts.CPU221.CAL:Function_call_interrupts
1227 ± 4% +9.6% 1344 ± 4% interrupts.CPU222.CAL:Function_call_interrupts
1233 ± 3% +9.0% 1345 ± 3% interrupts.CPU223.CAL:Function_call_interrupts
1239 ± 4% +8.8% 1348 ± 3% interrupts.CPU224.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 3% interrupts.CPU225.CAL:Function_call_interrupts
1239 ± 4% +8.8% 1349 ± 3% interrupts.CPU226.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1349 ± 3% interrupts.CPU227.CAL:Function_call_interrupts
1239 ± 4% +8.8% 1348 ± 3% interrupts.CPU228.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1349 ± 3% interrupts.CPU229.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1334 ± 4% interrupts.CPU23.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 3% interrupts.CPU230.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 3% interrupts.CPU231.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 3% interrupts.CPU232.CAL:Function_call_interrupts
1239 ± 4% +8.7% 1347 ± 3% interrupts.CPU233.CAL:Function_call_interrupts
1238 ± 4% +9.0% 1350 ± 3% interrupts.CPU234.CAL:Function_call_interrupts
1239 ± 4% +8.8% 1349 ± 3% interrupts.CPU235.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1350 ± 3% interrupts.CPU236.CAL:Function_call_interrupts
4677 ± 32% +41.1% 6600 ± 24% interrupts.CPU236.NMI:Non-maskable_interrupts
4677 ± 32% +41.1% 6600 ± 24% interrupts.CPU236.PMI:Performance_monitoring_interrupts
1240 ± 4% +8.8% 1348 ± 3% interrupts.CPU237.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 3% interrupts.CPU238.CAL:Function_call_interrupts
1241 ± 4% +8.8% 1349 ± 3% interrupts.CPU239.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU24.CAL:Function_call_interrupts
1242 ± 4% +8.7% 1350 ± 3% interrupts.CPU240.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 4% interrupts.CPU241.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1350 ± 4% interrupts.CPU242.CAL:Function_call_interrupts
1239 ± 4% +8.8% 1348 ± 3% interrupts.CPU243.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1350 ± 3% interrupts.CPU244.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1349 ± 3% interrupts.CPU245.CAL:Function_call_interrupts
1239 ± 4% +9.0% 1350 ± 3% interrupts.CPU246.CAL:Function_call_interrupts
1242 ± 4% +8.7% 1349 ± 3% interrupts.CPU247.CAL:Function_call_interrupts
1236 ± 3% +9.2% 1349 ± 3% interrupts.CPU248.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1348 ± 3% interrupts.CPU249.CAL:Function_call_interrupts
1225 ± 4% +8.9% 1334 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
1239 ± 4% +8.9% 1350 ± 3% interrupts.CPU250.CAL:Function_call_interrupts
1239 ± 4% +9.0% 1350 ± 3% interrupts.CPU251.CAL:Function_call_interrupts
1240 ± 4% +8.6% 1347 ± 3% interrupts.CPU252.CAL:Function_call_interrupts
1240 ± 4% +8.5% 1345 ± 3% interrupts.CPU253.CAL:Function_call_interrupts
6556 ± 24% -42.4% 3774 interrupts.CPU253.NMI:Non-maskable_interrupts
6556 ± 24% -42.4% 3774 interrupts.CPU253.PMI:Performance_monitoring_interrupts
1240 ± 4% +8.9% 1350 ± 3% interrupts.CPU254.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1350 ± 3% interrupts.CPU255.CAL:Function_call_interrupts
1094 ± 18% +23.3% 1350 ± 3% interrupts.CPU256.CAL:Function_call_interrupts
1240 ± 4% +8.9% 1350 ± 3% interrupts.CPU257.CAL:Function_call_interrupts
1240 ± 4% +8.5% 1346 ± 3% interrupts.CPU258.CAL:Function_call_interrupts
1240 ± 4% +8.9% 1350 ± 3% interrupts.CPU259.CAL:Function_call_interrupts
1225 ± 4% +8.8% 1333 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
1239 ± 4% +8.8% 1348 ± 3% interrupts.CPU260.CAL:Function_call_interrupts
1240 ± 4% +8.7% 1348 ± 3% interrupts.CPU261.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1350 ± 3% interrupts.CPU262.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1350 ± 3% interrupts.CPU263.CAL:Function_call_interrupts
1241 ± 4% +8.4% 1346 ± 3% interrupts.CPU264.CAL:Function_call_interrupts
1241 ± 4% +8.6% 1348 ± 3% interrupts.CPU265.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1350 ± 3% interrupts.CPU266.CAL:Function_call_interrupts
1241 ± 4% +8.8% 1350 ± 3% interrupts.CPU267.CAL:Function_call_interrupts
1242 ± 4% +8.7% 1350 ± 3% interrupts.CPU268.CAL:Function_call_interrupts
1240 ± 4% +8.6% 1347 ± 3% interrupts.CPU269.CAL:Function_call_interrupts
1224 ± 3% +8.7% 1330 ± 3% interrupts.CPU27.CAL:Function_call_interrupts
1241 ± 4% +8.6% 1347 ± 4% interrupts.CPU270.CAL:Function_call_interrupts
1240 ± 4% +8.8% 1349 ± 3% interrupts.CPU272.CAL:Function_call_interrupts
4705 ± 34% +40.9% 6629 ± 24% interrupts.CPU272.NMI:Non-maskable_interrupts
4705 ± 34% +40.9% 6629 ± 24% interrupts.CPU272.PMI:Performance_monitoring_interrupts
1241 ± 4% +8.7% 1349 ± 3% interrupts.CPU273.CAL:Function_call_interrupts
1238 ± 4% +9.1% 1351 ± 3% interrupts.CPU274.CAL:Function_call_interrupts
1241 ± 4% +8.8% 1351 ± 3% interrupts.CPU275.CAL:Function_call_interrupts
1241 ± 4% +8.9% 1351 ± 3% interrupts.CPU276.CAL:Function_call_interrupts
1241 ± 4% +8.9% 1352 ± 3% interrupts.CPU277.CAL:Function_call_interrupts
1241 ± 4% +9.2% 1355 ± 4% interrupts.CPU278.CAL:Function_call_interrupts
1241 ± 4% +9.0% 1353 ± 3% interrupts.CPU279.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1332 ± 4% interrupts.CPU28.CAL:Function_call_interrupts
1241 ± 4% +8.7% 1349 ± 3% interrupts.CPU280.CAL:Function_call_interrupts
1241 ± 4% +9.0% 1353 ± 3% interrupts.CPU281.CAL:Function_call_interrupts
1241 ± 4% +9.1% 1353 ± 3% interrupts.CPU282.CAL:Function_call_interrupts
1241 ± 4% +9.0% 1353 ± 3% interrupts.CPU283.CAL:Function_call_interrupts
1240 ± 4% +9.2% 1354 ± 3% interrupts.CPU284.CAL:Function_call_interrupts
1096 ± 25% +23.5% 1354 ± 3% interrupts.CPU285.CAL:Function_call_interrupts
1242 ± 4% +9.1% 1355 ± 3% interrupts.CPU286.CAL:Function_call_interrupts
1091 ± 25% +23.6% 1348 ± 3% interrupts.CPU287.CAL:Function_call_interrupts
1224 ± 4% +9.0% 1334 ± 4% interrupts.CPU29.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU3.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
3778 +50.7% 5694 ± 33% interrupts.CPU30.NMI:Non-maskable_interrupts
3778 +50.7% 5694 ± 33% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1225 ± 4% +9.0% 1335 ± 3% interrupts.CPU31.CAL:Function_call_interrupts
1226 ± 3% +8.7% 1333 ± 4% interrupts.CPU32.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU33.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
451.25 ± 11% -47.3% 238.00 ± 16% interrupts.CPU34.RES:Rescheduling_interrupts
1224 ± 4% +8.8% 1332 ± 3% interrupts.CPU35.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1330 ± 3% interrupts.CPU36.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU38.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU39.CAL:Function_call_interrupts
105.25 ± 18% +41.3% 148.75 ± 15% interrupts.CPU39.RES:Rescheduling_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU4.CAL:Function_call_interrupts
440.00 ± 10% +71.3% 753.75 ± 14% interrupts.CPU4.RES:Rescheduling_interrupts
1226 ± 4% +8.8% 1334 ± 4% interrupts.CPU40.CAL:Function_call_interrupts
1225 ± 4% +8.9% 1334 ± 4% interrupts.CPU41.CAL:Function_call_interrupts
1223 ± 4% +8.6% 1329 ± 3% interrupts.CPU42.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
1221 ± 4% +9.2% 1333 ± 4% interrupts.CPU44.CAL:Function_call_interrupts
1222 ± 4% +9.0% 1332 ± 4% interrupts.CPU45.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU46.CAL:Function_call_interrupts
1224 ± 3% +8.9% 1333 ± 4% interrupts.CPU47.CAL:Function_call_interrupts
1224 ± 3% +8.7% 1330 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
357.00 ± 38% -44.3% 198.75 ± 8% interrupts.CPU48.RES:Rescheduling_interrupts
1223 ± 4% +9.0% 1334 ± 4% interrupts.CPU49.CAL:Function_call_interrupts
1222 ± 3% +9.1% 1333 ± 4% interrupts.CPU5.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU50.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 3% interrupts.CPU51.CAL:Function_call_interrupts
1223 ± 4% +8.8% 1331 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
1223 ± 4% +8.9% 1332 ± 3% interrupts.CPU53.CAL:Function_call_interrupts
1223 ± 4% +8.7% 1329 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
3784 +74.7% 6611 ± 24% interrupts.CPU54.NMI:Non-maskable_interrupts
3784 +74.7% 6611 ± 24% interrupts.CPU54.PMI:Performance_monitoring_interrupts
1224 ± 4% +8.8% 1331 ± 4% interrupts.CPU55.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1331 ± 3% interrupts.CPU56.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU57.CAL:Function_call_interrupts
1224 ± 4% +8.5% 1328 ± 3% interrupts.CPU58.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 3% interrupts.CPU59.CAL:Function_call_interrupts
1224 ± 4% +8.8% 1332 ± 4% interrupts.CPU6.CAL:Function_call_interrupts
3763 +51.2% 5690 ± 33% interrupts.CPU6.NMI:Non-maskable_interrupts
3763 +51.2% 5690 ± 33% interrupts.CPU6.PMI:Performance_monitoring_interrupts
454.25 ± 13% +45.7% 662.00 ± 18% interrupts.CPU6.RES:Rescheduling_interrupts
1225 ± 4% +8.8% 1332 ± 4% interrupts.CPU60.CAL:Function_call_interrupts
1223 ± 3% +9.0% 1332 ± 4% interrupts.CPU61.CAL:Function_call_interrupts
1225 ± 4% +8.8% 1332 ± 4% interrupts.CPU62.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1332 ± 4% interrupts.CPU63.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU64.CAL:Function_call_interrupts
4665 ± 32% +42.5% 6646 ± 24% interrupts.CPU64.NMI:Non-maskable_interrupts
4665 ± 32% +42.5% 6646 ± 24% interrupts.CPU64.PMI:Performance_monitoring_interrupts
1223 ± 4% +8.9% 1332 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
1224 ± 4% +9.0% 1334 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
1224 ± 4% +9.0% 1334 ± 4% interrupts.CPU67.CAL:Function_call_interrupts
1222 ± 4% +8.9% 1332 ± 4% interrupts.CPU68.CAL:Function_call_interrupts
1081 ± 18% +23.2% 1333 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU7.CAL:Function_call_interrupts
1223 ± 4% +9.0% 1333 ± 4% interrupts.CPU70.CAL:Function_call_interrupts
1226 ± 3% +8.7% 1332 ± 4% interrupts.CPU71.CAL:Function_call_interrupts
3709 +51.1% 5604 ± 33% interrupts.CPU71.NMI:Non-maskable_interrupts
3709 +51.1% 5604 ± 33% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1220 ± 4% +8.9% 1329 ± 4% interrupts.CPU72.CAL:Function_call_interrupts
1220 ± 4% +8.9% 1329 ± 4% interrupts.CPU73.CAL:Function_call_interrupts
28.50 ± 28% +97.4% 56.25 ± 49% interrupts.CPU73.RES:Rescheduling_interrupts
1220 ± 4% +8.7% 1326 ± 3% interrupts.CPU74.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1328 ± 3% interrupts.CPU75.CAL:Function_call_interrupts
1223 ± 3% +8.7% 1329 ± 4% interrupts.CPU76.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU77.CAL:Function_call_interrupts
1221 ± 4% +8.6% 1326 ± 4% interrupts.CPU78.CAL:Function_call_interrupts
1221 ± 4% +8.6% 1326 ± 4% interrupts.CPU79.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1334 ± 4% interrupts.CPU8.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU80.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU81.CAL:Function_call_interrupts
5605 ± 32% -32.7% 3774 interrupts.CPU81.NMI:Non-maskable_interrupts
5605 ± 32% -32.7% 3774 interrupts.CPU81.PMI:Performance_monitoring_interrupts
1220 ± 4% +8.9% 1329 ± 4% interrupts.CPU82.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU83.CAL:Function_call_interrupts
1222 ± 3% +8.8% 1329 ± 4% interrupts.CPU84.CAL:Function_call_interrupts
1220 ± 4% +8.9% 1329 ± 4% interrupts.CPU85.CAL:Function_call_interrupts
1220 ± 4% +8.9% 1329 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU87.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU88.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU89.CAL:Function_call_interrupts
1224 ± 4% +8.9% 1333 ± 4% interrupts.CPU9.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU90.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU91.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU92.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU93.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU94.CAL:Function_call_interrupts
1221 ± 4% +8.8% 1329 ± 4% interrupts.CPU95.CAL:Function_call_interrupts
44.50 ± 82% -68.0% 14.25 ± 43% interrupts.CPU95.RES:Rescheduling_interrupts
1221 ± 4% +8.6% 1325 ± 3% interrupts.CPU96.CAL:Function_call_interrupts
1221 ± 4% +8.9% 1329 ± 4% interrupts.CPU97.CAL:Function_call_interrupts
1223 ± 4% +8.5% 1327 ± 3% interrupts.CPU98.CAL:Function_call_interrupts
55.25 ± 17% +243.0% 189.50 ± 83% interrupts.CPU98.RES:Rescheduling_interrupts
1222 ± 4% +8.5% 1325 ± 3% interrupts.CPU99.CAL:Function_call_interrupts



will-it-scale.per_process_ops

1850 +--------------------------------------------------------------------+
1800 |-+ |
| O O O O |
1750 |-+ O O O O O O O O O O O O |
1700 |-+ |
1650 |-+ O O O O O O O O |
1600 |-+ |
| |
1550 |-+ |
1500 |-+ |
1450 |-+ .+ .+. |
1400 |-+.+.+.. .+.+. .+.+. .+.+..+.+.+ + .+. +.+.|
|.+ +.+. .+..+.+.+ +..+.+ + + |
1350 |-+ +.+.+ |
1300 +--------------------------------------------------------------------+


will-it-scale.workload

520000 +------------------------------------------------------------------+
| O O O |
500000 |-O O O O O O O O O O O O |
| O |
480000 |-+ O O O O |
| O O O O |
460000 |-+ |
| |
440000 |-+ |
| |
420000 |-+ .+.. |
| .+.+. .+.+. .+.+. .+. .+. .+ .+.+.+.+.|
400000 |.+ +. +.+.+.+ +..+.+ + + + + |
| +..+.+. + |
380000 +------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen



2020-10-05 11:22:21

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Fri, Mar 06, 2020 at 02:34:20PM -0800, Xi Wang wrote:
> On Fri, Mar 6, 2020 at 12:40 AM Peter Zijlstra <[email protected]> wrote:
> >
> > On Thu, Mar 05, 2020 at 02:11:49PM -0800, Paul Turner wrote:
> > > The goal is to improve jitter since we're constantly periodically
> > > preempting other classes to run the watchdog. Even on a single CPU
> > > this is measurable as jitter in the us range. But, what increases the
> > > motivation is this disruption has been recently magnified by CPU
> > > "gifts" which require evicting the whole core when one of the siblings
> > > schedules one of these watchdog threads.
> > >
> > > The majority outcome being asserted here is that we could actually
> > > exercise pick_next_task if required -- there are other potential
> > > things this will catch, but they are much more braindead generally
> > > speaking (e.g. a bug in pick_next_task itself).
> >
> > I still utterly hate what the patch does though; there is no way I'll
> > have watchdog code hook in the scheduler like this. That's just asking
> > for trouble.
> >
> > Why isn't it sufficient to sample the existing context switch counters
> > from the watchdog? And why can't we fix that?
>
> We could go to pick next and repick the same task. There won't be a
> context switch but we still want to touch the watchdog. I assume such a
> counter also needs to be per cpu and inside the rq lock. There doesn't
> seem to be an existing one that fits this purpose.

Sorry, your reply got lost, but I just ran into something that reminded
me of this.

There's sched_count. That's currently schedstat, but if you can find a
spot in a hot cacheline (from schedule()'s perspective) then it
should be cheap to increment unconditionally.

If only someone were to write a useful cacheline perf tool (and no, that
c2c trainwreck doesn't count).
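
For reference, a minimal sketch of the direction suggested above, assuming sched_count (today a schedstats-only counter bumped via schedstat_inc() in schedule_debug()) were turned into a plain per-rq counter incremented on every pass through the scheduler. The field placement and naming below are illustrative, not mainline code:

	/* kernel/sched/sched.h (sketch): keep the counter outside schedstats */
	struct rq {
		/* ... existing fields ... */
		unsigned int		sched_count;	/* bumped on every schedule() */
		/* ... */
	};

	/* kernel/sched/core.c (sketch): count scheduler invocations unconditionally */
	static inline void schedule_debug(struct task_struct *prev, bool preempt)
	{
		/* ... existing debug checks ... */

		/*
		 * Plain increment instead of schedstat_inc(): if the field sits
		 * on a cacheline __schedule() already dirties, the cost is noise.
		 */
		this_rq()->sched_count++;
	}

The watchdog could then detect scheduler progress on a CPU by watching this counter change, with no hook in pick_next_task().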

2020-10-06 02:23:21

by Xi Wang

[permalink] [raw]
Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code

On Mon, Oct 5, 2020 at 4:19 AM Peter Zijlstra <[email protected]> wrote:
>
> On Fri, Mar 06, 2020 at 02:34:20PM -0800, Xi Wang wrote:
> > On Fri, Mar 6, 2020 at 12:40 AM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Thu, Mar 05, 2020 at 02:11:49PM -0800, Paul Turner wrote:
> > > > The goal is to improve jitter since we're constantly periodically
> > > > preempting other classes to run the watchdog. Even on a single CPU
> > > > this is measurable as jitter in the us range. But, what increases the
> > > > motivation is this disruption has been recently magnified by CPU
> > > > "gifts" which require evicting the whole core when one of the siblings
> > > > schedules one of these watchdog threads.
> > > >
> > > > The majority outcome being asserted here is that we could actually
> > > > exercise pick_next_task if required -- there are other potential
> > > > things this will catch, but they are much more braindead generally
> > > > speaking (e.g. a bug in pick_next_task itself).
> > >
> > > I still utterly hate what the patch does though; there is no way I'll
> > > have watchdog code hook in the scheduler like this. That's just asking
> > > for trouble.
> > >
> > > Why isn't it sufficient to sample the existing context switch counters
> > > from the watchdog? And why can't we fix that?
> >
> > We could go to pick next and repick the same task. There won't be a
> > context switch but we still want to touch the watchdog. I assume such a
> > counter also needs to be per cpu and inside the rq lock. There doesn't
> > seem to be an existing one that fits this purpose.
>
> Sorry, your reply got lost, but I just ran into something that reminded
> me of this.
>
> There's sched_count. That's currently schedstat, but if you can find a
> spot in a hot cacheline (from schedule()'s perspective) then it
> should be cheap to increment unconditionally.
>
> If only someone were to write a useful cacheline perf tool (and no, that
> c2c trainwreck doesn't count).
>

Thanks, I'll try the alternative implementation.

-Xi
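
For completeness, one shape the watchdog side of that alternative implementation could take. This is purely an illustration: sched_count_this_cpu() is a made-up accessor, not a real kernel API, and the actual softlockup code in kernel/watchdog.c keeps its own per-CPU bookkeeping:

	/* kernel/watchdog.c (sketch): count scheduler activity as a touch */
	static DEFINE_PER_CPU(unsigned int, last_seen_sched_count);

	/*
	 * Returns true if this CPU went through the scheduler since the last
	 * watchdog sample. sched_count_this_cpu() stands in for whatever
	 * accessor would expose rq->sched_count to the watchdog.
	 */
	static bool watchdog_saw_sched_activity(void)
	{
		unsigned int now = sched_count_this_cpu();
		bool progressed = (now != __this_cpu_read(last_seen_sched_count));

		__this_cpu_write(last_seen_sched_count, now);
		return progressed;
	}

The existing softlockup check could then treat a changed count the same way it treats an explicit watchdog touch, so the per-CPU hrtimer no longer needs to involve the cpu-stop/migration thread just to prove the scheduler is still making progress.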