2021-06-15 11:18:00

by Mel Gorman

Subject: [PATCH v2] sched/fair: Age the average idle time

From: Peter Zijlstra (Intel) <[email protected]>

This is a partial forward-port of Peter Zijlstra's work first posted
at https://lore.kernel.org/lkml/[email protected]/.
His Signed-off-by has been removed because the patch is modified, but it
will be restored if he says it's still ok.

Currently select_idle_cpu()'s proportional scheme uses the average idle
time *for when we are idle*, which is temporally challenged. When a CPU
is not at all idle, we'll happily continue using whatever value we last
saw when the CPU went idle. To fix this, introduce a separate average
idle and age it (the existing value still makes sense for things like
new-idle balancing, which happens when we do go idle).

The overall goal is to not spend more time scanning for idle CPUs than
we're idle for. Otherwise we're inhibiting work. This means that we need to
consider the cost over all the wake-ups between consecutive idle periods.
To track this, the scan cost is subtracted from the estimated average
idle time.

The impact of this patch is related to workloads that have domains that
are fully busy or overloaded. Without the patch, the scan depth may be
too high because a CPU is not reaching idle.

Due to the nature of the patch, this is a regression magnet. It
potentially wins when domains are almost fully busy or overloaded: at
that point searches are likely to fail, but because idle is not aged
while CPUs are active, the search depth stays too large and useless. It
will potentially show regressions when there are idle CPUs and a deep
search is beneficial. This tbench result on a 2-socket Broadwell machine
partially illustrates the problem:

5.13.0-rc2 5.13.0-rc2
vanilla sched-avgidle-v1r5
Hmean 1 445.02 ( 0.00%) 451.36 * 1.42%*
Hmean 2 830.69 ( 0.00%) 846.03 * 1.85%*
Hmean 4 1350.80 ( 0.00%) 1505.56 * 11.46%*
Hmean 8 2888.88 ( 0.00%) 2586.40 * -10.47%*
Hmean 16 5248.18 ( 0.00%) 5305.26 * 1.09%*
Hmean 32 8914.03 ( 0.00%) 9191.35 * 3.11%*
Hmean 64 10663.10 ( 0.00%) 10192.65 * -4.41%*
Hmean 128 18043.89 ( 0.00%) 18478.92 * 2.41%*
Hmean 256 16530.89 ( 0.00%) 17637.16 * 6.69%*
Hmean 320 16451.13 ( 0.00%) 17270.97 * 4.98%*

Note that 8 clients was a regression point where a deeper search would
have helped, but the patch gains at high thread counts where searches are
useless. Hackbench is a more extreme example, although not perfect as the
tasks idle rapidly:

hackbench-process-pipes
5.13.0-rc2 5.13.0-rc2
vanilla sched-avgidle-v1r5
Amean 1 0.3950 ( 0.00%) 0.3887 ( 1.60%)
Amean 4 0.9450 ( 0.00%) 0.9677 ( -2.40%)
Amean 7 1.4737 ( 0.00%) 1.4890 ( -1.04%)
Amean 12 2.3507 ( 0.00%) 2.3360 * 0.62%*
Amean 21 4.0807 ( 0.00%) 4.0993 * -0.46%*
Amean 30 5.6820 ( 0.00%) 5.7510 * -1.21%*
Amean 48 8.7913 ( 0.00%) 8.7383 ( 0.60%)
Amean 79 14.3880 ( 0.00%) 13.9343 * 3.15%*
Amean 110 21.2233 ( 0.00%) 19.4263 * 8.47%*
Amean 141 28.2930 ( 0.00%) 25.1003 * 11.28%*
Amean 172 34.7570 ( 0.00%) 30.7527 * 11.52%*
Amean 203 41.0083 ( 0.00%) 36.4267 * 11.17%*
Amean 234 47.7133 ( 0.00%) 42.0623 * 11.84%*
Amean 265 53.0353 ( 0.00%) 47.7720 * 9.92%*
Amean 296 60.0170 ( 0.00%) 53.4273 * 10.98%*
Stddev 1 0.0052 ( 0.00%) 0.0025 ( 51.57%)
Stddev 4 0.0357 ( 0.00%) 0.0370 ( -3.75%)
Stddev 7 0.0190 ( 0.00%) 0.0298 ( -56.64%)
Stddev 12 0.0064 ( 0.00%) 0.0095 ( -48.38%)
Stddev 21 0.0065 ( 0.00%) 0.0097 ( -49.28%)
Stddev 30 0.0185 ( 0.00%) 0.0295 ( -59.54%)
Stddev 48 0.0559 ( 0.00%) 0.0168 ( 69.92%)
Stddev 79 0.1559 ( 0.00%) 0.0278 ( 82.17%)
Stddev 110 1.1728 ( 0.00%) 0.0532 ( 95.47%)
Stddev 141 0.7867 ( 0.00%) 0.0968 ( 87.69%)
Stddev 172 1.0255 ( 0.00%) 0.0420 ( 95.91%)
Stddev 203 0.8106 ( 0.00%) 0.1384 ( 82.92%)
Stddev 234 1.1949 ( 0.00%) 0.1328 ( 88.89%)
Stddev 265 0.9231 ( 0.00%) 0.0820 ( 91.11%)
Stddev 296 1.0456 ( 0.00%) 0.1327 ( 87.31%)

Again, higher thread counts benefit and the standard deviation
shows that results are also a lot more stable when the idle
time is aged.

The patch potentially matters when a socket has multiple LLCs, as the
maximum search depth is lower. However, some of the test results were
suspiciously good (e.g. specjbb2005 gaining 50% on a Zen1 machine) and
other results were not dramatically different from other machines.

Given the nature of the patch, Peter's full series is not being forward
ported as each part should stand on its own. Preferably they would be
merged at different times to reduce the risk of false bisections.

Signed-off-by: Mel Gorman <[email protected]>
---
Changelog since v1
o No major change, rebase to 5.13-rc5 and retest, still passed

kernel/sched/core.c | 5 +++++
kernel/sched/fair.c | 25 +++++++++++++++++++++----
kernel/sched/sched.h | 3 +++
3 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5226cc26a095..6a3fdb9f4380 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2979,6 +2979,9 @@ static void ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags,
if (rq->avg_idle > max)
rq->avg_idle = max;

+ rq->wake_stamp = jiffies;
+ rq->wake_avg_idle = rq->avg_idle / 2;
+
rq->idle_stamp = 0;
}
#endif
@@ -8215,6 +8218,8 @@ void __init sched_init(void)
rq->online = 0;
rq->idle_stamp = 0;
rq->avg_idle = 2*sysctl_sched_migration_cost;
+ rq->wake_stamp = jiffies;
+ rq->wake_avg_idle = rq->avg_idle;
rq->max_idle_balance_cost = sysctl_sched_migration_cost;

INIT_LIST_HEAD(&rq->cfs_tasks);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3248e24a90b0..cc7d1144a356 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6172,9 +6172,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
int i, cpu, idle_cpu = -1, nr = INT_MAX;
+ struct rq *this_rq = this_rq();
int this = smp_processor_id();
struct sched_domain *this_sd;
- u64 time;
+ u64 time = 0;

this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
if (!this_sd)
@@ -6184,12 +6185,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool

if (sched_feat(SIS_PROP) && !has_idle_core) {
u64 avg_cost, avg_idle, span_avg;
+ unsigned long now = jiffies;

/*
- * Due to large variance we need a large fuzz factor;
- * hackbench in particularly is sensitive here.
+ * If we're busy, the assumption that the last idle period
+ * predicts the future is flawed; age away the remaining
+ * predicted idle time.
*/
- avg_idle = this_rq()->avg_idle / 512;
+ if (unlikely(this_rq->wake_stamp < now)) {
+ while (this_rq->wake_stamp < now && this_rq->wake_avg_idle) {
+ this_rq->wake_stamp++;
+ this_rq->wake_avg_idle >>= 1;
+ }
+ }
+
+ avg_idle = this_rq->wake_avg_idle;
avg_cost = this_sd->avg_scan_cost + 1;

span_avg = sd->span_weight * avg_idle;
@@ -6221,6 +6231,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool

if (sched_feat(SIS_PROP) && !has_idle_core) {
time = cpu_clock(this) - time;
+
+ /*
+ * Account for the scan cost of wakeups against the average
+ * idle time.
+ */
+ this_rq->wake_avg_idle -= min(this_rq->wake_avg_idle, time);
+
update_avg(&this_sd->avg_scan_cost, time);
}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a189bec13729..7bc20e5541cf 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1017,6 +1017,9 @@ struct rq {
u64 idle_stamp;
u64 avg_idle;

+ unsigned long wake_stamp;
+ u64 wake_avg_idle;
+
/* This is used to determine avg_idle's max value */
u64 max_idle_balance_cost;


2021-06-16 09:04:11

by Mel Gorman

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Tue, Jun 15, 2021 at 10:42:28PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 15, 2021 at 12:16:11PM +0100, Mel Gorman wrote:
> > From: Peter Zijlstra (Intel) <[email protected]>
> >
> > This is a partial forward-port of Peter Ziljstra's work first posted
> > at https://lore.kernel.org/lkml/[email protected]/.
>
> It's patches 2 and 3 together, right?
>

Patches 2, 3, 9 and 10. I saw limited value to preserving the feature
flag. Some of the series has since been obsoleted. The main patch of
interest that was dropped was patch 1 because the results were somewhat
inconclusive but leaning towards being an overall loss.

> > His Signed-off has been removed because it is modified but will be restored
> > if he says it's still ok.
>
> I suppose the SoB will auto-magically re-appear if I apply it :-)
>

Yep, it would and it would indicate that you didn't object to the copying
at least :P

> > The patch potentially matters when a socket was multiple LLCs as the
> > maximum search depth is lower. However, some of the test results were
> > suspiciously good (e.g. specjbb2005 gaining 50% on a Zen1 machine) and
> > other results were not dramatically different to other mcahines.
> >
> > Given the nature of the patch, Peter's full series is not being forward
> > ported as each part should stand on its own. Preferably they would be
> > merged at different times to reduce the risk of false bisections.
>
> I'm tempted to give it a go.. anyone object?

Thanks, so far no serious objection :)

The latest results as I see them have been copied to
https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/dashboard.html
If the patch is accepted, they will move from 3-perf-test to 5-assembly.
This naming is part of my workflow for evaluating topic branches
separately and then putting them together for another round of testing.

NAS shows small differences but NAS would see limited impact from the
patch. Specjbb shows small losses and some minor gains which is unfortunate
but the workload tends to see small gains and losses all the time.
redis is a mixed bag but has some wins. hackbench is the main benefit
because it's wakeup intensive and tends to overload machines where deep
searches hurt.

There are other results in there if you feel like digging around
such as sched-core tested with no processes getting tagged with prctl
https://beta.suse.com/private/mgorman/melt/v5.13-rc5/5-assembly/sched/sched-schedcore-v1r2/html/dashboard.html


--
Mel Gorman
SUSE Labs

2021-06-17 07:45:37

by Mel Gorman

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Wed, Jun 16, 2021 at 05:52:25PM +0200, Vincent Guittot wrote:
> On Tue, 15 Jun 2021 at 22:43, Peter Zijlstra <[email protected]> wrote:
> >
> > On Tue, Jun 15, 2021 at 12:16:11PM +0100, Mel Gorman wrote:
> > > From: Peter Zijlstra (Intel) <[email protected]>
> > >
> > > This is a partial forward-port of Peter Ziljstra's work first posted
> > > at https://lore.kernel.org/lkml/[email protected]/.
> >
> > It's patches 2 and 3 together, right?
> >
> > > His Signed-off has been removed because it is modified but will be restored
> > > if he says it's still ok.
> >
> > I suppose the SoB will auto-magically re-appear if I apply it :-)
> >
> > > The patch potentially matters when a socket was multiple LLCs as the
> > > maximum search depth is lower. However, some of the test results were
> > > suspiciously good (e.g. specjbb2005 gaining 50% on a Zen1 machine) and
> > > other results were not dramatically different to other mcahines.
> > >
> > > Given the nature of the patch, Peter's full series is not being forward
> > > ported as each part should stand on its own. Preferably they would be
> > > merged at different times to reduce the risk of false bisections.
> >
> > I'm tempted to give it a go.. anyone object?
>
> Just finished running some tests on my large arm64 system.
> Tbench tests are a mixed between small gain and loss
>

Same for tbench on three x86 machines I reran tests for

https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/network-tbench/bing2/index.html#tbench4
Small gains and losses, gains at higher client counts where search depth
should be reduced

https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/network-tbench/hardy2/index.html#tbench4
Mostly gains, one counter-example at 4 clients

https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/network-tbench/marvin2/index.html#tbench4
Worst by far, 1 client took a major hit for unknown reasons, otherwise
mix of gains and losses. I'm not confident that the 1 client
results are meaningful because for this machine, there should
have been idle cores so the code the patch adjusts should not
even be executed.

> hackbench shows significant changes in both direction
> hackbench -g $group
>
> group tip/sched/core + this patch
> 1 13.358(+/- 1.82%) 12.850(+/- 2.21%) +4%
> 4 4.286(+/- 2.77%) 4.114(+/- 2.25%) +4%
> 16 3.175(+/- 0.55%) 3.559(+/- 0.43%) -12%
> 32 2.912(+/- 0.79%) 3.165(+/- 0.95%) -8%
> 64 2.859(+/- 1.12%) 2.937(+/- 0.91%) -3%
> 128 3.092(+/- 4.75%) 3.003(+/-5.18%) +3%
> 256 3.233(+/- 3.03%) 2.973(+/- 0.80%) +8%

Think this is processes and sockets. Of the hackbench results I had,
this one performed the worst

https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/scheduler-unbound/bing2/index.html#hackbench-process-sockets
Small gains and losses

https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/scheduler-unbound/hardy2/index.html#hackbench-process-sockets
Small gains and losses

https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/scheduler-unbound/marvin2/index.html#hackbench-process-sockets
Small gains and losses

One of the better results for hackbench was processes and pipes
https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/scheduler-unbound/bing2/index.html#hackbench-process-pipes
1-12% gains

For your arm machine, how many logical CPUs are online, what is the level
of SMT if any and is the machine NUMA?

Fundamentally though, as the changelog notes "due to the nature of the
patch, this is a regression magnet". There are going to be examples
where a deep search is better even if a machine is fully busy or
overloaded and examples where cutting off the search is better. I think
it's better to have an idle estimate that gets updated if CPUs are fully
busy even if it's not a universal win.

--
Mel Gorman
SUSE Labs

2021-06-17 11:00:33

by Mel Gorman

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Thu, Jun 17, 2021 at 10:30:09AM +0200, Vincent Guittot wrote:
> > > > I'm tempted to give it a go.. anyone object?
> > >
> > > Just finished running some tests on my large arm64 system.
> > > Tbench tests are a mixed between small gain and loss
> > >
> >
> > Same for tbench on three x86 machines I reran tests for
> >
> > <SNIP>
> >
> > For your arm machine, how many logical CPUs are online, what is the level
> > of SMT if any and is the machine NUMA?
>
> It's a SMT4 x 28 cores x 2 NUMA nodes = 224 CPUs
>

Ok, SMT4 is what I was interested in. I suspected this was the case but
was not sure. I wondered about the possibility that SMT4 should be
accounted for in the scan depth.

> >
> > Fundamentally though, as the changelog notes "due to the nature of the
> > patch, this is a regression magnet". There are going to be examples
> > where a deep search is better even if a machine is fully busy or
> > overloaded and examples where cutting off the search is better. I think
> > it's better to have an idle estimate that gets updated if CPUs are fully
> > busy even if it's not a universal win.
>
> Although I agree that using a stall average idle time value of local
> is not good, I'm not sure this proposal is better. The main problem is
> that we use the avg_idle of the local CPU to estimate how many times
> we should loop and try to find another idle CPU. But there is no
> direct relation between both.

This is true. The idle time of the local CPU is used to estimate the
idle time of the domain, which is inevitably going to be inaccurate, but
tracking idle time for the whole domain would be cache-write intensive
and potentially very expensive. I think this was discussed before but
maybe it is my imagination.

> Typically, a short average idle time on
> the local CPU doesn't mean that there are less idle CPUs and that's
> why we have a mix a gain and loss
>

Can you evaluate if scanning proportional to cores helps if applied on
top? The patch below is a bit of pick&mix and has only seen a basic build
test with a distro config. While I will queue this, I don't expect it to
have an impact on SMT2.

--8<--
sched/fair: Make select_idle_cpu() proportional to cores

From: Peter Zijlstra (Intel) <[email protected]>

Instead of calculating how many (logical) CPUs to scan, compute how
many cores to scan.

This changes behaviour for anything !SMT2.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
---
kernel/sched/core.c | 17 ++++++++++++-----
kernel/sched/fair.c | 11 +++++++++--
kernel/sched/sched.h | 2 ++
3 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6a3fdb9f4380..1773e0707a5d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7846,11 +7846,18 @@ int sched_cpu_activate(unsigned int cpu)
balance_push_set(cpu, false);

#ifdef CONFIG_SCHED_SMT
- /*
- * When going up, increment the number of cores with SMT present.
- */
- if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
- static_branch_inc_cpuslocked(&sched_smt_present);
+ do {
+ int weight = cpumask_weight(cpu_smt_mask(cpu));
+
+ if (weight > sched_smt_weight)
+ sched_smt_weight = weight;
+
+ /*
+ * When going up, increment the number of cores with SMT present.
+ */
+ if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+ static_branch_inc_cpuslocked(&sched_smt_present);
+ } while (0);
#endif
set_cpu_active(cpu, true);

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cc7d1144a356..4fc4e1f2eaae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6037,6 +6037,8 @@ static inline int __select_idle_cpu(int cpu)
DEFINE_STATIC_KEY_FALSE(sched_smt_present);
EXPORT_SYMBOL_GPL(sched_smt_present);

+int __read_mostly sched_smt_weight = 1;
+
static inline void set_idle_cores(int cpu, int val)
{
struct sched_domain_shared *sds;
@@ -6151,6 +6153,8 @@ static inline bool test_idle_cores(int cpu, bool def)
return def;
}

+#define sched_smt_weight 1
+
static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
{
return __select_idle_cpu(core);
@@ -6163,6 +6167,8 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd

#endif /* CONFIG_SCHED_SMT */

+#define sis_min_cores 2
+
/*
* Scan the LLC domain for idle CPUs; this is dynamically regulated by
* comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -6203,11 +6209,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
avg_cost = this_sd->avg_scan_cost + 1;

span_avg = sd->span_weight * avg_idle;
- if (span_avg > 4*avg_cost)
+ if (span_avg > sis_min_cores * avg_cost)
nr = div_u64(span_avg, avg_cost);
else
- nr = 4;
+ nr = sis_min_cores;

+ nr *= sched_smt_weight;
time = cpu_clock(this);
}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7bc20e5541cf..440a2bbc19d5 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1119,6 +1119,8 @@ static inline bool is_migration_disabled(struct task_struct *p)
#ifdef CONFIG_SCHED_SMT
extern void __update_idle_core(struct rq *rq);

+extern int sched_smt_weight;
+
static inline void update_idle_core(struct rq *rq)
{
if (static_branch_unlikely(&sched_smt_present))

2021-06-17 13:21:26

by Mel Gorman

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Thu, Jun 17, 2021 at 12:02:56PM +0200, Vincent Guittot wrote:
> > > >
> > > > Fundamentally though, as the changelog notes "due to the nature of the
> > > > patch, this is a regression magnet". There are going to be examples
> > > > where a deep search is better even if a machine is fully busy or
> > > > overloaded and examples where cutting off the search is better. I think
> > > > it's better to have an idle estimate that gets updated if CPUs are fully
> > > > busy even if it's not a universal win.
> > >
> > > Although I agree that using a stall average idle time value of local
> > > is not good, I'm not sure this proposal is better. The main problem is
> > > that we use the avg_idle of the local CPU to estimate how many times
> > > we should loop and try to find another idle CPU. But there is no
> > > direct relation between both.
> >
> > This is true. The idle time of the local CPU is used to estimate the
> > idle time of the domain which is inevitably going to be inaccurate but
>
> I'm more and more convinced that using average idle time (of the
> local cpu or the full domain) is not the right metric. In
> select_idle_cpu(), we looks for an idle CPU but we don't care about
> how long it will be idle.

Can we predict that accurately? cpufreq for intel_pstate used to try
something like that but it was a bit fuzzy and I don't know if the
scheduler could do much better. There is some idle prediction stuff but
it's related to nohz which does not really help us if a machine is nearly
fully busy or overloaded.

I guess that for tracking idle, revisiting
https://lore.kernel.org/lkml/[email protected]/
is an option now that the scan is somewhat unified. A two-pass scan
could be used to check potentially idle CPUs first and if there is
sufficient search depth left, scan other CPUs. There were some questions
on how accurate the idle mask was and how expensive it was to maintain.
Unfortunately, it would not help with scan depth calculations, it just
might reduce useless scanning.

Selecting based on avg idle time could be interesting but hazardous. If,
for example, we prioritised selecting a CPU that is mostly idle, it'll
also pick CPUs that are potentially in a deep idle state, incurring a
larger wakeup cost. Right now we are not much better because we just
select an idle CPU and hope for the best, but always targeting the most
idle CPU could have problems. There would also be the cost of tracking
idle CPUs in priority order. It would eliminate the scan depth cost
calculations but the overall cost would be much worse.

Hence, I still think we can improve the scan depth costs in the short
term until a replacement is identified that works reasonably well.

> Even more, we can scan all CPUs whatever the
> avg idle time if there is a chance that there is an idle core.
>

That is an important, but separate topic. It's known that the idle core
detection can yield false positives. Putting core scanning under SIS_PROP
had mixed results when we last tried but things change. Again, it doesn't
help with scan depth calculations.

> > tracking idle time for the domain will be cache write intensive and
> > potentially very expensive. I think this was discussed before but maybe
> > it is my imaginaction.
> >
> > > Typically, a short average idle time on
> > > the local CPU doesn't mean that there are less idle CPUs and that's
> > > why we have a mix a gain and loss
> > >
> >
> > Can you evaluate if scanning proportional to cores helps if applied on
> > top? The patch below is a bit of pick&mix and has only seen a basic build
>
> I will queue it for some test later today
>

Thanks. The proposed patch since passed a build and boot test,
performance evaluation is under way but as it's x86 and SMT2, I'm mostly
just checking that it's neutral.

--
Mel Gorman
SUSE Labs

2021-06-17 16:21:58

by Mel Gorman

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Thu, Jun 17, 2021 at 05:03:54PM +0200, Vincent Guittot wrote:
> On Thu, 17 Jun 2021 at 13:05, Mel Gorman <[email protected]> wrote:
> >
> > On Thu, Jun 17, 2021 at 12:02:56PM +0200, Vincent Guittot wrote:
> > > > > >
> > > > > > Fundamentally though, as the changelog notes "due to the nature of the
> > > > > > patch, this is a regression magnet". There are going to be examples
> > > > > > where a deep search is better even if a machine is fully busy or
> > > > > > overloaded and examples where cutting off the search is better. I think
> > > > > > it's better to have an idle estimate that gets updated if CPUs are fully
> > > > > > busy even if it's not a universal win.
> > > > >
> > > > > Although I agree that using a stall average idle time value of local
> > > > > is not good, I'm not sure this proposal is better. The main problem is
> > > > > that we use the avg_idle of the local CPU to estimate how many times
> > > > > we should loop and try to find another idle CPU. But there is no
> > > > > direct relation between both.
> > > >
> > > > This is true. The idle time of the local CPU is used to estimate the
> > > > idle time of the domain which is inevitably going to be inaccurate but
> > >
> > > I'm more and more convinced that using average idle time (of the
> > > local cpu or the full domain) is not the right metric. In
> > > select_idle_cpu(), we looks for an idle CPU but we don't care about
> > > how long it will be idle.
> >
> > Can we predict that accurately? cpufreq for intel_pstate used to try
> > something like that but it was a bit fuzzy and I don't know if the
> > scheduler could do much better. There is some idle prediction stuff but
> > it's related to nohz which does not really help us if a machine is nearly
> > fully busy or overloaded.
> >
> > I guess for tracking idle that revisiting
> > https://lore.kernel.org/lkml/[email protected]/
> > is an option now that the scan is somewhat unified. A two-pass scan
> > could be used to check potentially idle CPUs first and if there is
> > sufficient search depth left, scan other CPUs. There were some questions
>
> I think it's the other way around:
> a CPU is busy for sure if it is not set in the cpuidle_mask and we
> don't need to check it. But a cpu might not be idle even if it is set
> in the idle mask might because it's cleared during the tick
>

Tick is a long time so scan depth may still be a problem.

> > Selecting based on avg idle time could be interesting but hazardous. If
> > for example, we prioritised selecting a CPU that is mostly idle, it'll
> > also pick CPUs that are potentially in a deep idle state incurring a
> > larger wakeup cost. Right now we are not much better because we just
> > select an idle CPU and hope for the best but always targetting the most
> > idle CPU could have problems. There would also be the cost of tracking
> > idle CPUs in priority order. It would eliminate the scan depth cost
> > calculations but the overall cost would be much worse.
> >
> > Hence, I still think we can improve the scan depth costs in the short
> > term until a replacement is identified that works reasonably well.
> >
> > > Even more, we can scan all CPUs whatever the
> > > avg idle time if there is a chance that there is an idle core.
> > >
> >
> > That is an important, but separate topic. It's known that the idle core
> > detection can yield false positives. Putting core scanning under SIS_PROP
> > had mixed results when we last tried but things change. Again, it doesn't
> > help with scan depth calculations.
>
> my point was mainly to highlight that the path can take opposite
> decision for the same avg_idle value:
> - scan all cpus if has_idle_core is true whatever avg_idle
> - limit the depth if has_idle_core is false and avg_idle is short
>

I do understand the point but the idle core scan anomaly was not
intended to be addressed in the patch because putting the idle scan
under SIS_PROP potentially means using cpus with active idle siblings
prematurely.

> >
> > > > tracking idle time for the domain will be cache write intensive and
> > > > potentially very expensive. I think this was discussed before but maybe
> > > > it is my imaginaction.
> > > >
> > > > > Typically, a short average idle time on
> > > > > the local CPU doesn't mean that there are less idle CPUs and that's
> > > > > why we have a mix a gain and loss
> > > > >
> > > >
> > > > Can you evaluate if scanning proportional to cores helps if applied on
> > > > top? The patch below is a bit of pick&mix and has only seen a basic build
> > >
> > > I will queue it for some test later today
> > >
> >
> > Thanks. The proposed patch since passed a build and boot test,
> > performance evaluation is under way but as it's x86 and SMT2, I'm mostly
> > just checking that it's neutral.
>
> Results stay similar:
> group tip/sched/core + this patch + latest addon
> 1 13.358(+/- 1.82%) 12.850(+/- 2.21%) +4% 13.411(+/- 2.47%) -0%
> 4 4.286(+/- 2.77%) 4.114(+/- 2.25%) +4% 4.163(+/- 1.88%) +3%
> 16 3.175(+/- 0.55%) 3.559(+/- 0.43%) -12% 3.535(+/- 0.52%) -11%
> 32 2.912(+/- 0.79%) 3.165(+/- 0.95%) -8% 3.153(+/- 0.76%) -10%
> 64 2.859(+/- 1.12%) 2.937(+/- 0.91%) -3% 2.919(+/- 0.73%) -2%
> 128 3.092(+/- 4.75%) 3.003(+/-5.18%) +3% 2.973(+/- 0.90%) +4%
> 256 3.233(+/- 3.03%) 2.973(+/- 0.80%) +8% 3.036(+/- 1.05%) +6%
>

Ok, accounting for SMT4 didn't help.

--
Mel Gorman
SUSE Labs

2021-06-17 18:30:30

by Mel Gorman

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Thu, Jun 17, 2021 at 11:01:16AM -0400, Phil Auld wrote:
> > Thanks, so far no serious objection :)
> >
> > The latest results as I see them have been copied to
> > https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/dashboard.html
> > They will move from here if the patch is accepted to 5-assembly replacing
> > 3-perf-test. This naming is part of my workflow for evaluating topic
> > branches separetly and then putting them together for another round
> > of testing.
> >
> > NAS shows small differences but NAS would see limited impact from the
> > patch. Specjbb shows small losses and some minor gains which is unfortunate
> > but the workload tends to see small gains and losses all the time.
> > redis is a mixed bag but has some wins. hackbench is the main benefit
> > because it's wakeup intensive and tends to overload machines where deep
> > searches hurt.
> >
> > There are other results in there if you feel like digging around
> > such as sched-core tested with no processes getting tagged with prctl
> > https://beta.suse.com/private/mgorman/melt/v5.13-rc5/5-assembly/sched/sched-schedcore-v1r2/html/dashboard.html
> >
>
> Thanks for the links. It's cool to see what your results dashboard looks like.
> It's really small, what are you plotting in those heat maps?
>
> It's hard for me to publish the results that come from our testing (web based
> on intranet) but we don't see any major differences with this patch. There
> are some gains here and there mostly balanced by some loses. Overall it comes
> out basically as a wash across our main performance test workload.
>

Ok, that's unfortunate. It's also somewhat surprising but then again, I
don't know what tests were executed.

> It'll be interesting to see if it effects a sensitive, proprietary perf test
> suite from a European company with a 3 letter name :)
>

I don't think it's worth the effort if it's failing microbenchmarks at
the moment.

--
Mel Gorman
SUSE Labs

2021-06-18 19:55:42

by Phil Auld

Subject: Re: [PATCH v2] sched/fair: Age the average idle time

On Thu, Jun 17, 2021 at 04:40:06PM +0100 Mel Gorman wrote:
> On Thu, Jun 17, 2021 at 11:01:16AM -0400, Phil Auld wrote:
> > > Thanks, so far no serious objection :)
> > >
> > > The latest results as I see them have been copied to
> > > https://beta.suse.com/private/mgorman/melt/v5.13-rc5/3-perf-test/sched/sched-avgidle-v1r6/html/dashboard.html
> > > They will move from here if the patch is accepted to 5-assembly replacing
> > > 3-perf-test. This naming is part of my workflow for evaluating topic
> > > branches separetly and then putting them together for another round
> > > of testing.
> > >
> > > NAS shows small differences but NAS would see limited impact from the
> > > patch. Specjbb shows small losses and some minor gains which is unfortunate
> > > but the workload tends to see small gains and losses all the time.
> > > redis is a mixed bag but has some wins. hackbench is the main benefit
> > > because it's wakeup intensive and tends to overload machines where deep
> > > searches hurt.
> > >
> > > There are other results in there if you feel like digging around
> > > such as sched-core tested with no processes getting tagged with prctl
> > > https://beta.suse.com/private/mgorman/melt/v5.13-rc5/5-assembly/sched/sched-schedcore-v1r2/html/dashboard.html
> > >
> >
> > Thanks for the links. It's cool to see what your results dashboard looks like.
> > They're really small; what are you plotting in those heat maps?
> >
> > It's hard for me to publish the results that come from our testing (web based
> > on intranet) but we don't see any major differences with this patch. There
> > are some gains here and there, mostly balanced by some losses. Overall it comes
> > out basically as a wash across our main performance test workload.
> >
>
> Ok, that's unfortunate. It's also somewhat surprising but then again, I
> don't know what tests were executed.

Yes, I know, sorry. I get these really nice reports but they're hard to
summarize in text :)

The testing consists of NAS (mostly C, some D), stress-ng, libmicro, linpack
and stream, specjvm2008, specjbb2005 and a few others across a range of (x86_64)
hardware.

From libmicro there were these two which stand out:

exit_100 52584 66975 [ -27.4%]
exit_100_nolibc 46831 57395 [ -22.6%]

Then some scattered gains and losses in the under +/-10% range.
That's on a 2-node Rome. It's a bit worse on an 8-node Rome,
but with libmicro it's a few nsecs here or there.


Here are some pieces from the summary page I pulled out and trimmed as an example:
kernel testcase system top % change
5.13.0_rc5.idle_aging NASParallel-2_NUMA_nodes gold-2s-b NASParallel_ua_C_x: 4%
NASParallel_sp_C_x: -4%

5.13.0_rc5.idle_aging NASParallel-2_NUMA_nodes amd-epyc2-rome NASParallel_sp_C_x: 6%
NASParallel_sp_C_x: -10%

5.13.0_rc5.idle_aging LinpackAndStream-2_NUMA_nod… amd-epyc2-rome Linpack_stream_default: 4%
Linpack_linpackd_default: -11%

5.13.0_rc5.idle_aging LinpackAndStream-2_NUMA_nod… gold-2s-b Linpack_linpacks_default: 7%
Linpack_linpackd_default: -2%

5.13.0_rc5.idle_aging stress-ng-1_NUMA_nodes gold-1s STRESSngTest_getdent: 11%
STRESSngTest_madvise: -15%

5.13.0_rc5.idle_aging Intel_Linpack-8_NUMA_nodes gold-4s-b IntelLinpack_default: 12%
IntelLinpack_single: -12%

The "gold" systems are Intel based.

As I said, the perf team reported no real difference one way or the other.
From their perspective this patch is basically neutral.

>
> > It'll be interesting to see if it affects a sensitive, proprietary perf test
> > suite from a European company with a 3 letter name :)
> >
>
> I don't think it's worth the effort if it's failing microbenchmarks at
> the moment.

Fair enough. I'm sensitive to this one because it can be a real pain to track down minor
trade-off differences. And I believe you have been there too...


Cheers,
Phil

>
> --
> Mel Gorman
> SUSE Labs
>

--

2021-06-23 14:32:38

by kernel test robot

Subject: [sched/fair] 5359f5ca0f: phoronix-test-suite.stress-ng.SystemVMessagePassing.bogo_ops_s 77.9% improvement



Greetings,

FYI, we noticed a 77.9% improvement of phoronix-test-suite.stress-ng.SystemVMessagePassing.bogo_ops_s due to commit:


commit: 5359f5ca0f37ba59f5d679beefbdb05184272846 ("[PATCH v2] sched/fair: Age the average idle time")
url: https://github.com/0day-ci/linux/commits/Mel-Gorman/sched-fair-Age-the-average-idle-time/20210617-061733
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 0159bb020ca9a43b17aa9149f1199643c1d49426

in testcase: phoronix-test-suite
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 128G memory
with following parameters:

test: stress-ng-1.2.2
option_a: System V Message Passing
cpufreq_governor: performance
ucode: 0x5003006

test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/





Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file
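As an aside for anyone reading the comparison table below: the %change column is simply the relative delta of the patched commit's mean against the base commit's mean for each metric. A minimal sketch (the helper below is illustrative only, not part of the lkp-tests tooling):

```python
# Illustrative helper (not part of lkp-tests): how the %change column in
# the comparison table is derived -- the relative delta between the base
# commit's mean and the patched commit's mean for a given metric.

def pct_change(base: float, patched: float) -> float:
    """Percentage change of `patched` relative to `base`."""
    return (patched - base) / base * 100.0

# Headline metric: SystemVMessagePassing bogo_ops_s, base vs patched means.
print(f"{pct_change(5945973, 10579766):+.1f}%")  # roughly +77.9%
```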

=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/System V Message Passing/debian-x86_64-phoronix/lkp-csl-2sp8/stress-ng-1.2.2/phoronix-test-suite/0x5003006

commit:
0159bb020c ("Documentation: Add usecases, design and interface for core scheduling")
5359f5ca0f ("sched/fair: Age the average idle time")

0159bb020ca9a43b 5359f5ca0f37ba59f5d679beefb
---------------- ---------------------------
%stddev %change %stddev
\ | \
5945973 ± 2% +77.9% 10579766 phoronix-test-suite.stress-ng.SystemVMessagePassing.bogo_ops_s
108.43 +332.7% 469.13 ± 9% phoronix-test-suite.time.elapsed_time
108.43 +332.7% 469.13 ± 9% phoronix-test-suite.time.elapsed_time.max
94007862 ± 7% +3040.3% 2.952e+09 ± 11% phoronix-test-suite.time.involuntary_context_switches
13897 +332.2% 60060 ± 9% phoronix-test-suite.time.major_page_faults
206843 +63.8% 338905 ± 4% phoronix-test-suite.time.minor_page_faults
7878 +4.4% 8226 phoronix-test-suite.time.percent_of_cpu_this_job_got
8289 ± 3% +345.8% 36954 ± 9% phoronix-test-suite.time.system_time
254.31 ±128% +547.1% 1645 ± 10% phoronix-test-suite.time.user_time
94825903 ± 7% +3022.2% 2.961e+09 ± 11% phoronix-test-suite.time.voluntary_context_switches
739620 ? 42% +267.9% 2721203 ? 33% cpuidle.C1.usage
60684 ? 58% +663.9% 463591 ? 9% cpuidle.POLL.usage
32.72 ? 7% -8.2 24.52 ? 9% mpstat.cpu.all.idle%
0.02 ? 12% -0.0 0.01 ? 21% mpstat.cpu.all.iowait%
142.25 +253.3% 502.56 ? 9% uptime.boot
4492 +91.0% 8579 ? 6% uptime.idle
2583014 ? 8% +205.2% 7884139 ? 10% numa-numastat.node0.local_node
2620218 ? 7% +202.7% 7930255 ? 9% numa-numastat.node0.numa_hit
2105537 ? 9% +123.2% 4698964 ? 15% numa-numastat.node1.local_node
2136498 ? 8% +121.0% 4721144 ? 15% numa-numastat.node1.numa_hit
737686 ? 42% +267.7% 2712445 ? 33% turbostat.C1
15.13 ? 13% -27.0% 11.04 ? 18% turbostat.CPU%c1
26699122 ? 2% +352.1% 1.207e+08 ? 10% turbostat.IRQ
58941 ? 59% +683.9% 462022 ? 9% turbostat.POLL
225.37 +15.1% 259.29 turbostat.PkgWatt
18.57 ? 2% -28.8% 13.22 ? 4% vmstat.cpu.id
787.57 -72.6% 216.00 ? 20% vmstat.io.bi
2249365 +177.1% 6233159 ? 12% vmstat.memory.cache
1697615 ? 7% +637.3% 12517343 ? 3% vmstat.system.cs
239328 ? 2% +6.8% 255676 vmstat.system.in
3741 ? 31% +11068.0% 417793 ?101% numa-meminfo.node0.Active(anon)
104563 ? 18% +135.2% 245946 ? 12% numa-meminfo.node0.AnonHugePages
322097 ? 7% +14.2% 367690 ? 3% numa-meminfo.node0.AnonPages
334284 ? 7% +16.6% 389727 ? 2% numa-meminfo.node0.AnonPages.max
35879 ? 58% +8255.1% 2997760 ? 25% numa-meminfo.node1.Active
4722 ? 76% +62623.4% 2962159 ? 26% numa-meminfo.node1.Active(anon)
52250 ? 46% +205.0% 159373 ? 7% numa-meminfo.node1.AnonPages.max
597329 ? 43% +585.3% 4093689 ? 19% numa-meminfo.node1.FilePages
611703 ? 41% +86.6% 1141181 ? 19% numa-meminfo.node1.Inactive
494824 ? 61% +121.1% 1093896 ? 18% numa-meminfo.node1.Inactive(anon)
36416 ? 38% +850.1% 346002 ? 32% numa-meminfo.node1.Mapped
1294831 ? 30% +259.5% 4654338 ? 15% numa-meminfo.node1.MemUsed
449198 ? 66% +792.7% 4010040 ? 20% numa-meminfo.node1.Shmem
937.29 ? 31% +11058.3% 104584 ?101% numa-vmstat.node0.nr_active_anon
80520 ? 7% +14.2% 91925 ? 3% numa-vmstat.node0.nr_anon_pages
50.57 ? 18% +136.8% 119.78 ? 12% numa-vmstat.node0.nr_anon_transparent_hugepages
937.29 ? 31% +11058.3% 104584 ?101% numa-vmstat.node0.nr_zone_active_anon
2283200 ? 11% +123.9% 5112850 ? 9% numa-vmstat.node0.numa_hit
2241864 ? 12% +125.8% 5062120 ? 9% numa-vmstat.node0.numa_local
1191 ? 74% +62090.5% 741044 ? 26% numa-vmstat.node1.nr_active_anon
150180 ? 42% +581.7% 1023830 ? 19% numa-vmstat.node1.nr_file_pages
124538 ? 61% +119.5% 273372 ? 19% numa-vmstat.node1.nr_inactive_anon
9149 ? 38% +843.6% 86339 ? 33% numa-vmstat.node1.nr_mapped
113148 ? 65% +786.4% 1002918 ? 20% numa-vmstat.node1.nr_shmem
1191 ? 74% +62098.0% 741044 ? 26% numa-vmstat.node1.nr_zone_active_anon
124537 ? 61% +119.5% 273372 ? 19% numa-vmstat.node1.nr_zone_inactive_anon
1474730 ? 15% +128.5% 3370126 ? 5% numa-vmstat.node1.numa_hit
1436010 ? 17% +132.6% 3340029 ? 5% numa-vmstat.node1.numa_local
107629 ? 2% +3137.7% 3484665 ? 15% meminfo.Active
8540 ? 31% +39507.8% 3382785 ? 15% meminfo.Active(anon)
117002 ? 13% +114.0% 250403 ? 12% meminfo.AnonHugePages
370927 +11.2% 412449 meminfo.AnonPages
2119037 +188.5% 6112827 ? 12% meminfo.Cached
8683262 +51.3% 13133654 ? 5% meminfo.Committed_AS
2387590 +27.4% 3042942 ? 8% meminfo.Inactive
1434994 +45.0% 2081458 ? 12% meminfo.Inactive(anon)
103247 +11.8% 115451 meminfo.KReclaimable
118209 ? 2% +333.7% 512718 ? 28% meminfo.Mapped
3897405 +95.0% 7600821 ? 9% meminfo.Memused
189.86 ?147% +586.7% 1303 ? 44% meminfo.Mlocked
103247 +11.8% 115451 meminfo.SReclaimable
1068756 +372.5% 5049917 ? 14% meminfo.Shmem
257.71 ?108% +432.3% 1371 ? 42% meminfo.Unevictable
3925928 +102.6% 7954869 ? 9% meminfo.max_used_kB
5.113e+09 ? 4% +182.6% 1.445e+10 ? 2% perf-stat.i.branch-instructions
39914105 ? 2% +216.3% 1.262e+08 ? 2% perf-stat.i.branch-misses
18.94 ? 3% -6.5 12.47 ? 8% perf-stat.i.cache-miss-rate%
58820269 ? 3% +16.8% 68682862 ? 2% perf-stat.i.cache-misses
2.911e+08 +100.8% 5.846e+08 ? 3% perf-stat.i.cache-references
1760838 ? 7% +617.1% 12626462 ? 3% perf-stat.i.context-switches
8.40 ? 5% -55.4% 3.75 ? 5% perf-stat.i.cpi
2.236e+11 +3.3% 2.31e+11 perf-stat.i.cpu-cycles
1885 ? 5% -42.1% 1092 perf-stat.i.cpu-migrations
3789 ? 4% -10.8% 3381 ? 5% perf-stat.i.cycles-between-cache-misses
6.849e+09 ? 4% +179.8% 1.916e+10 ? 3% perf-stat.i.dTLB-loads
88930 ? 27% +62.6% 144605 ? 8% perf-stat.i.dTLB-store-misses
3.708e+09 ? 3% +215.9% 1.171e+10 ? 2% perf-stat.i.dTLB-stores
62.59 ? 4% -14.5 48.07 perf-stat.i.iTLB-load-miss-rate%
17539757 ? 8% +209.2% 54234924 ? 2% perf-stat.i.iTLB-load-misses
9680763 ? 4% +521.1% 60130535 ? 3% perf-stat.i.iTLB-loads
2.502e+10 ? 4% +180.5% 7.019e+10 ? 2% perf-stat.i.instructions
0.18 ? 6% +67.7% 0.31 ? 3% perf-stat.i.ipc
131.96 -2.4% 128.83 perf-stat.i.major-faults
2333330 +3.2% 2406907 perf-stat.i.metric.GHz
1.668e+08 ? 4% +187.3% 4.791e+08 ? 2% perf-stat.i.metric.M/sec
7373 ? 2% -32.8% 4954 perf-stat.i.minor-faults
86.13 -2.7 83.48 perf-stat.i.node-load-miss-rate%
8195094 +10.8% 9083633 ? 2% perf-stat.i.node-load-misses
1116036 +51.6% 1692321 ? 8% perf-stat.i.node-loads
80.52 +5.0 85.56 perf-stat.i.node-store-miss-rate%
3081722 ? 2% -13.8% 2655744 ? 2% perf-stat.i.node-store-misses
655229 ? 5% -44.8% 361888 ? 6% perf-stat.i.node-stores
7505 ? 2% -32.3% 5083 perf-stat.i.page-faults
26853 ? 3% +49.0% 40003 ? 3% slabinfo.anon_vma.active_objs
584.71 ? 3% +48.8% 870.00 ? 3% slabinfo.anon_vma.active_slabs
26917 ? 3% +48.7% 40037 ? 3% slabinfo.anon_vma.num_objs
584.71 ? 3% +48.8% 870.00 ? 3% slabinfo.anon_vma.num_slabs
62790 ? 2% +36.1% 85427 ? 3% slabinfo.anon_vma_chain.active_objs
981.57 ? 2% +36.3% 1337 ? 3% slabinfo.anon_vma_chain.active_slabs
62862 ? 2% +36.2% 85640 ? 3% slabinfo.anon_vma_chain.num_objs
981.57 ? 2% +36.3% 1337 ? 3% slabinfo.anon_vma_chain.num_slabs
1594 ? 10% +21.6% 1939 ? 5% slabinfo.dmaengine-unmap-16.active_objs
1594 ? 10% +21.6% 1939 ? 5% slabinfo.dmaengine-unmap-16.num_objs
4655 +11.2% 5176 slabinfo.files_cache.active_objs
4655 +11.2% 5176 slabinfo.files_cache.num_objs
3537 +10.3% 3902 slabinfo.khugepaged_mm_slot.active_objs
3537 +10.3% 3902 slabinfo.khugepaged_mm_slot.num_objs
157829 -9.4% 143010 slabinfo.kmalloc-64.active_objs
2471 -9.4% 2239 slabinfo.kmalloc-64.active_slabs
158174 -9.4% 143338 slabinfo.kmalloc-64.num_objs
2471 -9.4% 2239 slabinfo.kmalloc-64.num_slabs
3183 +15.0% 3659 slabinfo.mm_struct.active_objs
3183 +15.0% 3659 slabinfo.mm_struct.num_objs
14153 ? 2% +12.5% 15920 ? 2% slabinfo.proc_inode_cache.active_objs
14155 ? 2% +12.6% 15938 ? 2% slabinfo.proc_inode_cache.num_objs
19068 ? 2% +82.9% 34879 ? 8% slabinfo.radix_tree_node.active_objs
340.00 ? 2% +83.1% 622.44 ? 8% slabinfo.radix_tree_node.active_slabs
19068 ? 2% +82.9% 34879 ? 8% slabinfo.radix_tree_node.num_objs
340.00 ? 2% +83.1% 622.44 ? 8% slabinfo.radix_tree_node.num_slabs
2655 +13.8% 3023 ? 3% slabinfo.sighand_cache.active_objs
2655 +14.1% 3029 ? 3% slabinfo.sighand_cache.num_objs
10970 +33.6% 14657 ? 4% slabinfo.vmap_area.active_objs
10978 +33.6% 14662 ? 4% slabinfo.vmap_area.num_objs
2132 ? 31% +39540.4% 845415 ? 15% proc-vmstat.nr_active_anon
24773 +2.8% 25468 proc-vmstat.nr_active_file
92725 +11.2% 103116 proc-vmstat.nr_anon_pages
56.57 ? 13% +115.3% 121.78 ? 12% proc-vmstat.nr_anon_transparent_hugepages
3200606 -2.9% 3108484 proc-vmstat.nr_dirty_background_threshold
6409038 -2.9% 6224570 proc-vmstat.nr_dirty_threshold
530292 +188.2% 1528310 ? 12% proc-vmstat.nr_file_pages
31944790 -2.9% 31019305 proc-vmstat.nr_free_pages
358870 +45.0% 520348 ? 12% proc-vmstat.nr_inactive_anon
21238 -2.5% 20712 proc-vmstat.nr_kernel_stack
29553 ? 2% +334.1% 128281 ? 27% proc-vmstat.nr_mapped
47.00 ?148% +592.7% 325.56 ? 44% proc-vmstat.nr_mlock
3822 +8.1% 4130 proc-vmstat.nr_page_table_pages
267316 +372.2% 1262178 ? 14% proc-vmstat.nr_shmem
25811 +11.8% 28861 proc-vmstat.nr_slab_reclaimable
64.00 ?108% +435.2% 342.56 ? 42% proc-vmstat.nr_unevictable
2132 ? 31% +39540.4% 845415 ? 15% proc-vmstat.nr_zone_active_anon
24773 +2.8% 25468 proc-vmstat.nr_zone_active_file
358870 +45.0% 520348 ? 12% proc-vmstat.nr_zone_inactive_anon
64.00 ?108% +435.2% 342.56 ? 42% proc-vmstat.nr_zone_unevictable
4770410 ? 2% +165.6% 12670919 ? 8% proc-vmstat.numa_hit
342.14 ? 36% +383.3% 1653 ? 24% proc-vmstat.numa_huge_pte_updates
4702238 ? 2% +168.0% 12602615 ? 8% proc-vmstat.numa_local
2547 ? 39% +234.2% 8514 ? 30% proc-vmstat.numa_pages_migrated
318930 ? 18% +252.2% 1123301 ? 18% proc-vmstat.numa_pte_updates
14337 ? 8% +9217.3% 1335860 ? 16% proc-vmstat.pgactivate
4884863 ? 2% +163.3% 12862044 ? 8% proc-vmstat.pgalloc_normal
885801 +184.5% 2520248 ? 8% proc-vmstat.pgfault
3980905 ? 2% +187.7% 11452403 ? 8% proc-vmstat.pgfree
2547 ? 39% +234.2% 8514 ? 30% proc-vmstat.pgmigrate_success
87976 +16.0% 102016 ? 19% proc-vmstat.pgpgin
76526 +279.3% 290270 ? 9% proc-vmstat.pgreuse
6284 ? 16% +230.4% 20760 ? 12% softirqs.CPU0.RCU
7613 ? 14% +144.3% 18596 ? 10% softirqs.CPU0.SCHED
5378 ? 18% +307.3% 21904 ? 9% softirqs.CPU1.RCU
7122 ? 10% +151.4% 17903 ? 11% softirqs.CPU1.SCHED
4994 ? 15% +321.1% 21031 ? 9% softirqs.CPU10.RCU
5919 ? 6% +173.9% 16210 ? 7% softirqs.CPU10.SCHED
5272 ? 18% +299.5% 21063 ? 9% softirqs.CPU11.RCU
6118 ? 8% +165.0% 16212 ? 7% softirqs.CPU11.SCHED
4904 ? 15% +325.2% 20854 ? 14% softirqs.CPU12.RCU
5759 ? 3% +177.4% 15975 ? 7% softirqs.CPU12.SCHED
4626 ? 18% +347.6% 20710 ? 14% softirqs.CPU13.RCU
5564 ? 4% +190.0% 16137 ? 7% softirqs.CPU13.SCHED
4656 ? 16% +355.8% 21226 ? 10% softirqs.CPU14.RCU
5681 ? 7% +184.3% 16153 ? 8% softirqs.CPU14.SCHED
4726 ? 17% +348.6% 21202 ? 11% softirqs.CPU15.RCU
5687 ? 3% +182.0% 16040 ? 7% softirqs.CPU15.SCHED
4917 ? 20% +344.8% 21873 ? 11% softirqs.CPU16.RCU
5621 ? 2% +186.8% 16120 ? 7% softirqs.CPU16.SCHED
4781 ? 19% +368.3% 22392 ? 11% softirqs.CPU17.RCU
5757 ? 3% +184.0% 16351 ? 7% softirqs.CPU17.SCHED
4679 ? 18% +382.3% 22570 ? 11% softirqs.CPU18.RCU
5580 ? 4% +194.5% 16434 ? 8% softirqs.CPU18.SCHED
5126 ? 25% +321.6% 21608 ? 14% softirqs.CPU19.RCU
5639 ? 3% +187.8% 16232 ? 7% softirqs.CPU19.SCHED
5211 ? 19% +310.0% 21363 ? 10% softirqs.CPU2.RCU
5771 ? 14% +191.3% 16808 ? 9% softirqs.CPU2.SCHED
5394 ? 10% +312.7% 22262 ? 10% softirqs.CPU20.RCU
5981 ? 6% +176.4% 16531 ? 7% softirqs.CPU20.SCHED
4895 ? 17% +353.7% 22207 ? 10% softirqs.CPU21.RCU
5624 ? 5% +190.5% 16339 ? 7% softirqs.CPU21.SCHED
5372 ? 23% +303.4% 21674 ? 10% softirqs.CPU22.RCU
5866 ? 6% +174.5% 16101 ? 7% softirqs.CPU22.SCHED
4908 ? 21% +354.8% 22321 ? 11% softirqs.CPU23.RCU
5607 ? 3% +191.1% 16322 ? 8% softirqs.CPU23.SCHED
4779 ? 19% +340.7% 21063 ? 9% softirqs.CPU24.RCU
5321 ? 7% +173.8% 14567 ? 7% softirqs.CPU24.SCHED
4405 ? 22% +358.3% 20191 ? 10% softirqs.CPU25.RCU
5382 ? 3% +165.4% 14286 ? 9% softirqs.CPU25.SCHED
4788 ? 25% +335.3% 20844 ? 9% softirqs.CPU26.RCU
5330 ? 4% +171.1% 14447 ? 8% softirqs.CPU26.SCHED
4555 ? 20% +363.5% 21113 ? 10% softirqs.CPU27.RCU
5471 ? 4% +165.7% 14536 ? 9% softirqs.CPU27.SCHED
4743 ? 18% +349.1% 21301 ? 10% softirqs.CPU28.RCU
5251 ? 7% +172.9% 14329 ? 9% softirqs.CPU28.SCHED
4501 ? 17% +370.3% 21171 ? 9% softirqs.CPU29.RCU
5313 ? 7% +169.8% 14336 ? 9% softirqs.CPU29.SCHED
5115 ? 16% +317.0% 21329 ? 10% softirqs.CPU3.RCU
6178 ? 9% +166.6% 16472 ? 7% softirqs.CPU3.SCHED
4688 ? 28% +344.1% 20817 ? 9% softirqs.CPU30.RCU
5323 ? 3% +169.9% 14369 ? 9% softirqs.CPU30.SCHED
4282 ? 17% +375.6% 20369 ? 10% softirqs.CPU31.RCU
5304 ? 4% +169.2% 14280 ? 10% softirqs.CPU31.SCHED
4987 ? 19% +337.9% 21841 ? 10% softirqs.CPU32.RCU
5288 ? 2% +170.8% 14320 ? 9% softirqs.CPU32.SCHED
4533 ? 17% +382.4% 21871 ? 8% softirqs.CPU33.RCU
5263 ? 4% +171.8% 14308 ? 9% softirqs.CPU33.SCHED
4516 ? 18% +375.3% 21468 ? 9% softirqs.CPU34.RCU
5340 ? 2% +172.7% 14563 ? 9% softirqs.CPU34.SCHED
4679 ? 17% +367.8% 21893 ? 8% softirqs.CPU35.RCU
5287 ? 6% +169.0% 14222 ? 10% softirqs.CPU35.SCHED
4900 ? 21% +342.8% 21700 ? 10% softirqs.CPU36.RCU
5294 ? 4% +170.3% 14312 ? 9% softirqs.CPU36.SCHED
4754 ? 24% +338.8% 20864 ? 10% softirqs.CPU37.RCU
5284 ? 4% +170.1% 14274 ? 9% softirqs.CPU37.SCHED
4819 ? 17% +345.7% 21481 ? 10% softirqs.CPU38.RCU
5376 ? 3% +163.5% 14165 ? 9% softirqs.CPU38.SCHED
4842 ? 21% +341.5% 21376 ? 10% softirqs.CPU39.RCU
5335 ? 2% +167.5% 14271 ? 9% softirqs.CPU39.SCHED
5196 ? 18% +306.7% 21135 ? 11% softirqs.CPU4.RCU
5697 ? 4% +185.1% 16241 ? 8% softirqs.CPU4.SCHED
4732 ? 17% +356.9% 21619 ? 10% softirqs.CPU40.RCU
5352 ? 4% +165.9% 14232 ? 9% softirqs.CPU40.SCHED
4647 ? 26% +366.0% 21658 ? 8% softirqs.CPU41.RCU
5310 ? 5% +171.5% 14420 ? 9% softirqs.CPU41.SCHED
4853 ? 17% +352.7% 21968 ? 8% softirqs.CPU42.RCU
5286 ? 3% +169.5% 14246 ? 9% softirqs.CPU42.SCHED
4834 ? 21% +355.7% 22030 ? 9% softirqs.CPU43.RCU
5286 ? 3% +171.4% 14347 ? 8% softirqs.CPU43.SCHED
4852 ? 18% +351.0% 21883 ? 9% softirqs.CPU44.RCU
5397 ? 5% +168.4% 14489 ? 8% softirqs.CPU44.SCHED
4918 ? 22% +341.1% 21693 ? 8% softirqs.CPU45.RCU
5313 ? 3% +170.7% 14385 ? 8% softirqs.CPU45.SCHED
4727 ? 20% +358.3% 21665 ? 8% softirqs.CPU46.RCU
5303 ? 2% +172.0% 14424 ? 9% softirqs.CPU46.SCHED
4739 ? 22% +358.1% 21711 ? 9% softirqs.CPU47.RCU
5206 ? 6% +175.6% 14351 ? 9% softirqs.CPU47.SCHED
4560 ? 15% +337.5% 19951 ? 13% softirqs.CPU48.RCU
5173 ? 6% +203.1% 15680 ? 9% softirqs.CPU48.SCHED
5231 ? 25% +304.5% 21161 ? 10% softirqs.CPU49.RCU
5790 ? 6% +170.1% 15638 ? 5% softirqs.CPU49.SCHED
4714 ? 19% +347.8% 21111 ? 10% softirqs.CPU5.RCU
6107 ? 20% +168.1% 16374 ? 7% softirqs.CPU5.SCHED
4589 ? 20% +364.6% 21325 ? 10% softirqs.CPU50.RCU
5510 ? 5% +185.2% 15715 ? 8% softirqs.CPU50.SCHED
4796 ? 15% +351.0% 21631 ? 11% softirqs.CPU51.RCU
5703 ? 4% +180.1% 15974 ? 5% softirqs.CPU51.SCHED
4670 ? 17% +356.4% 21315 ? 12% softirqs.CPU52.RCU
5639 ? 4% +183.8% 16007 ? 7% softirqs.CPU52.SCHED
4874 ? 22% +348.6% 21866 ? 11% softirqs.CPU53.RCU
5831 ? 8% +176.0% 16098 ? 7% softirqs.CPU53.SCHED
5298 ? 39% +311.4% 21800 ? 12% softirqs.CPU54.RCU
5666 ? 6% +183.2% 16044 ? 8% softirqs.CPU54.SCHED
4658 ? 16% +356.4% 21260 ? 12% softirqs.CPU55.RCU
5445 ? 4% +190.3% 15807 ? 7% softirqs.CPU55.SCHED
4691 ? 17% +347.9% 21014 ? 11% softirqs.CPU56.RCU
5410 ? 4% +189.7% 15673 ? 8% softirqs.CPU56.SCHED
4638 ? 18% +362.4% 21450 ? 11% softirqs.CPU57.RCU
5695 ? 3% +183.4% 16138 ? 6% softirqs.CPU57.SCHED
5241 ? 17% +302.6% 21102 ? 9% softirqs.CPU58.RCU
5627 ? 5% +183.4% 15951 ? 8% softirqs.CPU58.SCHED
4674 ? 15% +361.0% 21550 ? 10% softirqs.CPU59.RCU
5435 ? 4% +194.3% 15993 ? 8% softirqs.CPU59.SCHED
4784 ? 10% +345.5% 21316 ? 10% softirqs.CPU6.RCU
5613 ? 8% +191.0% 16336 ? 7% softirqs.CPU6.SCHED
4843 ? 14% +347.5% 21673 ? 11% softirqs.CPU60.RCU
5593 ? 4% +184.2% 15894 ? 8% softirqs.CPU60.SCHED
4660 ? 18% +350.9% 21011 ? 11% softirqs.CPU61.RCU
5399 ? 4% +193.7% 15859 ? 7% softirqs.CPU61.SCHED
4830 ? 20% +340.9% 21298 ? 9% softirqs.CPU62.RCU
5543 ? 6% +185.7% 15838 ? 6% softirqs.CPU62.SCHED
4914 ? 19% +327.0% 20983 ? 10% softirqs.CPU63.RCU
5578 ? 4% +183.0% 15787 ? 7% softirqs.CPU63.SCHED
5130 ? 33% +325.4% 21826 ? 11% softirqs.CPU64.RCU
5517 ? 3% +189.7% 15982 ? 7% softirqs.CPU64.SCHED
4779 ? 22% +364.8% 22217 ? 9% softirqs.CPU65.RCU
5846 ? 10% +175.6% 16114 ? 7% softirqs.CPU65.SCHED
5190 ? 24% +326.5% 22138 ? 10% softirqs.CPU66.RCU
5589 ? 4% +183.5% 15847 ? 7% softirqs.CPU66.SCHED
4818 ? 16% +356.0% 21972 ? 12% softirqs.CPU67.RCU
5658 ? 5% +180.0% 15843 ? 9% softirqs.CPU67.SCHED
5339 ? 20% +321.8% 22522 ? 9% softirqs.CPU68.RCU
5580 ? 4% +194.5% 16434 ? 8% softirqs.CPU68.SCHED
4654 ? 19% +361.9% 21496 ? 14% softirqs.CPU69.RCU
5626 ? 3% +186.9% 16142 ? 8% softirqs.CPU69.SCHED
4946 ? 17% +332.7% 21404 ? 11% softirqs.CPU7.RCU
5643 ? 2% +183.0% 15974 ? 7% softirqs.CPU7.SCHED
4808 ? 18% +364.2% 22321 ? 11% softirqs.CPU70.RCU
5546 ? 4% +193.0% 16250 ? 8% softirqs.CPU70.SCHED
4949 ? 15% +347.6% 22152 ? 12% softirqs.CPU71.RCU
5663 ? 4% +180.2% 15870 ? 7% softirqs.CPU71.SCHED
4381 ? 24% +381.0% 21072 ? 8% softirqs.CPU72.RCU
5296 ? 3% +174.4% 14533 ? 8% softirqs.CPU72.SCHED
4321 ? 19% +371.6% 20380 ? 11% softirqs.CPU73.RCU
5361 ? 6% +165.6% 14243 ? 8% softirqs.CPU73.SCHED
4445 ? 24% +371.3% 20952 ? 9% softirqs.CPU74.RCU
5371 ? 3% +165.0% 14233 ? 8% softirqs.CPU74.SCHED
4704 ? 19% +349.0% 21123 ? 11% softirqs.CPU75.RCU
5363 ? 4% +170.3% 14497 ? 9% softirqs.CPU75.SCHED
5114 ? 20% +318.4% 21398 ? 10% softirqs.CPU76.RCU
5334 ? 3% +168.2% 14306 ? 9% softirqs.CPU76.SCHED
4334 ? 21% +388.8% 21183 ? 10% softirqs.CPU77.RCU
5352 ? 5% +167.8% 14335 ? 9% softirqs.CPU77.SCHED
4906 ? 34% +326.6% 20929 ? 9% softirqs.CPU78.RCU
5486 ? 3% +159.7% 14249 ? 9% softirqs.CPU78.SCHED
4334 ? 19% +371.1% 20420 ? 9% softirqs.CPU79.RCU
5353 +168.2% 14360 ? 8% softirqs.CPU79.SCHED
5025 ? 16% +323.7% 21290 ? 9% softirqs.CPU8.RCU
5731 ? 6% +182.1% 16171 ? 9% softirqs.CPU8.SCHED
4373 ? 23% +382.4% 21098 ? 9% softirqs.CPU80.RCU
5351 ? 4% +166.5% 14261 ? 9% softirqs.CPU80.SCHED
4278 ? 18% +415.5% 22054 ? 8% softirqs.CPU81.RCU
5246 ? 5% +179.1% 14644 ? 8% softirqs.CPU81.SCHED
4298 ? 19% +389.4% 21037 ? 8% softirqs.CPU82.RCU
5337 ? 4% +170.9% 14460 ? 8% softirqs.CPU82.SCHED
4415 ? 18% +393.2% 21778 ? 9% softirqs.CPU83.RCU
5192 ? 4% +174.7% 14263 ? 9% softirqs.CPU83.SCHED
4795 ? 18% +348.0% 21486 ? 9% softirqs.CPU84.RCU
5243 ? 5% +173.1% 14317 ? 8% softirqs.CPU84.SCHED
4652 ? 19% +350.9% 20979 ? 9% softirqs.CPU85.RCU
5296 ? 4% +170.8% 14342 ? 8% softirqs.CPU85.SCHED
4652 ? 19% +355.6% 21194 ? 8% softirqs.CPU86.RCU
5136 ? 5% +178.0% 14279 ? 8% softirqs.CPU86.SCHED
4943 ? 22% +325.0% 21010 ? 9% softirqs.CPU87.RCU
5347 ? 3% +169.8% 14426 ? 8% softirqs.CPU87.SCHED
4530 ? 19% +369.2% 21253 ? 9% softirqs.CPU88.RCU
5311 ? 3% +167.6% 14211 ? 8% softirqs.CPU88.SCHED
4934 ? 43% +339.6% 21691 ? 9% softirqs.CPU89.RCU
5379 +170.5% 14550 ? 8% softirqs.CPU89.SCHED
4836 ? 15% +336.2% 21094 ? 10% softirqs.CPU9.RCU
5874 ? 3% +181.1% 16514 ? 8% softirqs.CPU9.SCHED
4945 ? 18% +346.2% 22064 ? 9% softirqs.CPU90.RCU
5453 ? 2% +168.5% 14640 ? 8% softirqs.CPU90.SCHED
4650 ? 19% +368.6% 21793 ? 10% softirqs.CPU91.RCU
5250 ? 5% +175.0% 14440 ? 8% softirqs.CPU91.SCHED
4480 ? 17% +388.6% 21890 ? 11% softirqs.CPU92.RCU
5340 ? 3% +172.0% 14528 ? 8% softirqs.CPU92.SCHED
4740 ? 19% +349.7% 21318 ? 8% softirqs.CPU93.RCU
5377 ? 3% +166.6% 14336 ? 7% softirqs.CPU93.SCHED
4639 ? 23% +363.4% 21497 ? 8% softirqs.CPU94.RCU
5324 ? 4% +171.9% 14476 ? 8% softirqs.CPU94.SCHED
4617 ? 24% +371.6% 21773 ? 6% softirqs.CPU95.RCU
5291 ? 5% +175.3% 14570 ? 7% softirqs.CPU95.SCHED
1433 ? 64% +442.0% 7767 ? 64% softirqs.NET_RX
461190 ? 17% +346.2% 2057758 ? 9% softirqs.RCU
531285 ? 2% +176.2% 1467162 ? 7% softirqs.SCHED
23428 +225.7% 76310 ? 8% softirqs.TIMER
220.57 +327.0% 941.78 ? 9% interrupts.9:IO-APIC.9-fasteoi.acpi
1376522 ? 3% +516.7% 8489677 ? 10% interrupts.CAL:Function_call_interrupts
23047 ? 14% +440.1% 124471 ? 16% interrupts.CPU0.CAL:Function_call_interrupts
215307 +334.6% 935623 ? 10% interrupts.CPU0.LOC:Local_timer_interrupts
44963 ? 7% +530.9% 283667 ? 10% interrupts.CPU0.RES:Rescheduling_interrupts
220.57 +327.0% 941.78 ? 9% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
20164 ? 13% +483.5% 117666 ? 8% interrupts.CPU1.CAL:Function_call_interrupts
215576 +333.9% 935343 ? 10% interrupts.CPU1.LOC:Local_timer_interrupts
43270 ? 10% +558.2% 284822 ? 9% interrupts.CPU1.RES:Rescheduling_interrupts
17268 ? 11% +516.8% 106515 ? 8% interrupts.CPU10.CAL:Function_call_interrupts
215077 +334.8% 935134 ? 10% interrupts.CPU10.LOC:Local_timer_interrupts
43087 ? 13% +575.4% 290995 ? 9% interrupts.CPU10.RES:Rescheduling_interrupts
17690 ? 14% +522.8% 110171 ? 19% interrupts.CPU11.CAL:Function_call_interrupts
215068 +334.7% 934976 ? 10% interrupts.CPU11.LOC:Local_timer_interrupts
44504 ? 9% +556.8% 292318 ? 11% interrupts.CPU11.RES:Rescheduling_interrupts
15759 ? 10% +551.9% 102743 ? 9% interrupts.CPU12.CAL:Function_call_interrupts
215488 +333.8% 934784 ? 10% interrupts.CPU12.LOC:Local_timer_interrupts
40440 ? 12% +575.4% 273144 ? 8% interrupts.CPU12.RES:Rescheduling_interrupts
15107 ? 10% +585.4% 103538 ? 13% interrupts.CPU13.CAL:Function_call_interrupts
215167 +334.6% 935052 ? 10% interrupts.CPU13.LOC:Local_timer_interrupts
38977 ? 12% +623.1% 281862 ? 14% interrupts.CPU13.RES:Rescheduling_interrupts
16406 ? 15% +562.4% 108669 ? 12% interrupts.CPU14.CAL:Function_call_interrupts
214955 +334.9% 934869 ? 10% interrupts.CPU14.LOC:Local_timer_interrupts
41711 ? 14% +597.0% 290709 ? 16% interrupts.CPU14.RES:Rescheduling_interrupts
16332 ? 10% +535.0% 103709 ? 14% interrupts.CPU15.CAL:Function_call_interrupts
215024 +334.7% 934691 ? 10% interrupts.CPU15.LOC:Local_timer_interrupts
43003 ? 11% +536.2% 273590 ? 11% interrupts.CPU15.RES:Rescheduling_interrupts
16265 ? 10% +532.4% 102852 ? 11% interrupts.CPU16.CAL:Function_call_interrupts
214861 +335.1% 934943 ? 10% interrupts.CPU16.LOC:Local_timer_interrupts
40145 ? 8% +597.0% 279796 ? 8% interrupts.CPU16.RES:Rescheduling_interrupts
17273 ? 13% +533.4% 109406 ? 10% interrupts.CPU17.CAL:Function_call_interrupts
215056 +334.6% 934620 ? 10% interrupts.CPU17.LOC:Local_timer_interrupts
42365 ? 10% +625.6% 307405 ? 12% interrupts.CPU17.RES:Rescheduling_interrupts
16231 ? 15% +530.3% 102309 ? 10% interrupts.CPU18.CAL:Function_call_interrupts
214952 +334.8% 934692 ? 10% interrupts.CPU18.LOC:Local_timer_interrupts
40297 ? 14% +596.3% 280608 ? 13% interrupts.CPU18.RES:Rescheduling_interrupts
16301 ? 7% +519.0% 100908 ? 12% interrupts.CPU19.CAL:Function_call_interrupts
215078 +334.8% 935220 ? 10% interrupts.CPU19.LOC:Local_timer_interrupts
43362 ? 5% +533.3% 274601 ? 12% interrupts.CPU19.RES:Rescheduling_interrupts
17752 ? 11% +489.6% 104675 ? 13% interrupts.CPU2.CAL:Function_call_interrupts
215128 +334.5% 934820 ? 10% interrupts.CPU2.LOC:Local_timer_interrupts
41136 ? 14% +561.8% 272230 ? 10% interrupts.CPU2.RES:Rescheduling_interrupts
20777 ? 8% +507.1% 126131 ? 11% interrupts.CPU20.CAL:Function_call_interrupts
215231 +334.4% 934872 ? 10% interrupts.CPU20.LOC:Local_timer_interrupts
54928 ? 10% +541.4% 352318 ? 9% interrupts.CPU20.RES:Rescheduling_interrupts
15964 ? 11% +550.3% 103814 ? 11% interrupts.CPU21.CAL:Function_call_interrupts
215360 +334.0% 934640 ? 10% interrupts.CPU21.LOC:Local_timer_interrupts
41956 ? 7% +574.4% 282960 ? 11% interrupts.CPU21.RES:Rescheduling_interrupts
16562 ? 9% +523.5% 103258 ? 11% interrupts.CPU22.CAL:Function_call_interrupts
214934 +335.1% 935122 ? 10% interrupts.CPU22.LOC:Local_timer_interrupts
43306 ? 14% +548.3% 280753 ? 10% interrupts.CPU22.RES:Rescheduling_interrupts
15839 ? 8% +549.6% 102893 ? 10% interrupts.CPU23.CAL:Function_call_interrupts
214908 +334.9% 934607 ? 10% interrupts.CPU23.LOC:Local_timer_interrupts
40284 ? 10% +591.4% 278513 ? 11% interrupts.CPU23.RES:Rescheduling_interrupts
12457 ? 16% +765.8% 107849 ? 17% interrupts.CPU24.CAL:Function_call_interrupts
211126 ? 3% +341.6% 932326 ? 10% interrupts.CPU24.LOC:Local_timer_interrupts
38664 ? 12% +385.3% 187658 ? 18% interrupts.CPU24.RES:Rescheduling_interrupts
11681 ? 7% +685.8% 91796 ? 19% interrupts.CPU25.CAL:Function_call_interrupts
211052 ? 3% +341.8% 932502 ? 10% interrupts.CPU25.LOC:Local_timer_interrupts
36562 ? 7% +389.5% 178992 ? 11% interrupts.CPU25.RES:Rescheduling_interrupts
11601 ? 13% +681.4% 90653 ? 24% interrupts.CPU26.CAL:Function_call_interrupts
211147 ? 3% +341.5% 932180 ? 10% interrupts.CPU26.LOC:Local_timer_interrupts
38371 ? 9% +352.7% 173689 ? 16% interrupts.CPU26.RES:Rescheduling_interrupts
14245 ? 15% +510.3% 86935 ? 17% interrupts.CPU27.CAL:Function_call_interrupts
211514 ? 3% +340.7% 932156 ? 10% interrupts.CPU27.LOC:Local_timer_interrupts
49800 ? 11% +288.9% 193694 ? 18% interrupts.CPU27.RES:Rescheduling_interrupts
11387 ? 11% +512.8% 69785 ? 15% interrupts.CPU28.CAL:Function_call_interrupts
211429 ? 3% +340.9% 932126 ? 10% interrupts.CPU28.LOC:Local_timer_interrupts
36495 ? 9% +372.1% 172295 ? 19% interrupts.CPU28.RES:Rescheduling_interrupts
12417 ? 8% +455.7% 69004 ? 11% interrupts.CPU29.CAL:Function_call_interrupts
210967 ? 3% +341.9% 932218 ? 10% interrupts.CPU29.LOC:Local_timer_interrupts
43342 ? 16% +290.4% 169218 ? 11% interrupts.CPU29.RES:Rescheduling_interrupts
20157 ? 10% +523.5% 125693 ? 9% interrupts.CPU3.CAL:Function_call_interrupts
215109 +334.9% 935540 ? 10% interrupts.CPU3.LOC:Local_timer_interrupts
52152 ± 9% +541.6% 334631 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
11842 ± 8% +434.6% 63313 ± 18% interrupts.CPU30.CAL:Function_call_interrupts
211011 ± 3% +341.8% 932297 ± 10% interrupts.CPU30.LOC:Local_timer_interrupts
37689 ± 9% +319.0% 157907 ± 22% interrupts.CPU30.RES:Rescheduling_interrupts
12308 ± 12% +487.0% 72249 ± 15% interrupts.CPU31.CAL:Function_call_interrupts
211593 ± 3% +340.6% 932201 ± 10% interrupts.CPU31.LOC:Local_timer_interrupts
41661 ± 10% +326.6% 177711 ± 22% interrupts.CPU31.RES:Rescheduling_interrupts
11723 ± 7% +527.0% 73506 ± 21% interrupts.CPU32.CAL:Function_call_interrupts
211451 ± 3% +340.8% 932032 ± 10% interrupts.CPU32.LOC:Local_timer_interrupts
40236 ± 6% +313.9% 166548 ± 20% interrupts.CPU32.RES:Rescheduling_interrupts
15080 ± 11% +474.6% 86654 ± 16% interrupts.CPU33.CAL:Function_call_interrupts
211044 ± 3% +341.8% 932395 ± 10% interrupts.CPU33.LOC:Local_timer_interrupts
49685 ± 9% +328.4% 212836 ± 16% interrupts.CPU33.RES:Rescheduling_interrupts
12594 ± 14% +509.1% 76711 ± 18% interrupts.CPU34.CAL:Function_call_interrupts
211159 ± 3% +341.4% 932057 ± 10% interrupts.CPU34.LOC:Local_timer_interrupts
41510 ± 16% +336.6% 181222 ± 19% interrupts.CPU34.RES:Rescheduling_interrupts
12285 ± 18% +490.3% 72528 ± 15% interrupts.CPU35.CAL:Function_call_interrupts
210838 ± 3% +342.1% 932098 ± 10% interrupts.CPU35.LOC:Local_timer_interrupts
39795 ± 16% +346.4% 177651 ± 18% interrupts.CPU35.RES:Rescheduling_interrupts
12611 ± 18% +442.1% 68370 ± 15% interrupts.CPU36.CAL:Function_call_interrupts
211040 ± 3% +341.7% 932241 ± 10% interrupts.CPU36.LOC:Local_timer_interrupts
41334 ± 14% +319.0% 173184 ± 14% interrupts.CPU36.RES:Rescheduling_interrupts
12369 ± 8% +428.5% 65373 ± 16% interrupts.CPU37.CAL:Function_call_interrupts
210792 ± 3% +342.3% 932306 ± 10% interrupts.CPU37.LOC:Local_timer_interrupts
41484 ± 3% +302.3% 166879 ± 17% interrupts.CPU37.RES:Rescheduling_interrupts
11497 ± 10% +526.4% 72019 ± 14% interrupts.CPU38.CAL:Function_call_interrupts
211325 ± 3% +341.1% 932259 ± 10% interrupts.CPU38.LOC:Local_timer_interrupts
38450 ± 10% +347.6% 172118 ± 15% interrupts.CPU38.RES:Rescheduling_interrupts
10780 ± 11% +521.6% 67013 ± 22% interrupts.CPU39.CAL:Function_call_interrupts
211485 ± 3% +340.7% 932067 ± 10% interrupts.CPU39.LOC:Local_timer_interrupts
36165 ± 16% +358.1% 165667 ± 19% interrupts.CPU39.RES:Rescheduling_interrupts
14729 ± 12% +583.1% 100612 ± 11% interrupts.CPU4.CAL:Function_call_interrupts
215071 +334.6% 934594 ± 10% interrupts.CPU4.LOC:Local_timer_interrupts
38922 ± 12% +596.1% 270954 ± 9% interrupts.CPU4.RES:Rescheduling_interrupts
11714 ± 11% +478.6% 67782 ± 18% interrupts.CPU40.CAL:Function_call_interrupts
211665 ± 3% +340.4% 932137 ± 10% interrupts.CPU40.LOC:Local_timer_interrupts
39783 ± 8% +316.7% 165778 ± 19% interrupts.CPU40.RES:Rescheduling_interrupts
12754 ± 11% +494.2% 75787 ± 19% interrupts.CPU41.CAL:Function_call_interrupts
211241 ± 3% +341.2% 931979 ± 10% interrupts.CPU41.LOC:Local_timer_interrupts
42249 ± 15% +328.2% 180915 ± 11% interrupts.CPU41.RES:Rescheduling_interrupts
11551 ± 7% +517.6% 71343 ± 18% interrupts.CPU42.CAL:Function_call_interrupts
211045 ± 3% +341.6% 932018 ± 10% interrupts.CPU42.LOC:Local_timer_interrupts
38724 ± 12% +352.1% 175085 ± 21% interrupts.CPU42.RES:Rescheduling_interrupts
11922 ± 9% +502.1% 71779 ± 14% interrupts.CPU43.CAL:Function_call_interrupts
211129 ± 3% +341.5% 932145 ± 10% interrupts.CPU43.LOC:Local_timer_interrupts
39071 ± 10% +313.6% 161615 ± 14% interrupts.CPU43.RES:Rescheduling_interrupts
14499 ± 13% +462.4% 81548 ± 19% interrupts.CPU44.CAL:Function_call_interrupts
211030 ± 3% +341.7% 932041 ± 10% interrupts.CPU44.LOC:Local_timer_interrupts
48536 ± 10% +331.1% 209248 ± 15% interrupts.CPU44.RES:Rescheduling_interrupts
11703 ± 11% +521.9% 72776 ± 14% interrupts.CPU45.CAL:Function_call_interrupts
210939 ± 3% +341.8% 932031 ± 10% interrupts.CPU45.LOC:Local_timer_interrupts
39027 ± 15% +356.0% 177960 ± 12% interrupts.CPU45.RES:Rescheduling_interrupts
11835 ± 11% +474.9% 68039 ± 15% interrupts.CPU46.CAL:Function_call_interrupts
211073 ± 3% +341.6% 932079 ± 10% interrupts.CPU46.LOC:Local_timer_interrupts
39506 ± 10% +318.3% 165252 ± 15% interrupts.CPU46.RES:Rescheduling_interrupts
12307 ± 6% +526.2% 77068 ± 23% interrupts.CPU47.CAL:Function_call_interrupts
210925 ± 3% +341.9% 932018 ± 10% interrupts.CPU47.LOC:Local_timer_interrupts
39611 ± 11% +350.4% 178419 ± 20% interrupts.CPU47.RES:Rescheduling_interrupts
18268 ± 15% +493.2% 108365 ± 8% interrupts.CPU48.CAL:Function_call_interrupts
215396 +334.1% 935137 ± 10% interrupts.CPU48.LOC:Local_timer_interrupts
43664 ± 13% +537.2% 278221 ± 6% interrupts.CPU48.RES:Rescheduling_interrupts
17607 ± 9% +468.7% 100127 ± 12% interrupts.CPU49.CAL:Function_call_interrupts
215275 +334.5% 935364 ± 10% interrupts.CPU49.LOC:Local_timer_interrupts
44432 ± 12% +500.3% 266724 ± 10% interrupts.CPU49.RES:Rescheduling_interrupts
15125 ± 20% +618.9% 108740 ± 12% interrupts.CPU5.CAL:Function_call_interrupts
215004 +334.8% 934784 ± 10% interrupts.CPU5.LOC:Local_timer_interrupts
37525 ± 7% +657.3% 284200 ± 9% interrupts.CPU5.RES:Rescheduling_interrupts
15786 ± 10% +531.0% 99608 ± 15% interrupts.CPU50.CAL:Function_call_interrupts
215121 +334.5% 934744 ± 10% interrupts.CPU50.LOC:Local_timer_interrupts
38299 ± 11% +549.0% 248580 ± 9% interrupts.CPU50.RES:Rescheduling_interrupts
18542 ± 14% +563.8% 123091 ± 9% interrupts.CPU51.CAL:Function_call_interrupts
215218 +334.8% 935794 ± 10% interrupts.CPU51.LOC:Local_timer_interrupts
48341 ± 15% +568.2% 323015 ± 9% interrupts.CPU51.RES:Rescheduling_interrupts
15807 ± 12% +520.2% 98035 ± 15% interrupts.CPU52.CAL:Function_call_interrupts
215047 +334.8% 935096 ± 10% interrupts.CPU52.LOC:Local_timer_interrupts
40324 ± 14% +530.2% 254126 ± 10% interrupts.CPU52.RES:Rescheduling_interrupts
15759 ± 15% +535.9% 100220 ± 10% interrupts.CPU53.CAL:Function_call_interrupts
215178 +334.6% 935127 ± 10% interrupts.CPU53.LOC:Local_timer_interrupts
38916 ± 17% +591.3% 269042 ± 13% interrupts.CPU53.RES:Rescheduling_interrupts
14293 ± 11% +595.2% 99372 ± 13% interrupts.CPU54.CAL:Function_call_interrupts
215160 +334.7% 935306 ± 10% interrupts.CPU54.LOC:Local_timer_interrupts
37757 ± 15% +608.9% 267661 ± 12% interrupts.CPU54.RES:Rescheduling_interrupts
15699 ± 11% +524.5% 98036 ± 8% interrupts.CPU55.CAL:Function_call_interrupts
214896 +335.0% 934796 ± 10% interrupts.CPU55.LOC:Local_timer_interrupts
38394 ± 6% +597.9% 267954 ± 6% interrupts.CPU55.RES:Rescheduling_interrupts
15483 ± 9% +477.6% 89423 ± 17% interrupts.CPU56.CAL:Function_call_interrupts
214958 +334.8% 934536 ± 10% interrupts.CPU56.LOC:Local_timer_interrupts
38584 ± 9% +534.0% 244608 ± 14% interrupts.CPU56.RES:Rescheduling_interrupts
19316 ± 14% +480.2% 112067 ± 10% interrupts.CPU57.CAL:Function_call_interrupts
215246 +334.6% 935390 ± 10% interrupts.CPU57.LOC:Local_timer_interrupts
50490 ± 12% +518.8% 312436 ± 11% interrupts.CPU57.RES:Rescheduling_interrupts
16033 ± 17% +547.4% 103800 ± 16% interrupts.CPU58.CAL:Function_call_interrupts
214987 +334.8% 934852 ± 10% interrupts.CPU58.LOC:Local_timer_interrupts
40141 ± 20% +584.1% 274612 ± 14% interrupts.CPU58.RES:Rescheduling_interrupts
16569 ± 8% +517.6% 102334 ± 14% interrupts.CPU59.CAL:Function_call_interrupts
214995 +335.2% 935578 ± 10% interrupts.CPU59.LOC:Local_timer_interrupts
43011 ± 14% +540.6% 275509 ± 12% interrupts.CPU59.RES:Rescheduling_interrupts
14446 ± 14% +636.9% 106459 ± 7% interrupts.CPU6.CAL:Function_call_interrupts
215046 +334.6% 934671 ± 10% interrupts.CPU6.LOC:Local_timer_interrupts
36164 ± 14% +679.0% 281718 ± 9% interrupts.CPU6.RES:Rescheduling_interrupts
16469 ± 13% +462.7% 92676 ± 8% interrupts.CPU60.CAL:Function_call_interrupts
214951 +335.0% 935138 ± 10% interrupts.CPU60.LOC:Local_timer_interrupts
40102 ± 10% +537.4% 255621 ± 8% interrupts.CPU60.RES:Rescheduling_interrupts
14788 ± 8% +543.3% 95134 ± 12% interrupts.CPU61.CAL:Function_call_interrupts
214920 +335.0% 934826 ± 10% interrupts.CPU61.LOC:Local_timer_interrupts
37932 ± 10% +587.7% 260844 ± 11% interrupts.CPU61.RES:Rescheduling_interrupts
15536 ± 15% +538.2% 99149 ± 9% interrupts.CPU62.CAL:Function_call_interrupts
215142 +334.9% 935664 ± 10% interrupts.CPU62.LOC:Local_timer_interrupts
40517 ± 14% +563.9% 268981 ± 8% interrupts.CPU62.RES:Rescheduling_interrupts
16322 ± 17% +523.6% 101790 ± 11% interrupts.CPU63.CAL:Function_call_interrupts
214765 +335.4% 935095 ± 10% interrupts.CPU63.LOC:Local_timer_interrupts
38836 ± 6% +599.4% 271638 ± 11% interrupts.CPU63.RES:Rescheduling_interrupts
14025 ± 9% +620.7% 101075 ± 12% interrupts.CPU64.CAL:Function_call_interrupts
215186 +334.3% 934597 ± 10% interrupts.CPU64.LOC:Local_timer_interrupts
35818 ± 10% +662.2% 272998 ± 12% interrupts.CPU64.RES:Rescheduling_interrupts
16338 ± 11% +520.9% 101448 ± 12% interrupts.CPU65.CAL:Function_call_interrupts
215008 +334.8% 934858 ± 10% interrupts.CPU65.LOC:Local_timer_interrupts
43984 ± 8% +525.1% 274931 ± 9% interrupts.CPU65.RES:Rescheduling_interrupts
15465 ± 7% +503.6% 93350 ± 6% interrupts.CPU66.CAL:Function_call_interrupts
215220 +334.4% 934966 ± 10% interrupts.CPU66.LOC:Local_timer_interrupts
36413 ± 9% +593.2% 252400 ± 8% interrupts.CPU66.RES:Rescheduling_interrupts
16710 ± 13% +452.7% 92368 ± 8% interrupts.CPU67.CAL:Function_call_interrupts
215205 +334.4% 934778 ± 10% interrupts.CPU67.LOC:Local_timer_interrupts
43295 ± 15% +493.7% 257039 ± 9% interrupts.CPU67.RES:Rescheduling_interrupts
17364 ± 10% +594.2% 120547 ± 13% interrupts.CPU68.CAL:Function_call_interrupts
215214 +334.6% 935344 ± 10% interrupts.CPU68.LOC:Local_timer_interrupts
46610 ± 11% +612.8% 332222 ± 15% interrupts.CPU68.RES:Rescheduling_interrupts
16527 ± 11% +509.4% 100719 ± 10% interrupts.CPU69.CAL:Function_call_interrupts
215185 +334.3% 934545 ± 10% interrupts.CPU69.LOC:Local_timer_interrupts
42524 ± 10% +537.1% 270904 ± 9% interrupts.CPU69.RES:Rescheduling_interrupts
15920 ± 7% +559.0% 104924 ± 11% interrupts.CPU7.CAL:Function_call_interrupts
215135 +334.4% 934463 ± 10% interrupts.CPU7.LOC:Local_timer_interrupts
39874 ± 12% +588.1% 274393 ± 9% interrupts.CPU7.RES:Rescheduling_interrupts
15294 ± 15% +581.5% 104226 ± 9% interrupts.CPU70.CAL:Function_call_interrupts
214898 +335.0% 934745 ± 10% interrupts.CPU70.LOC:Local_timer_interrupts
39234 ± 12% +603.6% 276066 ± 10% interrupts.CPU70.RES:Rescheduling_interrupts
14747 ± 10% +533.7% 93447 ± 9% interrupts.CPU71.CAL:Function_call_interrupts
214990 +334.6% 934356 ± 10% interrupts.CPU71.LOC:Local_timer_interrupts
37619 ± 12% +578.0% 255055 ± 9% interrupts.CPU71.RES:Rescheduling_interrupts
11756 ± 8% +564.9% 78176 ± 19% interrupts.CPU72.CAL:Function_call_interrupts
211065 ± 3% +341.7% 932320 ± 10% interrupts.CPU72.LOC:Local_timer_interrupts
36976 ± 5% +396.5% 183605 ± 19% interrupts.CPU72.RES:Rescheduling_interrupts
11341 ± 13% +538.2% 72380 ± 17% interrupts.CPU73.CAL:Function_call_interrupts
211040 ± 3% +341.8% 932446 ± 10% interrupts.CPU73.LOC:Local_timer_interrupts
38080 ± 10% +319.0% 159549 ± 14% interrupts.CPU73.RES:Rescheduling_interrupts
11358 ± 13% +487.5% 66730 ± 19% interrupts.CPU74.CAL:Function_call_interrupts
211211 ± 3% +341.4% 932219 ± 10% interrupts.CPU74.LOC:Local_timer_interrupts
37050 ± 9% +317.8% 154807 ± 21% interrupts.CPU74.RES:Rescheduling_interrupts
13810 ± 18% +539.7% 88347 ± 22% interrupts.CPU75.CAL:Function_call_interrupts
211632 ± 3% +340.5% 932240 ± 10% interrupts.CPU75.LOC:Local_timer_interrupts
46726 ± 16% +294.1% 184144 ± 21% interrupts.CPU75.RES:Rescheduling_interrupts
10807 ± 11% +511.9% 66130 ± 21% interrupts.CPU76.CAL:Function_call_interrupts
210591 ± 4% +342.8% 932554 ± 10% interrupts.CPU76.LOC:Local_timer_interrupts
36061 ± 8% +359.6% 165752 ± 19% interrupts.CPU76.RES:Rescheduling_interrupts
12237 ± 15% +434.6% 65419 ± 12% interrupts.CPU77.CAL:Function_call_interrupts
211144 ± 3% +341.5% 932296 ± 10% interrupts.CPU77.LOC:Local_timer_interrupts
39508 ± 17% +324.3% 167619 ± 15% interrupts.CPU77.RES:Rescheduling_interrupts
11371 ± 10% +493.1% 67450 ± 25% interrupts.CPU78.CAL:Function_call_interrupts
211171 ± 3% +341.6% 932566 ± 10% interrupts.CPU78.LOC:Local_timer_interrupts
39133 ± 11% +307.3% 159393 ± 17% interrupts.CPU78.RES:Rescheduling_interrupts
11992 ± 15% +447.4% 65653 ± 16% interrupts.CPU79.CAL:Function_call_interrupts
211514 ± 3% +340.9% 932535 ± 10% interrupts.CPU79.LOC:Local_timer_interrupts
38749 ± 14% +312.5% 159820 ± 17% interrupts.CPU79.RES:Rescheduling_interrupts
16567 ± 7% +520.6% 102822 ± 16% interrupts.CPU8.CAL:Function_call_interrupts
214940 +335.0% 935038 ± 10% interrupts.CPU8.LOC:Local_timer_interrupts
39876 ± 10% +564.4% 264927 ± 14% interrupts.CPU8.RES:Rescheduling_interrupts
11734 ± 13% +454.1% 65021 ± 18% interrupts.CPU80.CAL:Function_call_interrupts
211370 ± 3% +341.0% 932142 ± 10% interrupts.CPU80.LOC:Local_timer_interrupts
38773 ± 11% +312.9% 160105 ± 21% interrupts.CPU80.RES:Rescheduling_interrupts
14939 ± 18% +408.1% 75910 ± 19% interrupts.CPU81.CAL:Function_call_interrupts
211084 ± 3% +341.7% 932262 ± 10% interrupts.CPU81.LOC:Local_timer_interrupts
48603 ± 16% +275.1% 182315 ± 15% interrupts.CPU81.RES:Rescheduling_interrupts
11869 ± 8% +473.9% 68118 ± 19% interrupts.CPU82.CAL:Function_call_interrupts
211199 ± 3% +341.4% 932128 ± 10% interrupts.CPU82.LOC:Local_timer_interrupts
37756 ± 10% +327.6% 161457 ± 18% interrupts.CPU82.RES:Rescheduling_interrupts
11067 ± 12% +452.9% 61193 ± 17% interrupts.CPU83.CAL:Function_call_interrupts
210851 ± 3% +342.1% 932154 ± 10% interrupts.CPU83.LOC:Local_timer_interrupts
35926 ± 17% +349.1% 161329 ± 19% interrupts.CPU83.RES:Rescheduling_interrupts
12004 ± 11% +442.1% 65082 ± 13% interrupts.CPU84.CAL:Function_call_interrupts
211095 ± 3% +341.6% 932195 ± 10% interrupts.CPU84.LOC:Local_timer_interrupts
39750 ± 12% +296.4% 157575 ± 13% interrupts.CPU84.RES:Rescheduling_interrupts
10882 ± 17% +454.9% 60385 ± 20% interrupts.CPU85.CAL:Function_call_interrupts
210894 ± 3% +342.2% 932602 ± 10% interrupts.CPU85.LOC:Local_timer_interrupts
36026 ± 9% +326.7% 153742 ± 16% interrupts.CPU85.RES:Rescheduling_interrupts
10709 ± 12% +465.1% 60518 ± 17% interrupts.CPU86.CAL:Function_call_interrupts
211324 ± 3% +341.1% 932168 ± 10% interrupts.CPU86.LOC:Local_timer_interrupts
37578 ± 10% +310.7% 154349 ± 14% interrupts.CPU86.RES:Rescheduling_interrupts
10391 ± 7% +487.4% 61039 ± 19% interrupts.CPU87.CAL:Function_call_interrupts
211614 ± 3% +340.5% 932145 ± 10% interrupts.CPU87.LOC:Local_timer_interrupts
37149 ± 11% +303.5% 149913 ± 19% interrupts.CPU87.RES:Rescheduling_interrupts
11430 ± 13% +445.0% 62300 ± 10% interrupts.CPU88.CAL:Function_call_interrupts
211688 ± 3% +340.4% 932276 ± 10% interrupts.CPU88.LOC:Local_timer_interrupts
39433 ± 12% +299.1% 157364 ± 13% interrupts.CPU88.RES:Rescheduling_interrupts
11172 ± 13% +501.5% 67200 ± 9% interrupts.CPU89.CAL:Function_call_interrupts
211317 ± 3% +341.0% 931996 ± 10% interrupts.CPU89.LOC:Local_timer_interrupts
37836 ± 11% +326.5% 161361 ± 15% interrupts.CPU89.RES:Rescheduling_interrupts
20044 ± 10% +542.2% 128730 ± 11% interrupts.CPU9.CAL:Function_call_interrupts
215239 +334.4% 935056 ± 10% interrupts.CPU9.LOC:Local_timer_interrupts
50564 ± 11% +589.3% 348530 ± 10% interrupts.CPU9.RES:Rescheduling_interrupts
11437 ± 11% +496.5% 68227 ± 19% interrupts.CPU90.CAL:Function_call_interrupts
211082 ± 3% +341.6% 932106 ± 10% interrupts.CPU90.LOC:Local_timer_interrupts
38314 ± 15% +332.9% 165869 ± 17% interrupts.CPU90.RES:Rescheduling_interrupts
11417 ± 12% +471.0% 65189 ± 21% interrupts.CPU91.CAL:Function_call_interrupts
211264 ± 3% +341.3% 932316 ± 10% interrupts.CPU91.LOC:Local_timer_interrupts
39753 ± 8% +297.2% 157890 ± 16% interrupts.CPU91.RES:Rescheduling_interrupts
13444 ± 9% +460.9% 75411 ± 26% interrupts.CPU92.CAL:Function_call_interrupts
211136 ± 3% +341.5% 932121 ± 10% interrupts.CPU92.LOC:Local_timer_interrupts
45188 ± 11% +312.8% 186532 ± 22% interrupts.CPU92.RES:Rescheduling_interrupts
11442 ± 7% +448.4% 62752 ± 15% interrupts.CPU93.CAL:Function_call_interrupts
211003 ± 3% +341.7% 932090 ± 10% interrupts.CPU93.LOC:Local_timer_interrupts
38950 ± 11% +307.7% 158788 ± 14% interrupts.CPU93.RES:Rescheduling_interrupts
11369 ± 9% +486.9% 66731 ± 19% interrupts.CPU94.CAL:Function_call_interrupts
211027 ± 3% +341.7% 932096 ± 10% interrupts.CPU94.LOC:Local_timer_interrupts
37074 ± 5% +344.8% 164911 ± 15% interrupts.CPU94.RES:Rescheduling_interrupts
10867 ± 12% +565.1% 72276 ± 24% interrupts.CPU95.CAL:Function_call_interrupts
210947 ± 3% +341.8% 932068 ± 10% interrupts.CPU95.LOC:Local_timer_interrupts
36921 ± 13% +370.7% 173807 ± 25% interrupts.CPU95.RES:Rescheduling_interrupts
1413 ± 7% +84.1% 2602 ± 17% interrupts.IWI:IRQ_work_interrupts
20461203 ± 2% +338.0% 89624783 ± 10% interrupts.LOC:Local_timer_interrupts
3923155 ± 3% +452.0% 21654414 ± 9% interrupts.RES:Rescheduling_interrupts
56.71 ± 21% +91.8% 108.78 ± 15% interrupts.TLB:TLB_shootdowns
9.03 ± 8% -5.6 3.47 ± 17% perf-profile.calltrace.cycles-pp._raw_spin_lock.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.88 ± 6% -5.4 2.49 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.28 ± 3% -5.0 46.32 perf-profile.calltrace.cycles-pp.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.30 ± 6% -4.7 2.64 ± 18% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.56 ± 6% -4.6 1.95 ± 21% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
97.54 -1.5 96.02 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
97.69 -1.4 96.34 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
2.11 -0.6 1.48 ± 6% perf-profile.calltrace.cycles-pp.ksys_msgctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.76 ± 2% -0.5 1.30 ± 5% perf-profile.calltrace.cycles-pp.msgctl_info.ksys_msgctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.77 ± 5% +0.1 0.87 ± 5% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.__kmalloc.load_msg.do_msgsnd.do_syscall_64
0.55 ± 2% +0.2 0.71 ± 2% perf-profile.calltrace.cycles-pp.store_msg.do_msg_fill.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 9% +0.3 1.05 perf-profile.calltrace.cycles-pp.do_msg_fill.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.08 ± 3% +0.4 1.43 ± 13% perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_memcg_lruvec_state.kfree.free_msg.do_msgrcv
1.07 ± 3% +0.4 1.43 ± 12% perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_memcg_lruvec_state.__kmalloc.load_msg.do_msgsnd
1.13 ± 4% +0.4 1.50 ± 12% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.kfree.free_msg.do_msgrcv.do_syscall_64
1.13 ± 3% +0.4 1.51 ± 11% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__kmalloc.load_msg.do_msgsnd.do_syscall_64
3.54 ± 3% +0.5 4.04 ± 7% perf-profile.calltrace.cycles-pp.__kmalloc.load_msg.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.60 ± 3% perf-profile.calltrace.cycles-pp.wake_q_add.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.62 ± 5% perf-profile.calltrace.cycles-pp.pick_next_task_fair.pick_next_task.__schedule.schedule.do_msgrcv
2.55 ± 3% +0.6 3.20 ± 9% perf-profile.calltrace.cycles-pp.kfree.free_msg.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.04 ± 2% +0.7 4.70 ± 7% perf-profile.calltrace.cycles-pp.load_msg.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.66 ± 8% perf-profile.calltrace.cycles-pp.select_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.wake_up_q
0.23 ±115% +0.7 0.93 ± 7% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.wake_up_q.do_msgsnd
0.00 +0.7 0.74 ± 5% perf-profile.calltrace.cycles-pp.switch_fpu_return.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.33 ± 86% +0.8 1.08 ± 6% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.wake_up_q.do_msgsnd.do_syscall_64
0.00 +0.8 0.76 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task.__schedule.schedule.do_msgrcv.do_syscall_64
0.00 +0.8 0.84 ± 3% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.do_msgrcv.do_syscall_64
0.00 +0.8 0.85 ± 3% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 +1.0 0.98 ± 6% perf-profile.calltrace.cycles-pp.pick_next_task_fair.pick_next_task.__schedule.schedule.exit_to_user_mode_prepare
0.00 +1.1 1.07 ± 5% perf-profile.calltrace.cycles-pp.pick_next_task.__schedule.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 +4.7 4.66 ± 6% perf-profile.calltrace.cycles-pp.__get_user_nocheck_8.perf_callchain_user.get_perf_callchain.perf_callchain.perf_prepare_sample
0.00 +4.9 4.89 ± 7% perf-profile.calltrace.cycles-pp.unwind_next_frame.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample
1.02 ± 13% +5.0 6.03 ± 7% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_wakeup_template
1.02 ± 13% +5.0 6.05 ± 7% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_wakeup_template.ttwu_do_wakeup
1.02 ± 13% +5.0 6.06 ± 7% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_wakeup_template.ttwu_do_wakeup.try_to_wake_up
1.09 ± 12% +5.0 6.14 ± 7% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule.do_msgrcv
1.16 ± 12% +5.2 6.37 ± 7% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule.do_msgrcv.do_syscall_64
1.07 ± 11% +5.2 6.29 ± 7% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule.exit_to_user_mode_prepare
1.08 ± 13% +5.2 6.32 ± 7% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_wakeup_template.ttwu_do_wakeup.try_to_wake_up.wake_up_q
1.11 ± 13% +5.3 6.45 ± 7% perf-profile.calltrace.cycles-pp.perf_trace_sched_wakeup_template.ttwu_do_wakeup.try_to_wake_up.wake_up_q.do_msgsnd
1.12 ± 11% +5.4 6.53 ± 7% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode
1.23 ± 12% +5.5 6.74 ± 6% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity
1.27 ± 12% +5.6 6.90 ± 6% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.wake_up_q.do_msgsnd.do_syscall_64
1.25 ± 13% +5.7 6.92 ± 6% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair
1.21 ± 13% +5.7 6.91 ± 7% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.enqueue_entity
1.28 ± 13% +5.8 7.07 ± 6% perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair.__schedule
1.23 ± 13% +5.8 7.08 ± 7% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.enqueue_entity.enqueue_task_fair
1.26 ± 12% +6.0 7.24 ± 7% perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.enqueue_entity.enqueue_task_fair.ttwu_do_activate
1.34 ± 13% +6.1 7.44 ± 6% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.32 ± 12% +6.2 7.55 ± 7% perf-profile.calltrace.cycles-pp.update_curr.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
1.45 ± 12% +6.3 7.78 ± 6% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.do_msgrcv
1.45 ± 12% +6.6 8.00 ± 7% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.wake_up_q
1.60 ± 12% +7.0 8.57 ± 6% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.do_msgrcv.do_syscall_64
1.60 ± 12% +7.2 8.77 ± 7% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.wake_up_q.do_msgsnd
1.61 ± 12% +7.2 8.82 ± 7% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.wake_up_q.do_msgsnd.do_syscall_64
2.28 ± 9% +7.5 9.76 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
2.31 ± 9% +7.5 9.84 ± 6% perf-profile.calltrace.cycles-pp.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.59 ± 9% +8.1 10.73 ± 6% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +8.7 8.68 ± 7% perf-profile.calltrace.cycles-pp.unwind_next_frame.__unwind_start.perf_callchain_kernel.get_perf_callchain.perf_callchain
0.00 +9.5 9.52 ± 7% perf-profile.calltrace.cycles-pp.perf_callchain_user.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
0.00 +9.9 9.92 ± 7% perf-profile.calltrace.cycles-pp.__unwind_start.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample
2.09 ± 12% +9.9 12.03 ± 7% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch
2.10 ± 12% +10.0 12.09 ± 7% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule
2.11 ± 12% +10.0 12.11 ± 7% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule.schedule
0.00 +10.9 10.90 ± 7% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.__get_user_nocheck_8.perf_callchain_user.get_perf_callchain.perf_callchain
2.42 ± 13% +11.1 13.56 ± 7% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime
2.43 ± 13% +11.2 13.63 ± 7% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr
3.82 ± 12% +14.0 17.81 ± 6% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.95 ± 12% +14.1 18.05 ± 6% perf-profile.calltrace.cycles-pp.wake_up_q.do_msgsnd.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.85 ± 11% +14.1 17.97 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.87 ± 11% +14.2 18.06 ± 6% perf-profile.calltrace.cycles-pp.schedule.do_msgrcv.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.09 ± 58% +15.9 17.94 ± 7% perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
4.55 ± 13% +23.7 28.23 ± 7% perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
4.57 ± 13% +23.8 28.34 ± 7% perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow
4.92 ± 13% +25.1 30.04 ± 7% perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event
17.64 ± 7% -10.8 6.80 ± 16% perf-profile.children.cycles-pp._raw_spin_lock
14.17 ± 6% -9.4 4.74 ± 19% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
51.34 ± 3% -5.0 46.35 perf-profile.children.cycles-pp.do_msgsnd
98.47 -1.6 96.87 perf-profile.children.cycles-pp.do_syscall_64
98.62 -1.4 97.18 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.11 -0.6 1.49 ± 6% perf-profile.children.cycles-pp.ksys_msgctl
0.84 ± 10% -0.6 0.26 ± 13% perf-profile.children.cycles-pp.__slab_free
1.77 ± 2% -0.5 1.30 ± 5% perf-profile.children.cycles-pp.msgctl_info
1.10 ± 7% -0.2 0.93 ± 3% perf-profile.children.cycles-pp.ipc_obtain_object_check
0.31 ± 8% -0.2 0.15 ± 15% perf-profile.children.cycles-pp.msgctl_stat
0.18 ± 11% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.___slab_alloc
0.20 ± 5% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
0.18 ± 9% -0.1 0.07 ± 9% perf-profile.children.cycles-pp.__slab_alloc
0.17 ± 6% -0.1 0.06 ± 10% perf-profile.children.cycles-pp.page_counter_try_charge
0.24 ± 5% -0.1 0.14 ± 4% perf-profile.children.cycles-pp.obj_cgroup_charge
0.21 ± 4% -0.1 0.12 ± 10% perf-profile.children.cycles-pp.down_read
0.19 ± 6% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.drain_obj_stock
0.26 ± 3% -0.1 0.18 ± 3% perf-profile.children.cycles-pp.refill_obj_stock
0.16 ± 6% -0.1 0.08 ± 12% perf-profile.children.cycles-pp.up_read
0.16 ± 4% -0.1 0.08 ± 7% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.11 ± 6% -0.1 0.04 ± 35% perf-profile.children.cycles-pp.page_counter_uncharge
0.17 ± 3% -0.1 0.11 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
0.12 ± 16% -0.1 0.07 ± 22% perf-profile.children.cycles-pp.shmem_write_end
0.46 ± 3% -0.0 0.42 ± 4% perf-profile.children.cycles-pp.task_tick_fair
0.14 ± 3% -0.0 0.11 ± 7% perf-profile.children.cycles-pp.ipcperms
0.12 ± 9% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.__cond_resched
0.06 ± 11% +0.0 0.08 ± 17% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.13 ± 3% +0.0 0.16 ± 3% perf-profile.children.cycles-pp.__might_sleep
0.04 ± 40% +0.0 0.07 ± 4% perf-profile.children.cycles-pp.security_msg_msg_free
0.10 ± 5% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.__get_user_8
0.18 +0.0 0.21 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.12 ± 3% +0.0 0.16 ± 4% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.08 ± 10% +0.0 0.12 ± 15% perf-profile.children.cycles-pp.mem_cgroup_charge
0.18 ± 8% +0.0 0.22 ± 3% perf-profile.children.cycles-pp.__entry_text_start
0.01 ±244% +0.1 0.06 ± 7% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.18 ± 8% +0.1 0.23 ± 13% perf-profile.children.cycles-pp.shmem_add_to_page_cache
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.local_clock
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.__rdgsbase_inactive
0.05 ± 8% +0.1 0.11 ± 5% perf-profile.children.cycles-pp.resched_curr
0.00 +0.1 0.06 ± 5% perf-profile.children.cycles-pp.__x64_sys_msgrcv
0.22 ± 10% +0.1 0.28 ± 3% perf-profile.children.cycles-pp.__check_heap_object
0.32 ± 9% +0.1 0.38 ± 13% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.00 +0.1 0.07 ± 10% perf-profile.children.cycles-pp.rb_erase
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__bad_area_nosemaphore
0.12 ± 10% +0.1 0.18 ± 9% perf-profile.children.cycles-pp.do_user_addr_fault
0.32 ± 9% +0.1 0.38 ± 13% perf-profile.children.cycles-pp.shmem_write_begin
0.00 +0.1 0.07 ± 10% perf-profile.children.cycles-pp.ex_handler_uaccess
0.00 +0.1 0.07 ± 10% perf-profile.children.cycles-pp.cpumask_next
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__x64_sys_msgsnd
0.00 +0.1 0.07 ± 8% perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.07 ± 9% perf-profile.children.cycles-pp.in_task_stack
0.12 ± 8% +0.1 0.19 ± 6% perf-profile.children.cycles-pp.__virt_addr_valid
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.__enqueue_entity
0.19 ± 3% +0.1 0.27 ± 2% perf-profile.children.cycles-pp._copy_to_user
0.00 +0.1 0.08 ± 8% perf-profile.children.cycles-pp.is_module_text_address
0.17 ± 5% +0.1 0.25 ± 3% perf-profile.children.cycles-pp._copy_from_user
0.03 ± 87% +0.1 0.12 ± 9% perf-profile.children.cycles-pp.cpumask_next_wrap
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.__cgroup_account_cputime
0.16 ± 16% +0.1 0.25 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.06 ± 7% +0.1 0.15 ± 5% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.00 +0.1 0.09 ± 6% perf-profile.children.cycles-pp.perf_swevent_get_recursion_context
0.78 ± 5% +0.1 0.88 ± 5% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.00 +0.1 0.10 ± 14% perf-profile.children.cycles-pp.rcu_is_watching
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.clear_buddies
0.00 +0.1 0.11 ± 8% perf-profile.children.cycles-pp.__wrgsbase_inactive
0.17 ± 13% +0.1 0.28 ± 8% perf-profile.children.cycles-pp.available_idle_cpu
0.00 +0.1 0.11 ± 8% perf-profile.children.cycles-pp.perf_trace_run_bpf_submit
0.21 ± 4% +0.1 0.32 ± 4% perf-profile.children.cycles-pp.ksys_msgsnd
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp._find_next_bit
0.19 ± 7% +0.1 0.30 ± 8% perf-profile.children.cycles-pp.__radix_tree_lookup
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp.update_min_vruntime
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.00 +0.1 0.12 ± 7% perf-profile.children.cycles-pp.ftrace_ops_trampoline
0.35 ± 5% +0.1 0.47 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.13 ± 10% perf-profile.children.cycles-pp.tracing_gen_ctx_irq_test
0.00 +0.1 0.13 ± 7% perf-profile.children.cycles-pp.is_ftrace_trampoline
0.00 +0.1 0.13 ± 11% perf-profile.children.cycles-pp.perf_instruction_pointer
0.00 +0.1 0.13 ± 12% perf-profile.children.cycles-pp.set_next_buddy
0.00 +0.1 0.14 ± 9% perf-profile.children.cycles-pp.get_stack_info_noinstr
0.04 ± 63% +0.1 0.18 ± 6% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.56 ± 2% +0.2 0.72 ± 2% perf-profile.children.cycles-pp.store_msg
0.29 ± 4% +0.2 0.45 ± 2% perf-profile.children.cycles-pp.__might_fault
0.07 ± 10% +0.2 0.23 ± 3% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.2 0.17 ± 8% perf-profile.children.cycles-pp.__calc_delta
0.07 ± 14% +0.2 0.24 ± 6% perf-profile.children.cycles-pp.___perf_sw_event
0.01 ±244% +0.2 0.18 ± 6% perf-profile.children.cycles-pp.perf_misc_flags
0.02 ±116% +0.2 0.20 ± 10% perf-profile.children.cycles-pp.cpuacct_charge
0.53 ± 6% +0.2 0.71 ± 3% perf-profile.children.cycles-pp.__check_object_size
0.50 ± 5% +0.2 0.68 ± 3% perf-profile.children.cycles-pp.wake_q_add
0.10 ± 11% +0.2 0.28 ± 6% perf-profile.children.cycles-pp.update_cfs_group
0.03 ± 86% +0.2 0.22 ± 6% perf-profile.children.cycles-pp.__is_insn_slot_addr
0.00 +0.2 0.19 ± 12% perf-profile.children.cycles-pp.get_callchain_entry
0.06 ± 14% +0.2 0.26 ± 6% perf-profile.children.cycles-pp.get_stack_info
0.03 ± 86% +0.2 0.22 ± 9% perf-profile.children.cycles-pp.perf_trace_buf_update
0.02 ±115% +0.2 0.22 ± 18% perf-profile.children.cycles-pp.bpf_ksym_find
0.05 ± 41% +0.2 0.25 ± 8% perf-profile.children.cycles-pp.perf_trace_buf_alloc
0.08 ± 9% +0.2 0.29 ± 7% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.2 0.21 ± 8% perf-profile.children.cycles-pp.kvm_is_in_guest
0.10 ± 13% +0.2 0.31 ± 4% perf-profile.children.cycles-pp.pick_next_entity
0.10 ± 10% +0.2 0.32 ± 5% perf-profile.children.cycles-pp.check_preempt_wakeup
0.07 ± 9% +0.2 0.30 ± 6% perf-profile.children.cycles-pp.__update_load_avg_se
0.26 ± 8% +0.2 0.49 ± 16% perf-profile.children.cycles-pp.memcpy_erms
0.11 ± 9% +0.3 0.36 ± 6% perf-profile.children.cycles-pp.check_preempt_curr
0.42 ± 15% +0.3 0.67 ± 8% perf-profile.children.cycles-pp.select_idle_cpu
0.06 ± 16% +0.3 0.32 ± 12% perf-profile.children.cycles-pp.is_bpf_text_address
0.32 ± 8% +0.3 0.58 ± 15% perf-profile.children.cycles-pp.perf_output_copy
0.02 ±115% +0.3 0.29 ± 7% perf-profile.children.cycles-pp.copy_fpregs_to_fpstate
0.80 ± 9% +0.3 1.06 perf-profile.children.cycles-pp.do_msg_fill
0.08 ± 15% +0.3 0.34 ± 7% perf-profile.children.cycles-pp.__task_pid_nr_ns
0.19 ± 4% +0.3 0.47 ± 4% perf-profile.children.cycles-pp.finish_task_switch
0.08 ± 11% +0.3 0.36 ± 6% perf-profile.children.cycles-pp.put_prev_entity
0.25 ± 8% +0.3 0.54 ± 5% perf-profile.children.cycles-pp.__switch_to
0.00 +0.3 0.29 ± 13% perf-profile.children.cycles-pp.perf_tp_event_match
0.06 ± 11% +0.3 0.37 ± 8% perf-profile.children.cycles-pp.bad_get_user
0.41 ± 8% +0.3 0.73 ± 2% perf-profile.children.cycles-pp.raw_spin_rq_lock_nested
0.09 ± 8% +0.3 0.43 ± 6% perf-profile.children.cycles-pp.native_sched_clock
0.48 ± 8% +0.3 0.83 ± 15% perf-profile.children.cycles-pp.perf_output_sample
0.08 ? 17% +0.4 0.43 ? 8% perf-profile.children.cycles-pp.__switch_to_asm
0.06 ? 13% +0.4 0.42 ? 8% perf-profile.children.cycles-pp.ftrace_graph_ret_addr
0.10 ? 12% +0.4 0.48 ? 7% perf-profile.children.cycles-pp.perf_event_pid_type
0.10 ? 6% +0.4 0.50 ? 6% perf-profile.children.cycles-pp.sched_clock_cpu
0.11 ? 13% +0.4 0.54 ? 7% perf-profile.children.cycles-pp.reweight_entity
0.09 ? 13% +0.5 0.55 ? 5% perf-profile.children.cycles-pp.set_next_entity
0.48 ? 15% +0.5 0.95 ? 7% perf-profile.children.cycles-pp.select_idle_sibling
0.11 ? 11% +0.5 0.59 ? 8% perf-profile.children.cycles-pp.perf_output_begin_forward
3.56 ? 2% +0.5 4.07 ? 7% perf-profile.children.cycles-pp.__kmalloc
0.13 ? 12% +0.5 0.65 ? 7% perf-profile.children.cycles-pp.load_new_mm_cr3
0.19 ± 11% +0.6 0.75 ± 5% perf-profile.children.cycles-pp.switch_fpu_return
0.52 ± 14% +0.6 1.08 ± 6% perf-profile.children.cycles-pp.select_task_rq_fair
2.60 ± 3% +0.7 3.26 ± 9% perf-profile.children.cycles-pp.kfree
4.04 ± 2% +0.7 4.71 ± 7% perf-profile.children.cycles-pp.load_msg
2.19 ± 3% +0.7 2.93 ± 12% perf-profile.children.cycles-pp.__mod_memcg_state
2.32 ± 3% +0.8 3.09 ± 11% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.20 ± 9% +0.8 0.97 ± 7% perf-profile.children.cycles-pp.__perf_event_header__init_id
0.16 ± 10% +0.8 0.99 ± 7% perf-profile.children.cycles-pp.cmp_ex_search
0.27 ± 8% +0.9 1.12 ± 7% perf-profile.children.cycles-pp.update_load_avg
0.79 ± 7% +0.9 1.71 ± 3% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.25 ± 12% +1.1 1.37 ± 7% perf-profile.children.cycles-pp.kernel_text_address
0.39 ± 11% +1.3 1.67 ± 5% perf-profile.children.cycles-pp.pick_next_task_fair
0.26 ± 14% +1.3 1.59 ± 7% perf-profile.children.cycles-pp.stack_access_ok
0.31 ± 12% +1.3 1.65 ± 7% perf-profile.children.cycles-pp.__kernel_text_address
0.34 ± 11% +1.3 1.68 ± 6% perf-profile.children.cycles-pp.native_irq_return_iret
0.27 ± 12% +1.3 1.61 ± 7% perf-profile.children.cycles-pp.bsearch
0.28 ± 11% +1.4 1.69 ± 7% perf-profile.children.cycles-pp.search_extable
0.44 ± 10% +1.4 1.85 ± 5% perf-profile.children.cycles-pp.pick_next_task
0.29 ± 11% +1.5 1.74 ± 7% perf-profile.children.cycles-pp.search_exception_tables
0.32 ± 12% +1.6 1.88 ± 7% perf-profile.children.cycles-pp.fixup_exception
0.37 ± 13% +1.6 2.02 ± 7% perf-profile.children.cycles-pp.unwind_get_return_address
0.35 ± 11% +1.7 2.03 ± 7% perf-profile.children.cycles-pp.kernelmode_fixup_or_oops
0.36 ± 14% +1.8 2.18 ± 7% perf-profile.children.cycles-pp.orc_find
0.53 ± 10% +1.9 2.47 ± 6% perf-profile.children.cycles-pp.exc_page_fault
0.61 ± 11% +3.0 3.56 ± 7% perf-profile.children.cycles-pp.__orc_find
1.18 ± 12% +5.3 6.51 ± 6% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
1.34 ± 11% +5.6 6.97 ± 6% perf-profile.children.cycles-pp.ttwu_do_wakeup
1.34 ± 11% +5.7 7.01 ± 7% perf-profile.children.cycles-pp.asm_exc_page_fault
1.48 ± 12% +6.3 7.82 ± 6% perf-profile.children.cycles-pp.dequeue_entity
1.53 ± 11% +6.6 8.08 ± 7% perf-profile.children.cycles-pp.enqueue_entity
1.63 ± 12% +7.0 8.62 ± 6% perf-profile.children.cycles-pp.dequeue_task_fair
1.68 ± 11% +7.2 8.84 ± 7% perf-profile.children.cycles-pp.enqueue_task_fair
1.69 ± 11% +7.2 8.89 ± 7% perf-profile.children.cycles-pp.ttwu_do_activate
1.67 ± 12% +7.5 9.18 ± 7% perf-profile.children.cycles-pp.__get_user_nocheck_8
1.77 ± 11% +7.9 9.66 ± 7% perf-profile.children.cycles-pp.perf_callchain_user
2.62 ± 8% +8.1 10.75 ± 6% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
1.69 ± 12% +8.5 10.17 ± 7% perf-profile.children.cycles-pp.__unwind_start
2.34 ± 11% +10.6 12.97 ± 7% perf-profile.children.cycles-pp.perf_trace_sched_switch
2.98 ± 11% +11.8 14.73 ± 7% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
2.37 ± 12% +11.9 14.32 ± 7% perf-profile.children.cycles-pp.unwind_next_frame
3.19 ± 11% +12.6 15.84 ± 7% perf-profile.children.cycles-pp.update_curr
3.91 ± 12% +14.0 17.91 ± 6% perf-profile.children.cycles-pp.try_to_wake_up
4.06 ± 11% +14.1 18.16 ± 6% perf-profile.children.cycles-pp.wake_up_q
3.11 ± 12% +15.3 18.38 ± 7% perf-profile.children.cycles-pp.perf_callchain_kernel
6.34 ± 10% +21.6 27.96 ± 6% perf-profile.children.cycles-pp.__schedule
6.27 ± 10% +21.7 27.99 ± 6% perf-profile.children.cycles-pp.schedule
4.99 ± 12% +23.7 28.68 ± 7% perf-profile.children.cycles-pp.get_perf_callchain
5.02 ± 12% +23.8 28.79 ± 7% perf-profile.children.cycles-pp.perf_callchain
5.39 ± 12% +25.1 30.54 ± 7% perf-profile.children.cycles-pp.perf_prepare_sample
6.07 ± 11% +26.1 32.13 ± 7% perf-profile.children.cycles-pp.perf_event_output_forward
6.09 ± 11% +26.2 32.28 ± 7% perf-profile.children.cycles-pp.__perf_event_overflow
6.10 ± 11% +26.2 32.33 ± 7% perf-profile.children.cycles-pp.perf_swevent_overflow
6.27 ± 11% +27.0 33.29 ± 7% perf-profile.children.cycles-pp.perf_tp_event
34.00 ± 2% -14.3 19.69 ± 6% perf-profile.self.cycles-pp.do_msgsnd
21.37 ± 3% -11.8 9.61 ± 10% perf-profile.self.cycles-pp.do_msgrcv
14.04 ± 6% -9.3 4.69 ± 19% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.45 ± 13% -1.4 2.06 ± 9% perf-profile.self.cycles-pp._raw_spin_lock
0.84 ± 10% -0.6 0.26 ± 13% perf-profile.self.cycles-pp.__slab_free
1.38 ± 2% -0.3 1.08 ± 5% perf-profile.self.cycles-pp.msgctl_info
0.42 ± 5% -0.2 0.23 ± 9% perf-profile.self.cycles-pp.ipc_obtain_object_check
0.16 ± 6% -0.1 0.06 ± 10% perf-profile.self.cycles-pp.page_counter_try_charge
0.20 ± 3% -0.1 0.12 ± 12% perf-profile.self.cycles-pp.down_read
0.16 ± 6% -0.1 0.08 ± 12% perf-profile.self.cycles-pp.up_read
0.17 ± 2% -0.1 0.10 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.14 ± 5% -0.0 0.10 ± 6% perf-profile.self.cycles-pp.ipcperms
0.17 ± 4% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.__check_object_size
0.09 ± 7% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.__get_user_8
0.17 ± 2% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.11 ± 4% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.12 ± 7% +0.0 0.15 ± 13% perf-profile.self.cycles-pp.perf_output_sample
0.17 ± 6% +0.0 0.22 ± 3% perf-profile.self.cycles-pp.__entry_text_start
0.01 ±158% +0.0 0.06 ± 8% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.05 ± 41% +0.0 0.09 ± 11% perf-profile.self.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.05 ± 6% perf-profile.self.cycles-pp.__list_add_valid
0.00 +0.1 0.05 ± 7% perf-profile.self.cycles-pp.perf_swevent_overflow
0.05 ± 8% +0.1 0.11 ± 4% perf-profile.self.cycles-pp.resched_curr
0.00 +0.1 0.05 ± 12% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.search_exception_tables
0.00 +0.1 0.06 ± 12% perf-profile.self.cycles-pp.ex_handler_uaccess
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.__rdgsbase_inactive
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.__x64_sys_msgrcv
0.00 +0.1 0.06 ± 10% perf-profile.self.cycles-pp.in_task_stack
0.00 +0.1 0.06 ± 10% perf-profile.self.cycles-pp.rb_erase
0.09 ± 14% +0.1 0.15 ± 6% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.00 +0.1 0.06 ± 10% perf-profile.self.cycles-pp.sched_clock_cpu
0.01 ±244% +0.1 0.07 ± 4% perf-profile.self.cycles-pp.do_msg_fill
0.00 +0.1 0.06 ± 10% perf-profile.self.cycles-pp.cpumask_next_wrap
0.21 ± 10% +0.1 0.28 ± 3% perf-profile.self.cycles-pp.__check_heap_object
0.00 +0.1 0.07 ± 14% perf-profile.self.cycles-pp.__cgroup_account_cputime
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.get_stack_info_noinstr
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.__bad_area_nosemaphore
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.__x64_sys_msgsnd
0.12 ± 5% +0.1 0.18 ± 6% perf-profile.self.cycles-pp.__virt_addr_valid
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.__enqueue_entity
0.00 +0.1 0.07 ± 8% perf-profile.self.cycles-pp.rb_insert_color
0.11 ± 6% +0.1 0.18 ± 16% perf-profile.self.cycles-pp.perf_output_copy
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp.search_extable
0.74 ± 6% +0.1 0.81 ± 4% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp.is_bpf_text_address
0.00 +0.1 0.08 ± 8% perf-profile.self.cycles-pp.fixup_exception
0.06 ± 11% +0.1 0.15 ± 5% perf-profile.self.cycles-pp.__might_fault
0.00 +0.1 0.09 ± 8% perf-profile.self.cycles-pp.put_prev_entity
0.00 +0.1 0.09 ± 7% perf-profile.self.cycles-pp.perf_instruction_pointer
0.16 ± 16% +0.1 0.25 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.perf_swevent_get_recursion_context
0.00 +0.1 0.09 ± 14% perf-profile.self.cycles-pp.perf_trace_buf_update
0.00 +0.1 0.10 ± 7% perf-profile.self.cycles-pp.clear_buddies
0.04 ± 63% +0.1 0.13 ± 9% perf-profile.self.cycles-pp.dequeue_entity
0.07 ± 40% +0.1 0.16 ± 5% perf-profile.self.cycles-pp.do_syscall_64
0.15 ± 8% +0.1 0.24 ± 5% perf-profile.self.cycles-pp.wake_up_q
0.00 +0.1 0.10 ± 16% perf-profile.self.cycles-pp.rcu_is_watching
0.17 ± 14% +0.1 0.27 ± 8% perf-profile.self.cycles-pp.available_idle_cpu
0.00 +0.1 0.10 ± 5% perf-profile.self.cycles-pp.update_rq_clock
0.00 +0.1 0.10 ± 7% perf-profile.self.cycles-pp.ftrace_ops_trampoline
0.00 +0.1 0.10 ± 9% perf-profile.self.cycles-pp.perf_trace_run_bpf_submit
0.00 +0.1 0.10 ± 7% perf-profile.self.cycles-pp._find_next_bit
0.00 +0.1 0.11 ± 6% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.00 +0.1 0.11 ± 11% perf-profile.self.cycles-pp.__perf_event_overflow
0.00 +0.1 0.11 ± 8% perf-profile.self.cycles-pp.__wrgsbase_inactive
0.00 +0.1 0.11 ± 12% perf-profile.self.cycles-pp.perf_callchain
0.18 ± 5% +0.1 0.29 ± 8% perf-profile.self.cycles-pp.__radix_tree_lookup
0.00 +0.1 0.11 ± 8% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.00 +0.1 0.11 ± 8% perf-profile.self.cycles-pp.perf_misc_flags
0.01 ±158% +0.1 0.13 ± 9% perf-profile.self.cycles-pp.do_user_addr_fault
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.update_min_vruntime
0.00 +0.1 0.12 ± 8% perf-profile.self.cycles-pp.schedule
0.34 ± 6% +0.1 0.46 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.12 ± 12% perf-profile.self.cycles-pp.set_next_buddy
0.00 +0.1 0.12 ± 7% perf-profile.self.cycles-pp.get_stack_info
0.00 +0.1 0.12 ± 6% perf-profile.self.cycles-pp.perf_event_output_forward
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.perf_event_pid_type
0.00 +0.1 0.12 ± 10% perf-profile.self.cycles-pp.tracing_gen_ctx_irq_test
0.00 +0.1 0.13 ± 7% perf-profile.self.cycles-pp.check_preempt_wakeup
0.00 +0.1 0.13 ± 8% perf-profile.self.cycles-pp.select_task_rq_fair
0.00 +0.1 0.14 ± 10% perf-profile.self.cycles-pp.finish_task_switch
0.06 ± 42% +0.1 0.20 ± 4% perf-profile.self.cycles-pp.pick_next_entity
0.06 ± 11% +0.1 0.21 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.00 +0.1 0.15 ± 9% perf-profile.self.cycles-pp.perf_trace_buf_alloc
0.00 +0.2 0.15 ± 7% perf-profile.self.cycles-pp.select_idle_sibling
0.00 +0.2 0.15 ± 7% perf-profile.self.cycles-pp.exc_page_fault
0.00 +0.2 0.15 ± 6% perf-profile.self.cycles-pp.set_next_entity
0.02 ±115% +0.2 0.18 ± 6% perf-profile.self.cycles-pp.pick_next_task
0.00 +0.2 0.15 ± 8% perf-profile.self.cycles-pp.dequeue_task_fair
0.00 +0.2 0.15 ± 9% perf-profile.self.cycles-pp.__perf_event_header__init_id
0.00 +0.2 0.16 ± 4% perf-profile.self.cycles-pp.kernelmode_fixup_or_oops
0.00 +0.2 0.16 ± 6% perf-profile.self.cycles-pp.__calc_delta
0.15 ± 7% +0.2 0.31 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.06 ± 16% +0.2 0.23 ± 6% perf-profile.self.cycles-pp.reweight_entity
0.02 ±115% +0.2 0.19 ± 10% perf-profile.self.cycles-pp.enqueue_entity
0.17 ± 9% +0.2 0.35 ± 3% perf-profile.self.cycles-pp.try_to_wake_up
0.02 ±159% +0.2 0.19 ± 11% perf-profile.self.cycles-pp.cpuacct_charge
0.00 +0.2 0.17 ± 7% perf-profile.self.cycles-pp.__is_insn_slot_addr
0.50 ± 5% +0.2 0.67 ± 3% perf-profile.self.cycles-pp.wake_q_add
0.01 ±244% +0.2 0.19 ± 11% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.00 +0.2 0.18 ± 11% perf-profile.self.cycles-pp.get_callchain_entry
0.00 +0.2 0.18 ± 9% perf-profile.self.cycles-pp.kvm_is_in_guest
0.09 ± 9% +0.2 0.28 ± 5% perf-profile.self.cycles-pp.update_cfs_group
0.08 ± 13% +0.2 0.27 ± 6% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.02 ±115% +0.2 0.22 ± 17% perf-profile.self.cycles-pp.bpf_ksym_find
0.04 ± 63% +0.2 0.24 ± 7% perf-profile.self.cycles-pp.enqueue_task_fair
0.07 ± 13% +0.2 0.27 ± 7% perf-profile.self.cycles-pp.asm_exc_page_fault
0.04 ± 63% +0.2 0.26 ± 9% perf-profile.self.cycles-pp.__kernel_text_address
0.07 ± 8% +0.2 0.30 ± 5% perf-profile.self.cycles-pp.__update_load_avg_se
0.25 ± 8% +0.2 0.48 ± 16% perf-profile.self.cycles-pp.memcpy_erms
0.11 ± 10% +0.2 0.35 ± 8% perf-profile.self.cycles-pp.perf_trace_sched_switch
0.07 ± 13% +0.3 0.32 ± 7% perf-profile.self.cycles-pp.__task_pid_nr_ns
0.08 ± 12% +0.3 0.33 ± 5% perf-profile.self.cycles-pp.pick_next_task_fair
0.07 ± 9% +0.3 0.32 ± 8% perf-profile.self.cycles-pp.perf_callchain_user
0.24 ± 8% +0.3 0.50 ± 5% perf-profile.self.cycles-pp.__switch_to
0.02 ±115% +0.3 0.28 ± 6% perf-profile.self.cycles-pp.copy_fpregs_to_fpstate
0.00 +0.3 0.29 ± 13% perf-profile.self.cycles-pp.perf_tp_event_match
0.04 ± 64% +0.3 0.33 ± 9% perf-profile.self.cycles-pp.ftrace_graph_ret_addr
1.00 ± 4% +0.3 1.30 ± 9% perf-profile.self.cycles-pp.__kmalloc
0.06 ± 18% +0.3 0.37 ± 8% perf-profile.self.cycles-pp.unwind_get_return_address
0.09 ± 12% +0.3 0.42 ± 7% perf-profile.self.cycles-pp.perf_tp_event
0.08 ± 5% +0.3 0.42 ± 6% perf-profile.self.cycles-pp.native_sched_clock
0.07 ± 15% +0.3 0.42 ± 7% perf-profile.self.cycles-pp.__switch_to_asm
1.18 ± 4% +0.4 1.54 ± 8% perf-profile.self.cycles-pp.kfree
0.12 ± 13% +0.4 0.49 ± 6% perf-profile.self.cycles-pp.perf_prepare_sample
0.07 ± 18% +0.4 0.44 ± 8% perf-profile.self.cycles-pp.get_perf_callchain
0.65 ± 7% +0.4 1.04 ± 2% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.12 ± 10% +0.4 0.53 ± 9% perf-profile.self.cycles-pp.update_load_avg
0.09 ± 14% +0.4 0.52 ± 6% perf-profile.self.cycles-pp.kernel_text_address
0.09 ± 7% +0.5 0.54 ± 8% perf-profile.self.cycles-pp.update_curr
0.10 ± 8% +0.5 0.56 ± 7% perf-profile.self.cycles-pp.perf_output_begin_forward
0.13 ± 12% +0.5 0.65 ± 7% perf-profile.self.cycles-pp.load_new_mm_cr3
0.12 ± 13% +0.5 0.65 ± 8% perf-profile.self.cycles-pp.bsearch
0.19 ± 10% +0.5 0.74 ± 5% perf-profile.self.cycles-pp.switch_fpu_return
0.29 ± 11% +0.7 0.95 ± 5% perf-profile.self.cycles-pp.__schedule
2.19 ± 3% +0.7 2.93 ± 12% perf-profile.self.cycles-pp.__mod_memcg_state
0.14 ± 15% +0.8 0.89 ± 7% perf-profile.self.cycles-pp.__unwind_start
0.15 ± 11% +0.8 0.95 ± 7% perf-profile.self.cycles-pp.cmp_ex_search
0.21 ± 12% +0.9 1.12 ± 6% perf-profile.self.cycles-pp.perf_callchain_kernel
0.23 ± 14% +1.1 1.37 ± 7% perf-profile.self.cycles-pp.stack_access_ok
0.33 ± 11% +1.3 1.67 ± 6% perf-profile.self.cycles-pp.native_irq_return_iret
0.34 ± 14% +1.7 2.06 ± 7% perf-profile.self.cycles-pp.orc_find
0.61 ± 12% +2.9 3.55 ± 7% perf-profile.self.cycles-pp.__orc_find
0.77 ± 13% +3.8 4.58 ± 7% perf-profile.self.cycles-pp.__get_user_nocheck_8
1.11 ± 13% +5.7 6.84 ± 7% perf-profile.self.cycles-pp.unwind_next_frame



phoronix-test-suite.stress-ng.SystemVMessagePassing.bogo_ops_s

1.3e+07 +-----------------------------------------------------------------+
| O |
1.2e+07 |-+ O |
1.1e+07 |-+ |
| OO O O OO O OO OO O OO O OO O OO OO |
1e+07 |-+ OO O O O O O O O |
| |
9e+06 |-+ |
| |
8e+06 |-+ |
7e+06 |-+ |
| |
6e+06 |.++.++. +.++.++.++.++.++.++.++.++.++.++.++.++.+ .++.++.++.++. +.+|
| + + + |
5e+06 +-----------------------------------------------------------------+


phoronix-test-suite.time.voluntary_context_switches

3.5e+09 +-----------------------------------------------------------------+
| O O O OO OO OO OO O O OO O O O |
3e+09 |-+ O O O O |
| O O |
2.5e+09 |-+ O O O O OO O |
| |
2e+09 |-+ |
| |
1.5e+09 |-+ |
| |
1e+09 |-+ |
| |
5e+08 |-+ |
| .+ +. |
0 +-----------------------------------------------------------------+


phoronix-test-suite.time.involuntary_context_switches

3.5e+09 +-----------------------------------------------------------------+
| O O O OO OO OO OO O O OO O O O |
3e+09 |-+ O O O O |
| O O |
2.5e+09 |-+ O O O O OO O |
| |
2e+09 |-+ |
| |
1.5e+09 |-+ |
| |
1e+09 |-+ |
| |
5e+08 |-+ |
| .+ +. |
0 +-----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
(No filename) (101.07 kB)
config-5.13.0-rc4-00057-g5359f5ca0f37 (176.86 kB)
job-script (7.69 kB)
job.yaml (5.00 kB)
reproduce (311.00 B)