2022-01-22 02:03:29

by Waiman Long

Subject: [PATCH] ipc/mqueue: Use get_tree_nodev() in mqueue_get_tree()

When running the stress-ng clone benchmark with multiple testing
threads, it was found that there was significant spinlock contention
in sget_fc(). The contended spinlock was sb_lock. It is under
heavy contention because of the following code in the critical section
of sget_fc():

hlist_for_each_entry(old, &fc->fs_type->fs_supers, s_instances) {
if (test(old, fc))
goto share_extant_sb;
}

After testing with added instrumentation code, it was found that the
benchmark could generate thousands of ipc namespaces, with a
corresponding number of entries in the mqueue's fs_supers list where
the namespaces are the key for the search. This leads to excessive time
spent scanning the list for a match.

Looking back at the mqueue calling sequence leading to sget_fc():

mq_init_ns()
=> mq_create_mount()
=> fc_mount()
=> vfs_get_tree()
=> mqueue_get_tree()
=> get_tree_keyed()
=> vfs_get_super()
=> sget_fc()

Currently, mq_init_ns() is the only mqueue function that will indirectly
call mqueue_get_tree() with a newly allocated ipc namespace as the
key for searching. As a result, there will never be a match with the
existing ipc namespaces stored in the mqueue's fs_supers list.

So using get_tree_keyed() to do an existing ipc namespace search is just
a waste of time. Instead, we could use get_tree_nodev() to eliminate
the useless search. By doing so, we can greatly reduce the sb_lock hold
time and avoid the spinlock contention problem when a large number of
ipc namespaces are present.

Of course, if the code is modified in the future to allow
mqueue_get_tree() to be called with an existing ipc namespace instead
of a new one, get_tree_keyed() will have to be used in that case.

The following stress-ng clone benchmark command was run on a 2-socket
48-core Intel system:

./stress-ng --clone 32 --verbose --oomable --metrics-brief -t 20

The "bogo ops/s" increased from 5948.45 before the patch to 9137.06
after it, an increase of 54% in performance.

Fixes: 935c6912b198 ("ipc: Convert mqueue fs to fs_context")
Signed-off-by: Waiman Long <[email protected]>
---
ipc/mqueue.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index 5becca9be867..089c34d0732c 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -45,6 +45,7 @@

struct mqueue_fs_context {
struct ipc_namespace *ipc_ns;
+ bool newns; /* Set if newly created ipc namespace */
};

#define MQUEUE_MAGIC 0x19800202
@@ -427,6 +428,14 @@ static int mqueue_get_tree(struct fs_context *fc)
{
struct mqueue_fs_context *ctx = fc->fs_private;

+ /*
+ * With a newly created ipc namespace, we don't need to do a search
+ * for an ipc namespace match, but we still need to set s_fs_info.
+ */
+ if (ctx->newns) {
+ fc->s_fs_info = ctx->ipc_ns;
+ return get_tree_nodev(fc, mqueue_fill_super);
+ }
return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns);
}

@@ -454,6 +463,10 @@ static int mqueue_init_fs_context(struct fs_context *fc)
return 0;
}

+/*
+ * mq_init_ns() is currently the only caller of mq_create_mount().
+ * So the ns parameter is always a newly created ipc namespace.
+ */
static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
{
struct mqueue_fs_context *ctx;
@@ -465,6 +478,7 @@ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
return ERR_CAST(fc);

ctx = fc->fs_private;
+ ctx->newns = true;
put_ipc_ns(ctx->ipc_ns);
ctx->ipc_ns = get_ipc_ns(ns);
put_user_ns(fc->user_ns);
--
2.27.0


2022-02-06 18:53:35

by kernel test robot

Subject: [ipc/mqueue] d5f04a939d: stress-ng.clone.ops_per_sec 178.9% improvement



Greetings,

FYI, we noticed a 178.9% improvement of stress-ng.clone.ops_per_sec due to commit:


commit: d5f04a939d3a0f900a6f03f0979826e1cf85e0c7 ("[PATCH] ipc/mqueue: Use get_tree_nodev() in mqueue_get_tree()")
url: https://github.com/0day-ci/linux/commits/Waiman-Long/ipc-mqueue-Use-get_tree_nodev-in-mqueue_get_tree/20220122-012445
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 2c271fe77d52a0555161926c232cd5bc07178b39
patch link: https://lore.kernel.org/lkml/[email protected]

in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with following parameters:

nr_threads: 100%
testtime: 60s
sc_pid_max: 4194304
class: scheduler
test: clone
cpufreq_governor: performance
ucode: 0x5003102






Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.

=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/test/testcase/testtime/ucode:
scheduler/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/4194304/lkp-csl-2sp7/clone/stress-ng/60s/0x5003102

commit:
2c271fe77d ("Merge tag 'gpio-fixes-for-v5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux")
d5f04a939d ("ipc/mqueue: Use get_tree_nodev() in mqueue_get_tree()")

2c271fe77d52a055 d5f04a939d3a0f900a6f03f0979
---------------- ---------------------------
%stddev %change %stddev
\ | \
222534 ? 7% +177.6% 617849 ? 3% stress-ng.clone.ops
3377 ? 7% +178.9% 9420 ? 3% stress-ng.clone.ops_per_sec
148049 ? 7% +344.2% 657675 ? 15% stress-ng.time.involuntary_context_switches
1341 ? 10% +290.2% 5233 ? 9% stress-ng.time.major_page_faults
157568 +16.4% 183379 stress-ng.time.maximum_resident_set_size
4999210 ? 5% +156.1% 12804320 ? 3% stress-ng.time.minor_page_faults
2678 ? 41% +115.7% 5777 ? 7% stress-ng.time.percent_of_cpu_this_job_got
1820 ? 41% +114.6% 3906 ? 7% stress-ng.time.system_time
140704 ? 11% +187.1% 403970 ? 11% stress-ng.time.voluntary_context_switches
393.23 +11.3% 437.52 ? 4% pmeter.Average_Active_Power
3762 ? 9% +16.5% 4383 ? 8% uptime.idle
7.899e+08 ? 34% +57.3% 1.243e+09 ? 28% cpuidle..time
1673943 ? 36% +54.4% 2584371 ? 26% cpuidle..usage
13.20 ? 31% +50.0% 19.80 ? 26% vmstat.cpu.id
7535328 ? 5% +105.0% 15447853 ? 3% vmstat.memory.cache
2746 ? 29% -85.0% 412.60 ? 41% vmstat.procs.r
15340 ? 4% +184.1% 43589 ? 8% vmstat.system.cs
12.78 ? 32% +11.7 24.49 ? 21% mpstat.cpu.all.idle%
0.90 ? 2% +0.3 1.16 ? 2% mpstat.cpu.all.irq%
0.10 ? 21% +0.3 0.35 ? 20% mpstat.cpu.all.soft%
85.94 ? 4% -12.6 73.34 ? 7% mpstat.cpu.all.sys%
0.28 ? 13% +0.4 0.66 ? 15% mpstat.cpu.all.usr%
18522728 ? 12% +259.3% 66550638 ? 6% numa-numastat.node0.local_node
19054743 ? 11% +257.5% 68128168 ? 6% numa-numastat.node0.numa_hit
541400 ? 15% +200.5% 1627074 ? 6% numa-numastat.node0.other_node
24146816 ? 9% +193.6% 70903687 ? 9% numa-numastat.node1.local_node
24701175 ? 9% +193.3% 72448658 ? 9% numa-numastat.node1.numa_hit
577574 ? 6% +182.9% 1633668 ? 8% numa-numastat.node1.other_node
2433 ? 4% -6.5% 2273 ? 6% turbostat.Avg_MHz
88.02 ? 4% -6.5 81.53 ? 6% turbostat.Busy%
69987 ? 28% +305.2% 283587 ?138% turbostat.C1
11.55 ? 36% +51.3% 17.48 ? 30% turbostat.CPU%c1
244.54 +9.4% 267.44 ? 2% turbostat.PkgWatt
155.21 +35.0% 209.58 ? 2% turbostat.RAMWatt
13096984 ? 3% +19.2% 15613708 ? 2% meminfo.AnonPages
1.57e+09 ? 24% +91.5% 3.008e+09 ? 21% meminfo.Committed_AS
13150964 ? 3% +19.1% 15666947 ? 2% meminfo.Inactive
13150813 ? 3% +19.1% 15666796 ? 2% meminfo.Inactive(anon)
4655195 ? 9% +172.7% 12694048 ? 4% meminfo.KReclaimable
181149 ? 19% +77.8% 322162 ? 11% meminfo.KernelStack
42197327 ? 4% +101.8% 85160207 ? 3% meminfo.Memused
3446946 ? 24% +88.2% 6488419 ? 21% meminfo.PageTables
4903779 ? 9% +180.1% 13737584 ? 3% meminfo.Percpu
4655195 ? 9% +172.7% 12694048 ? 4% meminfo.SReclaimable
11296066 ? 7% +177.3% 31319072 meminfo.SUnreclaim
15951262 ? 7% +175.9% 44013121 meminfo.Slab
538538 ? 6% +61.8% 871386 ? 5% meminfo.VmallocUsed
42502599 ? 4% +101.6% 85664789 ? 3% meminfo.max_used_kB
3260902 ? 3% +19.6% 3901214 ? 2% proc-vmstat.nr_anon_pages
5399 ? 2% +6.2% 5731 proc-vmstat.nr_anon_transparent_hugepages
12088899 -8.9% 11008352 proc-vmstat.nr_dirty_background_threshold
24207357 -8.9% 22043620 proc-vmstat.nr_dirty_threshold
1.216e+08 -8.9% 1.108e+08 proc-vmstat.nr_free_pages
3273723 ? 3% +19.6% 3914490 ? 2% proc-vmstat.nr_inactive_anon
181175 ? 17% +77.6% 321803 ? 11% proc-vmstat.nr_kernel_stack
859099 ? 22% +88.5% 1619159 ? 21% proc-vmstat.nr_page_table_pages
1134181 ? 9% +178.5% 3158719 ? 4% proc-vmstat.nr_slab_reclaimable
2754659 ? 7% +183.0% 7794905 proc-vmstat.nr_slab_unreclaimable
3273723 ? 3% +19.6% 3914490 ? 2% proc-vmstat.nr_zone_inactive_anon
43764296 ? 8% +221.2% 1.406e+08 ? 4% proc-vmstat.numa_hit
42676118 ? 8% +222.1% 1.374e+08 ? 4% proc-vmstat.numa_local
1120778 ? 9% +190.9% 3260023 ? 6% proc-vmstat.numa_other
45791853 ? 7% +249.9% 1.602e+08 ? 5% proc-vmstat.pgalloc_normal
9563773 ? 5% +171.3% 25946899 ? 3% proc-vmstat.pgfault
41075040 ? 6% +253.4% 1.452e+08 ? 6% proc-vmstat.pgfree
868717 ? 6% +167.1% 2320512 ? 3% proc-vmstat.pgreuse
1548509 ? 5% +23.2% 1908459 numa-vmstat.node0.nr_anon_pages
1551208 ? 5% +23.2% 1910626 numa-vmstat.node0.nr_inactive_anon
83703 ? 25% +90.2% 159196 ? 8% numa-vmstat.node0.nr_kernel_stack
384912 ? 31% +108.3% 801653 ? 17% numa-vmstat.node0.nr_page_table_pages
644931 ? 12% +150.7% 1616972 ? 9% numa-vmstat.node0.nr_slab_reclaimable
1476067 ? 11% +167.8% 3952279 ? 4% numa-vmstat.node0.nr_slab_unreclaimable
1551207 ? 5% +23.2% 1910626 numa-vmstat.node0.nr_zone_inactive_anon
12779761 ? 15% +187.9% 36796137 ? 7% numa-vmstat.node0.numa_hit
12453144 ? 16% +188.3% 35903627 ? 7% numa-vmstat.node0.numa_local
333989 ? 19% +174.2% 915825 ? 7% numa-vmstat.node0.numa_other
1677996 ? 3% +18.7% 1992262 ? 3% numa-vmstat.node1.nr_anon_pages
1688325 ? 3% +18.7% 2003440 ? 3% numa-vmstat.node1.nr_inactive_anon
93827 ? 12% +74.3% 163512 ? 15% numa-vmstat.node1.nr_kernel_stack
454657 ? 16% +83.3% 833523 ? 25% numa-vmstat.node1.nr_page_table_pages
496697 ? 15% +215.6% 1567753 ? 8% numa-vmstat.node1.nr_slab_reclaimable
1284879 ? 11% +203.3% 3896661 ? 6% numa-vmstat.node1.nr_slab_unreclaimable
1688323 ? 3% +18.7% 2003439 ? 3% numa-vmstat.node1.nr_zone_inactive_anon
16388619 ? 5% +140.7% 39442015 ? 10% numa-vmstat.node1.numa_hit
16004843 ? 5% +140.7% 38523963 ? 10% numa-vmstat.node1.numa_local
401216 ? 6% +141.5% 969097 ? 8% numa-vmstat.node1.numa_other
6264622 ? 4% +21.8% 7629560 numa-meminfo.node0.AnonPages
7421366 ? 2% +30.0% 9649748 numa-meminfo.node0.AnonPages.max
6275586 ? 4% +21.7% 7638275 numa-meminfo.node0.Inactive
6275435 ? 4% +21.7% 7638156 numa-meminfo.node0.Inactive(anon)
2609177 ? 11% +146.7% 6436691 ? 9% numa-meminfo.node0.KReclaimable
85600 ? 24% +85.2% 158524 ? 8% numa-meminfo.node0.KernelStack
22126055 ? 7% +95.4% 43241986 ? 4% numa-meminfo.node0.MemUsed
1584038 ? 30% +100.4% 3174269 ? 17% numa-meminfo.node0.PageTables
2609177 ? 11% +146.7% 6436691 ? 9% numa-meminfo.node0.SReclaimable
5989993 ? 9% +162.8% 15741566 ? 5% numa-meminfo.node0.SUnreclaim
8599170 ? 9% +157.9% 22178258 ? 6% numa-meminfo.node0.Slab
6799834 ? 3% +17.2% 7967292 ? 3% numa-meminfo.node1.AnonPages
8538440 ? 4% +29.2% 11031952 ? 4% numa-meminfo.node1.AnonPages.max
6844271 ? 3% +17.1% 8011736 ? 3% numa-meminfo.node1.Inactive
6844271 ? 3% +17.1% 8011703 ? 3% numa-meminfo.node1.Inactive(anon)
2017323 ? 12% +209.2% 6237124 ? 8% numa-meminfo.node1.KReclaimable
94488 ? 12% +72.3% 162789 ? 15% numa-meminfo.node1.KernelStack
19853667 ? 6% +110.4% 41776408 ? 7% numa-meminfo.node1.MemUsed
1840591 ? 16% +79.2% 3298697 ? 25% numa-meminfo.node1.PageTables
2017323 ? 12% +209.2% 6237124 ? 8% numa-meminfo.node1.SReclaimable
5223390 ? 8% +197.0% 15513463 ? 6% numa-meminfo.node1.SUnreclaim
7240714 ? 9% +200.4% 21750587 ? 6% numa-meminfo.node1.Slab
10.41 ? 11% +85.6% 19.31 ? 7% perf-stat.i.MPKI
7.979e+09 ? 7% +63.5% 1.305e+10 ? 3% perf-stat.i.branch-instructions
71768002 ? 15% +77.8% 1.276e+08 ? 2% perf-stat.i.branch-misses
1.462e+08 ? 13% +192.0% 4.269e+08 ? 2% perf-stat.i.cache-misses
4.449e+08 ? 15% +177.8% 1.236e+09 ? 3% perf-stat.i.cache-references
32193 ? 8% +61.5% 51979 ? 2% perf-stat.i.context-switches
6.56 ? 9% -33.3% 4.37 ? 7% perf-stat.i.cpi
2745 ? 16% -71.6% 780.61 ? 8% perf-stat.i.cycles-between-cache-misses
0.30 ? 7% +0.3 0.65 ? 17% perf-stat.i.dTLB-load-miss-rate%
37598305 ? 13% +172.0% 1.023e+08 ? 11% perf-stat.i.dTLB-load-misses
9.313e+09 ? 8% +74.9% 1.629e+10 ? 3% perf-stat.i.dTLB-loads
0.10 ? 8% +0.1 0.17 ? 11% perf-stat.i.dTLB-store-miss-rate%
4029389 ? 17% +206.9% 12366631 ? 11% perf-stat.i.dTLB-store-misses
2.834e+09 ? 13% +125.8% 6.399e+09 ? 2% perf-stat.i.dTLB-stores
71.20 ? 3% +9.9 81.12 ? 2% perf-stat.i.iTLB-load-miss-rate%
5216672 ? 12% +85.8% 9692202 ? 4% perf-stat.i.iTLB-load-misses
1356651 ? 5% +32.5% 1797130 ? 9% perf-stat.i.iTLB-loads
3.588e+10 ? 8% +71.2% 6.143e+10 ? 3% perf-stat.i.instructions
10908 ? 9% -35.7% 7018 ? 8% perf-stat.i.instructions-per-iTLB-miss
141.30 ? 26% +66.6% 235.36 ? 8% perf-stat.i.major-faults
2.05 ? 2% +15.9% 2.37 ? 5% perf-stat.i.metric.GHz
937.15 ? 8% +78.2% 1669 perf-stat.i.metric.K/sec
180.46 ? 8% +96.6% 354.71 ? 6% perf-stat.i.metric.M/sec
283676 ? 14% +55.4% 440879 ? 4% perf-stat.i.minor-faults
35638790 ? 13% +181.7% 1.004e+08 ? 3% perf-stat.i.node-load-misses
16437815 ? 7% +221.5% 52847803 ? 3% perf-stat.i.node-loads
49.00 ? 2% +9.4 58.36 perf-stat.i.node-store-miss-rate%
18682848 ? 14% +151.5% 46987417 ? 2% perf-stat.i.node-store-misses
14051072 ? 13% +128.6% 32120053 perf-stat.i.node-stores
283817 ? 14% +55.4% 441114 ? 4% perf-stat.i.page-faults
8.26 ? 8% +142.5% 20.03 perf-stat.overall.MPKI
0.52 ? 8% +0.4 0.95 perf-stat.overall.branch-miss-rate%
7.84 ? 5% -50.1% 3.91 ? 2% perf-stat.overall.cpi
2829 ? 11% -80.3% 557.63 ? 3% perf-stat.overall.cycles-between-cache-misses
0.32 ? 8% +0.3 0.62 ? 9% perf-stat.overall.dTLB-load-miss-rate%
0.13 ? 7% +0.1 0.19 ? 8% perf-stat.overall.dTLB-store-miss-rate%
77.47 +6.9 84.37 perf-stat.overall.iTLB-load-miss-rate%
11283 ? 6% -39.8% 6791 ? 6% perf-stat.overall.instructions-per-iTLB-miss
0.13 ? 5% +100.0% 0.26 ? 2% perf-stat.overall.ipc
58.90 ? 2% +5.4 64.31 perf-stat.overall.node-load-miss-rate%
52.98 +6.1 59.05 perf-stat.overall.node-store-miss-rate%
6.088e+09 ? 4% +94.1% 1.182e+10 ? 6% perf-stat.ps.branch-instructions
31630794 ? 4% +253.0% 1.117e+08 ? 6% perf-stat.ps.branch-misses
73638545 ? 3% +430.0% 3.903e+08 ? 7% perf-stat.ps.cache-misses
2.178e+08 ? 5% +411.2% 1.113e+09 ? 6% perf-stat.ps.cache-references
15213 ? 2% +186.4% 43576 ? 8% perf-stat.ps.context-switches
84923 ? 10% +11.3% 94558 perf-stat.ps.cpu-clock
3154 ? 7% +38.9% 4381 ? 12% perf-stat.ps.cpu-migrations
22193885 ? 7% +315.1% 92118347 ? 15% perf-stat.ps.dTLB-load-misses
6.833e+09 ? 3% +115.5% 1.473e+10 ? 6% perf-stat.ps.dTLB-loads
2041800 ? 7% +442.9% 11084501 ? 14% perf-stat.ps.dTLB-store-misses
1.537e+09 ? 5% +272.2% 5.721e+09 ? 6% perf-stat.ps.dTLB-stores
2348313 ? 5% +248.7% 8187423 ? 3% perf-stat.ps.iTLB-load-misses
683485 ? 8% +121.6% 1514271 ? 3% perf-stat.ps.iTLB-loads
2.643e+10 ? 4% +110.3% 5.558e+10 ? 6% perf-stat.ps.instructions
37.88 ? 19% +419.6% 196.80 ? 4% perf-stat.ps.major-faults
141214 ? 4% +164.1% 372942 ? 4% perf-stat.ps.minor-faults
18469271 +391.5% 90779607 ? 6% perf-stat.ps.node-load-misses
12900276 ? 5% +291.3% 50481745 ? 9% perf-stat.ps.node-loads
7714784 ? 5% +430.2% 40902523 ? 4% perf-stat.ps.node-store-misses
6838735 ? 2% +315.2% 28391507 ? 6% perf-stat.ps.node-stores
141252 ? 4% +164.2% 373138 ? 4% perf-stat.ps.page-faults
84923 ? 10% +11.3% 94558 perf-stat.ps.task-clock
1.575e+12 ? 13% +143.7% 3.838e+12 ? 6% perf-stat.total.instructions
7100 ? 5% +137.8% 16884 ? 6% softirqs.CPU0.RCU
7777 ? 14% +28.5% 9991 ? 6% softirqs.CPU0.SCHED
5028 ? 2% +211.6% 15671 ? 16% softirqs.CPU1.RCU
4269 ? 18% +230.8% 14124 ? 9% softirqs.CPU10.RCU
4656 ? 7% +238.8% 15777 ? 7% softirqs.CPU11.RCU
5633 ? 51% +157.2% 14485 ? 13% softirqs.CPU12.RCU
3685 ? 11% +270.8% 13663 ? 8% softirqs.CPU13.RCU
3737 ? 14% +270.5% 13847 ? 8% softirqs.CPU14.RCU
4057 ? 10% +261.8% 14682 ? 10% softirqs.CPU15.RCU
4005 ? 19% +262.6% 14522 ? 12% softirqs.CPU16.RCU
4078 ? 16% +251.5% 14334 ? 10% softirqs.CPU17.RCU
4077 ? 9% +290.6% 15925 ? 5% softirqs.CPU18.RCU
3853 ? 5% +314.4% 15968 ? 19% softirqs.CPU19.RCU
4777 ? 15% +211.3% 14870 ? 12% softirqs.CPU2.RCU
3966 ? 12% +271.7% 14742 ? 11% softirqs.CPU20.RCU
4009 ? 9% +276.1% 15078 ? 7% softirqs.CPU21.RCU
3845 ? 6% +282.3% 14698 ? 9% softirqs.CPU22.RCU
5088 ? 42% +188.1% 14659 ? 9% softirqs.CPU23.RCU
3555 ? 8% +317.6% 14844 ? 13% softirqs.CPU24.RCU
3853 ? 21% +263.8% 14018 ? 6% softirqs.CPU25.RCU
3401 ? 9% +309.8% 13935 ? 7% softirqs.CPU26.RCU
3436 ? 10% +318.3% 14372 ? 10% softirqs.CPU27.RCU
3695 ? 4% +288.9% 14373 ? 11% softirqs.CPU28.RCU
3475 ? 2% +323.8% 14727 ? 10% softirqs.CPU29.RCU
5787 ? 41% +154.3% 14718 ? 8% softirqs.CPU3.RCU
3709 ? 13% +281.3% 14143 ? 7% softirqs.CPU30.RCU
3417 ? 10% +321.0% 14390 ? 11% softirqs.CPU31.RCU
3341 ? 5% +312.5% 13783 ? 10% softirqs.CPU32.RCU
3516 ? 6% +298.1% 13998 ? 9% softirqs.CPU33.RCU
3606 ? 8% +289.7% 14052 ? 8% softirqs.CPU34.RCU
3330 ? 7% +329.5% 14304 ? 6% softirqs.CPU35.RCU
3631 ? 3% +290.9% 14193 ? 10% softirqs.CPU36.RCU
3537 ? 15% +298.1% 14084 ? 7% softirqs.CPU37.RCU
3410 ? 5% +317.9% 14252 ? 10% softirqs.CPU38.RCU
3408 ? 6% +305.5% 13821 ? 7% softirqs.CPU39.RCU
4943 ? 32% +205.1% 15083 ? 9% softirqs.CPU4.RCU
3491 ? 7% +293.3% 13730 ? 8% softirqs.CPU40.RCU
3448 ? 14% +291.5% 13502 ? 10% softirqs.CPU41.RCU
3446 ? 14% +312.4% 14212 ? 6% softirqs.CPU42.RCU
3547 ? 7% +298.8% 14146 ? 6% softirqs.CPU43.RCU
3592 ? 8% +283.1% 13760 ? 7% softirqs.CPU44.RCU
3560 ? 11% +291.9% 13953 ? 6% softirqs.CPU45.RCU
3557 ? 6% +298.7% 14186 ? 7% softirqs.CPU46.RCU
3487 ? 7% +291.3% 13645 ? 6% softirqs.CPU47.RCU
4214 ? 13% +228.8% 13857 ? 10% softirqs.CPU48.RCU
3804 ? 14% +294.4% 15006 ? 6% softirqs.CPU49.RCU
4185 ? 2% +234.1% 13983 ? 11% softirqs.CPU5.RCU
3705 ? 16% +274.5% 13877 ? 10% softirqs.CPU50.RCU
3613 ? 11% +288.6% 14042 ? 5% softirqs.CPU51.RCU
4254 ? 17% +238.3% 14391 ? 12% softirqs.CPU52.RCU
5150 ? 39% +163.8% 13586 ? 9% softirqs.CPU53.RCU
4133 ? 14% +236.3% 13900 ? 11% softirqs.CPU54.RCU
3817 ? 6% +271.2% 14170 ? 7% softirqs.CPU55.RCU
3859 ? 17% +261.4% 13948 ? 10% softirqs.CPU56.RCU
4244 ? 15% +239.3% 14401 ? 7% softirqs.CPU57.RCU
3960 ? 13% +257.4% 14156 ? 8% softirqs.CPU58.RCU
4567 ? 20% +221.8% 14695 ? 9% softirqs.CPU59.RCU
4030 ? 10% +258.1% 14432 ? 7% softirqs.CPU6.RCU
4515 ? 22% +215.8% 14257 ? 11% softirqs.CPU60.RCU
4248 ? 5% +235.0% 14234 ? 8% softirqs.CPU61.RCU
4430 ? 17% +215.8% 13990 ? 11% softirqs.CPU62.RCU
4321 ? 16% +231.7% 14332 ? 11% softirqs.CPU63.RCU
4271 ? 6% +246.9% 14814 ? 9% softirqs.CPU64.RCU
4263 ? 14% +246.0% 14751 ? 9% softirqs.CPU65.RCU
4296 ? 12% +255.3% 15267 ? 11% softirqs.CPU66.RCU
3836 ? 12% +279.0% 14539 ? 7% softirqs.CPU67.RCU
4156 ? 11% +262.6% 15070 ? 12% softirqs.CPU68.RCU
4119 ? 11% +255.1% 14628 ? 12% softirqs.CPU69.RCU
4203 ? 21% +259.6% 15117 ? 21% softirqs.CPU7.RCU
4449 ? 21% +235.8% 14941 ? 9% softirqs.CPU70.RCU
4714 ? 14% +208.9% 14563 ? 8% softirqs.CPU71.RCU
3453 ? 6% +308.0% 14089 ? 7% softirqs.CPU72.RCU
3610 ? 7% +325.2% 15351 ? 21% softirqs.CPU73.RCU
3598 ? 6% +293.3% 14152 ? 7% softirqs.CPU74.RCU
3426 ? 6% +306.8% 13937 ? 8% softirqs.CPU75.RCU
3456 ? 5% +303.1% 13933 ? 7% softirqs.CPU76.RCU
3482 ? 5% +313.9% 14414 ? 10% softirqs.CPU77.RCU
3656 ? 14% +296.9% 14509 ? 8% softirqs.CPU78.RCU
3491 ? 13% +303.8% 14096 ? 8% softirqs.CPU79.RCU
4006 ? 12% +259.1% 14387 ? 8% softirqs.CPU8.RCU
3355 ? 3% +339.2% 14736 ? 12% softirqs.CPU80.RCU
3448 ? 3% +302.4% 13878 ? 7% softirqs.CPU81.RCU
3421 ? 7% +312.8% 14125 ? 7% softirqs.CPU82.RCU
3430 ? 9% +313.2% 14172 ? 7% softirqs.CPU83.RCU
3600 ? 6% +292.8% 14140 ? 9% softirqs.CPU84.RCU
3487 ? 17% +299.5% 13934 ? 8% softirqs.CPU85.RCU
3671 ? 6% +303.4% 14808 ? 13% softirqs.CPU86.RCU
3333 ? 7% +318.9% 13963 ? 7% softirqs.CPU87.RCU
3643 ? 7% +279.4% 13824 ? 7% softirqs.CPU88.RCU
3372 ? 13% +307.9% 13758 ? 7% softirqs.CPU89.RCU
4049 ? 12% +283.7% 15536 ? 14% softirqs.CPU9.RCU
3386 ? 15% +309.5% 13866 ? 6% softirqs.CPU90.RCU
3389 ? 11% +307.7% 13816 ? 8% softirqs.CPU91.RCU
3365 ? 11% +305.0% 13628 ? 7% softirqs.CPU92.RCU
4925 ? 59% +184.8% 14027 ? 6% softirqs.CPU93.RCU
3586 ? 8% +286.6% 13863 ? 6% softirqs.CPU94.RCU
3209 ? 6% +214.6% 10098 ? 5% softirqs.CPU95.RCU
377303 ? 4% +264.7% 1376204 ? 8% softirqs.RCU
291121 ? 20% +59.9% 465360 ? 12% softirqs.SCHED
17368 ? 4% +86.5% 32389 ? 17% softirqs.TIMER
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.calltrace.cycles-pp.fc_mount.mq_create_mount.mq_init_ns.copy_ipcs.create_new_namespaces
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.calltrace.cycles-pp.vfs_get_tree.fc_mount.mq_create_mount.mq_init_ns.copy_ipcs
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.calltrace.cycles-pp.vfs_get_super.vfs_get_tree.fc_mount.mq_create_mount.mq_init_ns
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.calltrace.cycles-pp.sget_fc.vfs_get_super.vfs_get_tree.fc_mount.mq_create_mount
93.36 ? 2% -92.0 1.35 ? 17% perf-profile.calltrace.cycles-pp.mq_init_ns.copy_ipcs.create_new_namespaces.unshare_nsproxy_namespaces.ksys_unshare
93.36 ? 2% -92.0 1.35 ? 17% perf-profile.calltrace.cycles-pp.mq_create_mount.mq_init_ns.copy_ipcs.create_new_namespaces.unshare_nsproxy_namespaces
93.36 ? 2% -92.0 1.36 ? 17% perf-profile.calltrace.cycles-pp.copy_ipcs.create_new_namespaces.unshare_nsproxy_namespaces.ksys_unshare.__x64_sys_unshare
91.87 ? 2% -91.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.sget_fc.vfs_get_super.vfs_get_tree.fc_mount
91.86 ? 2% -91.9 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.sget_fc.vfs_get_super.vfs_get_tree
93.85 ? 2% -87.2 6.66 ? 21% perf-profile.calltrace.cycles-pp.create_new_namespaces.unshare_nsproxy_namespaces.ksys_unshare.__x64_sys_unshare.do_syscall_64
93.85 ? 2% -87.2 6.66 ? 21% perf-profile.calltrace.cycles-pp.unshare_nsproxy_namespaces.ksys_unshare.__x64_sys_unshare.do_syscall_64.entry_SYSCALL_64_after_hwframe
92.14 -85.5 6.67 ? 21% perf-profile.calltrace.cycles-pp.__x64_sys_unshare.do_syscall_64.entry_SYSCALL_64_after_hwframe.unshare
92.14 -85.5 6.67 ? 21% perf-profile.calltrace.cycles-pp.ksys_unshare.__x64_sys_unshare.do_syscall_64.entry_SYSCALL_64_after_hwframe.unshare
92.14 -85.5 6.68 ? 21% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unshare
92.14 -85.5 6.68 ? 21% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unshare
92.15 -85.5 6.69 ? 21% perf-profile.calltrace.cycles-pp.unshare
0.27 ?127% +0.6 0.90 ? 19% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.28 ?127% +0.6 0.92 ? 18% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.12 ?200% +0.8 0.92 ? 16% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.prealloc_shrinker.alloc_super.sget_fc.vfs_get_super
0.00 +0.8 0.84 ? 22% perf-profile.calltrace.cycles-pp.register_netdev.loopback_net_init.ops_init.setup_net.copy_net_ns
0.00 +0.9 0.86 ? 22% perf-profile.calltrace.cycles-pp.loopback_net_init.ops_init.setup_net.copy_net_ns.create_new_namespaces
0.00 +0.9 0.87 ? 21% perf-profile.calltrace.cycles-pp.__sock_create.inet_ctl_sock_create.tcp_sk_init.ops_init.setup_net
0.28 ?126% +0.9 1.16 ? 28% perf-profile.calltrace.cycles-pp.ret_from_fork
0.28 ?126% +0.9 1.16 ? 28% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +1.1 1.07 ? 23% perf-profile.calltrace.cycles-pp.inet_ctl_sock_create.icmp_sk_init.ops_init.setup_net.copy_net_ns
0.12 ?200% +1.1 1.21 ? 19% perf-profile.calltrace.cycles-pp.prealloc_shrinker.alloc_super.sget_fc.vfs_get_super.vfs_get_tree
0.00 +1.1 1.09 ? 23% perf-profile.calltrace.cycles-pp.icmp_sk_init.ops_init.setup_net.copy_net_ns.create_new_namespaces
0.12 ?200% +1.2 1.28 ? 18% perf-profile.calltrace.cycles-pp.alloc_super.sget_fc.vfs_get_super.vfs_get_tree.fc_mount
0.00 +1.2 1.18 ? 21% perf-profile.calltrace.cycles-pp.inet_ctl_sock_create.tcp_sk_init.ops_init.setup_net.copy_net_ns
0.00 +1.2 1.20 ? 21% perf-profile.calltrace.cycles-pp.tcp_sk_init.ops_init.setup_net.copy_net_ns.create_new_namespaces
0.00 +2.0 2.01 ? 22% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.00 +2.0 2.02 ? 22% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.__x64_sys_exit
0.00 +2.2 2.16 ? 23% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.__memcg_kmem_charge_page.__alloc_pages.pte_alloc_one.__pte_alloc
0.00 +2.2 2.24 ? 37% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.2 2.24 ? 37% perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +2.2 2.24 ? 37% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +2.2 2.25 ? 37% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +2.2 2.25 ? 37% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +2.2 2.25 ? 37% perf-profile.calltrace.cycles-pp.__waitpid
0.00 +2.3 2.29 ? 23% perf-profile.calltrace.cycles-pp.__memcg_kmem_charge_page.__alloc_pages.pte_alloc_one.__pte_alloc.copy_pte_range
0.00 +2.5 2.54 ? 37% perf-profile.calltrace.cycles-pp.anon_vma_interval_tree_insert.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm
0.00 +2.7 2.70 ? 24% perf-profile.calltrace.cycles-pp.__alloc_pages.pte_alloc_one.__pte_alloc.copy_pte_range.copy_page_range
0.00 +2.8 2.76 ? 23% perf-profile.calltrace.cycles-pp.pte_alloc_one.__pte_alloc.copy_pte_range.copy_page_range.dup_mmap
0.00 +2.8 2.84 ? 24% perf-profile.calltrace.cycles-pp.__pte_alloc.copy_pte_range.copy_page_range.dup_mmap.dup_mm
0.00 +2.9 2.85 ? 33% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.__memcg_kmem_uncharge_page.free_pcp_prepare.free_unref_page
0.00 +2.9 2.94 ? 33% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.__memcg_kmem_uncharge_page.free_pcp_prepare.free_unref_page.zap_huge_pmd
0.00 +2.9 2.95 ? 33% perf-profile.calltrace.cycles-pp.__memcg_kmem_uncharge_page.free_pcp_prepare.free_unref_page.zap_huge_pmd.unmap_page_range
0.00 +2.9 2.95 ? 33% perf-profile.calltrace.cycles-pp.free_pcp_prepare.free_unref_page.zap_huge_pmd.unmap_page_range.unmap_vmas
0.00 +3.0 2.95 ? 21% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput.copy_process
0.00 +3.0 2.96 ? 21% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.copy_process.kernel_clone
0.00 +3.0 3.05 ? 33% perf-profile.calltrace.cycles-pp.free_unref_page.zap_huge_pmd.unmap_page_range.unmap_vmas.exit_mmap
0.00 +3.5 3.55 ? 27% perf-profile.calltrace.cycles-pp.zap_huge_pmd.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.00 +3.8 3.82 ? 22% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.__memcg_kmem_charge_page.__alloc_pages.pte_alloc_one.copy_huge_pmd
0.00 +4.0 4.00 ? 22% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
0.00 +4.0 4.03 ? 21% perf-profile.calltrace.cycles-pp.__memcg_kmem_charge_page.__alloc_pages.pte_alloc_one.copy_huge_pmd.copy_page_range
0.00 +4.6 4.63 ? 34% perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process
0.00 +4.7 4.69 ? 21% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput
0.24 ?200% +4.8 5.01 ? 19% perf-profile.calltrace.cycles-pp.ops_init.setup_net.copy_net_ns.create_new_namespaces.unshare_nsproxy_namespaces
0.24 ?200% +4.8 5.01 ? 19% perf-profile.calltrace.cycles-pp.setup_net.copy_net_ns.create_new_namespaces.unshare_nsproxy_namespaces.ksys_unshare
0.00 +4.8 4.77 ? 21% perf-profile.calltrace.cycles-pp.__alloc_pages.pte_alloc_one.copy_huge_pmd.copy_page_range.dup_mmap
0.24 ?200% +4.8 5.02 ? 19% perf-profile.calltrace.cycles-pp.copy_net_ns.create_new_namespaces.unshare_nsproxy_namespaces.ksys_unshare.__x64_sys_unshare
0.00 +4.9 4.87 ? 21% perf-profile.calltrace.cycles-pp.pte_alloc_one.copy_huge_pmd.copy_page_range.dup_mmap.dup_mm
0.00 +5.3 5.25 ? 21% perf-profile.calltrace.cycles-pp.copy_huge_pmd.copy_page_range.dup_mmap.dup_mm.copy_process
0.00 +5.4 5.43 ? 22% perf-profile.calltrace.cycles-pp.page_counter_charge.obj_cgroup_charge_pages.__memcg_kmem_charge_page.__alloc_pages.pte_alloc_one
0.00 +6.4 6.35 ? 24% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.__x64_sys_exit
0.00 +6.4 6.35 ? 24% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
0.00 +6.6 6.61 ? 34% perf-profile.calltrace.cycles-pp.anon_vma_fork.dup_mmap.dup_mm.copy_process.kernel_clone
0.00 +7.3 7.26 ? 23% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.copy_process.kernel_clone.__do_sys_clone
0.00 +7.3 7.26 ? 23% perf-profile.calltrace.cycles-pp.mmput.copy_process.kernel_clone.__do_sys_clone.do_syscall_64
0.00 +7.3 7.28 ? 23% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.copy_process.kernel_clone.__do_sys_clone3
0.00 +7.3 7.28 ? 23% perf-profile.calltrace.cycles-pp.mmput.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64
0.00 +7.7 7.70 ? 22% perf-profile.calltrace.cycles-pp.page_remove_rmap.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap
0.52 ? 51% +9.0 9.48 ? 24% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
0.53 ? 52% +9.0 9.50 ? 24% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.__x64_sys_exit.do_syscall_64
0.52 ? 51% +9.0 9.49 ? 24% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.53 ? 52% +9.0 9.50 ? 24% perf-profile.calltrace.cycles-pp.mmput.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +9.8 9.82 ? 23% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.copy_process
0.00 +9.9 9.89 ? 23% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.copy_process.kernel_clone
0.00 +12.6 12.63 ? 24% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.65 ? 58% +15.8 16.43 ? 25% perf-profile.calltrace.cycles-pp.dup_mmap.dup_mm.copy_process.kernel_clone.__do_sys_clone
0.65 ? 58% +15.8 16.45 ? 24% perf-profile.calltrace.cycles-pp.dup_mmap.dup_mm.copy_process.kernel_clone.__do_sys_clone3
0.65 ? 58% +15.9 16.50 ? 24% perf-profile.calltrace.cycles-pp.dup_mm.copy_process.kernel_clone.__do_sys_clone.do_syscall_64
0.66 ? 57% +15.9 16.53 ? 24% perf-profile.calltrace.cycles-pp.dup_mm.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64
0.62 ? 56% +17.6 18.22 ? 39% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.62 ? 56% +17.6 18.22 ? 39% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.62 ? 56% +17.6 18.22 ? 39% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.62 ? 55% +17.6 18.26 ? 40% perf-profile.calltrace.cycles-pp.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.62 ? 55% +17.6 18.26 ? 40% perf-profile.calltrace.cycles-pp.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.21 ?200% +18.5 18.74 ? 23% perf-profile.calltrace.cycles-pp.copy_pte_range.copy_page_range.dup_mmap.dup_mm.copy_process
0.00 +18.6 18.55 ? 24% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.51 ?124% +23.9 24.43 ? 22% perf-profile.calltrace.cycles-pp.copy_page_range.dup_mmap.dup_mm.copy_process.kernel_clone
0.95 ? 58% +24.3 25.28 ? 24% perf-profile.calltrace.cycles-pp.copy_process.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.95 ? 58% +24.4 25.34 ? 24% perf-profile.calltrace.cycles-pp.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.96 ? 58% +24.4 25.36 ? 24% perf-profile.calltrace.cycles-pp.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe.__clone
0.96 ? 58% +24.4 25.36 ? 24% perf-profile.calltrace.cycles-pp.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe.__clone
0.96 ? 58% +24.4 25.39 ? 24% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__clone
0.96 ? 58% +24.4 25.39 ? 24% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__clone
0.96 ? 58% +24.5 25.43 ? 24% perf-profile.calltrace.cycles-pp.kernel_clone.__do_sys_clone3.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.96 ? 58% +24.5 25.43 ? 24% perf-profile.calltrace.cycles-pp.__do_sys_clone3.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.97 ? 57% +24.5 25.50 ? 24% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.syscall
0.97 ? 57% +24.5 25.50 ? 24% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.98 ? 57% +24.6 25.58 ? 24% perf-profile.calltrace.cycles-pp.__clone
0.98 ? 57% +24.7 25.65 ? 24% perf-profile.calltrace.cycles-pp.syscall
2.96 ? 67% +33.5 36.49 ? 39% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
2.96 ? 67% +33.5 36.49 ? 39% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.children.cycles-pp.fc_mount
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.children.cycles-pp.vfs_get_tree
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.children.cycles-pp.vfs_get_super
93.35 ? 2% -92.0 1.31 ? 18% perf-profile.children.cycles-pp.sget_fc
93.35 ? 2% -92.0 1.35 ? 17% perf-profile.children.cycles-pp.mq_init_ns
93.35 ? 2% -92.0 1.35 ? 17% perf-profile.children.cycles-pp.mq_create_mount
93.35 ? 2% -92.0 1.36 ? 17% perf-profile.children.cycles-pp.copy_ipcs
93.85 ? 2% -87.2 6.66 ? 21% perf-profile.children.cycles-pp.unshare_nsproxy_namespaces
93.85 ? 2% -87.2 6.67 ? 21% perf-profile.children.cycles-pp.create_new_namespaces
93.85 ? 2% -87.2 6.67 ? 21% perf-profile.children.cycles-pp.__x64_sys_unshare
93.85 ? 2% -87.2 6.67 ? 21% perf-profile.children.cycles-pp.ksys_unshare
92.15 -85.5 6.69 ? 21% perf-profile.children.cycles-pp.unshare
92.37 ? 2% -76.2 16.12 ?118% perf-profile.children.cycles-pp._raw_spin_lock
92.39 ? 2% -71.3 21.09 ? 83% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +0.1 0.07 ? 20% perf-profile.children.cycles-pp.new_inode
0.00 +0.1 0.07 ? 18% perf-profile.children.cycles-pp.inode_init_always
0.00 +0.1 0.07 ? 18% perf-profile.children.cycles-pp.folio_unlock
0.00 +0.1 0.07 ? 18% perf-profile.children.cycles-pp.__vunmap
0.00 +0.1 0.07 ? 21% perf-profile.children.cycles-pp.setns
0.01 ?200% +0.1 0.08 ? 24% perf-profile.children.cycles-pp.addrconf_init_net
0.00 +0.1 0.08 ? 27% perf-profile.children.cycles-pp.schedule
0.00 +0.1 0.08 ? 15% perf-profile.children.cycles-pp.free_work
0.28 ? 12% +0.1 0.36 ? 7% perf-profile.children.cycles-pp.update_process_times
0.00 +0.1 0.08 ? 20% perf-profile.children.cycles-pp.__anon_vma_interval_tree_augment_rotate
0.00 +0.1 0.08 ? 24% perf-profile.children.cycles-pp.proc_pid_make_inode
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.stress_set_oom_adjustment
0.01 ?200% +0.1 0.09 ? 30% perf-profile.children.cycles-pp.__addrconf_sysctl_register
0.00 +0.1 0.08 ? 26% perf-profile.children.cycles-pp.free_unref_page_list
0.00 +0.1 0.08 ? 23% perf-profile.children.cycles-pp.__lookup_slow
0.28 ? 12% +0.1 0.36 ? 6% perf-profile.children.cycles-pp.tick_sched_handle
0.00 +0.1 0.08 ? 23% perf-profile.children.cycles-pp.copy_creds
0.00 +0.1 0.08 ? 20% perf-profile.children.cycles-pp.find_vma
0.00 +0.1 0.08 ? 9% perf-profile.children.cycles-pp.drain_stock
0.00 +0.1 0.09 ? 27% perf-profile.children.cycles-pp.copy_page
0.00 +0.1 0.09 ? 27% perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +0.1 0.09 ? 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.1 0.09 ? 19% perf-profile.children.cycles-pp.fput_many
0.30 ? 12% +0.1 0.39 ? 6% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.1 0.09 ? 22% perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.00 +0.1 0.09 ? 21% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.1 0.10 ? 24% perf-profile.children.cycles-pp.devinet_init_net
0.00 +0.1 0.10 ? 24% perf-profile.children.cycles-pp.__mmdrop
0.00 +0.1 0.10 ? 40% perf-profile.children.cycles-pp.free_p4d_range
0.00 +0.1 0.10 ? 40% perf-profile.children.cycles-pp.free_pgd_range
0.00 +0.1 0.10 ? 13% perf-profile.children.cycles-pp.do_set_pte
0.00 +0.1 0.11 ? 14% perf-profile.children.cycles-pp.refill_stock
0.00 +0.1 0.11 ? 18% perf-profile.children.cycles-pp.inet6_net_init
0.48 ? 9% +0.1 0.59 perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.1 0.11 ? 24% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.34 ? 10% +0.1 0.45 ? 5% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.01 ?200% +0.1 0.12 ? 29% perf-profile.children.cycles-pp.apparmor_sk_alloc_security
0.00 +0.1 0.11 ? 17% perf-profile.children.cycles-pp.pgd_alloc
0.48 ? 9% +0.1 0.60 perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.01 ?200% +0.1 0.13 ? 20% perf-profile.children.cycles-pp.__rb_erase_color
0.00 +0.1 0.12 ? 26% perf-profile.children.cycles-pp._raw_write_lock_bh
0.01 ?200% +0.1 0.13 ? 26% perf-profile.children.cycles-pp.security_sk_alloc
0.00 +0.1 0.12 ? 14% perf-profile.children.cycles-pp.free_pcppages_bulk
0.00 +0.1 0.12 ? 15% perf-profile.children.cycles-pp.mm_init
0.00 +0.1 0.13 ? 32% perf-profile.children.cycles-pp.osq_unlock
0.00 +0.1 0.13 ? 22% perf-profile.children.cycles-pp.find_idlest_group
0.00 +0.1 0.13 ? 19% perf-profile.children.cycles-pp.get_mem_cgroup_from_objcg
0.00 +0.1 0.13 ? 10% perf-profile.children.cycles-pp.preempt_schedule_common
0.00 +0.1 0.13 ? 21% perf-profile.children.cycles-pp.uncharge_batch
0.02 ?200% +0.1 0.15 ? 27% perf-profile.children.cycles-pp.insert_header
0.00 +0.1 0.13 ? 23% perf-profile.children.cycles-pp.vma_interval_tree_remove
0.00 +0.1 0.14 ? 21% perf-profile.children.cycles-pp.select_task_rq_fair
0.01 ?200% +0.1 0.15 ? 22% perf-profile.children.cycles-pp.inode_init_once
0.00 +0.1 0.14 ? 26% perf-profile.children.cycles-pp.schedule_tail
0.00 +0.1 0.15 ? 22% perf-profile.children.cycles-pp.__mem_cgroup_uncharge_list
0.00 +0.1 0.15 ? 9% perf-profile.children.cycles-pp.walk_component
0.00 +0.2 0.15 ? 11% perf-profile.children.cycles-pp.link_path_walk
0.00 +0.2 0.15 ? 23% perf-profile.children.cycles-pp.__pmd_alloc
0.00 +0.2 0.16 ? 20% perf-profile.children.cycles-pp.__pud_alloc
0.01 ?200% +0.2 0.18 ? 24% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.00 +0.2 0.17 ? 21% perf-profile.children.cycles-pp.wake_up_new_task
0.00 +0.2 0.17 ? 19% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.00 +0.2 0.17 ? 19% perf-profile.children.cycles-pp.__folio_memcg_unlock
0.00 +0.2 0.18 ? 28% perf-profile.children.cycles-pp.alloc_vmap_area
0.00 +0.2 0.18 ? 29% perf-profile.children.cycles-pp.__get_vm_area_node
0.00 +0.2 0.18 ? 22% perf-profile.children.cycles-pp.dup_userfaultfd
0.00 +0.2 0.18 ? 23% perf-profile.children.cycles-pp.page_counter_try_charge
0.00 +0.2 0.19 ? 19% perf-profile.children.cycles-pp.wp_page_copy
0.00 +0.2 0.20 ? 16% perf-profile.children.cycles-pp.stress_clone_child
0.00 +0.2 0.21 ? 42% perf-profile.children.cycles-pp.smp_call_function_single
0.02 ?200% +0.2 0.23 ? 20% perf-profile.children.cycles-pp.ipv4_mib_init_net
0.00 +0.2 0.21 ? 23% perf-profile.children.cycles-pp.apparmor_socket_post_create
0.00 +0.2 0.21 ? 23% perf-profile.children.cycles-pp.security_socket_post_create
0.00 +0.2 0.21 ? 23% perf-profile.children.cycles-pp.lock_page_memcg
0.00 +0.2 0.21 ? 42% perf-profile.children.cycles-pp.sync_rcu_exp_select_node_cpus
0.00 +0.2 0.22 ? 8% perf-profile.children.cycles-pp.__cond_resched
0.00 +0.2 0.22 ? 14% perf-profile.children.cycles-pp.__slab_free
0.00 +0.2 0.23 ? 27% perf-profile.children.cycles-pp.__vmalloc_node_range
0.01 ?200% +0.2 0.24 ? 19% perf-profile.children.cycles-pp.__might_resched
0.00 +0.2 0.23 ? 16% perf-profile.children.cycles-pp.finish_task_switch
0.00 +0.2 0.25 ? 15% perf-profile.children.cycles-pp.put_task_stack
0.00 +0.3 0.25 ? 9% perf-profile.children.cycles-pp.__schedule
0.00 +0.3 0.26 ? 16% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.02 ?200% +0.3 0.29 ? 22% perf-profile.children.cycles-pp.allocate_slab
0.02 ?122% +0.3 0.29 ? 22% perf-profile.children.cycles-pp.rmqueue
0.02 ?125% +0.3 0.30 ? 22% perf-profile.children.cycles-pp.__rb_insert_augmented
0.01 ?200% +0.3 0.29 ? 24% perf-profile.children.cycles-pp.expand_shrinker_info
0.00 +0.3 0.28 ? 17% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.00 +0.3 0.28 ? 24% perf-profile.children.cycles-pp.__tlb_remove_page_size
0.02 ?200% +0.3 0.31 ? 14% perf-profile.children.cycles-pp.do_filp_open
0.02 ?200% +0.3 0.31 ? 14% perf-profile.children.cycles-pp.path_openat
0.05 ? 52% +0.3 0.34 ? 24% perf-profile.children.cycles-pp.unlink_file_vma
0.03 ?130% +0.3 0.33 ? 14% perf-profile.children.cycles-pp.do_sys_open
0.03 ?130% +0.3 0.33 ? 14% perf-profile.children.cycles-pp.do_sys_openat2
0.04 ? 51% +0.3 0.34 ? 15% perf-profile.children.cycles-pp.__list_del_entry_valid
0.00 +0.3 0.30 ? 21% perf-profile.children.cycles-pp.unlock_page_memcg
0.03 ?130% +0.3 0.36 ? 16% perf-profile.children.cycles-pp.open64
0.06 ? 56% +0.3 0.39 ? 23% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.01 ?200% +0.3 0.35 ? 24% perf-profile.children.cycles-pp.memcpy_erms
0.06 ? 64% +0.3 0.40 ? 21% perf-profile.children.cycles-pp.___slab_alloc
0.00 +0.3 0.34 ? 18% perf-profile.children.cycles-pp.next_uptodate_page
0.06 ? 64% +0.3 0.40 ? 21% perf-profile.children.cycles-pp.__slab_alloc
0.00 +0.3 0.35 ? 21% perf-profile.children.cycles-pp.up_write
0.02 ?200% +0.3 0.36 ? 25% perf-profile.children.cycles-pp.inet6_create
0.00 +0.3 0.35 ? 19% perf-profile.children.cycles-pp.__get_free_pages
0.01 ?200% +0.4 0.37 ? 16% perf-profile.children.cycles-pp.mod_objcg_state
0.04 ?149% +0.4 0.43 ? 26% perf-profile.children.cycles-pp.__register_sysctl_table
0.01 ?200% +0.4 0.40 ? 54% perf-profile.children.cycles-pp.irq_exit_rcu
0.08 ? 77% +0.4 0.50 ? 24% perf-profile.children.cycles-pp.memset_erms
0.00 +0.4 0.42 ? 37% perf-profile.children.cycles-pp.dropmon_net_event
0.03 ?124% +0.4 0.46 ? 24% perf-profile.children.cycles-pp.vm_normal_page
0.00 +0.5 0.45 ? 69% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.01 ?200% +0.5 0.47 ? 23% perf-profile.children.cycles-pp.raw_notifier_call_chain
0.02 ?200% +0.5 0.48 ? 33% perf-profile.children.cycles-pp.unregister_netdevice_many
0.02 ?200% +0.5 0.48 ? 33% perf-profile.children.cycles-pp.cleanup_net
0.02 ?200% +0.5 0.48 ? 33% perf-profile.children.cycles-pp.default_device_exit_batch
0.55 ? 11% +0.5 1.02 ? 22% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.04 ? 50% +0.5 0.56 ? 22% perf-profile.children.cycles-pp.remove_vma
0.01 ?200% +0.5 0.53 ? 28% perf-profile.children.cycles-pp.wait_consider_task
0.02 ?200% +0.5 0.55 ? 22% perf-profile.children.cycles-pp.sock_alloc_inode
0.59 ? 10% +0.5 1.12 ? 20% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.03 ? 82% +0.5 0.56 ? 23% perf-profile.children.cycles-pp.down_write
0.09 ? 80% +0.5 0.62 ? 25% perf-profile.children.cycles-pp.pcpu_alloc
0.00 +0.6 0.55 ? 88% perf-profile.children.cycles-pp.rcu_do_batch
0.05 ? 55% +0.6 0.61 ? 21% perf-profile.children.cycles-pp.try_charge_memcg
0.05 ? 53% +0.6 0.62 ? 17% perf-profile.children.cycles-pp.filemap_map_pages
0.00 +0.6 0.57 ? 34% perf-profile.children.cycles-pp.__put_anon_vma
0.05 ? 54% +0.6 0.64 ? 18% perf-profile.children.cycles-pp.do_fault
0.00 +0.6 0.58 ? 83% perf-profile.children.cycles-pp.rcu_core
0.31 ?102% +0.6 0.90 ? 19% perf-profile.children.cycles-pp.process_one_work
0.04 ?145% +0.6 0.63 ? 22% perf-profile.children.cycles-pp.sock_alloc
0.04 ?200% +0.6 0.63 ? 22% perf-profile.children.cycles-pp.raw_unhash_sk
0.32 ?102% +0.6 0.92 ? 18% perf-profile.children.cycles-pp.worker_thread
0.00 +0.6 0.62 ? 81% perf-profile.children.cycles-pp.__softirqentry_text_start
0.04 ?145% +0.6 0.67 ? 22% perf-profile.children.cycles-pp.alloc_inode
0.05 ? 53% +0.6 0.70 ? 23% perf-profile.children.cycles-pp.folio_memcg_lock
0.04 ?139% +0.6 0.69 ? 22% perf-profile.children.cycles-pp.new_inode_pseudo
0.01 ?200% +0.7 0.67 ? 22% perf-profile.children.cycles-pp.dup_task_struct
0.03 ?200% +0.7 0.69 ? 22% perf-profile.children.cycles-pp.raw_hash_sk
0.03 ?200% +0.7 0.71 ? 23% perf-profile.children.cycles-pp.icmpv6_sk_init
0.04 ?145% +0.7 0.73 ? 24% perf-profile.children.cycles-pp.sk_prot_alloc
0.06 ? 54% +0.7 0.76 ? 22% perf-profile.children.cycles-pp.vm_area_dup
0.04 ?147% +0.7 0.75 ? 24% perf-profile.children.cycles-pp.sk_alloc
0.00 +0.8 0.78 ? 38% perf-profile.children.cycles-pp.release_task
0.00 +0.8 0.78 ? 38% perf-profile.children.cycles-pp.wait_task_zombie
0.01 ?200% +0.8 0.84 ? 22% perf-profile.children.cycles-pp.register_netdev
0.33 ?100% +0.8 1.16 ? 28% perf-profile.children.cycles-pp.kthread
0.01 ?200% +0.8 0.86 ? 22% perf-profile.children.cycles-pp.loopback_net_init
0.00 +0.9 0.85 ? 44% perf-profile.children.cycles-pp.queued_read_lock_slowpath
0.10 ? 54% +0.9 0.96 ? 21% perf-profile.children.cycles-pp.prep_new_page
0.10 ? 54% +0.9 0.97 ? 21% perf-profile.children.cycles-pp.clear_page_erms
0.08 ? 53% +0.9 0.95 ? 18% perf-profile.children.cycles-pp.__handle_mm_fault
0.10 ? 54% +0.9 0.97 ? 21% perf-profile.children.cycles-pp.kernel_init_free_pages
0.01 ?200% +0.9 0.91 ? 26% perf-profile.children.cycles-pp.__mutex_lock
0.09 ? 53% +0.9 1.00 ? 18% perf-profile.children.cycles-pp.handle_mm_fault
0.24 ? 94% +1.0 1.21 ? 19% perf-profile.children.cycles-pp.prealloc_shrinker
0.35 ? 88% +1.0 1.33 ? 21% perf-profile.children.cycles-pp.ret_from_fork
0.07 ? 52% +1.0 1.05 ? 9% perf-profile.children.cycles-pp.kmem_cache_free
0.25 ? 90% +1.0 1.28 ? 18% perf-profile.children.cycles-pp.alloc_super
0.11 ? 53% +1.0 1.15 ? 18% perf-profile.children.cycles-pp.do_user_addr_fault
0.05 ?200% +1.0 1.09 ? 23% perf-profile.children.cycles-pp.icmp_sk_init
0.11 ? 53% +1.0 1.15 ? 18% perf-profile.children.cycles-pp.exc_page_fault
0.05 ?200% +1.0 1.10 ? 22% perf-profile.children.cycles-pp.inet_create
0.13 ? 52% +1.1 1.26 ? 17% perf-profile.children.cycles-pp.asm_exc_page_fault
0.06 ?200% +1.1 1.20 ? 21% perf-profile.children.cycles-pp.tcp_sk_init
0.13 ? 54% +1.2 1.29 ? 21% perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +1.5 1.46 ? 23% perf-profile.children.cycles-pp.obj_cgroup_charge
0.04 ? 51% +1.9 1.97 ? 38% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.08 ? 54% +2.1 2.22 ? 25% perf-profile.children.cycles-pp.free_swap_cache
0.17 ?110% +2.2 2.36 ? 23% perf-profile.children.cycles-pp.__sock_create
0.09 ? 55% +2.2 2.29 ? 25% perf-profile.children.cycles-pp.free_pages_and_swap_cache
0.03 ?123% +2.2 2.25 ? 37% perf-profile.children.cycles-pp.do_wait
0.03 ?123% +2.2 2.25 ? 37% perf-profile.children.cycles-pp.__waitpid
0.03 ?123% +2.2 2.25 ? 37% perf-profile.children.cycles-pp.__do_sys_wait4
0.03 ?123% +2.2 2.25 ? 37% perf-profile.children.cycles-pp.kernel_wait4
0.19 ? 14% +2.3 2.47 ? 22% perf-profile.children.cycles-pp.kmem_cache_alloc
0.19 ? 58% +2.4 2.56 ? 37% perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.18 ? 52% +2.6 2.77 ? 30% perf-profile.children.cycles-pp.unlink_anon_vmas
0.09 ? 57% +2.8 2.84 ? 24% perf-profile.children.cycles-pp.__pte_alloc
0.21 ?122% +2.8 3.00 ? 22% perf-profile.children.cycles-pp.inet_ctl_sock_create
0.00 +2.9 2.85 ? 23% perf-profile.children.cycles-pp.propagate_protected_usage
0.32 ? 96% +2.9 3.24 ? 32% perf-profile.children.cycles-pp.osq_lock
0.01 ?200% +3.0 2.99 ? 18% perf-profile.children.cycles-pp.page_counter_cancel
0.24 ? 52% +3.0 3.28 ? 29% perf-profile.children.cycles-pp.free_pgtables
0.01 ?200% +3.2 3.21 ? 21% perf-profile.children.cycles-pp.free_pcp_prepare
0.05 ? 51% +3.3 3.36 ? 21% perf-profile.children.cycles-pp.free_unref_page
0.01 ?200% +3.4 3.41 ? 21% perf-profile.children.cycles-pp.__memcg_kmem_uncharge_page
0.08 ? 51% +3.6 3.65 ? 21% perf-profile.children.cycles-pp.zap_huge_pmd
0.02 ?123% +4.0 3.98 ? 18% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.02 ?123% +4.0 4.06 ? 18% perf-profile.children.cycles-pp.page_counter_uncharge
0.27 ? 57% +4.4 4.64 ? 34% perf-profile.children.cycles-pp.anon_vma_clone
0.40 ? 81% +4.6 4.95 ? 41% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.39 ?105% +4.6 5.01 ? 19% perf-profile.children.cycles-pp.ops_init
0.39 ?105% +4.6 5.01 ? 19% perf-profile.children.cycles-pp.setup_net
0.18 ? 53% +4.6 4.80 ? 21% perf-profile.children.cycles-pp.release_pages
0.39 ?105% +4.6 5.02 ? 19% perf-profile.children.cycles-pp.copy_net_ns
0.20 ? 57% +5.1 5.26 ? 21% perf-profile.children.cycles-pp.copy_huge_pmd
0.06 ?200% +5.2 5.31 ? 35% perf-profile.children.cycles-pp.queued_write_lock_slowpath
0.33 ? 57% +6.3 6.61 ? 34% perf-profile.children.cycles-pp.anon_vma_fork
0.27 ? 53% +6.7 6.99 ? 22% perf-profile.children.cycles-pp.tlb_finish_mmu
0.14 ? 57% +6.8 6.94 ? 22% perf-profile.children.cycles-pp.__memcg_kmem_charge_page
0.27 ? 53% +6.8 7.11 ? 23% perf-profile.children.cycles-pp.tlb_flush_mmu
0.08 ? 59% +7.2 7.33 ? 22% perf-profile.children.cycles-pp.page_counter_charge
0.26 ? 56% +7.4 7.65 ? 22% perf-profile.children.cycles-pp.pte_alloc_one
0.30 ? 54% +7.6 7.93 ? 22% perf-profile.children.cycles-pp.page_remove_rmap
0.28 ? 56% +7.8 8.08 ? 22% perf-profile.children.cycles-pp.__alloc_pages
0.13 ? 57% +7.9 8.07 ? 22% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
0.63 ? 56% +17.6 18.24 ? 39% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.63 ? 56% +17.6 18.24 ? 39% perf-profile.children.cycles-pp.do_group_exit
0.63 ? 55% +17.7 18.28 ? 39% perf-profile.children.cycles-pp.__x64_sys_exit
0.90 ? 54% +17.8 18.67 ? 24% perf-profile.children.cycles-pp.zap_pte_range
0.59 ? 59% +18.2 18.77 ? 23% perf-profile.children.cycles-pp.copy_pte_range
1.00 ? 53% +21.4 22.45 ? 23% perf-profile.children.cycles-pp.unmap_page_range
1.02 ? 53% +21.6 22.59 ? 23% perf-profile.children.cycles-pp.unmap_vmas
0.82 ? 55% +23.6 24.43 ? 22% perf-profile.children.cycles-pp.copy_page_range
0.98 ? 55% +24.4 25.37 ? 24% perf-profile.children.cycles-pp.__do_sys_clone
0.98 ? 54% +24.5 25.43 ? 24% perf-profile.children.cycles-pp.__do_sys_clone3
1.00 ? 55% +24.6 25.59 ? 24% perf-profile.children.cycles-pp.__clone
1.00 ? 54% +24.7 25.67 ? 24% perf-profile.children.cycles-pp.syscall
1.32 ? 54% +31.6 32.88 ? 24% perf-profile.children.cycles-pp.dup_mmap
1.32 ? 54% +31.7 33.03 ? 24% perf-profile.children.cycles-pp.dup_mm
1.59 ? 51% +31.9 33.52 ? 23% perf-profile.children.cycles-pp.exit_mmap
1.60 ? 51% +31.9 33.54 ? 23% perf-profile.children.cycles-pp.mmput
1.26 ? 56% +35.3 36.52 ? 39% perf-profile.children.cycles-pp.do_exit
1.93 ? 55% +48.7 50.63 ? 24% perf-profile.children.cycles-pp.copy_process
1.95 ? 55% +48.8 50.80 ? 24% perf-profile.children.cycles-pp.kernel_clone
91.87 ? 2% -70.9 20.95 ? 83% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.00 +0.1 0.07 ? 20% perf-profile.self.cycles-pp.folio_unlock
0.00 +0.1 0.08 ? 24% perf-profile.self.cycles-pp.unmap_vmas
0.00 +0.1 0.08 ? 22% perf-profile.self.cycles-pp.__alloc_pages
0.00 +0.1 0.08 ? 19% perf-profile.self.cycles-pp.__anon_vma_interval_tree_augment_rotate
0.00 +0.1 0.08 ? 24% perf-profile.self.cycles-pp.do_wait
0.00 +0.1 0.08 ? 22% perf-profile.self.cycles-pp.unmap_page_range
0.00 +0.1 0.08 ? 22% perf-profile.self.cycles-pp.copy_page_range
0.00 +0.1 0.09 ? 21% perf-profile.self.cycles-pp.___slab_alloc
0.00 +0.1 0.09 ? 27% perf-profile.self.cycles-pp.copy_page
0.00 +0.1 0.09 ? 22% perf-profile.self.cycles-pp.page_counter_try_charge
0.00 +0.1 0.09 ? 18% perf-profile.self.cycles-pp.fput_many
0.00 +0.1 0.09 ? 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.1 0.09 ? 15% perf-profile.self.cycles-pp.zap_huge_pmd
0.00 +0.1 0.10 ? 20% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.00 +0.1 0.10 ? 24% perf-profile.self.cycles-pp.anon_vma_fork
0.00 +0.1 0.11 ? 22% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.00 +0.1 0.11 ? 20% perf-profile.self.cycles-pp.__rb_erase_color
0.00 +0.1 0.11 ? 22% perf-profile.self.cycles-pp.find_idlest_group
0.00 +0.1 0.12 ? 26% perf-profile.self.cycles-pp._raw_write_lock_bh
0.00 +0.1 0.12 ? 18% perf-profile.self.cycles-pp.get_mem_cgroup_from_objcg
0.00 +0.1 0.12 ? 20% perf-profile.self.cycles-pp.__folio_memcg_unlock
0.00 +0.1 0.13 ? 32% perf-profile.self.cycles-pp.osq_unlock
0.01 ?200% +0.1 0.14 ? 22% perf-profile.self.cycles-pp.inode_init_once
0.00 +0.1 0.13 ? 23% perf-profile.self.cycles-pp.vma_interval_tree_remove
0.00 +0.1 0.14 ? 28% perf-profile.self.cycles-pp.obj_cgroup_charge_pages
0.00 +0.1 0.14 ? 13% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.2 0.16 ? 23% perf-profile.self.cycles-pp.unlink_anon_vmas
0.00 +0.2 0.17 ? 18% perf-profile.self.cycles-pp.mod_objcg_state
0.00 +0.2 0.17 ? 24% perf-profile.self.cycles-pp.lock_page_memcg
0.00 +0.2 0.18 ? 22% perf-profile.self.cycles-pp.dup_userfaultfd
0.00 +0.2 0.18 ? 22% perf-profile.self.cycles-pp.__tlb_remove_page_size
0.00 +0.2 0.19 ? 44% perf-profile.self.cycles-pp.smp_call_function_single
0.00 +0.2 0.20 ? 21% perf-profile.self.cycles-pp.copy_huge_pmd
0.02 ?200% +0.2 0.22 ? 37% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
0.00 +0.2 0.21 ? 25% perf-profile.self.cycles-pp.apparmor_socket_post_create
0.00 +0.2 0.22 ? 14% perf-profile.self.cycles-pp.__slab_free
0.00 +0.2 0.23 ? 18% perf-profile.self.cycles-pp.__might_resched
0.03 ? 82% +0.2 0.27 ? 18% perf-profile.self.cycles-pp.kmem_cache_free
0.00 +0.2 0.24 ? 20% perf-profile.self.cycles-pp.dup_mmap
0.00 +0.2 0.24 ? 24% perf-profile.self.cycles-pp.__memcg_kmem_charge_page
0.00 +0.2 0.25 ? 16% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.3 0.26 ? 24% perf-profile.self.cycles-pp.vm_area_dup
0.02 ?125% +0.3 0.28 ? 23% perf-profile.self.cycles-pp.__rb_insert_augmented
0.00 +0.3 0.27 ? 17% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.04 ? 51% +0.3 0.34 ? 16% perf-profile.self.cycles-pp.__list_del_entry_valid
0.00 +0.3 0.30 ? 21% perf-profile.self.cycles-pp.unlock_page_memcg
0.02 ?123% +0.3 0.33 ? 20% perf-profile.self.cycles-pp.try_charge_memcg
0.04 ? 83% +0.3 0.35 ? 21% perf-profile.self.cycles-pp.kmem_cache_alloc
0.06 ? 53% +0.3 0.38 ? 23% perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.01 ?200% +0.3 0.35 ? 24% perf-profile.self.cycles-pp.memcpy_erms
0.00 +0.3 0.34 ? 18% perf-profile.self.cycles-pp.next_uptodate_page
0.00 +0.3 0.34 ? 22% perf-profile.self.cycles-pp.up_write
0.00 +0.4 0.36 ? 25% perf-profile.self.cycles-pp.anon_vma_clone
0.03 ?127% +0.4 0.41 ? 23% perf-profile.self.cycles-pp.vm_normal_page
0.08 ? 77% +0.4 0.49 ? 24% perf-profile.self.cycles-pp.memset_erms
0.00 +0.4 0.41 ? 37% perf-profile.self.cycles-pp.dropmon_net_event
0.02 ?122% +0.5 0.49 ? 24% perf-profile.self.cycles-pp.down_write
0.01 ?200% +0.5 0.52 ? 28% perf-profile.self.cycles-pp.wait_consider_task
0.05 ? 52% +0.6 0.64 ? 19% perf-profile.self.cycles-pp._raw_spin_lock
0.05 ? 53% +0.6 0.69 ? 23% perf-profile.self.cycles-pp.folio_memcg_lock
0.00 +0.7 0.73 ? 25% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.10 ? 56% +0.9 0.96 ? 21% perf-profile.self.cycles-pp.clear_page_erms
0.03 ? 82% +1.9 1.95 ? 38% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.08 ? 55% +2.1 2.16 ? 25% perf-profile.self.cycles-pp.free_swap_cache
0.19 ? 58% +2.4 2.54 ? 37% perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.00 +2.8 2.83 ? 23% perf-profile.self.cycles-pp.propagate_protected_usage
0.32 ? 96% +2.9 3.21 ? 33% perf-profile.self.cycles-pp.osq_lock
0.01 ?200% +3.0 2.96 ? 18% perf-profile.self.cycles-pp.page_counter_cancel
0.16 ? 54% +4.4 4.51 ? 21% perf-profile.self.cycles-pp.release_pages
0.06 ? 60% +5.4 5.49 ? 22% perf-profile.self.cycles-pp.page_counter_charge
0.24 ? 54% +6.7 6.96 ? 22% perf-profile.self.cycles-pp.page_remove_rmap
0.53 ? 54% +9.2 9.75 ? 25% perf-profile.self.cycles-pp.zap_pte_range
0.47 ? 60% +14.9 15.38 ? 23% perf-profile.self.cycles-pp.copy_pte_range

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
config-5.16.0-11414-gd5f04a939d3a (177.28 kB)
job-script (8.31 kB)
job.yaml (5.63 kB)
reproduce (393.00 B)