2021-05-28 08:58:34

by Ian Kent

Subject: [REPOST PATCH v4 0/5] kernfs: proposed locking and concurrency improvement

There have been a few instances of contention on the kernfs_mutex during
path walks: a case on very large IBM systems that I saw myself, a report
by Brice Goglin that was followed up by Fox Chen, and a couple of other
reports I've since seen from CoreOS users.

The common thread is a large number of concurrent kernfs path walks
slowing down because of kernfs_mutex contention.

The underlying problem is that changes to the VFS over time have
increased its concurrency capabilities to the point that kernfs's use of
a mutex is no longer appropriate. A less common problem is that walks of
non-existent paths also cause contention when there are a lot of them.

This patch series is relatively straightforward.

All it does is add the ability to take advantage of VFS negative dentry
caching, to avoid needless dentry alloc/free cycles for lookups of paths
that don't exist, and change the kernfs_mutex to a read/write semaphore.
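
To illustrate the negative dentry side of that, here is a minimal sketch,
not the actual patch, of a simplified kernfs ->lookup(): it takes the new
rwsem for read and, by passing a NULL inode to d_splice_alias(), lets the
VFS hash a negative dentry for a missing name so later walks are answered
from the dcache. Namespace and error handling are omitted.

static struct dentry *example_kernfs_lookup(struct inode *dir,
                                            struct dentry *dentry,
                                            unsigned int flags)
{
        struct kernfs_node *parent = dir->i_private;
        struct kernfs_node *kn;
        struct inode *inode = NULL;

        down_read(&kernfs_rwsem);       /* was mutex_lock(&kernfs_mutex) */
        kn = kernfs_find_ns(parent, dentry->d_name.name, NULL);
        if (kn && kernfs_active(kn))
                inode = kernfs_get_inode(dir->i_sb, kn);
        up_read(&kernfs_rwsem);

        /*
         * With a NULL inode, d_splice_alias() hashes the dentry as a
         * negative entry, so repeated lookups of the same missing name
         * are answered from the dcache (subject to ->d_revalidate())
         * instead of going through a dentry alloc/free cycle each time.
         */
        return d_splice_alias(inode, dentry);
}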

The patch that tried to stay in VFS rcu-walk mode during path walks has
been dropped for two reasons. First, it doesn't actually give very much
improvement and, second, if there's a place where mistakes could go
unnoticed it would be in that path. This makes the patch series simpler
to review and reduces the likelihood of problems going unnoticed and
popping up later.

The patch that used a revision to identify whether a directory has
changed has also been dropped. If the directory has changed, the dentry
revision needs to be updated to avoid subsequent rb tree searches, and
after the change to a read/write semaphore that update also needs a
lock. But d_lock is the only lock available at that point, and it might
itself be contended.
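
For context, here is a rough sketch of what that dropped revalidate path
might have looked like, purely to show where the extra lock would be
needed. The dir.revision field is hypothetical and only the negative
dentry case is shown; it is not part of this series.

static int example_dop_revalidate(struct dentry *dentry, unsigned int flags)
{
        struct kernfs_node *parent;
        int ret = 1;

        if (flags & LOOKUP_RCU)
                return -ECHILD;

        down_read(&kernfs_rwsem);
        parent = kernfs_dentry_node(dentry->d_parent);
        if (parent && dentry->d_time != parent->dir.revision) {
                /*
                 * dir.revision is a hypothetical per-directory counter
                 * bumped on create/remove/rename. On a mismatch the rb
                 * tree has to be searched again for this name ...
                 */
                if (kernfs_find_ns(parent, dentry->d_name.name, NULL)) {
                        ret = 0;        /* name exists now, redo ->lookup() */
                } else {
                        /*
                         * ... and if the name is still absent the cached
                         * revision should be refreshed so the next walk
                         * can skip the search. With kernfs_rwsem held
                         * only for read, the obvious lock for that store
                         * is d_lock, which may itself be contended.
                         */
                        spin_lock(&dentry->d_lock);
                        dentry->d_time = parent->dir.revision;
                        spin_unlock(&dentry->d_lock);
                }
        }
        up_read(&kernfs_rwsem);
        return ret;
}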

Changes since v3:
- remove unneeded indirection when referencing the super block.
- check if inode attribute update is actually needed.

Changes since v2:
- actually fix the inode attribute update locking.
- drop the patch that tried to stay in rcu-walk mode.
- drop the patch that used a revision to identify if a directory has changed.

Changes since v1:
- fix locking in .permission() and .getattr() by refactoring the attribute
handling code.
---

Ian Kent (5):
kernfs: move revalidate to be near lookup
kernfs: use VFS negative dentry caching
kernfs: switch kernfs to use an rwsem
kernfs: use i_lock to protect concurrent inode updates
kernfs: add kernfs_need_inode_refresh()


fs/kernfs/dir.c | 170 ++++++++++++++++++++----------------
fs/kernfs/file.c | 4 +-
fs/kernfs/inode.c | 45 ++++++++--
fs/kernfs/kernfs-internal.h | 5 +-
fs/kernfs/mount.c | 12 +--
fs/kernfs/symlink.c | 4 +-
include/linux/kernfs.h | 2 +-
7 files changed, 147 insertions(+), 95 deletions(-)

--
Ian


2021-05-28 08:58:42

by Ian Kent

Subject: [REPOST PATCH v4 4/5] kernfs: use i_lock to protect concurrent inode updates

The inode operations .permission() and .getattr() currently take the
kernfs node write lock, but all that's needed is to keep the rb tree
stable while the inode attributes are updated and to protect the update
itself against concurrent changes.

Also, .permission() is called frequently during path walks and can cause
quite a bit of contention between kernfs node operations and path walks
when the number of concurrent walks is high.

To change kernfs_iop_getattr() and kernfs_iop_permission() to take the
rwsem read lock instead of the write lock, an additional lock is needed
to protect against multiple processes concurrently updating the inode
attributes and link count in kernfs_refresh_inode().

The inode i_lock seems like the sensible lock to use to protect these
inode attribute updates, so use it in kernfs_refresh_inode().

Signed-off-by: Ian Kent <[email protected]>
---
fs/kernfs/inode.c | 10 ++++++----
fs/kernfs/mount.c | 4 ++--
2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c
index 3b01e9e61f14e..6728ecd81eb37 100644
--- a/fs/kernfs/inode.c
+++ b/fs/kernfs/inode.c
@@ -172,6 +172,7 @@ static void kernfs_refresh_inode(struct kernfs_node *kn, struct inode *inode)
{
struct kernfs_iattrs *attrs = kn->iattr;

+ spin_lock(&inode->i_lock);
inode->i_mode = kn->mode;
if (attrs)
/*
@@ -182,6 +183,7 @@ static void kernfs_refresh_inode(struct kernfs_node *kn, struct inode *inode)

if (kernfs_type(kn) == KERNFS_DIR)
set_nlink(inode, kn->dir.subdirs + 2);
+ spin_unlock(&inode->i_lock);
}

int kernfs_iop_getattr(struct user_namespace *mnt_userns,
@@ -191,9 +193,9 @@ int kernfs_iop_getattr(struct user_namespace *mnt_userns,
struct inode *inode = d_inode(path->dentry);
struct kernfs_node *kn = inode->i_private;

- down_write(&kernfs_rwsem);
+ down_read(&kernfs_rwsem);
kernfs_refresh_inode(kn, inode);
- up_write(&kernfs_rwsem);
+ up_read(&kernfs_rwsem);

generic_fillattr(&init_user_ns, inode, stat);
return 0;
@@ -284,9 +286,9 @@ int kernfs_iop_permission(struct user_namespace *mnt_userns,

kn = inode->i_private;

- down_write(&kernfs_rwsem);
+ down_read(&kernfs_rwsem);
kernfs_refresh_inode(kn, inode);
- up_write(&kernfs_rwsem);
+ up_read(&kernfs_rwsem);

return generic_permission(&init_user_ns, inode, mask);
}
diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
index baa4155ba2edf..f2f909d09f522 100644
--- a/fs/kernfs/mount.c
+++ b/fs/kernfs/mount.c
@@ -255,9 +255,9 @@ static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *k
sb->s_shrink.seeks = 0;

/* get root inode, initialize and unlock it */
- down_write(&kernfs_rwsem);
+ down_read(&kernfs_rwsem);
inode = kernfs_get_inode(sb, info->root->kn);
- up_write(&kernfs_rwsem);
+ up_read(&kernfs_rwsem);
if (!inode) {
pr_debug("kernfs: could not get root inode\n");
return -ENOMEM;


2021-05-28 11:37:50

by Greg Kroah-Hartman

Subject: Re: [REPOST PATCH v4 0/5] kernfs: proposed locking and concurrency improvement

On Fri, May 28, 2021 at 02:33:42PM +0800, Ian Kent wrote:
> There have been a few instances of contention on the kernfs_mutex during
> path walks: a case on very large IBM systems that I saw myself, a report
> by Brice Goglin that was followed up by Fox Chen, and a couple of other
> reports I've since seen from CoreOS users.
>
> The common thread is a large number of concurrent kernfs path walks
> slowing down because of kernfs_mutex contention.
>
> The underlying problem is that changes to the VFS over time have
> increased its concurrency capabilities to the point that kernfs's use of
> a mutex is no longer appropriate. A less common problem is that walks of
> non-existent paths also cause contention when there are a lot of them.
>
> This patch series is relatively straightforward.
>
> All it does is add the ability to take advantage of VFS negative dentry
> caching, to avoid needless dentry alloc/free cycles for lookups of paths
> that don't exist, and change the kernfs_mutex to a read/write semaphore.
>
> The patch that tried to stay in VFS rcu-walk mode during path walks has
> been dropped for two reasons. First, it doesn't actually give very much
> improvement and, second, if there's a place where mistakes could go
> unnoticed it would be in that path. This makes the patch series simpler
> to review and reduces the likelihood of problems going unnoticed and
> popping up later.
>
> The patch that used a revision to identify whether a directory has
> changed has also been dropped. If the directory has changed, the dentry
> revision needs to be updated to avoid subsequent rb tree searches, and
> after the change to a read/write semaphore that update also needs a
> lock. But d_lock is the only lock available at that point, and it might
> itself be contended.

Fox, can you take some time and test these to verify it all still works
properly with your benchmarks?

thanks,

greg k-h

2021-05-28 12:39:51

by Fox Chen

Subject: Re: [REPOST PATCH v4 0/5] kernfs: proposed locking and concurrency improvement

On Fri, May 28, 2021 at 4:56 PM Greg Kroah-Hartman
<[email protected]> wrote:
>
> On Fri, May 28, 2021 at 02:33:42PM +0800, Ian Kent wrote:
> > There have been a few instances of contention on the kernfs_mutex during
> > path walks: a case on very large IBM systems that I saw myself, a report
> > by Brice Goglin that was followed up by Fox Chen, and a couple of other
> > reports I've since seen from CoreOS users.
> >
> > The common thread is a large number of concurrent kernfs path walks
> > slowing down because of kernfs_mutex contention.
> >
> > The underlying problem is that changes to the VFS over time have
> > increased its concurrency capabilities to the point that kernfs's use of
> > a mutex is no longer appropriate. A less common problem is that walks of
> > non-existent paths also cause contention when there are a lot of them.
> >
> > This patch series is relatively straightforward.
> >
> > All it does is add the ability to take advantage of VFS negative dentry
> > caching, to avoid needless dentry alloc/free cycles for lookups of paths
> > that don't exist, and change the kernfs_mutex to a read/write semaphore.
> >
> > The patch that tried to stay in VFS rcu-walk mode during path walks has
> > been dropped for two reasons. First, it doesn't actually give very much
> > improvement and, second, if there's a place where mistakes could go
> > unnoticed it would be in that path. This makes the patch series simpler
> > to review and reduces the likelihood of problems going unnoticed and
> > popping up later.
> >
> > The patch that used a revision to identify whether a directory has
> > changed has also been dropped. If the directory has changed, the dentry
> > revision needs to be updated to avoid subsequent rb tree searches, and
> > after the change to a read/write semaphore that update also needs a
> > lock. But d_lock is the only lock available at that point, and it might
> > itself be contended.
>
> Fox, can you take some time and test these to verify it all still works
> properly with your benchmarks?

Sure, I will take a look.
Actually, I've tested it before, but I will test it again to confirm it.

> thanks,
>
> greg k-h

thanks,
fox

2021-05-30 04:46:02

by Fox Chen

Subject: Re: [REPOST PATCH v4 0/5] kernfs: proposed locking and concurrency improvement

On Fri, May 28, 2021 at 4:56 PM Greg Kroah-Hartman
<[email protected]> wrote:
>
> Fox, can you take some time and test these to verify it all still works
> properly with your benchmarks?
>

I've tested it on an AWS C5a (amd, 96 logical cores):
Before the patchset, mutex locks in kernfs_iop_permission() and
kernfs_dop_revalidate() take significant time.
With the patchset, there is no mutex_lock issue (see the flame graphs
before.png/after.png).

On AWS C5 (Intel, also 96 logical cores), the benchmark runs slower than
on C5a, but I don't think that's related to this patchset: running the
benchmark on ext4 is slower too, and the perf report, which looks no
different from the run on kernfs with this patchset, shows the pressure
is on the VFS side.

My conclusion: It works well with my benchmark.

I've attached:
flame graphs -- before.png/after.png
benchmark outputs -- result.before/result.after
perf reports -- report.before/report.after
perf report on ext4 -- report.baremetal.ext4
for your reference.


thanks,
fox


Attachments:
after.png (90.41 kB)
before.png (84.93 kB)
result.before (4.94 kB)
result.after (4.81 kB)
report.after (207.81 kB)
report.before (162.71 kB)
report.baremetal.ext4 (155.12 kB)

2021-05-31 16:32:28

by kernel test robot

Subject: [kernfs] 9a658329cd: stress-ng.get.ops_per_sec 191.4% improvement



Greetings,

FYI, we noticed a 191.4% improvement of stress-ng.get.ops_per_sec due to commit:


commit: 9a658329cda84c0916a85fec1cd55d08a453d671 ("[REPOST PATCH v4 4/5] kernfs: use i_lock to protect concurrent inode updates")
url: https://github.com/0day-ci/linux/commits/Ian-Kent/kernfs-proposed-locking-and-concurrency-improvement/20210528-143519
base: https://git.kernel.org/cgit/linux/kernel/git/gregkh/driver-core.git 39b27e89a76f3827ad93aed9213a6daf2b91f819

in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with following parameters:

nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: ext4
class: os
test: get
cpufreq_governor: performance
ucode: 0x5003006






Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file

=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
os/gcc-9/performance/1HDD/ext4/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/get/stress-ng/60s/0x5003006

commit:
dbc68beb5a ("kernfs: switch kernfs to use an rwsem")
9a658329cd ("kernfs: use i_lock to protect concurrent inode updates")

dbc68beb5a70364f 9a658329cda84c0916a85fec1cd
---------------- ---------------------------
%stddev %change %stddev
\ | \
318098 ± 5% +191.4% 926882 ± 4% stress-ng.get.ops
5301 ± 5% +191.4% 15448 ± 4% stress-ng.get.ops_per_sec
38975 ± 4% -94.1% 2306 ± 5% stress-ng.time.involuntary_context_switches
805.50 +4.8% 843.83 stress-ng.time.percent_of_cpu_this_job_got
491.11 +2.7% 504.29 stress-ng.time.system_time
9.13 ± 4% +116.7% 19.79 ± 10% stress-ng.time.user_time
9069575 -100.0% 902.67 ± 18% stress-ng.time.voluntary_context_switches
5611023 ? 3% -93.7% 355240 ?213% cpuidle.POLL.usage
8.78 +4.8% 9.20 iostat.cpu.system
0.09 +0.1 0.20 ? 3% mpstat.cpu.all.soft%
0.19 ? 3% +0.2 0.35 ? 9% mpstat.cpu.all.usr%
26907 ? 15% +35.9% 36572 ? 14% numa-meminfo.node1.KReclaimable
26907 ? 15% +35.9% 36572 ? 14% numa-meminfo.node1.SReclaimable
166.34 +3.0% 171.40 turbostat.PkgWatt
84.19 +2.7% 86.46 turbostat.RAMWatt
282176 -99.3% 1950 vmstat.system.cs
207323 -7.6% 191467 vmstat.system.in
6725 ? 15% +36.2% 9161 ? 14% numa-vmstat.node1.nr_slab_reclaimable
514724 ? 14% +82.8% 940837 ? 24% numa-vmstat.node1.numa_hit
466993 ? 14% +94.6% 908580 ? 24% numa-vmstat.node1.numa_local
175792 ? 22% +392.1% 864994 ? 11% numa-numastat.node0.local_node
225225 ? 16% +311.9% 927760 ? 11% numa-numastat.node0.numa_hit
148702 ? 25% +385.8% 722426 ? 12% numa-numastat.node1.local_node
185788 ? 19% +301.6% 746175 ? 12% numa-numastat.node1.numa_hit
0.27 ?143% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.kernfs_dop_revalidate.lookup_fast.walk_component
16.33 ? 71% -100.0% 0.00 perf-sched.wait_and_delay.count.rwsem_down_read_slowpath.kernfs_dop_revalidate.lookup_fast.walk_component
2.82 ?108% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rwsem_down_read_slowpath.kernfs_dop_revalidate.lookup_fast.walk_component
0.10 ?191% -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.kernfs_dop_revalidate.lookup_fast.walk_component
0.02 ?153% -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.kernfs_iop_permission.inode_permission.link_path_walk.part
1.26 ?204% -100.0% 0.00 perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.kernfs_dop_revalidate.lookup_fast.walk_component
0.04 ?177% -100.0% 0.00 perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.kernfs_iop_permission.inode_permission.link_path_walk.part
10168 +2.0% 10369 proc-vmstat.nr_mapped
6781 +3.0% 6983 proc-vmstat.nr_shmem
22147 +7.9% 23908 proc-vmstat.nr_slab_reclaimable
45841 +9.8% 50328 proc-vmstat.nr_slab_unreclaimable
444471 +281.9% 1697425 ? 2% proc-vmstat.numa_hit
357937 +350.0% 1610894 ? 3% proc-vmstat.numa_local
525864 ? 2% +480.4% 3052021 ? 3% proc-vmstat.pgalloc_normal
396561 ? 3% +636.6% 2921129 ? 3% proc-vmstat.pgfree
2926 -18.0% 2399 ? 8% slabinfo.PING.active_objs
2926 -18.0% 2399 ? 8% slabinfo.PING.num_objs
122528 +17.0% 143418 slabinfo.dentry.active_objs
2935 +17.2% 3439 slabinfo.dentry.active_slabs
123316 +17.2% 144486 slabinfo.dentry.num_objs
2935 +17.2% 3439 slabinfo.dentry.num_slabs
30529 +190.0% 88521 ? 4% slabinfo.filp.active_objs
961.17 +188.5% 2773 ? 4% slabinfo.filp.active_slabs
30769 +188.4% 88751 ? 4% slabinfo.filp.num_objs
961.17 +188.5% 2773 ? 4% slabinfo.filp.num_slabs
9308 +107.2% 19285 ? 8% slabinfo.kmalloc-256.active_objs
291.00 ? 2% +109.3% 609.17 ? 8% slabinfo.kmalloc-256.active_slabs
9325 ? 2% +109.2% 19506 ? 8% slabinfo.kmalloc-256.num_objs
291.00 ? 2% +109.3% 609.17 ? 8% slabinfo.kmalloc-256.num_slabs
4640 +138.0% 11043 ? 5% slabinfo.kmalloc-rcl-512.active_objs
144.67 +138.8% 345.50 ? 5% slabinfo.kmalloc-rcl-512.active_slabs
4640 +138.5% 11069 ? 5% slabinfo.kmalloc-rcl-512.num_objs
144.67 +138.8% 345.50 ? 5% slabinfo.kmalloc-rcl-512.num_slabs
4045 -12.2% 3551 ? 6% slabinfo.sock_inode_cache.active_objs
4045 -12.2% 3551 ? 6% slabinfo.sock_inode_cache.num_objs
6051 -25.8% 4488 ? 2% slabinfo.trace_event_file.active_objs
6051 -25.8% 4488 ? 2% slabinfo.trace_event_file.num_objs
11.74 ? 3% +30.8% 15.35 ? 4% perf-stat.i.MPKI
2.247e+09 ? 2% +53.1% 3.44e+09 ? 3% perf-stat.i.branch-instructions
0.96 -0.1 0.85 ? 3% perf-stat.i.branch-miss-rate%
21090920 ? 2% +31.8% 27799658 ? 3% perf-stat.i.branch-misses
25783788 ? 6% +94.6% 50163044 ? 3% perf-stat.i.cache-misses
1.249e+08 ? 6% +98.8% 2.482e+08 perf-stat.i.cache-references
292463 -99.4% 1759 perf-stat.i.context-switches
2.52 ? 2% -33.0% 1.69 ? 4% perf-stat.i.cpi
219.80 -49.5% 110.96 perf-stat.i.cpu-migrations
1253 ? 12% -49.8% 628.55 ? 3% perf-stat.i.cycles-between-cache-misses
2.792e+09 ? 2% +55.5% 4.341e+09 ? 3% perf-stat.i.dTLB-loads
0.01 ? 6% +0.0 0.01 ? 14% perf-stat.i.dTLB-store-miss-rate%
60929 ? 9% +152.5% 153866 ? 16% perf-stat.i.dTLB-store-misses
1.133e+09 ? 3% +110.2% 2.381e+09 ? 3% perf-stat.i.dTLB-stores
75.05 +11.3 86.38 perf-stat.i.iTLB-load-miss-rate%
7531434 ? 2% +107.5% 15628073 ? 3% perf-stat.i.iTLB-load-misses
1.047e+10 ? 2% +53.2% 1.604e+10 ? 3% perf-stat.i.instructions
1473 -24.7% 1109 ? 3% perf-stat.i.instructions-per-iTLB-miss
0.40 ? 2% +47.9% 0.59 ? 4% perf-stat.i.ipc
11.05 +4.9% 11.60 ? 3% perf-stat.i.major-faults
184.97 ? 5% +37.2% 253.81 ? 2% perf-stat.i.metric.K/sec
65.58 ? 2% +65.3% 108.41 ? 3% perf-stat.i.metric.M/sec
90.65 -1.9 88.79 perf-stat.i.node-load-miss-rate%
8639400 ? 6% +24.0% 10708698 ? 4% perf-stat.i.node-load-misses
781197 ? 5% +49.5% 1167934 ? 14% perf-stat.i.node-loads
96.61 -2.3 94.33 perf-stat.i.node-store-miss-rate%
4553793 ? 6% +79.7% 8184022 perf-stat.i.node-store-misses
41459 ? 9% +438.9% 223414 ? 8% perf-stat.i.node-stores
13312 ? 3% +145.3% 32654 ? 3% perf-stat.i.page-faults
11.92 ? 3% +30.1% 15.50 ? 4% perf-stat.overall.MPKI
0.94 -0.1 0.81 perf-stat.overall.branch-miss-rate%
2.54 ? 2% -33.6% 1.69 ? 4% perf-stat.overall.cpi
1036 ? 6% -48.0% 539.37 ? 3% perf-stat.overall.cycles-between-cache-misses
76.14 +11.5 87.59 perf-stat.overall.iTLB-load-miss-rate%
1390 -26.2% 1026 ? 3% perf-stat.overall.instructions-per-iTLB-miss
0.39 ? 2% +50.9% 0.59 ? 4% perf-stat.overall.ipc
91.69 -1.5 90.21 perf-stat.overall.node-load-miss-rate%
99.09 -1.8 97.34 perf-stat.overall.node-store-miss-rate%
2.211e+09 ? 2% +53.1% 3.386e+09 ? 3% perf-stat.ps.branch-instructions
20754755 ? 2% +31.8% 27355760 ? 3% perf-stat.ps.branch-misses
25369981 ? 6% +94.6% 49380619 ? 3% perf-stat.ps.cache-misses
1.229e+08 ? 6% +98.9% 2.443e+08 perf-stat.ps.cache-references
287786 -99.4% 1730 perf-stat.ps.context-switches
216.30 -49.5% 109.24 perf-stat.ps.cpu-migrations
2.748e+09 ? 2% +55.5% 4.272e+09 ? 3% perf-stat.ps.dTLB-loads
59958 ? 9% +152.5% 151372 ? 16% perf-stat.ps.dTLB-store-misses
1.115e+09 ? 3% +110.2% 2.343e+09 ? 3% perf-stat.ps.dTLB-stores
7411100 ? 2% +107.6% 15381874 ? 3% perf-stat.ps.iTLB-load-misses
1.03e+10 ? 2% +53.3% 1.579e+10 ? 3% perf-stat.ps.instructions
3119 -0.8% 3094 perf-stat.ps.minor-faults
8500724 ? 6% +24.0% 10541253 ? 4% perf-stat.ps.node-load-misses
768670 ? 5% +49.6% 1149632 ? 14% perf-stat.ps.node-loads
4480680 ? 6% +79.8% 8056175 perf-stat.ps.node-store-misses
40808 ? 9% +438.7% 219828 ? 8% perf-stat.ps.node-stores
13100 ? 3% +145.3% 32137 ? 3% perf-stat.ps.page-faults
6.511e+11 ? 2% +52.6% 9.935e+11 ? 3% perf-stat.total.instructions
19879 ? 5% -81.1% 3757 ? 8% softirqs.CPU0.RCU
14745 ? 4% -21.7% 11542 ? 12% softirqs.CPU0.SCHED
20773 ? 11% -85.4% 3028 ? 18% softirqs.CPU1.RCU
13361 ? 6% -34.3% 8778 ? 11% softirqs.CPU1.SCHED
19242 ? 12% -88.2% 2276 ? 7% softirqs.CPU10.RCU
11621 ? 3% -23.4% 8899 ? 3% softirqs.CPU10.SCHED
20518 ? 8% -88.5% 2366 ? 18% softirqs.CPU11.RCU
11882 -28.0% 8559 ? 15% softirqs.CPU11.SCHED
19804 ? 11% -88.3% 2311 ? 10% softirqs.CPU12.RCU
11886 ? 3% -21.5% 9331 ? 3% softirqs.CPU12.SCHED
19546 ? 6% -87.4% 2464 ? 16% softirqs.CPU13.RCU
11649 -24.9% 8748 ? 18% softirqs.CPU13.SCHED
20062 ? 6% -88.9% 2225 ? 6% softirqs.CPU14.RCU
19471 ? 9% -87.7% 2388 ? 17% softirqs.CPU15.RCU
11662 ? 2% -30.2% 8135 ? 21% softirqs.CPU15.SCHED
21044 ? 9% -87.2% 2691 ? 18% softirqs.CPU16.RCU
20706 ? 12% -85.8% 2937 ? 19% softirqs.CPU17.RCU
19892 ? 11% -86.0% 2792 ? 20% softirqs.CPU18.RCU
11724 ? 3% -24.2% 8884 ? 11% softirqs.CPU18.SCHED
20594 ? 9% -88.7% 2337 ? 11% softirqs.CPU19.RCU
11876 ? 3% -18.4% 9695 ? 7% softirqs.CPU19.SCHED
20272 ? 9% -88.4% 2341 ? 8% softirqs.CPU2.RCU
20147 ? 8% -88.0% 2416 ? 12% softirqs.CPU20.RCU
11597 ? 2% -22.5% 8991 ? 4% softirqs.CPU20.SCHED
21232 ? 8% -89.3% 2278 ? 11% softirqs.CPU21.RCU
11941 ? 2% -24.5% 9014 ? 8% softirqs.CPU21.SCHED
21322 ? 10% -88.7% 2412 ? 11% softirqs.CPU22.RCU
21544 ? 11% -89.2% 2330 ? 6% softirqs.CPU23.RCU
11983 ? 3% -29.5% 8449 ? 16% softirqs.CPU23.SCHED
13707 ? 6% -79.3% 2832 ? 12% softirqs.CPU24.RCU
10804 ? 3% -39.4% 6545 ? 26% softirqs.CPU24.SCHED
13754 ? 4% -81.0% 2609 ? 19% softirqs.CPU25.RCU
10662 ? 2% -25.8% 7907 ? 14% softirqs.CPU25.SCHED
13168 ? 8% -81.3% 2458 ? 14% softirqs.CPU26.RCU
10474 ? 3% -18.2% 8564 ? 14% softirqs.CPU26.SCHED
14032 ? 9% -80.7% 2705 ? 16% softirqs.CPU27.RCU
13950 ? 8% -83.9% 2242 ? 7% softirqs.CPU28.RCU
10658 ? 2% -14.3% 9132 ? 5% softirqs.CPU28.SCHED
13273 ? 6% -84.1% 2104 ? 4% softirqs.CPU29.RCU
20047 ? 9% -85.7% 2861 ? 32% softirqs.CPU3.RCU
12083 ? 2% -33.2% 8073 ? 24% softirqs.CPU3.SCHED
13881 ? 6% -84.5% 2146 ? 7% softirqs.CPU30.RCU
13523 ? 7% -84.2% 2133 ? 6% softirqs.CPU31.RCU
12672 ? 5% -82.1% 2267 ? 25% softirqs.CPU32.RCU
13126 ? 7% -84.5% 2034 ? 11% softirqs.CPU33.RCU
12706 ? 10% -82.9% 2171 ? 17% softirqs.CPU34.RCU
10497 ? 2% -17.6% 8647 ? 21% softirqs.CPU34.SCHED
12699 ? 7% -83.0% 2164 ? 14% softirqs.CPU35.RCU
10613 ? 3% -10.5% 9501 ? 9% softirqs.CPU35.SCHED
12472 ? 8% -82.9% 2137 ? 8% softirqs.CPU36.RCU
10450 ? 3% -12.6% 9129 ? 8% softirqs.CPU36.SCHED
12641 ? 8% -82.8% 2172 ? 4% softirqs.CPU37.RCU
12374 ? 7% -83.0% 2104 ? 7% softirqs.CPU38.RCU
12532 ? 7% -82.3% 2214 ? 21% softirqs.CPU39.RCU
10407 ? 3% -19.9% 8338 ? 10% softirqs.CPU39.SCHED
20339 ? 15% -87.5% 2544 ? 26% softirqs.CPU4.RCU
12051 ? 4% -26.5% 8856 ? 6% softirqs.CPU4.SCHED
13264 ? 7% -84.1% 2108 ? 12% softirqs.CPU40.RCU
13351 ? 7% -83.9% 2149 ? 6% softirqs.CPU41.RCU
13317 ? 8% -84.1% 2122 ? 10% softirqs.CPU42.RCU
10595 ? 3% -21.0% 8375 ? 12% softirqs.CPU42.SCHED
12939 ? 10% -83.9% 2079 ? 13% softirqs.CPU43.RCU
10415 ? 4% -17.8% 8566 ? 9% softirqs.CPU43.SCHED
12572 ? 7% -83.8% 2039 ? 10% softirqs.CPU44.RCU
10484 ? 2% -16.5% 8756 ? 5% softirqs.CPU44.SCHED
13022 ? 7% -84.2% 2059 ? 6% softirqs.CPU45.RCU
10552 ? 3% -17.6% 8692 ? 11% softirqs.CPU45.SCHED
12842 ? 5% -83.4% 2129 ? 9% softirqs.CPU46.RCU
12593 ? 4% -82.3% 2232 ? 19% softirqs.CPU47.RCU
19365 ? 15% -89.3% 2066 ? 11% softirqs.CPU48.RCU
11060 ? 7% -24.5% 8348 ? 8% softirqs.CPU48.SCHED
19953 ? 12% -89.5% 2089 ? 3% softirqs.CPU49.RCU
11517 ? 4% -27.7% 8325 ? 3% softirqs.CPU49.SCHED
20599 ? 12% -87.8% 2521 ? 20% softirqs.CPU5.RCU
19806 ? 10% -88.2% 2334 ? 14% softirqs.CPU50.RCU
11157 ? 5% -23.4% 8548 ? 8% softirqs.CPU50.SCHED
19892 ? 11% -89.3% 2138 ? 6% softirqs.CPU51.RCU
19758 ? 15% -86.8% 2602 ? 21% softirqs.CPU52.RCU
19122 ? 11% -86.8% 2518 ? 25% softirqs.CPU53.RCU
20141 ? 12% -89.0% 2206 ? 8% softirqs.CPU54.RCU
11311 ? 9% -24.4% 8550 ? 6% softirqs.CPU54.SCHED
18913 ? 10% -87.0% 2449 ? 38% softirqs.CPU55.RCU
11165 ? 6% -28.3% 8010 ? 12% softirqs.CPU55.SCHED
18701 ? 13% -87.7% 2293 ? 17% softirqs.CPU56.RCU
11247 ? 3% -21.8% 8791 ? 4% softirqs.CPU56.SCHED
19742 ? 10% -88.0% 2368 ? 15% softirqs.CPU57.RCU
11543 ? 3% -28.6% 8244 ? 6% softirqs.CPU57.SCHED
19075 ? 9% -88.5% 2198 ? 6% softirqs.CPU58.RCU
18777 ? 9% -86.9% 2454 ? 23% softirqs.CPU59.RCU
19641 ? 8% -88.2% 2324 ? 8% softirqs.CPU6.RCU
11577 ? 4% -24.9% 8700 ? 7% softirqs.CPU6.SCHED
18783 ? 16% -87.2% 2409 ? 9% softirqs.CPU60.RCU
11459 ? 5% -20.9% 9063 ? 8% softirqs.CPU60.SCHED
19265 ? 17% -87.8% 2359 ? 15% softirqs.CPU61.RCU
11501 ? 3% -32.0% 7821 ? 12% softirqs.CPU61.SCHED
18970 ? 8% -87.8% 2323 ? 16% softirqs.CPU62.RCU
11106 ? 5% -21.6% 8702 ? 9% softirqs.CPU62.SCHED
19648 ? 11% -87.2% 2524 ? 34% softirqs.CPU63.RCU
11679 ? 3% -24.9% 8776 ? 12% softirqs.CPU63.SCHED
20771 ? 7% -89.6% 2166 ? 11% softirqs.CPU64.RCU
11725 ? 6% -21.3% 9227 ? 3% softirqs.CPU64.SCHED
20874 ? 11% -88.6% 2389 ? 15% softirqs.CPU65.RCU
11801 ? 3% -28.6% 8431 ? 7% softirqs.CPU65.SCHED
21376 ? 5% -89.0% 2361 ? 14% softirqs.CPU66.RCU
11815 ? 2% -23.2% 9072 ? 6% softirqs.CPU66.SCHED
20018 ? 9% -88.7% 2266 ? 4% softirqs.CPU67.RCU
11509 ? 3% -25.3% 8595 ? 10% softirqs.CPU67.SCHED
19886 ? 11% -87.2% 2537 ? 21% softirqs.CPU68.RCU
11531 ? 4% -20.8% 9134 ? 4% softirqs.CPU68.SCHED
19427 ? 6% -88.7% 2198 ? 6% softirqs.CPU69.RCU
11508 ? 2% -26.3% 8483 ? 15% softirqs.CPU69.SCHED
20383 ? 9% -86.6% 2733 ? 19% softirqs.CPU7.RCU
12005 ? 3% -34.3% 7888 ? 15% softirqs.CPU7.SCHED
20153 ? 11% -88.2% 2387 ? 12% softirqs.CPU70.RCU
11692 ? 3% -28.1% 8401 ? 16% softirqs.CPU70.SCHED
21687 ? 10% -88.1% 2580 ? 18% softirqs.CPU71.RCU
11998 ? 4% -25.1% 8981 ? 6% softirqs.CPU71.SCHED
13726 ? 7% -81.7% 2518 ? 28% softirqs.CPU72.RCU
10651 ? 3% -21.6% 8347 ? 8% softirqs.CPU72.SCHED
12261 ? 7% -82.8% 2114 ? 15% softirqs.CPU73.RCU
10278 ? 2% -19.2% 8304 ? 5% softirqs.CPU73.SCHED
12825 ? 4% -83.0% 2185 ? 18% softirqs.CPU74.RCU
10326 -14.9% 8790 ? 6% softirqs.CPU74.SCHED
12875 ? 9% -84.5% 2001 ? 4% softirqs.CPU75.RCU
10516 ? 2% -17.2% 8710 ? 12% softirqs.CPU75.SCHED
12981 ? 7% -84.4% 2020 ? 7% softirqs.CPU76.RCU
10425 ? 2% -13.1% 9061 ? 6% softirqs.CPU76.SCHED
13111 ? 7% -84.8% 1994 ? 5% softirqs.CPU77.RCU
10498 ? 2% -12.1% 9228 ? 5% softirqs.CPU77.SCHED
12867 ? 8% -83.7% 2091 ? 14% softirqs.CPU78.RCU
10435 ? 2% -11.3% 9261 softirqs.CPU78.SCHED
12742 ? 4% -84.0% 2036 ? 6% softirqs.CPU79.RCU
10248 ? 2% -13.5% 8870 ? 6% softirqs.CPU79.SCHED
19762 ? 9% -87.5% 2470 ? 21% softirqs.CPU8.RCU
11712 ? 2% -22.9% 9024 ? 11% softirqs.CPU8.SCHED
12359 ? 9% -84.4% 1928 ? 4% softirqs.CPU80.RCU
10233 ? 3% -9.8% 9235 ? 2% softirqs.CPU80.SCHED
12142 ? 5% -84.1% 1935 ? 3% softirqs.CPU81.RCU
12698 ? 8% -84.2% 2000 ? 11% softirqs.CPU82.RCU
10555 ? 3% -13.2% 9163 ? 9% softirqs.CPU82.SCHED
12563 ? 8% -84.6% 1929 ? 6% softirqs.CPU83.RCU
10406 ? 2% -11.7% 9189 ? 5% softirqs.CPU83.SCHED
12341 ? 8% -83.8% 2001 ? 10% softirqs.CPU84.RCU
10226 ? 3% -11.3% 9071 ? 3% softirqs.CPU84.SCHED
12409 ? 4% -84.0% 1981 ? 9% softirqs.CPU85.RCU
10146 ? 2% -15.5% 8569 ? 7% softirqs.CPU85.SCHED
12025 ? 8% -83.4% 1990 ? 12% softirqs.CPU86.RCU
10220 ? 3% -16.9% 8497 ? 11% softirqs.CPU86.SCHED
12530 ? 5% -83.7% 2043 ? 6% softirqs.CPU87.RCU
10223 -19.6% 8222 ? 9% softirqs.CPU87.SCHED
11948 ? 10% -83.3% 1991 ? 11% softirqs.CPU88.RCU
10373 ? 3% -15.1% 8811 ? 9% softirqs.CPU88.SCHED
12595 ? 10% -83.1% 2131 ? 11% softirqs.CPU89.RCU
10569 ? 3% -10.4% 9472 ? 5% softirqs.CPU89.SCHED
20331 ? 10% -88.1% 2429 ? 10% softirqs.CPU9.RCU
11652 ? 2% -34.4% 7638 ? 12% softirqs.CPU9.SCHED
13229 ? 7% -84.7% 2022 ? 9% softirqs.CPU90.RCU
10554 ? 3% -19.7% 8478 ? 10% softirqs.CPU90.SCHED
12425 ? 11% -83.9% 1995 ? 6% softirqs.CPU91.RCU
10182 ? 4% -12.0% 8959 ? 2% softirqs.CPU91.SCHED
12145 ? 6% -83.5% 1999 ? 14% softirqs.CPU92.RCU
10348 -12.5% 9052 ? 5% softirqs.CPU92.SCHED
12686 ? 10% -83.8% 2054 ? 18% softirqs.CPU93.RCU
10320 ? 2% -14.1% 8869 ? 7% softirqs.CPU93.SCHED
12096 ? 6% -83.2% 2031 ? 17% softirqs.CPU94.RCU
10247 ? 2% -10.1% 9213 softirqs.CPU94.SCHED
12522 ? 4% -82.4% 2207 ? 13% softirqs.CPU95.RCU
1577826 ? 3% -86.0% 220116 ? 3% softirqs.RCU
1058063 -19.7% 849975 softirqs.SCHED
1045243 ? 6% -92.4% 79317 ? 10% interrupts.CAL:Function_call_interrupts
14348 ? 8% -95.5% 648.00 ? 31% interrupts.CPU0.CAL:Function_call_interrupts
117.83 ? 16% -76.7% 27.50 ? 75% interrupts.CPU0.RES:Rescheduling_interrupts
16484 ? 11% -96.9% 510.00 ? 26% interrupts.CPU1.CAL:Function_call_interrupts
126.17 ? 15% -91.3% 11.00 ? 50% interrupts.CPU1.RES:Rescheduling_interrupts
16068 ? 21% -96.6% 545.17 ? 7% interrupts.CPU10.CAL:Function_call_interrupts
1103 ? 30% -83.2% 185.33 ?106% interrupts.CPU10.NMI:Non-maskable_interrupts
1103 ? 30% -83.2% 185.33 ?106% interrupts.CPU10.PMI:Performance_monitoring_interrupts
118.00 ? 24% -97.0% 3.50 ? 76% interrupts.CPU10.RES:Rescheduling_interrupts
16994 ? 11% -96.9% 523.33 ? 3% interrupts.CPU11.CAL:Function_call_interrupts
1361 ? 13% -78.7% 289.67 ?151% interrupts.CPU11.NMI:Non-maskable_interrupts
1361 ? 13% -78.7% 289.67 ?151% interrupts.CPU11.PMI:Performance_monitoring_interrupts
123.50 ? 13% -95.3% 5.83 ?129% interrupts.CPU11.RES:Rescheduling_interrupts
15303 ? 13% -94.4% 860.67 ? 66% interrupts.CPU12.CAL:Function_call_interrupts
1126 ? 35% -91.3% 97.67 ? 31% interrupts.CPU12.NMI:Non-maskable_interrupts
1126 ? 35% -91.3% 97.67 ? 31% interrupts.CPU12.PMI:Performance_monitoring_interrupts
108.83 ? 15% -97.5% 2.67 ?141% interrupts.CPU12.RES:Rescheduling_interrupts
16337 ? 17% -88.4% 1896 ?104% interrupts.CPU13.CAL:Function_call_interrupts
98.00 ? 23% -94.9% 5.00 ? 56% interrupts.CPU13.RES:Rescheduling_interrupts
16067 ? 12% -83.3% 2678 ?157% interrupts.CPU14.CAL:Function_call_interrupts
110.33 ? 10% -96.2% 4.17 ? 88% interrupts.CPU14.RES:Rescheduling_interrupts
15390 ? 16% -96.6% 517.17 ? 4% interrupts.CPU15.CAL:Function_call_interrupts
115.33 ? 21% -98.0% 2.33 ?141% interrupts.CPU15.RES:Rescheduling_interrupts
17398 ? 12% -82.1% 3120 ?159% interrupts.CPU16.CAL:Function_call_interrupts
1490 ? 13% -82.3% 263.83 ?154% interrupts.CPU16.NMI:Non-maskable_interrupts
1490 ? 13% -82.3% 263.83 ?154% interrupts.CPU16.PMI:Performance_monitoring_interrupts
136.00 ? 12% -98.4% 2.17 ? 81% interrupts.CPU16.RES:Rescheduling_interrupts
16726 ? 18% -95.7% 716.33 ? 57% interrupts.CPU17.CAL:Function_call_interrupts
125.17 ? 26% -95.3% 5.83 ?144% interrupts.CPU17.RES:Rescheduling_interrupts
15301 ? 17% -95.3% 717.00 ? 39% interrupts.CPU18.CAL:Function_call_interrupts
123.67 ? 20% -96.6% 4.17 ? 86% interrupts.CPU18.RES:Rescheduling_interrupts
17143 ? 17% -85.9% 2421 ? 84% interrupts.CPU19.CAL:Function_call_interrupts
114.83 ? 16% -96.2% 4.33 ? 98% interrupts.CPU19.RES:Rescheduling_interrupts
15882 ? 15% -96.1% 613.17 ? 26% interrupts.CPU2.CAL:Function_call_interrupts
1411 ? 19% -85.7% 201.83 ? 79% interrupts.CPU2.NMI:Non-maskable_interrupts
1411 ? 19% -85.7% 201.83 ? 79% interrupts.CPU2.PMI:Performance_monitoring_interrupts
121.33 ? 18% -92.0% 9.67 ?122% interrupts.CPU2.RES:Rescheduling_interrupts
15669 ? 11% -96.7% 510.83 interrupts.CPU20.CAL:Function_call_interrupts
1434 ? 17% -93.4% 94.50 ? 31% interrupts.CPU20.NMI:Non-maskable_interrupts
1434 ? 17% -93.4% 94.50 ? 31% interrupts.CPU20.PMI:Performance_monitoring_interrupts
108.33 ? 16% -98.0% 2.17 ? 81% interrupts.CPU20.RES:Rescheduling_interrupts
17038 ? 18% -95.9% 704.00 ? 38% interrupts.CPU21.CAL:Function_call_interrupts
126.00 ? 28% -98.0% 2.50 ? 91% interrupts.CPU21.RES:Rescheduling_interrupts
17323 ? 23% -96.7% 578.00 ? 19% interrupts.CPU22.CAL:Function_call_interrupts
119.67 ? 25% -92.9% 8.50 ?109% interrupts.CPU22.RES:Rescheduling_interrupts
17451 ? 20% -96.9% 535.17 ? 6% interrupts.CPU23.CAL:Function_call_interrupts
133.33 ? 19% -97.9% 2.83 ? 62% interrupts.CPU23.RES:Rescheduling_interrupts
5645 ? 29% -90.7% 527.33 ? 3% interrupts.CPU24.CAL:Function_call_interrupts
6121 ? 22% -91.4% 526.83 ? 4% interrupts.CPU25.CAL:Function_call_interrupts
5667 ? 33% -86.8% 749.83 ? 69% interrupts.CPU26.CAL:Function_call_interrupts
6011 ? 32% -91.3% 523.50 ? 5% interrupts.CPU28.CAL:Function_call_interrupts
981.50 ? 19% -87.3% 124.67 ? 71% interrupts.CPU28.NMI:Non-maskable_interrupts
981.50 ? 19% -87.3% 124.67 ? 71% interrupts.CPU28.PMI:Performance_monitoring_interrupts
841.33 ? 34% -86.9% 110.50 ? 23% interrupts.CPU29.NMI:Non-maskable_interrupts
841.33 ? 34% -86.9% 110.50 ? 23% interrupts.CPU29.PMI:Performance_monitoring_interrupts
16647 ? 14% -96.8% 527.33 ? 5% interrupts.CPU3.CAL:Function_call_interrupts
1419 ? 20% -80.3% 279.83 ?147% interrupts.CPU3.NMI:Non-maskable_interrupts
1419 ? 20% -80.3% 279.83 ?147% interrupts.CPU3.PMI:Performance_monitoring_interrupts
127.17 ? 15% -97.6% 3.00 ? 54% interrupts.CPU3.RES:Rescheduling_interrupts
5643 ? 23% -79.8% 1142 ?108% interrupts.CPU30.CAL:Function_call_interrupts
5844 ? 30% -75.1% 1457 ?143% interrupts.CPU31.CAL:Function_call_interrupts
820.00 ? 20% -87.4% 103.67 ? 34% interrupts.CPU32.NMI:Non-maskable_interrupts
820.00 ? 20% -87.4% 103.67 ? 34% interrupts.CPU32.PMI:Performance_monitoring_interrupts
794.83 ? 21% -87.1% 102.50 ? 34% interrupts.CPU33.NMI:Non-maskable_interrupts
794.83 ? 21% -87.1% 102.50 ? 34% interrupts.CPU33.PMI:Performance_monitoring_interrupts
5306 ? 26% -82.1% 949.17 ?101% interrupts.CPU34.CAL:Function_call_interrupts
5519 ? 29% -77.0% 1272 ? 86% interrupts.CPU35.CAL:Function_call_interrupts
5379 ? 29% -84.7% 824.67 ? 89% interrupts.CPU36.CAL:Function_call_interrupts
666.67 ? 45% -83.9% 107.33 ? 23% interrupts.CPU36.NMI:Non-maskable_interrupts
666.67 ? 45% -83.9% 107.33 ? 23% interrupts.CPU36.PMI:Performance_monitoring_interrupts
5609 ? 33% -75.8% 1355 ? 88% interrupts.CPU37.CAL:Function_call_interrupts
6058 ? 37% -91.4% 518.67 ? 2% interrupts.CPU39.CAL:Function_call_interrupts
16672 ? 19% -96.0% 665.83 ? 26% interrupts.CPU4.CAL:Function_call_interrupts
124.17 ? 22% -90.1% 12.33 ?107% interrupts.CPU4.RES:Rescheduling_interrupts
5562 ? 31% -79.9% 1119 ?129% interrupts.CPU40.CAL:Function_call_interrupts
558.33 ? 40% -80.9% 106.83 ? 34% interrupts.CPU40.NMI:Non-maskable_interrupts
558.33 ? 40% -80.9% 106.83 ? 34% interrupts.CPU40.PMI:Performance_monitoring_interrupts
5934 ? 31% -91.2% 521.50 ? 2% interrupts.CPU42.CAL:Function_call_interrupts
5945 ? 33% -83.9% 957.00 ?103% interrupts.CPU43.CAL:Function_call_interrupts
5361 ? 32% -90.1% 533.17 ? 5% interrupts.CPU44.CAL:Function_call_interrupts
5511 ? 24% -90.7% 515.17 interrupts.CPU45.CAL:Function_call_interrupts
5339 ? 25% -75.9% 1286 ?134% interrupts.CPU46.CAL:Function_call_interrupts
5086 ? 23% -89.8% 520.17 ? 2% interrupts.CPU47.CAL:Function_call_interrupts
15990 ? 21% -96.8% 514.67 interrupts.CPU48.CAL:Function_call_interrupts
101.00 ? 22% -94.6% 5.50 ? 81% interrupts.CPU48.RES:Rescheduling_interrupts
17628 ? 21% -96.9% 541.33 ? 10% interrupts.CPU49.CAL:Function_call_interrupts
101.83 ? 23% -93.8% 6.33 ? 58% interrupts.CPU49.RES:Rescheduling_interrupts
16369 ? 19% -94.0% 980.17 ? 83% interrupts.CPU5.CAL:Function_call_interrupts
1405 ? 17% -93.1% 96.50 ? 32% interrupts.CPU5.NMI:Non-maskable_interrupts
1405 ? 17% -93.1% 96.50 ? 32% interrupts.CPU5.PMI:Performance_monitoring_interrupts
113.00 ? 19% -98.4% 1.83 ? 73% interrupts.CPU5.RES:Rescheduling_interrupts
17436 ? 17% -96.8% 561.50 ? 16% interrupts.CPU50.CAL:Function_call_interrupts
1250 ? 31% -76.1% 298.83 ?129% interrupts.CPU50.NMI:Non-maskable_interrupts
1250 ? 31% -76.1% 298.83 ?129% interrupts.CPU50.PMI:Performance_monitoring_interrupts
103.17 ? 20% -91.8% 8.50 ?116% interrupts.CPU50.RES:Rescheduling_interrupts
15642 ? 16% -96.6% 533.33 ? 4% interrupts.CPU51.CAL:Function_call_interrupts
1377 ? 15% -88.8% 154.33 ?100% interrupts.CPU51.NMI:Non-maskable_interrupts
1377 ? 15% -88.8% 154.33 ?100% interrupts.CPU51.PMI:Performance_monitoring_interrupts
107.33 ? 23% -92.9% 7.67 ?132% interrupts.CPU51.RES:Rescheduling_interrupts
16130 ? 17% -96.8% 515.33 ? 3% interrupts.CPU52.CAL:Function_call_interrupts
100.67 ? 18% -95.7% 4.33 ? 51% interrupts.CPU52.RES:Rescheduling_interrupts
15003 ? 21% -96.6% 515.50 interrupts.CPU53.CAL:Function_call_interrupts
1411 ? 21% -93.2% 95.67 ? 31% interrupts.CPU53.NMI:Non-maskable_interrupts
1411 ? 21% -93.2% 95.67 ? 31% interrupts.CPU53.PMI:Performance_monitoring_interrupts
95.00 ? 27% -92.8% 6.83 ? 82% interrupts.CPU53.RES:Rescheduling_interrupts
16791 ? 22% -96.7% 547.83 ? 9% interrupts.CPU54.CAL:Function_call_interrupts
102.17 ? 41% -94.9% 5.17 ? 54% interrupts.CPU54.RES:Rescheduling_interrupts
15999 ? 16% -96.7% 525.83 ? 3% interrupts.CPU55.CAL:Function_call_interrupts
93.00 ? 17% -92.7% 6.83 ? 39% interrupts.CPU55.RES:Rescheduling_interrupts
15850 ? 22% -96.9% 498.50 ? 4% interrupts.CPU56.CAL:Function_call_interrupts
87.33 ? 21% -93.3% 5.83 ? 84% interrupts.CPU56.RES:Rescheduling_interrupts
16309 ? 17% -96.8% 519.67 ? 2% interrupts.CPU57.CAL:Function_call_interrupts
104.50 ? 15% -94.9% 5.33 ? 38% interrupts.CPU57.RES:Rescheduling_interrupts
14718 ? 19% -96.5% 518.17 ? 3% interrupts.CPU58.CAL:Function_call_interrupts
1386 ? 23% -90.4% 132.50 ? 63% interrupts.CPU58.NMI:Non-maskable_interrupts
1386 ? 23% -90.4% 132.50 ? 63% interrupts.CPU58.PMI:Performance_monitoring_interrupts
100.00 ? 25% -96.0% 4.00 ? 50% interrupts.CPU58.RES:Rescheduling_interrupts
14707 ? 10% -96.5% 517.67 ? 3% interrupts.CPU59.CAL:Function_call_interrupts
1408 ? 20% -80.7% 271.17 ?138% interrupts.CPU59.NMI:Non-maskable_interrupts
1408 ? 20% -80.7% 271.17 ?138% interrupts.CPU59.PMI:Performance_monitoring_interrupts
103.33 ? 7% -95.0% 5.17 ?131% interrupts.CPU59.RES:Rescheduling_interrupts
16059 ? 15% -96.6% 539.33 ? 4% interrupts.CPU6.CAL:Function_call_interrupts
136.00 ? 21% -95.8% 5.67 ? 57% interrupts.CPU6.RES:Rescheduling_interrupts
15466 ? 26% -96.5% 536.17 ? 8% interrupts.CPU60.CAL:Function_call_interrupts
1216 ? 25% -91.2% 106.67 ? 24% interrupts.CPU60.NMI:Non-maskable_interrupts
1216 ? 25% -91.2% 106.67 ? 24% interrupts.CPU60.PMI:Performance_monitoring_interrupts
90.00 ? 32% -96.5% 3.17 ? 69% interrupts.CPU60.RES:Rescheduling_interrupts
16533 ? 29% -96.9% 507.83 ? 17% interrupts.CPU61.CAL:Function_call_interrupts
97.33 ? 41% -87.0% 12.67 ? 76% interrupts.CPU61.RES:Rescheduling_interrupts
15917 ? 15% -96.5% 555.67 ? 13% interrupts.CPU62.CAL:Function_call_interrupts
85.17 ? 15% -91.6% 7.17 ? 85% interrupts.CPU62.RES:Rescheduling_interrupts
16431 ? 18% -96.9% 515.50 ? 3% interrupts.CPU63.CAL:Function_call_interrupts
107.67 ? 24% -96.4% 3.83 ?106% interrupts.CPU63.RES:Rescheduling_interrupts
17023 ? 14% -96.4% 610.83 ? 35% interrupts.CPU64.CAL:Function_call_interrupts
102.33 ? 15% -90.4% 9.83 ?106% interrupts.CPU64.RES:Rescheduling_interrupts
17217 ? 18% -97.0% 514.67 ? 7% interrupts.CPU65.CAL:Function_call_interrupts
121.33 ? 19% -94.5% 6.67 ? 69% interrupts.CPU65.RES:Rescheduling_interrupts
17319 ? 10% -97.0% 527.33 ? 22% interrupts.CPU66.CAL:Function_call_interrupts
100.33 ? 17% -95.0% 5.00 ? 60% interrupts.CPU66.RES:Rescheduling_interrupts
16534 ? 17% -96.9% 520.00 ? 2% interrupts.CPU67.CAL:Function_call_interrupts
88.67 ? 24% -91.2% 7.83 ? 73% interrupts.CPU67.RES:Rescheduling_interrupts
15719 ? 21% -96.7% 516.33 ? 2% interrupts.CPU68.CAL:Function_call_interrupts
1225 ? 28% -92.5% 91.33 ? 11% interrupts.CPU68.NMI:Non-maskable_interrupts
1225 ? 28% -92.5% 91.33 ? 11% interrupts.CPU68.PMI:Performance_monitoring_interrupts
96.50 ? 25% -97.2% 2.67 ? 41% interrupts.CPU68.RES:Rescheduling_interrupts
15215 ? 15% -96.7% 509.33 interrupts.CPU69.CAL:Function_call_interrupts
84.67 ? 25% -95.1% 4.17 ? 44% interrupts.CPU69.RES:Rescheduling_interrupts
16512 ? 14% -96.8% 531.67 ? 3% interrupts.CPU7.CAL:Function_call_interrupts
116.83 ? 23% -92.2% 9.17 ? 81% interrupts.CPU7.RES:Rescheduling_interrupts
14889 ? 22% -96.5% 522.33 ? 5% interrupts.CPU70.CAL:Function_call_interrupts
99.17 ? 28% -95.6% 4.33 ? 83% interrupts.CPU70.RES:Rescheduling_interrupts
17056 ? 13% -97.0% 516.83 ? 2% interrupts.CPU71.CAL:Function_call_interrupts
91.83 ? 19% -92.6% 6.83 ? 99% interrupts.CPU71.RES:Rescheduling_interrupts
6314 ? 31% -91.7% 522.33 ? 2% interrupts.CPU72.CAL:Function_call_interrupts
4920 ? 31% -89.5% 517.17 ? 2% interrupts.CPU73.CAL:Function_call_interrupts
5577 ? 19% -90.8% 511.33 interrupts.CPU74.CAL:Function_call_interrupts
5454 ? 30% -89.8% 554.33 ? 9% interrupts.CPU75.CAL:Function_call_interrupts
5443 ? 28% -90.2% 531.83 ? 3% interrupts.CPU76.CAL:Function_call_interrupts
760.00 ? 23% -82.8% 130.83 ? 59% interrupts.CPU76.NMI:Non-maskable_interrupts
760.00 ? 23% -82.8% 130.83 ? 59% interrupts.CPU76.PMI:Performance_monitoring_interrupts
5486 ? 28% -90.4% 527.67 ? 5% interrupts.CPU77.CAL:Function_call_interrupts
778.17 ? 37% -87.3% 98.50 ? 10% interrupts.CPU77.NMI:Non-maskable_interrupts
778.17 ? 37% -87.3% 98.50 ? 10% interrupts.CPU77.PMI:Performance_monitoring_interrupts
5303 ? 33% -89.4% 564.50 ? 22% interrupts.CPU78.CAL:Function_call_interrupts
5484 ? 20% -88.7% 619.00 ? 38% interrupts.CPU79.CAL:Function_call_interrupts
16625 ? 15% -93.7% 1054 ?104% interrupts.CPU8.CAL:Function_call_interrupts
107.33 ? 22% -93.9% 6.50 ? 62% interrupts.CPU8.RES:Rescheduling_interrupts
5364 ? 33% -89.5% 563.67 ? 20% interrupts.CPU80.CAL:Function_call_interrupts
731.67 ? 27% -87.0% 95.17 ? 9% interrupts.CPU80.NMI:Non-maskable_interrupts
731.67 ? 27% -87.0% 95.17 ? 9% interrupts.CPU80.PMI:Performance_monitoring_interrupts
5111 ? 21% -89.4% 544.00 ? 9% interrupts.CPU81.CAL:Function_call_interrupts
717.33 ? 29% -86.8% 94.50 ? 8% interrupts.CPU81.NMI:Non-maskable_interrupts
717.33 ? 29% -86.8% 94.50 ? 8% interrupts.CPU81.PMI:Performance_monitoring_interrupts
5566 ? 23% -90.7% 515.00 interrupts.CPU82.CAL:Function_call_interrupts
5540 ? 29% -90.4% 530.00 ? 9% interrupts.CPU83.CAL:Function_call_interrupts
5199 ? 30% -90.1% 514.50 ? 3% interrupts.CPU84.CAL:Function_call_interrupts
737.17 ? 33% -87.5% 92.00 ? 7% interrupts.CPU84.NMI:Non-maskable_interrupts
737.17 ? 33% -87.5% 92.00 ? 7% interrupts.CPU84.PMI:Performance_monitoring_interrupts
5501 ? 27% -90.5% 524.17 ? 3% interrupts.CPU85.CAL:Function_call_interrupts
5142 ? 31% -89.8% 523.33 ? 3% interrupts.CPU86.CAL:Function_call_interrupts
5427 ? 24% -90.5% 515.00 interrupts.CPU87.CAL:Function_call_interrupts
4763 ? 31% -89.1% 519.17 ? 8% interrupts.CPU88.CAL:Function_call_interrupts
675.50 ? 26% -84.2% 106.67 ? 32% interrupts.CPU88.NMI:Non-maskable_interrupts
675.50 ? 26% -84.2% 106.67 ? 32% interrupts.CPU88.PMI:Performance_monitoring_interrupts
5099 ? 30% -89.8% 519.17 interrupts.CPU89.CAL:Function_call_interrupts
16196 ? 14% -96.3% 591.83 ? 16% interrupts.CPU9.CAL:Function_call_interrupts
117.67 ? 18% -95.9% 4.83 ? 60% interrupts.CPU9.RES:Rescheduling_interrupts
5940 ? 25% -91.1% 529.33 ? 4% interrupts.CPU90.CAL:Function_call_interrupts
5525 ? 32% -87.8% 675.50 ? 36% interrupts.CPU91.CAL:Function_call_interrupts
5040 ? 25% -89.6% 525.67 ? 9% interrupts.CPU92.CAL:Function_call_interrupts
5392 ? 33% -90.5% 511.83 interrupts.CPU93.CAL:Function_call_interrupts
4800 ? 24% -89.3% 512.50 interrupts.CPU94.CAL:Function_call_interrupts
6775 ? 13% -92.4% 517.17 ? 3% interrupts.CPU95.CAL:Function_call_interrupts
7137 ? 7% -93.8% 442.33 ? 28% interrupts.RES:Rescheduling_interrupts
29.06 ? 7% -29.1 0.00 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_slowpath.kernfs_iop_permission.inode_permission.link_path_walk
23.13 ? 7% -23.1 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.kernfs_iop_permission.inode_permission.link_path_walk.path_lookupat
26.22 ? 7% -19.4 6.80 ? 10% perf-profile.calltrace.cycles-pp.kernfs_iop_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
26.67 ? 7% -17.9 8.80 ? 10% perf-profile.calltrace.cycles-pp.inode_permission.link_path_walk.path_lookupat.filename_lookup.user_statfs
30.59 ? 7% -12.4 18.18 ? 9% perf-profile.calltrace.cycles-pp.link_path_walk.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs
11.52 ? 7% -11.5 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.kernfs_iop_permission.inode_permission.link_path_walk.path_openat
13.03 ? 7% -9.7 3.38 ? 10% perf-profile.calltrace.cycles-pp.kernfs_iop_permission.inode_permission.link_path_walk.path_openat.do_filp_open
13.28 ? 7% -8.8 4.52 ? 10% perf-profile.calltrace.cycles-pp.inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_openat2
32.52 ? 7% -8.8 23.77 ? 8% perf-profile.calltrace.cycles-pp.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs.do_syscall_64
32.59 ? 7% -8.7 23.93 ? 8% perf-profile.calltrace.cycles-pp.filename_lookup.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.28 ? 7% -5.8 9.47 ? 8% perf-profile.calltrace.cycles-pp.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open
4.45 ? 6% -3.7 0.73 ? 13% perf-profile.calltrace.cycles-pp.kernfs_iop_permission.inode_permission.may_open.do_open.path_openat
4.50 ? 6% -3.6 0.89 ? 13% perf-profile.calltrace.cycles-pp.inode_permission.may_open.do_open.path_openat.do_filp_open
4.55 ? 6% -3.5 1.01 ? 13% perf-profile.calltrace.cycles-pp.may_open.do_open.path_openat.do_filp_open.do_sys_openat2
5.18 ? 6% -1.2 3.97 ? 13% perf-profile.calltrace.cycles-pp.do_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.72 ? 7% +0.6 1.29 ? 16% perf-profile.calltrace.cycles-pp.lookup_fast.walk_component.path_lookupat.filename_lookup.user_statfs
0.00 +0.7 0.69 ? 8% perf-profile.calltrace.cycles-pp.lockref_put_return.dput.terminate_walk.path_openat.do_filp_open
0.00 +0.7 0.71 ? 13% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.__alloc_file.alloc_empty_file.path_openat.do_filp_open
0.00 +0.7 0.71 ? 7% perf-profile.calltrace.cycles-pp.lockref_get_not_dead.__legitimize_path.try_to_unlazy.link_path_walk.path_openat
0.00 +0.7 0.72 ? 13% perf-profile.calltrace.cycles-pp.lockref_get.__traverse_mounts.step_into.walk_component.path_lookupat
0.00 +0.7 0.72 ? 11% perf-profile.calltrace.cycles-pp.__task_pid_nr_ns.__x64_sys_getpriority.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.74 ? 14% perf-profile.calltrace.cycles-pp.__d_lookup.lookup_fast.walk_component.link_path_walk.path_openat
0.00 +0.7 0.75 ? 15% perf-profile.calltrace.cycles-pp.apparmor_file_alloc_security.security_file_alloc.__alloc_file.alloc_empty_file.path_openat
0.00 +0.8 0.77 ? 12% perf-profile.calltrace.cycles-pp.step_into.path_openat.do_filp_open.do_sys_openat2.do_sys_open
1.35 ? 6% +0.8 2.12 ? 11% perf-profile.calltrace.cycles-pp.lookup_fast.walk_component.link_path_walk.path_openat.do_filp_open
0.00 +0.8 0.81 ? 13% perf-profile.calltrace.cycles-pp.fs_index.__x64_sys_sysfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.81 ? 15% perf-profile.calltrace.cycles-pp.__d_lookup.lookup_fast.walk_component.path_lookupat.filename_lookup
0.00 +0.8 0.85 ? 16% perf-profile.calltrace.cycles-pp.getname_flags.user_path_at_empty.user_statfs.__do_sys_statfs.do_syscall_64
0.00 +0.9 0.87 ? 11% perf-profile.calltrace.cycles-pp.dput.step_into.walk_component.link_path_walk.path_openat
0.00 +0.9 0.88 ? 16% perf-profile.calltrace.cycles-pp.user_path_at_empty.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.9 0.89 ? 12% perf-profile.calltrace.cycles-pp.__legitimize_path.try_to_unlazy.link_path_walk.path_openat.do_filp_open
0.00 +0.9 0.90 ? 15% perf-profile.calltrace.cycles-pp.security_file_alloc.__alloc_file.alloc_empty_file.path_openat.do_filp_open
0.00 +0.9 0.92 ? 18% perf-profile.calltrace.cycles-pp.__traverse_mounts.step_into.walk_component.link_path_walk.path_lookupat
0.00 +0.9 0.92 ? 13% perf-profile.calltrace.cycles-pp.try_to_unlazy.link_path_walk.path_openat.do_filp_open.do_sys_openat2
0.00 +0.9 0.93 ? 4% perf-profile.calltrace.cycles-pp.dput.terminate_walk.path_openat.do_filp_open.do_sys_openat2
0.00 +1.0 0.97 ? 6% perf-profile.calltrace.cycles-pp.lockref_put_return.dput.terminate_walk.path_lookupat.filename_lookup
0.00 +1.0 0.98 ? 15% perf-profile.calltrace.cycles-pp.down_read.kernfs_iop_permission.inode_permission.link_path_walk.path_openat
0.00 +1.0 1.00 ? 2% perf-profile.calltrace.cycles-pp.terminate_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.00 +1.0 1.04 ? 9% perf-profile.calltrace.cycles-pp.up_read.kernfs_iop_permission.inode_permission.link_path_walk.path_openat
0.00 +1.1 1.11 ? 13% perf-profile.calltrace.cycles-pp.__traverse_mounts.step_into.walk_component.path_lookupat.filename_lookup
0.00 +1.1 1.15 ? 15% perf-profile.calltrace.cycles-pp.__x64_sys_prlimit64.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.2 1.17 ? 12% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.ext4_statfs.statfs_by_dentry.vfs_statfs.user_statfs
0.00 +1.2 1.21 ? 8% perf-profile.calltrace.cycles-pp.dput.terminate_walk.path_lookupat.filename_lookup.user_statfs
0.00 +1.2 1.22 ? 12% perf-profile.calltrace.cycles-pp.ext4_statfs.statfs_by_dentry.vfs_statfs.user_statfs.__do_sys_statfs
0.47 ? 44% +1.3 1.72 ? 13% perf-profile.calltrace.cycles-pp.shmem_statfs.statfs_by_dentry.vfs_statfs.user_statfs.__do_sys_statfs
0.00 +1.3 1.26 ? 8% perf-profile.calltrace.cycles-pp.terminate_walk.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs
0.36 ? 71% +1.3 1.63 ? 12% perf-profile.calltrace.cycles-pp.__do_sys_fstatfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.68 ? 8% +1.3 3.96 ? 14% perf-profile.calltrace.cycles-pp.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
0.49 ? 45% +1.3 1.78 ? 13% perf-profile.calltrace.cycles-pp.__alloc_file.alloc_empty_file.path_openat.do_filp_open.do_sys_openat2
0.50 ? 45% +1.3 1.80 ? 13% perf-profile.calltrace.cycles-pp.alloc_empty_file.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.00 +1.3 1.30 ? 35% perf-profile.calltrace.cycles-pp._raw_spin_lock.__d_lookup.lookup_fast.walk_component.link_path_walk
0.00 +1.3 1.31 ? 8% perf-profile.calltrace.cycles-pp.kernfs_refresh_inode.kernfs_iop_permission.inode_permission.link_path_walk.path_openat
0.00 +1.4 1.35 ? 11% perf-profile.calltrace.cycles-pp.statfs_by_dentry.vfs_statfs.fd_statfs.__do_sys_fstatfs.do_syscall_64
0.00 +1.4 1.36 ? 12% perf-profile.calltrace.cycles-pp.__x64_sys_sysfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.4 1.39 ? 12% perf-profile.calltrace.cycles-pp.vfs_statfs.fd_statfs.__do_sys_fstatfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.50 ? 44% +1.4 1.92 ? 13% perf-profile.calltrace.cycles-pp.step_into.walk_component.path_lookupat.filename_lookup.user_statfs
0.09 ?223% +1.4 1.51 ? 13% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.shmem_statfs.statfs_by_dentry.vfs_statfs.user_statfs
0.00 +1.4 1.43 ? 12% perf-profile.calltrace.cycles-pp.fd_statfs.__do_sys_fstatfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.4 1.45 ? 5% perf-profile.calltrace.cycles-pp.lockref_get_not_dead.__legitimize_path.try_to_unlazy.link_path_walk.path_lookupat
0.47 ? 45% +1.5 1.93 ? 13% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 ? 8% +1.5 2.24 ? 13% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.5 1.51 ? 13% perf-profile.calltrace.cycles-pp.step_into.walk_component.link_path_walk.path_openat.do_filp_open
0.00 +1.5 1.51 ? 18% perf-profile.calltrace.cycles-pp.__d_lookup.lookup_fast.walk_component.link_path_walk.path_lookupat
0.09 ?223% +1.5 1.62 ? 29% perf-profile.calltrace.cycles-pp.down_read.kernfs_dop_revalidate.lookup_fast.walk_component.link_path_walk
0.00 +1.5 1.54 ? 31% perf-profile.calltrace.cycles-pp.lockref_put_return.dput.step_into.walk_component.link_path_walk
0.82 ? 8% +1.6 2.38 ? 14% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.63 ? 14% perf-profile.calltrace.cycles-pp.dput.step_into.walk_component.link_path_walk.path_lookupat
0.00 +1.7 1.65 ? 13% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.00 +1.8 1.79 ? 12% perf-profile.calltrace.cycles-pp.__legitimize_path.try_to_unlazy.link_path_walk.path_lookupat.filename_lookup
0.00 +1.9 1.85 ? 12% perf-profile.calltrace.cycles-pp.try_to_unlazy.link_path_walk.path_lookupat.filename_lookup.user_statfs
0.00 +1.9 1.89 ? 14% perf-profile.calltrace.cycles-pp.down_read.kernfs_iop_permission.inode_permission.link_path_walk.path_lookupat
1.75 ? 7% +1.9 3.67 ? 12% perf-profile.calltrace.cycles-pp.walk_component.link_path_walk.path_openat.do_filp_open.do_sys_openat2
0.27 ?100% +1.9 2.21 ? 14% perf-profile.calltrace.cycles-pp.do_dentry_open.do_open.path_openat.do_filp_open.do_sys_openat2
1.30 ? 9% +2.0 3.28 ? 13% perf-profile.calltrace.cycles-pp.walk_component.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs
0.00 +2.1 2.11 ? 11% perf-profile.calltrace.cycles-pp.up_read.kernfs_iop_permission.inode_permission.link_path_walk.path_lookupat
0.75 ? 13% +2.2 2.90 ? 13% perf-profile.calltrace.cycles-pp.step_into.walk_component.link_path_walk.path_lookupat.filename_lookup
1.27 ? 10% +2.2 3.48 ? 12% perf-profile.calltrace.cycles-pp.__x64_sys_getpriority.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.06 ? 9% +2.3 3.35 ? 13% perf-profile.calltrace.cycles-pp.statfs_by_dentry.vfs_statfs.user_statfs.__do_sys_statfs.do_syscall_64
1.08 ? 9% +2.3 3.41 ? 13% perf-profile.calltrace.cycles-pp.vfs_statfs.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.4 2.44 ? 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.kernfs_refresh_inode.kernfs_iop_permission.inode_permission
0.00 +2.7 2.70 ? 8% perf-profile.calltrace.cycles-pp.kernfs_refresh_inode.kernfs_iop_permission.inode_permission.link_path_walk.path_lookupat
3.44 ? 9% +3.5 6.92 ? 13% perf-profile.calltrace.cycles-pp.walk_component.link_path_walk.path_lookupat.filename_lookup.user_statfs
0.00 +3.6 3.65 ? 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.kernfs_refresh_inode.kernfs_iop_permission.inode_permission.link_path_walk
38.79 ? 7% -38.8 0.00 perf-profile.children.cycles-pp.rwsem_down_write_slowpath
43.72 ? 7% -32.8 10.93 ? 11% perf-profile.children.cycles-pp.kernfs_iop_permission
32.43 ? 7% -32.4 0.00 perf-profile.children.cycles-pp.osq_lock
44.46 ? 7% -30.2 14.24 ? 10% perf-profile.children.cycles-pp.inode_permission
45.89 ? 7% -18.2 27.71 ? 9% perf-profile.children.cycles-pp.link_path_walk
32.54 ? 7% -8.7 23.83 ? 8% perf-profile.children.cycles-pp.path_lookupat
32.61 ? 7% -8.6 24.00 ? 8% perf-profile.children.cycles-pp.filename_lookup
4.55 ? 6% -3.5 1.02 ? 13% perf-profile.children.cycles-pp.may_open
5.19 ? 6% -1.2 3.98 ? 13% perf-profile.children.cycles-pp.do_open
0.68 ? 6% -0.6 0.05 ? 47% perf-profile.children.cycles-pp._raw_spin_lock_irq
1.24 ? 8% -0.6 0.61 ? 15% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.30 ? 10% -0.2 0.11 ? 29% perf-profile.children.cycles-pp.ret_from_fork
0.30 ? 10% -0.2 0.11 ? 29% perf-profile.children.cycles-pp.kthread
0.29 ? 7% -0.2 0.12 ? 16% perf-profile.children.cycles-pp.native_sched_clock
0.25 ? 13% -0.2 0.08 ? 57% perf-profile.children.cycles-pp.smpboot_thread_fn
0.24 ? 12% -0.2 0.07 ? 57% perf-profile.children.cycles-pp.run_ksoftirqd
0.18 ? 12% -0.1 0.07 ? 16% perf-profile.children.cycles-pp.update_rq_clock
0.22 ? 6% -0.1 0.13 ? 17% perf-profile.children.cycles-pp.sched_clock_cpu
0.12 ? 15% -0.0 0.09 ? 14% perf-profile.children.cycles-pp.irq_enter_rcu
0.04 ? 71% +0.0 0.08 ? 16% perf-profile.children.cycles-pp.task_tick_fair
0.10 ? 11% +0.0 0.15 ? 17% perf-profile.children.cycles-pp.rcu_all_qs
0.03 ?100% +0.1 0.08 ? 20% perf-profile.children.cycles-pp.exc_page_fault
0.03 ? 70% +0.1 0.09 ? 12% perf-profile.children.cycles-pp.task_work_add
0.00 +0.1 0.06 ? 11% perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.00 +0.1 0.06 ? 18% perf-profile.children.cycles-pp.simple_statfs
0.00 +0.1 0.06 ? 15% perf-profile.children.cycles-pp.fs_maxindex
0.00 +0.1 0.06 ? 14% perf-profile.children.cycles-pp.shmem_file_read_iter
0.00 +0.1 0.06 ? 11% perf-profile.children.cycles-pp.pick_link
0.06 ? 11% +0.1 0.13 ? 16% perf-profile.children.cycles-pp.fput_many
0.12 ? 19% +0.1 0.18 ? 13% perf-profile.children.cycles-pp.call_rcu
0.08 ? 23% +0.1 0.15 ? 13% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.06 ? 14% +0.1 0.13 ? 13% perf-profile.children.cycles-pp.__dentry_kill
0.00 +0.1 0.07 ? 9% perf-profile.children.cycles-pp.__do_proc_dointvec
0.00 +0.1 0.07 ? 18% perf-profile.children.cycles-pp.pick_file
0.00 +0.1 0.07 ? 19% perf-profile.children.cycles-pp.do_adjtimex
0.00 +0.1 0.08 ? 16% perf-profile.children.cycles-pp.__radix_tree_lookup
0.00 +0.1 0.08 ? 12% perf-profile.children.cycles-pp.file_free_rcu
0.00 +0.1 0.08 ? 12% perf-profile.children.cycles-pp.proc_dointvec_minmax
0.07 ? 17% +0.1 0.15 ? 18% perf-profile.children.cycles-pp.asm_exc_page_fault
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.apparmor_task_setrlimit
0.00 +0.1 0.08 ? 24% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.05 ? 46% +0.1 0.13 ? 16% perf-profile.children.cycles-pp.__lookup_mnt
0.00 +0.1 0.08 ? 27% perf-profile.children.cycles-pp.proc_sys_open
0.00 +0.1 0.08 ? 23% perf-profile.children.cycles-pp.security_task_setrlimit
0.04 ? 71% +0.1 0.12 ? 15% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
0.05 ? 56% +0.1 0.14 ? 16% perf-profile.children.cycles-pp.alloc_fd
0.06 ? 11% +0.1 0.14 ? 14% perf-profile.children.cycles-pp.find_task_by_vpid
0.02 ?141% +0.1 0.10 ? 19% perf-profile.children.cycles-pp.__do_sys_getrusage
0.00 +0.1 0.09 ? 18% perf-profile.children.cycles-pp.getrusage
0.00 +0.1 0.09 ? 19% perf-profile.children.cycles-pp.proc_sys_compare
0.01 ?223% +0.1 0.10 ? 11% perf-profile.children.cycles-pp.__do_sys_newstat
0.01 ?223% +0.1 0.10 ? 11% perf-profile.children.cycles-pp.fsnotify
0.00 +0.1 0.09 ? 9% perf-profile.children.cycles-pp.vfs_statx
0.05 ? 47% +0.1 0.14 ? 13% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.05 ? 47% +0.1 0.14 ? 21% perf-profile.children.cycles-pp.__virt_addr_valid
0.04 ? 73% +0.1 0.13 ? 19% perf-profile.children.cycles-pp.fsnotify_grab_connector
0.00 +0.1 0.09 ? 14% perf-profile.children.cycles-pp.close_fd
0.05 ? 46% +0.1 0.14 ? 16% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.06 ? 11% +0.1 0.16 ? 18% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.01 ?223% +0.1 0.10 ? 17% perf-profile.children.cycles-pp.page_counter_try_charge
0.04 ? 72% +0.1 0.14 ? 18% perf-profile.children.cycles-pp.fsnotify_find_mark
0.00 +0.1 0.10 ? 12% perf-profile.children.cycles-pp._copy_from_user
0.06 ? 7% +0.1 0.16 ? 16% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.01 ?223% +0.1 0.11 ? 13% perf-profile.children.cycles-pp.page_counter_cancel
0.06 ? 47% +0.1 0.16 ? 17% perf-profile.children.cycles-pp.obj_cgroup_charge
0.00 +0.1 0.11 ? 16% perf-profile.children.cycles-pp.__do_sys_adjtimex
0.05 ? 45% +0.1 0.16 ? 25% perf-profile.children.cycles-pp.btrfs_statfs
0.05 ? 48% +0.1 0.17 ? 17% perf-profile.children.cycles-pp.dnotify_flush
0.06 ? 11% +0.1 0.18 ? 16% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.03 ?100% +0.1 0.14 ? 11% perf-profile.children.cycles-pp.page_counter_uncharge
0.02 ?142% +0.1 0.13 ? 14% perf-profile.children.cycles-pp.memset_erms
0.07 ? 11% +0.1 0.18 ? 11% perf-profile.children.cycles-pp.mntput_no_expire
0.09 ? 9% +0.1 0.21 ? 8% perf-profile.children.cycles-pp.strcmp
0.04 ? 72% +0.1 0.17 ? 11% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.04 ? 71% +0.1 0.16 ? 11% perf-profile.children.cycles-pp.__slab_free
0.00 +0.1 0.13 ? 16% perf-profile.children.cycles-pp.allocate_slab
0.08 ? 6% +0.1 0.21 ? 12% perf-profile.children.cycles-pp.map_id_range_down
0.16 ? 9% +0.1 0.30 ? 16% perf-profile.children.cycles-pp.__cond_resched
0.09 ? 20% +0.2 0.25 ? 17% perf-profile.children.cycles-pp.__do_sys_newuname
0.09 ? 9% +0.2 0.25 ? 14% perf-profile.children.cycles-pp.make_kuid
0.11 ? 12% +0.2 0.27 ? 14% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.08 ? 21% +0.2 0.24 ? 11% perf-profile.children.cycles-pp.drain_obj_stock
0.06 ? 18% +0.2 0.22 ? 14% perf-profile.children.cycles-pp.proc_sys_call_handler
0.09 ? 9% +0.2 0.26 ? 15% perf-profile.children.cycles-pp.__check_heap_object
0.00 +0.2 0.17 ? 13% perf-profile.children.cycles-pp.___slab_alloc
0.00 +0.2 0.18 ? 13% perf-profile.children.cycles-pp.__slab_alloc
0.12 ? 25% +0.2 0.30 ? 21% perf-profile.children.cycles-pp.legitimize_mnt
0.08 ? 14% +0.2 0.26 ? 19% perf-profile.children.cycles-pp.__d_alloc
0.11 ? 13% +0.2 0.30 ? 16% perf-profile.children.cycles-pp.__d_lookup_rcu
0.05 ? 49% +0.2 0.24 ? 9% perf-profile.children.cycles-pp.try_to_unlazy_next
0.00 +0.2 0.20 ? 25% perf-profile.children.cycles-pp.security_inode_permission
0.23 ? 9% +0.2 0.43 ? 14% perf-profile.children.cycles-pp.__might_sleep
0.10 ? 14% +0.2 0.31 ? 12% perf-profile.children.cycles-pp.new_sync_read
0.13 ? 18% +0.2 0.34 ? 14% perf-profile.children.cycles-pp.filp_close
0.11 ? 9% +0.2 0.33 ? 17% perf-profile.children.cycles-pp.__might_fault
0.13 ? 25% +0.2 0.35 ? 20% perf-profile.children.cycles-pp.lookup_mnt
0.04 ? 71% +0.2 0.26 ? 18% perf-profile.children.cycles-pp.lockref_put_or_lock
0.08 ? 5% +0.2 0.31 ? 33% perf-profile.children.cycles-pp.set_nlink
0.09 ? 16% +0.2 0.33 ? 10% perf-profile.children.cycles-pp.path_put
0.12 ? 16% +0.2 0.36 ? 15% perf-profile.children.cycles-pp.vfs_read
0.12 ? 14% +0.3 0.38 ? 15% perf-profile.children.cycles-pp.ksys_read
0.11 ? 10% +0.3 0.37 ? 11% perf-profile.children.cycles-pp.dcache_dir_close
0.14 ? 12% +0.3 0.40 ? 12% perf-profile.children.cycles-pp.refill_obj_stock
0.17 ? 15% +0.3 0.44 ? 15% perf-profile.children.cycles-pp.__x64_sys_close
0.00 +0.3 0.29 ? 20% perf-profile.children.cycles-pp.sysctl_head_finish
0.00 +0.3 0.31 ? 20% perf-profile.children.cycles-pp.sysctl_head_grab
0.00 +0.3 0.31 ? 14% perf-profile.children.cycles-pp.aa_get_task_label
0.20 ? 9% +0.3 0.53 ? 12% perf-profile.children.cycles-pp.do_statfs_native
0.16 ? 12% +0.3 0.49 ? 12% perf-profile.children.cycles-pp.fs_name
0.08 ? 13% +0.3 0.41 ? 15% perf-profile.children.cycles-pp._raw_read_lock
0.19 ? 10% +0.3 0.52 ? 16% perf-profile.children.cycles-pp.__entry_text_start
0.05 ? 77% +0.4 0.41 ? 70% perf-profile.children.cycles-pp.set_root
0.09 ? 11% +0.4 0.46 ? 16% perf-profile.children.cycles-pp.do_prlimit
0.25 ? 12% +0.4 0.61 ? 14% perf-profile.children.cycles-pp.___might_sleep
0.11 ? 12% +0.4 0.49 ? 15% perf-profile.children.cycles-pp.d_alloc_cursor
0.11 ? 10% +0.4 0.49 ? 15% perf-profile.children.cycles-pp.dcache_dir_open
0.23 ? 11% +0.4 0.61 ? 14% perf-profile.children.cycles-pp.__check_object_size
0.07 ? 56% +0.4 0.46 ? 61% perf-profile.children.cycles-pp.nd_jump_root
0.22 ? 10% +0.4 0.62 ? 13% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.07 ? 14% +0.4 0.46 ? 19% perf-profile.children.cycles-pp.proc_sys_permission
0.11 ? 27% +0.4 0.53 ? 52% perf-profile.children.cycles-pp.path_init
0.06 ? 13% +0.4 0.50 ? 16% perf-profile.children.cycles-pp.security_task_getsecid_subj
0.11 ? 14% +0.4 0.56 ? 15% perf-profile.children.cycles-pp.security_file_free
0.05 ? 47% +0.4 0.50 ? 16% perf-profile.children.cycles-pp.apparmor_task_getsecid
0.11 ? 14% +0.5 0.56 ? 15% perf-profile.children.cycles-pp.apparmor_file_free_security
0.07 ? 20% +0.5 0.53 ? 15% perf-profile.children.cycles-pp.ima_file_check
0.31 ? 12% +0.5 0.80 ? 10% perf-profile.children.cycles-pp.__task_pid_nr_ns
0.12 ? 8% +0.5 0.61 ? 14% perf-profile.children.cycles-pp.apparmor_file_open
0.30 ? 7% +0.5 0.82 ? 14% perf-profile.children.cycles-pp._find_next_bit
0.13 ? 6% +0.5 0.66 ? 14% perf-profile.children.cycles-pp.security_file_open
0.33 ? 11% +0.5 0.88 ? 14% perf-profile.children.cycles-pp._copy_to_user
0.22 ? 6% +0.5 0.77 ? 15% perf-profile.children.cycles-pp.apparmor_file_alloc_security
0.21 ? 7% +0.6 0.77 ? 12% perf-profile.children.cycles-pp.complete_walk
0.24 ? 7% +0.6 0.81 ? 13% perf-profile.children.cycles-pp.fs_index
0.33 ? 13% +0.6 0.90 ? 15% perf-profile.children.cycles-pp.user_path_at_empty
0.32 ? 12% +0.6 0.91 ? 14% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.28 ? 14% +0.6 0.90 ? 9% perf-profile.children.cycles-pp.__legitimize_mnt
0.27 ? 6% +0.6 0.90 ? 15% perf-profile.children.cycles-pp.security_file_alloc
0.89 ? 8% +0.7 1.62 ? 11% perf-profile.children.cycles-pp.__softirqentry_text_start
0.43 ? 10% +0.8 1.18 ? 15% perf-profile.children.cycles-pp.strncpy_from_user
0.22 ? 14% +0.8 1.00 ? 14% perf-profile.children.cycles-pp.generic_permission
0.43 ? 9% +0.8 1.23 ? 14% perf-profile.children.cycles-pp.cpumask_next
0.54 ? 12% +0.8 1.36 ? 12% perf-profile.children.cycles-pp.rcu_core
0.30 ? 11% +0.9 1.16 ? 14% perf-profile.children.cycles-pp.__x64_sys_prlimit64
0.73 ? 8% +0.9 1.60 ? 10% perf-profile.children.cycles-pp.irq_exit_rcu
0.49 ? 13% +0.9 1.36 ? 12% perf-profile.children.cycles-pp.rcu_do_batch
0.50 ? 11% +0.9 1.43 ? 13% perf-profile.children.cycles-pp.kmem_cache_free
0.43 ? 7% +0.9 1.36 ? 13% perf-profile.children.cycles-pp.__x64_sys_sysfs
0.50 ? 12% +1.0 1.45 ? 13% perf-profile.children.cycles-pp.kmem_cache_alloc
3.33 ? 8% +1.0 4.30 ? 8% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
3.06 ? 8% +1.0 4.03 ? 8% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.43 ? 10% +1.0 1.43 ? 12% perf-profile.children.cycles-pp.fd_statfs
0.59 ? 10% +1.1 1.65 ? 14% perf-profile.children.cycles-pp.getname_flags
0.51 ? 11% +1.1 1.64 ? 12% perf-profile.children.cycles-pp.__do_sys_fstatfs
0.57 ? 13% +1.2 1.78 ? 12% perf-profile.children.cycles-pp.ext4_statfs
0.57 ? 11% +1.2 1.79 ? 13% perf-profile.children.cycles-pp.__alloc_file
0.58 ? 10% +1.2 1.81 ? 13% perf-profile.children.cycles-pp.alloc_empty_file
0.43 ? 6% +1.2 1.67 ? 13% perf-profile.children.cycles-pp.__fput
0.55 ? 8% +1.4 1.94 ? 13% perf-profile.children.cycles-pp.task_work_run
0.77 ? 8% +1.5 2.27 ? 13% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.61 ? 10% +1.5 2.12 ? 14% perf-profile.children.cycles-pp.lockref_get
0.84 ? 8% +1.6 2.43 ? 14% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.72 ? 9% +1.6 2.34 ? 12% perf-profile.children.cycles-pp.shmem_statfs
0.59 ? 8% +1.7 2.28 ? 5% perf-profile.children.cycles-pp.terminate_walk
0.52 ? 6% +1.7 2.23 ? 13% perf-profile.children.cycles-pp.do_dentry_open
0.88 ? 10% +1.9 2.80 ? 16% perf-profile.children.cycles-pp.__traverse_mounts
1.28 ? 10% +2.2 3.52 ? 12% perf-profile.children.cycles-pp.__x64_sys_getpriority
0.49 ? 5% +2.3 2.79 ? 4% perf-profile.children.cycles-pp.lockref_get_not_dead
0.81 ? 10% +2.6 3.39 ? 14% perf-profile.children.cycles-pp.__d_lookup
1.21 ? 10% +2.6 3.83 ? 12% perf-profile.children.cycles-pp.__percpu_counter_sum
0.70 ? 9% +2.9 3.56 ? 8% perf-profile.children.cycles-pp.try_to_unlazy
0.69 ? 8% +2.9 3.58 ? 8% perf-profile.children.cycles-pp.__legitimize_path
5.04 ? 7% +2.9 7.99 ? 13% perf-profile.children.cycles-pp.lookup_fast
1.47 ? 9% +3.2 4.71 ? 12% perf-profile.children.cycles-pp.statfs_by_dentry
1.50 ? 9% +3.3 4.80 ? 12% perf-profile.children.cycles-pp.vfs_statfs
1.19 ? 6% +3.6 4.80 ? 10% perf-profile.children.cycles-pp.lockref_put_return
1.22 ? 8% +3.8 4.98 ? 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.45 ? 8% +4.1 4.60 ? 12% perf-profile.children.cycles-pp.up_read
0.00 +4.2 4.16 ? 8% perf-profile.children.cycles-pp.kernfs_refresh_inode
0.84 ? 7% +4.5 5.32 ? 14% perf-profile.children.cycles-pp.down_read
1.92 ? 12% +5.2 7.13 ? 13% perf-profile.children.cycles-pp.step_into
1.68 ? 9% +5.6 7.27 ? 7% perf-profile.children.cycles-pp.dput
0.91 ? 11% +7.3 8.19 ? 10% perf-profile.children.cycles-pp._raw_spin_lock
6.52 ? 8% +7.4 13.92 ? 12% perf-profile.children.cycles-pp.walk_component
32.12 ? 7% -32.1 0.00 perf-profile.self.cycles-pp.osq_lock
0.52 ? 6% -0.2 0.32 ? 17% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.28 ? 7% -0.2 0.11 ? 14% perf-profile.self.cycles-pp.native_sched_clock
0.21 ? 11% -0.2 0.05 ? 47% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.06 ? 13% perf-profile.self.cycles-pp.simple_statfs
0.00 +0.1 0.06 ? 11% perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.00 +0.1 0.06 ? 18% perf-profile.self.cycles-pp._copy_to_user
0.05 ? 45% +0.1 0.11 ? 16% perf-profile.self.cycles-pp.may_open
0.00 +0.1 0.06 ? 17% perf-profile.self.cycles-pp.path_openat
0.00 +0.1 0.07 ? 20% perf-profile.self.cycles-pp.kernfs_refresh_inode
0.00 +0.1 0.07 ? 18% perf-profile.self.cycles-pp.task_work_run
0.03 ? 70% +0.1 0.10 ? 14% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.00 +0.1 0.07 ? 20% perf-profile.self.cycles-pp.filename_lookup
0.06 ? 45% +0.1 0.13 ? 17% perf-profile.self.cycles-pp.do_syscall_64
0.05 ? 47% +0.1 0.12 ? 19% perf-profile.self.cycles-pp.__lookup_mnt
0.00 +0.1 0.07 ? 19% perf-profile.self.cycles-pp.apparmor_task_setrlimit
0.00 +0.1 0.08 ? 16% perf-profile.self.cycles-pp.__radix_tree_lookup
0.00 +0.1 0.08 ? 12% perf-profile.self.cycles-pp.file_free_rcu
0.00 +0.1 0.08 ? 17% perf-profile.self.cycles-pp.drain_obj_stock
0.00 +0.1 0.08 ? 15% perf-profile.self.cycles-pp.do_statfs_native
0.00 +0.1 0.08 ? 12% perf-profile.self.cycles-pp.path_init
0.00 +0.1 0.08 ? 17% perf-profile.self.cycles-pp.__might_fault
0.00 +0.1 0.08 ? 17% perf-profile.self.cycles-pp.task_work_add
0.05 ? 47% +0.1 0.13 ? 23% perf-profile.self.cycles-pp.__virt_addr_valid
0.05 ? 45% +0.1 0.13 ? 12% perf-profile.self.cycles-pp.getname_flags
0.00 +0.1 0.09 ? 7% perf-profile.self.cycles-pp.fsnotify
0.04 ? 45% +0.1 0.13 ? 16% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.05 ? 45% +0.1 0.14 ? 17% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.06 ? 17% +0.1 0.15 ? 15% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.05 ? 8% +0.1 0.14 ? 15% perf-profile.self.cycles-pp.refill_obj_stock
0.00 +0.1 0.09 ? 18% perf-profile.self.cycles-pp.vfs_statfs
0.01 ?223% +0.1 0.10 ? 15% perf-profile.self.cycles-pp.page_counter_try_charge
0.06 ? 13% +0.1 0.16 ? 11% perf-profile.self.cycles-pp.mntput_no_expire
0.00 +0.1 0.10 ? 19% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.01 ?223% +0.1 0.11 ? 13% perf-profile.self.cycles-pp.page_counter_cancel
0.00 +0.1 0.11 ? 20% perf-profile.self.cycles-pp.do_sys_openat2
0.04 ? 71% +0.1 0.14 ? 16% perf-profile.self.cycles-pp.__cond_resched
0.02 ?142% +0.1 0.13 ? 14% perf-profile.self.cycles-pp.memset_erms
0.09 ? 5% +0.1 0.20 ? 9% perf-profile.self.cycles-pp.strcmp
0.06 ? 11% +0.1 0.18 ? 16% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.04 ? 71% +0.1 0.15 ? 14% perf-profile.self.cycles-pp.__slab_free
0.09 ? 15% +0.1 0.21 ? 15% perf-profile.self.cycles-pp.__check_object_size
0.04 ? 71% +0.1 0.16 ? 13% perf-profile.self.cycles-pp.__alloc_file
0.04 ? 71% +0.1 0.16 ? 11% perf-profile.self.cycles-pp.walk_component
0.14 ? 9% +0.1 0.27 ? 17% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.07 ? 10% +0.1 0.20 ? 15% perf-profile.self.cycles-pp.map_id_range_down
0.04 ? 75% +0.1 0.18 ? 40% perf-profile.self.cycles-pp.__traverse_mounts
0.01 ?223% +0.1 0.15 ? 13% perf-profile.self.cycles-pp.fs_name
0.02 ?142% +0.1 0.17 ? 40% perf-profile.self.cycles-pp.kernfs_iop_permission
0.03 ?101% +0.2 0.18 ? 12% perf-profile.self.cycles-pp.__fput
0.00 +0.2 0.16 ? 16% perf-profile.self.cycles-pp.fs_index
0.09 ? 11% +0.2 0.25 ? 16% perf-profile.self.cycles-pp.__check_heap_object
0.21 ? 8% +0.2 0.38 ? 14% perf-profile.self.cycles-pp.__might_sleep
0.07 ? 11% +0.2 0.24 ? 13% perf-profile.self.cycles-pp.shmem_statfs
0.11 ? 13% +0.2 0.30 ? 15% perf-profile.self.cycles-pp.__d_lookup_rcu
0.00 +0.2 0.19 ? 19% perf-profile.self.cycles-pp.apparmor_task_getsecid
0.13 ? 14% +0.2 0.32 ? 14% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.2 0.19 ? 16% perf-profile.self.cycles-pp.lockref_put_or_lock
0.00 +0.2 0.19 ? 25% perf-profile.self.cycles-pp.security_inode_permission
0.00 +0.2 0.20 ? 19% perf-profile.self.cycles-pp.do_prlimit
0.08 ? 8% +0.2 0.30 ? 33% perf-profile.self.cycles-pp.set_nlink
0.04 ?110% +0.2 0.26 ? 65% perf-profile.self.cycles-pp.__legitimize_path
0.07 ? 26% +0.2 0.30 ? 13% perf-profile.self.cycles-pp.lookup_fast
0.09 ? 20% +0.2 0.32 ? 11% perf-profile.self.cycles-pp.statfs_by_dentry
0.09 ? 12% +0.3 0.36 ? 30% perf-profile.self.cycles-pp.kernfs_dop_revalidate
0.09 ? 14% +0.3 0.38 ? 15% perf-profile.self.cycles-pp.__x64_sys_prlimit64
0.19 ? 11% +0.3 0.49 ? 16% perf-profile.self.cycles-pp.strncpy_from_user
0.00 +0.3 0.30 ? 16% perf-profile.self.cycles-pp.aa_get_task_label
0.21 ? 13% +0.3 0.52 ? 13% perf-profile.self.cycles-pp.kmem_cache_alloc
0.17 ? 5% +0.3 0.48 ? 9% perf-profile.self.cycles-pp.do_dentry_open
0.08 ? 13% +0.3 0.40 ? 15% perf-profile.self.cycles-pp._raw_read_lock
0.24 ? 10% +0.3 0.57 ? 14% perf-profile.self.cycles-pp.___might_sleep
0.19 ? 12% +0.3 0.53 ? 13% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.19 ? 10% +0.3 0.52 ? 16% perf-profile.self.cycles-pp.__entry_text_start
0.18 ? 9% +0.3 0.53 ? 14% perf-profile.self.cycles-pp.cpumask_next
0.05 ? 78% +0.3 0.40 ? 69% perf-profile.self.cycles-pp.set_root
0.11 ? 13% +0.4 0.55 ? 15% perf-profile.self.cycles-pp.apparmor_file_free_security
0.26 ? 8% +0.4 0.70 ? 14% perf-profile.self.cycles-pp._find_next_bit
0.20 ? 41% +0.4 0.64 ? 22% perf-profile.self.cycles-pp.step_into
0.29 ? 13% +0.5 0.75 ? 9% perf-profile.self.cycles-pp.__task_pid_nr_ns
0.29 ? 9% +0.5 0.77 ? 13% perf-profile.self.cycles-pp.kmem_cache_free
0.12 ? 6% +0.5 0.60 ? 15% perf-profile.self.cycles-pp.apparmor_file_open
0.20 ? 15% +0.5 0.71 ? 13% perf-profile.self.cycles-pp.link_path_walk
0.20 ? 6% +0.5 0.72 ? 14% perf-profile.self.cycles-pp.apparmor_file_alloc_security
0.32 ? 11% +0.6 0.89 ? 14% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.27 ? 14% +0.6 0.88 ? 10% perf-profile.self.cycles-pp.__legitimize_mnt
0.17 ? 45% +0.6 0.79 ? 22% perf-profile.self.cycles-pp.dput
0.15 ? 16% +0.6 0.79 ? 15% perf-profile.self.cycles-pp.generic_permission
0.22 ? 22% +0.8 1.00 ? 17% perf-profile.self.cycles-pp.__d_lookup
0.70 ? 11% +1.3 2.01 ? 12% perf-profile.self.cycles-pp.__percpu_counter_sum
0.46 ? 12% +1.4 1.81 ? 11% perf-profile.self.cycles-pp.inode_permission
0.61 ? 9% +1.4 2.04 ? 13% perf-profile.self.cycles-pp.lockref_get
0.94 ? 10% +1.6 2.58 ? 12% perf-profile.self.cycles-pp.__x64_sys_getpriority
0.48 ? 6% +2.2 2.64 ? 4% perf-profile.self.cycles-pp.lockref_get_not_dead
0.86 ? 11% +2.7 3.52 ? 12% perf-profile.self.cycles-pp._raw_spin_lock
1.17 ? 7% +3.5 4.67 ? 11% perf-profile.self.cycles-pp.lockref_put_return
1.22 ? 8% +3.5 4.74 ? 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.45 ? 8% +4.0 4.49 ? 12% perf-profile.self.cycles-pp.up_read
0.69 ? 8% +4.3 4.99 ? 14% perf-profile.self.cycles-pp.down_read



stress-ng.get.ops

1.2e+06 +-----------------------------------------------------------------+
1.1e+06 |-O O O O O O O O |
| O O O O O O O O O O O O O O O |
1e+06 |-+ O O O O O O |
900000 |-+ O O O |
| O |
800000 |-+ |
700000 |-+ |
600000 |-+ .+.+. .+ |
|.+.+. .+.+. .+.+ .+.+. .+.+. .+.+.+.+ +.+.+ : |
500000 |-+ + + + + + : |
400000 |-+ : |
| +.+. .+.+. |
300000 |-+ + + |
200000 +-----------------------------------------------------------------+


stress-ng.get.ops_per_sec

20000 +-------------------------------------------------------------------+
| O O O O O O O O |
18000 |-+ O O O O O O O O O O O O O O O |
16000 |-+ O O O O O |
| O O O O |
14000 |-+ O |
| |
12000 |-+ |
| |
10000 |-+.+. .+. .+.+. .+.+. .+.+. .+.+.+.+. .+ |
8000 |.+ + +.+.+.+.+ + + + +.+ : |
| : |
6000 |-+ : .+. |
| +.+.+.+ + |
4000 +-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
(No filename) (81.99 kB)
config-5.13.0-rc2-00023-g9a658329cda8 (176.78 kB)
job-script (8.50 kB)
job.yaml (5.88 kB)
reproduce (548.00 B)

2021-06-01 13:20:48

by Miklos Szeredi

[permalink] [raw]
Subject: Re: [REPOST PATCH v4 4/5] kernfs: use i_lock to protect concurrent inode updates

On Fri, 28 May 2021 at 08:34, Ian Kent <[email protected]> wrote:
>
> The inode operations .permission() and .getattr() use the kernfs node
> write lock but all that's needed is to keep the rb tree stable while
> updating the inode attributes as well as protecting the update itself
> against concurrent changes.
>
> And .permission() is called frequently during path walks and can cause
> quite a bit of contention between kernfs node operations and path
> walks when the number of concurrent walks is high.
>
> To change kernfs_iop_getattr() and kernfs_iop_permission() to take
> the rw sem read lock instead of the write lock an additional lock is
> needed to protect against multiple processes concurrently updating
> the inode attributes and link count in kernfs_refresh_inode().
>
> The inode i_lock seems like the sensible thing to use to protect these
> inode attribute updates so use it in kernfs_refresh_inode().
>
> Signed-off-by: Ian Kent <[email protected]>
> ---
> fs/kernfs/inode.c | 10 ++++++----
> fs/kernfs/mount.c | 4 ++--
> 2 files changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c
> index 3b01e9e61f14e..6728ecd81eb37 100644
> --- a/fs/kernfs/inode.c
> +++ b/fs/kernfs/inode.c
> @@ -172,6 +172,7 @@ static void kernfs_refresh_inode(struct kernfs_node *kn, struct inode *inode)
> {
> struct kernfs_iattrs *attrs = kn->iattr;
>
> + spin_lock(&inode->i_lock);
> inode->i_mode = kn->mode;
> if (attrs)
> /*
> @@ -182,6 +183,7 @@ static void kernfs_refresh_inode(struct kernfs_node *kn, struct inode *inode)
>
> if (kernfs_type(kn) == KERNFS_DIR)
> set_nlink(inode, kn->dir.subdirs + 2);
> + spin_unlock(&inode->i_lock);
> }
>
> int kernfs_iop_getattr(struct user_namespace *mnt_userns,
> @@ -191,9 +193,9 @@ int kernfs_iop_getattr(struct user_namespace *mnt_userns,
> struct inode *inode = d_inode(path->dentry);
> struct kernfs_node *kn = inode->i_private;
>
> - down_write(&kernfs_rwsem);
> + down_read(&kernfs_rwsem);
> kernfs_refresh_inode(kn, inode);
> - up_write(&kernfs_rwsem);
> + up_read(&kernfs_rwsem);
>
> generic_fillattr(&init_user_ns, inode, stat);
> return 0;
> @@ -284,9 +286,9 @@ int kernfs_iop_permission(struct user_namespace *mnt_userns,
>
> kn = inode->i_private;
>
> - down_write(&kernfs_rwsem);
> + down_read(&kernfs_rwsem);
> kernfs_refresh_inode(kn, inode);
> - up_write(&kernfs_rwsem);
> + up_read(&kernfs_rwsem);
>
> return generic_permission(&init_user_ns, inode, mask);
> }
> diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
> index baa4155ba2edf..f2f909d09f522 100644
> --- a/fs/kernfs/mount.c
> +++ b/fs/kernfs/mount.c
> @@ -255,9 +255,9 @@ static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *k
> sb->s_shrink.seeks = 0;
>
> /* get root inode, initialize and unlock it */
> - down_write(&kernfs_rwsem);
> + down_read(&kernfs_rwsem);
> inode = kernfs_get_inode(sb, info->root->kn);
> - up_write(&kernfs_rwsem);
> + up_read(&kernfs_rwsem);
> if (!inode) {
> pr_debug("kernfs: could not get root inode\n");
> return -ENOMEM;
>

This last hunk is not mentioned in the patch header. Why is this needed?

Otherwise looks good.

Thanks,
Miklos
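
For readers following the locking change itself: the pattern in the
inode.c hunk quoted above (a shared read lock keeps the node tree
stable while a per-inode spinlock serializes the attribute copy) can
be sketched in userspace roughly as below. This is only an analogue
built on pthread primitives, not kernel code; the model_node and
model_inode names and the permission_like_path() helper are made up
for illustration.

	/* Userspace analogue of the hunk above: rwlock held for read
	 * keeps the tree stable, the per-inode spinlock serializes the
	 * attribute update itself.  Build with: cc sketch.c -lpthread
	 */
	#include <pthread.h>
	#include <stdio.h>

	struct model_node {		/* stands in for kernfs_node */
		unsigned int mode;
		unsigned int subdirs;
	};

	struct model_inode {		/* stands in for struct inode */
		pthread_spinlock_t i_lock;
		unsigned int i_mode;
		unsigned int i_nlink;
	};

	static pthread_rwlock_t tree_rwsem = PTHREAD_RWLOCK_INITIALIZER;

	static void refresh_inode(struct model_node *kn,
				  struct model_inode *inode)
	{
		/* caller holds tree_rwsem for read: kn cannot go away */
		pthread_spin_lock(&inode->i_lock);	/* serialize update */
		inode->i_mode = kn->mode;
		inode->i_nlink = kn->subdirs + 2;
		pthread_spin_unlock(&inode->i_lock);
	}

	static void permission_like_path(struct model_node *kn,
					 struct model_inode *inode)
	{
		pthread_rwlock_rdlock(&tree_rwsem);	/* read, not write */
		refresh_inode(kn, inode);
		pthread_rwlock_unlock(&tree_rwsem);
	}

	int main(void)
	{
		struct model_node kn = { .mode = 0755, .subdirs = 3 };
		struct model_inode inode = { .i_mode = 0, .i_nlink = 0 };

		pthread_spin_init(&inode.i_lock, PTHREAD_PROCESS_PRIVATE);
		permission_like_path(&kn, &inode);
		printf("mode=%o nlink=%u\n", inode.i_mode, inode.i_nlink);
		pthread_spin_destroy(&inode.i_lock);
		return 0;
	}

Many walkers can now refresh concurrently under the read lock; only
the short attribute copy is serialized on the inode's own lock.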

2021-06-02 06:43:02

by Ian Kent

[permalink] [raw]
Subject: Re: [REPOST PATCH v4 4/5] kernfs: use i_lock to protect concurrent inode updates

On Tue, 2021-06-01 at 15:18 +0200, Miklos Szeredi wrote:
> On Fri, 28 May 2021 at 08:34, Ian Kent <[email protected]> wrote:
> >
> > The inode operations .permission() and .getattr() use the kernfs
> > node
> > write lock but all that's needed is to keep the rb tree stable
> > while
> > updating the inode attributes as well as protecting the update
> > itself
> > against concurrent changes.
> >
> > And .permission() is called frequently during path walks and can
> > cause
> > quite a bit of contention between kernfs node operations and path
> > walks when the number of concurrent walks is high.
> >
> > To change kernfs_iop_getattr() and kernfs_iop_permission() to take
> > the rw sem read lock instead of the write lock an additional lock
> > is
> > needed to protect against multiple processes concurrently updating
> > the inode attributes and link count in kernfs_refresh_inode().
> >
> > The inode i_lock seems like the sensible thing to use to protect
> > these
> > inode attribute updates so use it in kernfs_refresh_inode().
> >
> > Signed-off-by: Ian Kent <[email protected]>
> > ---
> >  fs/kernfs/inode.c |   10 ++++++----
> >  fs/kernfs/mount.c |    4 ++--
> >  2 files changed, 8 insertions(+), 6 deletions(-)
> >
> > diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c
> > index 3b01e9e61f14e..6728ecd81eb37 100644
> > --- a/fs/kernfs/inode.c
> > +++ b/fs/kernfs/inode.c
> > @@ -172,6 +172,7 @@ static void kernfs_refresh_inode(struct
> > kernfs_node *kn, struct inode *inode)
> >  {
> >         struct kernfs_iattrs *attrs = kn->iattr;
> >
> > +       spin_lock(&inode->i_lock);
> >         inode->i_mode = kn->mode;
> >         if (attrs)
> >                 /*
> > @@ -182,6 +183,7 @@ static void kernfs_refresh_inode(struct
> > kernfs_node *kn, struct inode *inode)
> >
> >         if (kernfs_type(kn) == KERNFS_DIR)
> >                 set_nlink(inode, kn->dir.subdirs + 2);
> > +       spin_unlock(&inode->i_lock);
> >  }
> >
> >  int kernfs_iop_getattr(struct user_namespace *mnt_userns,
> > @@ -191,9 +193,9 @@ int kernfs_iop_getattr(struct user_namespace
> > *mnt_userns,
> >         struct inode *inode = d_inode(path->dentry);
> >         struct kernfs_node *kn = inode->i_private;
> >
> > -       down_write(&kernfs_rwsem);
> > +       down_read(&kernfs_rwsem);
> >         kernfs_refresh_inode(kn, inode);
> > -       up_write(&kernfs_rwsem);
> > +       up_read(&kernfs_rwsem);
> >
> >         generic_fillattr(&init_user_ns, inode, stat);
> >         return 0;
> > @@ -284,9 +286,9 @@ int kernfs_iop_permission(struct user_namespace
> > *mnt_userns,
> >
> >         kn = inode->i_private;
> >
> > -       down_write(&kernfs_rwsem);
> > +       down_read(&kernfs_rwsem);
> >         kernfs_refresh_inode(kn, inode);
> > -       up_write(&kernfs_rwsem);
> > +       up_read(&kernfs_rwsem);
> >
> >         return generic_permission(&init_user_ns, inode, mask);
> >  }
> > diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
> > index baa4155ba2edf..f2f909d09f522 100644
> > --- a/fs/kernfs/mount.c
> > +++ b/fs/kernfs/mount.c
> > @@ -255,9 +255,9 @@ static int kernfs_fill_super(struct super_block
> > *sb, struct kernfs_fs_context *k
> >         sb->s_shrink.seeks = 0;
> >
> >         /* get root inode, initialize and unlock it */
> > -       down_write(&kernfs_rwsem);
> > +       down_read(&kernfs_rwsem);
> >         inode = kernfs_get_inode(sb, info->root->kn);
> > -       up_write(&kernfs_rwsem);
> > +       up_read(&kernfs_rwsem);
> >         if (!inode) {
> >                 pr_debug("kernfs: could not get root inode\n");
> >                 return -ENOMEM;
> >
>
> This last hunk is not mentioned in the patch header.  Why is this
> needed?

Yes, that's right.

The lock is needed to keep the node rb tree stable.

kernfs_get_inode() calls kernfs_refresh_inode() indirectly, and
the i_lock is probably not needed on this path, so the hunk could
just as well have gone into the rwsem change. But because of that
kernfs_refresh_inode() call it also makes sense to put it here.

I'd prefer to keep it here. Clearly what's going on isn't as
obvious as I thought, so I can add this reasoning to the
description if you still think it's worthwhile?

>
> Otherwise looks good.
>
> Thanks,
> Miklos
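
The call chain Ian describes can be modelled roughly as follows: the
fill_super path only reads the node tree, so taking the rwsem for read
around the get-inode helper is enough, and the refresh happens
indirectly inside that helper. This is a userspace stand-in under
those assumptions; the function names mirror the kernel ones but are
not the actual implementations, and the i_lock detail is left out
since the point here is only where the read lock is taken.

	/* Rough userspace model of get_inode -> refresh_inode called
	 * under the read-held rwsem.  Build with: cc chain.c -lpthread
	 */
	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t tree_rwsem = PTHREAD_RWLOCK_INITIALIZER;

	struct node  { unsigned int mode; };
	struct vnode { unsigned int i_mode; };

	static void refresh_inode(struct node *kn, struct vnode *inode)
	{
		/* walks kn, so the tree must be stable: rwsem held by caller */
		inode->i_mode = kn->mode;
	}

	static struct vnode *get_inode(struct node *kn)
	{
		static struct vnode root_inode;

		/* the indirect call: get -> (init ->) refresh */
		refresh_inode(kn, &root_inode);
		return &root_inode;
	}

	static int fill_super(struct node *root)
	{
		struct vnode *inode;

		/* read lock suffices: nothing here modifies the tree */
		pthread_rwlock_rdlock(&tree_rwsem);
		inode = get_inode(root);
		pthread_rwlock_unlock(&tree_rwsem);

		return inode ? 0 : -1;
	}

	int main(void)
	{
		struct node root = { .mode = 0755 };

		printf("fill_super: %d\n", fill_super(&root));
		return 0;
	}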