2020-05-01 19:15:26

by Jens Axboe

Subject: [PATCH v4] eventfd: convert to f_op->read_iter()

eventfd is using ->read() as its file_operations read handler, but
this prevents passing in information about whether a given IO operation
is blocking or not. We can only use the file flags for that. To support
async (-EAGAIN/poll based) retries for io_uring, we need ->read_iter()
support. Convert eventfd to using ->read_iter().

With ->read_iter(), we can support IOCB_NOWAIT. Ensure the fd setup
is done such that we set file->f_mode with FMODE_NOWAIT.
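
For reference, the userspace-visible read semantics that must survive the
conversion - EINVAL on buffers shorter than 8 bytes, EAGAIN on a zero
counter when non-blocking, and read-and-reset on success - can be sketched
as follows (illustrative only; eventfd_read_semantics is just a test
helper name, not a real API):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Exercise the read-side contract of eventfd: short buffers are
 * rejected, a zero counter fails with EAGAIN under EFD_NONBLOCK, and
 * a successful read returns the counter and resets it. Returns the
 * counter value read back, or 0 on any unexpected result. */
static uint64_t eventfd_read_semantics(void)
{
	uint64_t val;
	int efd = eventfd(0, EFD_NONBLOCK);

	if (efd < 0)
		return 0;
	/* buffer smaller than sizeof(__u64): -EINVAL */
	if (read(efd, &val, sizeof(val) - 1) != -1 || errno != EINVAL)
		goto fail;
	/* counter is zero and fd is non-blocking: -EAGAIN */
	if (read(efd, &val, sizeof(val)) != -1 || errno != EAGAIN)
		goto fail;
	val = 7;
	if (write(efd, &val, sizeof(val)) != sizeof(val))
		goto fail;
	if (read(efd, &val, sizeof(val)) != sizeof(val))
		goto fail;
	close(efd);
	return val;	/* 7: read returns and resets the counter */
fail:
	close(efd);
	return 0;
}
```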

Signed-off-by: Jens Axboe <[email protected]>

---

Since v3:

- Ensure we fiddle ->f_mode before doing fd_install() on the fd

Since v2:

- Cleanup eventfd_read() as per Al's suggestions

Since v1:

- Add FMODE_NOWAIT to the eventfd file

diff --git a/fs/eventfd.c b/fs/eventfd.c
index 78e41c7c3d05..20f0fd4d56e1 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -216,32 +216,32 @@ int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_entry_t *w
}
EXPORT_SYMBOL_GPL(eventfd_ctx_remove_wait_queue);

-static ssize_t eventfd_read(struct file *file, char __user *buf, size_t count,
- loff_t *ppos)
+static ssize_t eventfd_read(struct kiocb *iocb, struct iov_iter *to)
{
+ struct file *file = iocb->ki_filp;
struct eventfd_ctx *ctx = file->private_data;
- ssize_t res;
__u64 ucnt = 0;
DECLARE_WAITQUEUE(wait, current);

- if (count < sizeof(ucnt))
+ if (iov_iter_count(to) < sizeof(ucnt))
return -EINVAL;
-
spin_lock_irq(&ctx->wqh.lock);
- res = -EAGAIN;
- if (ctx->count > 0)
- res = sizeof(ucnt);
- else if (!(file->f_flags & O_NONBLOCK)) {
+ if (!ctx->count) {
+ if ((file->f_flags & O_NONBLOCK) ||
+ (iocb->ki_flags & IOCB_NOWAIT)) {
+ spin_unlock_irq(&ctx->wqh.lock);
+ return -EAGAIN;
+ }
__add_wait_queue(&ctx->wqh, &wait);
for (;;) {
set_current_state(TASK_INTERRUPTIBLE);
- if (ctx->count > 0) {
- res = sizeof(ucnt);
+ if (ctx->count)
break;
- }
if (signal_pending(current)) {
- res = -ERESTARTSYS;
- break;
+ __remove_wait_queue(&ctx->wqh, &wait);
+ __set_current_state(TASK_RUNNING);
+ spin_unlock_irq(&ctx->wqh.lock);
+ return -ERESTARTSYS;
}
spin_unlock_irq(&ctx->wqh.lock);
schedule();
@@ -250,17 +250,14 @@ static ssize_t eventfd_read(struct file *file, char __user *buf, size_t count,
__remove_wait_queue(&ctx->wqh, &wait);
__set_current_state(TASK_RUNNING);
}
- if (likely(res > 0)) {
- eventfd_ctx_do_read(ctx, &ucnt);
- if (waitqueue_active(&ctx->wqh))
- wake_up_locked_poll(&ctx->wqh, EPOLLOUT);
- }
+ eventfd_ctx_do_read(ctx, &ucnt);
+ if (waitqueue_active(&ctx->wqh))
+ wake_up_locked_poll(&ctx->wqh, EPOLLOUT);
spin_unlock_irq(&ctx->wqh.lock);
-
- if (res > 0 && put_user(ucnt, (__u64 __user *)buf))
+ if (unlikely(copy_to_iter(&ucnt, sizeof(ucnt), to) != sizeof(ucnt)))
return -EFAULT;

- return res;
+ return sizeof(ucnt);
}

static ssize_t eventfd_write(struct file *file, const char __user *buf, size_t count,
@@ -329,7 +326,7 @@ static const struct file_operations eventfd_fops = {
#endif
.release = eventfd_release,
.poll = eventfd_poll,
- .read = eventfd_read,
+ .read_iter = eventfd_read,
.write = eventfd_write,
.llseek = noop_llseek,
};
@@ -406,6 +403,7 @@ EXPORT_SYMBOL_GPL(eventfd_ctx_fileget);
static int do_eventfd(unsigned int count, int flags)
{
struct eventfd_ctx *ctx;
+ struct file *file;
int fd;

/* Check the EFD_* constants for consistency. */
@@ -425,11 +423,24 @@ static int do_eventfd(unsigned int count, int flags)
ctx->flags = flags;
ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);

- fd = anon_inode_getfd("[eventfd]", &eventfd_fops, ctx,
- O_RDWR | (flags & EFD_SHARED_FCNTL_FLAGS));
+ flags &= EFD_SHARED_FCNTL_FLAGS;
+ flags |= O_RDWR;
+ fd = get_unused_fd_flags(flags);
if (fd < 0)
- eventfd_free_ctx(ctx);
+ goto err;
+
+ file = anon_inode_getfile("[eventfd]", &eventfd_fops, ctx, flags);
+ if (IS_ERR(file)) {
+ put_unused_fd(fd);
+ fd = PTR_ERR(file);
+ goto err;
+ }

+ file->f_mode |= FMODE_NOWAIT;
+ fd_install(fd, file);
+ return fd;
+err:
+ eventfd_free_ctx(ctx);
return fd;
}

--
Jens Axboe


2020-05-01 23:14:28

by Al Viro

Subject: Re: [PATCH v4] eventfd: convert to f_op->read_iter()

On Fri, May 01, 2020 at 01:11:09PM -0600, Jens Axboe wrote:
> + flags &= EFD_SHARED_FCNTL_FLAGS;
> + flags |= O_RDWR;
> + fd = get_unused_fd_flags(flags);
> if (fd < 0)
> - eventfd_free_ctx(ctx);
> + goto err;
> +
> + file = anon_inode_getfile("[eventfd]", &eventfd_fops, ctx, flags);
> + if (IS_ERR(file)) {
> + put_unused_fd(fd);
> + fd = PTR_ERR(file);
> + goto err;
> + }
>
> + file->f_mode |= FMODE_NOWAIT;
> + fd_install(fd, file);
> + return fd;
> +err:
> + eventfd_free_ctx(ctx);
> return fd;
> }

Looks sane... I can take it via vfs.git, or leave it for you if you
have other stuff in the same area...

2020-05-01 23:55:51

by Jens Axboe

Subject: Re: [PATCH v4] eventfd: convert to f_op->read_iter()

On 5/1/20 5:12 PM, Al Viro wrote:
> On Fri, May 01, 2020 at 01:11:09PM -0600, Jens Axboe wrote:
>> [...]
>
> Looks sane... I can take it via vfs.git, or leave it for you if you
> have other stuff in the same area...

Would be great if you can queue it up in vfs.git, thanks! Don't have
anything else that'd conflict with this.

--
Jens Axboe

2020-05-03 13:49:53

by Al Viro

Subject: Re: [PATCH v4] eventfd: convert to f_op->read_iter()

On Fri, May 01, 2020 at 05:54:09PM -0600, Jens Axboe wrote:
> On 5/1/20 5:12 PM, Al Viro wrote:
> > On Fri, May 01, 2020 at 01:11:09PM -0600, Jens Axboe wrote:
> >> [...]
> >
> > Looks sane... I can take it via vfs.git, or leave it for you if you
> > have other stuff in the same area...
>
> Would be great if you can queue it up in vfs.git, thanks! Don't have
> anything else that'd conflict with this.

Applied; BTW, what happens if
ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);
fails? Quietly succeed with a BS value (-ENOSPC/-ENOMEM) shown by
eventfd_show_fdinfo()?

2020-05-03 14:46:53

by Jens Axboe

Subject: Re: [PATCH v4] eventfd: convert to f_op->read_iter()

On 5/3/20 7:46 AM, Al Viro wrote:
> On Fri, May 01, 2020 at 05:54:09PM -0600, Jens Axboe wrote:
>> On 5/1/20 5:12 PM, Al Viro wrote:
>>> On Fri, May 01, 2020 at 01:11:09PM -0600, Jens Axboe wrote:
>>>> [...]
>>>
>>> Looks sane... I can take it via vfs.git, or leave it for you if you
>>> have other stuff in the same area...
>>
>> Would be great if you can queue it up in vfs.git, thanks! Don't have
>> anything else that'd conflict with this.
>
> Applied; BTW, what happens if
> ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);
> fails? Quietly succeed with a BS value (-ENOSPC/-ENOMEM) shown by
> eventfd_show_fdinfo()?

Huh yeah that's odd, not sure how I missed that when touching code
near it. Want a followup patch to fix that issue?

--
Jens Axboe

2020-05-03 16:52:07

by Jens Axboe

Subject: Re: [PATCH v4] eventfd: convert to f_op->read_iter()

On 5/3/20 8:42 AM, Jens Axboe wrote:
> On 5/3/20 7:46 AM, Al Viro wrote:
>> On Fri, May 01, 2020 at 05:54:09PM -0600, Jens Axboe wrote:
>>> On 5/1/20 5:12 PM, Al Viro wrote:
>>>> On Fri, May 01, 2020 at 01:11:09PM -0600, Jens Axboe wrote:
>>>>> [...]
>>>>
>>>> Looks sane... I can take it via vfs.git, or leave it for you if you
>>>> have other stuff in the same area...
>>>
>>> Would be great if you can queue it up in vfs.git, thanks! Don't have
>>> anything else that'd conflict with this.
>>
>> Applied; BTW, what happens if
>> ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);
>> fails? Quietly succeed with a BS value (-ENOSPC/-ENOMEM) shown by
>> eventfd_show_fdinfo()?
>
> Huh yeah that's odd, not sure how I missed that when touching code
> near it. Want a followup patch to fix that issue?

This should do the trick. Ideally we'd change the order of these, and
shove this fix into 5.7, but not sure it's super important since this
bug is pretty old. Would make stable backports easier, though...
Let me know how you want to handle it, as it'll impact the one you
have already queued up.


diff --git a/fs/eventfd.c b/fs/eventfd.c
index 20f0fd4d56e1..96081efdd0c9 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -422,6 +422,10 @@ static int do_eventfd(unsigned int count, int flags)
ctx->count = count;
ctx->flags = flags;
ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);
+ if (ctx->id < 0) {
+ fd = ctx->id;
+ goto err;
+ }

flags &= EFD_SHARED_FCNTL_FLAGS;
flags |= O_RDWR;

--
Jens Axboe

2020-05-04 12:59:29

by Christoph Hellwig

Subject: Re: [PATCH v4] eventfd: convert to f_op->read_iter()

On Fri, May 01, 2020 at 01:11:09PM -0600, Jens Axboe wrote:
> eventfd is using ->read() as its file_operations read handler, but
> this prevents passing in information about whether a given IO operation
> is blocking or not. We can only use the file flags for that. To support
> async (-EAGAIN/poll based) retries for io_uring, we need ->read_iter()
> support. Convert eventfd to using ->read_iter().
>
> With ->read_iter(), we can support IOCB_NOWAIT. Ensure the fd setup
> is done such that we set file->f_mode with FMODE_NOWAIT.

Can you add an anon_inode_getfd_mode that passes extra flags for f_mode
instead of open-coding it? Especially as I expect more users might want
to handle IOCB_NOWAIT.
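
Such a helper might look like the following kernel-side sketch. It is
hypothetical - the name and signature follow Christoph's suggestion, and
the body just factors out the sequence open-coded in do_eventfd() in the
patch above:

```c
/*
 * Hypothetical helper: like anon_inode_getfd(), but ORs extra bits
 * into file->f_mode before the fd becomes visible via fd_install().
 */
static int anon_inode_getfd_mode(const char *name,
				 const struct file_operations *fops,
				 void *priv, int flags, fmode_t f_mode)
{
	struct file *file;
	int fd;

	fd = get_unused_fd_flags(flags);
	if (fd < 0)
		return fd;

	file = anon_inode_getfile(name, fops, priv, flags);
	if (IS_ERR(file)) {
		put_unused_fd(fd);
		return PTR_ERR(file);
	}

	file->f_mode |= f_mode;
	fd_install(fd, file);
	return fd;
}
```

do_eventfd() could then collapse its fd setup into a single
anon_inode_getfd_mode(..., FMODE_NOWAIT) call, keeping only the
eventfd_free_ctx() error path.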

2020-05-08 09:03:11

by Chen, Rong A

Subject: [eventfd] a4ef93e263: will-it-scale.per_process_ops -2.2% regression

Greeting,

FYI, we noticed a -2.2% regression of will-it-scale.per_process_ops due to commit:


commit: a4ef93e263d720a988a208c56aad22300b944ee6 ("[PATCH v4] eventfd: convert to f_op->read_iter()")
url: https://github.com/0day-ci/linux/commits/Jens-Axboe/eventfd-convert-to-f_op-read_iter/20200502-043943


in testcase: will-it-scale
on test machine: 104 threads Skylake with 192G memory
with following parameters:

nr_task: 100%
mode: process
test: eventfd1
cpufreq_governor: performance
ucode: 0x2000065

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a threads-based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
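
The eventfd1 testcase is essentially a tight write-then-read ping on a
single eventfd, so the extra iov_iter setup on the converted read path
shows up directly in per-iteration cost. A simplified sketch of that loop
(eventfd_ping is an illustrative name; see the will-it-scale repository
for the real testcase):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Rough sketch of the kind of inner loop the eventfd1 testcase
 * exercises: one 8-byte write plus one 8-byte read per iteration, so
 * the benchmark is dominated by the read/write syscall paths the
 * patch touches. Returns the number of completed iterations. */
static unsigned long eventfd_ping(unsigned long iters)
{
	uint64_t val;
	unsigned long done;
	int efd = eventfd(0, 0);

	if (efd < 0)
		return 0;
	for (done = 0; done < iters; done++) {
		val = 1;
		if (write(efd, &val, sizeof(val)) != sizeof(val))
			break;
		/* counter is now 1, so this read never blocks */
		if (read(efd, &val, sizeof(val)) != sizeof(val))
			break;
	}
	close(efd);
	return done;
}
```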



If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <[email protected]>


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-20191114.cgz/lkp-skl-fpga01/eventfd1/will-it-scale/0x2000065

commit:
052c467cb5 (" block-5.7-2020-05-01")
a4ef93e263 ("eventfd: convert to f_op->read_iter()")

052c467cb58748e3 a4ef93e263d720a988a208c56aa
---------------- ---------------------------
        %stddev      %change        %stddev
            \            |              \
586005 -2.2% 573349 will-it-scale.per_process_ops
60944537 -2.2% 59628423 will-it-scale.workload
1611 -100.0% 0.00 meminfo.Mlocked
36.10 ± 2% +2.9% 37.15 boot-time.boot
3321 +2.5% 3403 boot-time.idle
49320 ±107% -80.6% 9578 ± 2% cpuidle.POLL.time
3633 ± 3% -23.7% 2773 ± 2% cpuidle.POLL.usage
11182 ±141% +139.9% 26832 ± 25% numa-numastat.node0.other_node
341388 ± 7% +6.7% 364292 ± 3% numa-numastat.node1.local_node
52.67 +6.3% 56.00 vmstat.cpu.sy
45.00 -6.7% 42.00 vmstat.cpu.us
1431 ± 2% +5.8% 1514 ± 2% vmstat.system.cs
15631 ± 61% -86.3% 2134 ± 25% numa-meminfo.node0.Inactive
17411 ± 53% -83.2% 2933 ± 26% numa-meminfo.node0.Shmem
12939 ± 75% +105.5% 26586 numa-meminfo.node1.Inactive
12675 ± 77% +109.0% 26486 ± 2% numa-meminfo.node1.Inactive(anon)
39471 ± 16% +12.9% 44547 ± 7% numa-meminfo.node1.KReclaimable
6841 ± 10% +8.1% 7396 ± 6% numa-meminfo.node1.PageTables
39471 ± 16% +12.9% 44547 ± 7% numa-meminfo.node1.SReclaimable
61920 ± 17% +30.8% 80987 numa-meminfo.node1.Shmem
121373 +9.3% 132631 ± 4% numa-meminfo.node1.Slab
211.67 ± 11% -100.0% 0.00 numa-vmstat.node0.nr_mlock
4352 ± 53% -83.2% 733.00 ± 26% numa-vmstat.node0.nr_shmem
155270 ± 10% +10.2% 171086 ± 3% numa-vmstat.node0.numa_other
3168 ± 76% +110.5% 6668 ± 2% numa-vmstat.node1.nr_inactive_anon
190.33 ± 13% -100.0% 0.00 numa-vmstat.node1.nr_mlock
1710 ± 10% +8.0% 1848 ± 6% numa-vmstat.node1.nr_page_table_pages
15498 ± 17% +30.9% 20281 numa-vmstat.node1.nr_shmem
9868 ± 16% +12.9% 11137 ± 7% numa-vmstat.node1.nr_slab_reclaimable
3168 ± 76% +110.5% 6668 ± 2% numa-vmstat.node1.nr_zone_inactive_anon
4141 ± 7% +11.5% 4616 ± 3% slabinfo.eventpoll_pwq.active_objs
4141 ± 7% +11.5% 4616 ± 3% slabinfo.eventpoll_pwq.num_objs
970.67 ± 9% +25.3% 1216 slabinfo.kmalloc-rcl-128.active_objs
970.67 ± 9% +25.3% 1216 slabinfo.kmalloc-rcl-128.num_objs
4484 ± 3% +17.9% 5287 ± 10% slabinfo.kmalloc-rcl-64.active_objs
4484 ± 3% +17.9% 5287 ± 10% slabinfo.kmalloc-rcl-64.num_objs
1119 ± 3% +15.1% 1288 ± 9% slabinfo.skbuff_ext_cache.active_objs
1119 ± 3% +15.1% 1288 ± 9% slabinfo.skbuff_ext_cache.num_objs
2484 +7.4% 2668 ± 3% slabinfo.trace_event_file.active_objs
2484 +7.4% 2668 ± 3% slabinfo.trace_event_file.num_objs
88548 +1.7% 90036 proc-vmstat.nr_active_anon
7035 +0.8% 7092 proc-vmstat.nr_inactive_anon
8842 +1.7% 8995 proc-vmstat.nr_mapped
403.00 -100.0% 0.00 proc-vmstat.nr_mlock
19852 ± 2% +5.8% 21001 proc-vmstat.nr_shmem
19893 +1.7% 20225 proc-vmstat.nr_slab_reclaimable
88548 +1.7% 90036 proc-vmstat.nr_zone_active_anon
7035 +0.8% 7092 proc-vmstat.nr_zone_inactive_anon
731288 +1.3% 740501 proc-vmstat.numa_hit
697623 +1.3% 706849 proc-vmstat.numa_local
14860 ± 92% +58.6% 23562 ± 51% proc-vmstat.numa_pte_updates
18740 ± 3% +9.9% 20592 proc-vmstat.pgactivate
800446 +1.3% 811159 proc-vmstat.pgalloc_normal
9767 ± 12% +10.2% 10765 ± 15% sched_debug.cfs_rq:/.load.avg
210597 +11.3% 234483 sched_debug.cfs_rq:/.min_vruntime.stddev
1.93 ± 19% +27.7% 2.46 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
59.22 ± 12% +30.6% 77.33 ± 16% sched_debug.cfs_rq:/.nr_spread_over.max
8.10 ± 17% +29.6% 10.49 ± 8% sched_debug.cfs_rq:/.nr_spread_over.stddev
210512 +11.3% 234373 sched_debug.cfs_rq:/.spread0.stddev
62.54 ± 8% +24.8% 78.03 ± 15% sched_debug.cfs_rq:/.util_avg.stddev
78.05 ± 5% +22.8% 95.81 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.stddev
19567 ± 14% +49.4% 29227 sched_debug.cpu.nr_switches.max
3236 ± 4% +14.1% 3692 sched_debug.cpu.nr_switches.stddev
783.06 ± 2% +11.7% 874.67 ± 7% sched_debug.cpu.sched_count.min
293.67 ± 7% +20.6% 354.17 ± 9% sched_debug.cpu.ttwu_count.min
618.29 ± 2% +10.6% 683.94 ± 3% sched_debug.cpu.ttwu_local.avg
280.56 ± 5% +18.2% 331.58 ± 8% sched_debug.cpu.ttwu_local.min
884.00 ± 57% +223.9% 2863 ± 66% interrupts.38:PCI-MSI.67633153-edge.eth0-TxRx-0
381.33 ± 44% +1845.1% 7417 ± 42% interrupts.39:PCI-MSI.67633154-edge.eth0-TxRx-1
2568 ± 95% -89.2% 277.50 ± 4% interrupts.40:PCI-MSI.67633155-edge.eth0-TxRx-2
225.33 ± 20% +328.3% 965.00 ± 32% interrupts.41:PCI-MSI.67633156-edge.eth0-TxRx-3
1789 ± 21% +30.2% 2328 interrupts.CPU0.RES:Rescheduling_interrupts
327.00 ± 6% +130.6% 754.00 ± 57% interrupts.CPU100.RES:Rescheduling_interrupts
315.67 ± 3% +53.3% 484.00 ± 27% interrupts.CPU101.RES:Rescheduling_interrupts
1094 ± 14% +126.0% 2473 ± 13% interrupts.CPU2.RES:Rescheduling_interrupts
6226 ± 26% +26.4% 7868 interrupts.CPU26.NMI:Non-maskable_interrupts
6226 ± 26% +26.4% 7868 interrupts.CPU26.PMI:Performance_monitoring_interrupts
1447 ± 39% +28.5% 1859 ± 26% interrupts.CPU26.RES:Rescheduling_interrupts
1550 ± 17% +23.9% 1920 ± 18% interrupts.CPU27.RES:Rescheduling_interrupts
4915 ± 28% +20.1% 5902 ± 33% interrupts.CPU29.NMI:Non-maskable_interrupts
4915 ± 28% +20.1% 5902 ± 33% interrupts.CPU29.PMI:Performance_monitoring_interrupts
884.00 ± 57% +223.9% 2863 ± 66% interrupts.CPU30.38:PCI-MSI.67633153-edge.eth0-TxRx-0
381.33 ± 44% +1845.1% 7417 ± 42% interrupts.CPU31.39:PCI-MSI.67633154-edge.eth0-TxRx-1
2568 ± 95% -89.2% 277.50 ± 4% interrupts.CPU32.40:PCI-MSI.67633155-edge.eth0-TxRx-2
225.33 ± 20% +328.3% 965.00 ± 32% interrupts.CPU33.41:PCI-MSI.67633156-edge.eth0-TxRx-3
6228 ± 26% +26.3% 7868 interrupts.CPU39.NMI:Non-maskable_interrupts
6228 ± 26% +26.3% 7868 interrupts.CPU39.PMI:Performance_monitoring_interrupts
6228 ± 26% +26.4% 7869 interrupts.CPU40.NMI:Non-maskable_interrupts
6228 ± 26% +26.4% 7869 interrupts.CPU40.PMI:Performance_monitoring_interrupts
6228 ± 26% +26.3% 7867 interrupts.CPU45.NMI:Non-maskable_interrupts
6228 ± 26% +26.3% 7867 interrupts.CPU45.PMI:Performance_monitoring_interrupts
6227 ± 26% +26.3% 7867 interrupts.CPU47.NMI:Non-maskable_interrupts
6227 ± 26% +26.3% 7867 interrupts.CPU47.PMI:Performance_monitoring_interrupts
6227 ± 26% +26.3% 7867 interrupts.CPU48.NMI:Non-maskable_interrupts
6227 ± 26% +26.3% 7867 interrupts.CPU48.PMI:Performance_monitoring_interrupts
6227 ± 26% +26.4% 7868 interrupts.CPU50.NMI:Non-maskable_interrupts
6227 ± 26% +26.4% 7868 interrupts.CPU50.PMI:Performance_monitoring_interrupts
6226 ± 26% +26.3% 7865 interrupts.CPU51.NMI:Non-maskable_interrupts
6226 ± 26% +26.3% 7865 interrupts.CPU51.PMI:Performance_monitoring_interrupts
1951 ± 6% -13.4% 1689 ± 3% interrupts.CPU52.RES:Rescheduling_interrupts
467.00 ± 10% +360.0% 2148 ± 18% interrupts.CPU57.RES:Rescheduling_interrupts
357.67 ± 5% +95.9% 700.50 ± 17% interrupts.CPU58.RES:Rescheduling_interrupts
335.67 ± 5% +14.1% 383.00 ± 12% interrupts.CPU60.RES:Rescheduling_interrupts
466.67 ± 44% +150.1% 1167 ± 16% interrupts.CPU62.RES:Rescheduling_interrupts
6210 ± 26% +26.5% 7857 interrupts.CPU69.NMI:Non-maskable_interrupts
6210 ± 26% +26.5% 7857 interrupts.CPU69.PMI:Performance_monitoring_interrupts
799.33 ± 54% -59.7% 322.00 interrupts.CPU7.RES:Rescheduling_interrupts
329.67 ± 5% +167.5% 882.00 ± 60% interrupts.CPU8.RES:Rescheduling_interrupts
6161 ± 35% +51.3% 9321 interrupts.CPU80.RES:Rescheduling_interrupts
4911 ± 22% -61.2% 1907 ± 55% interrupts.CPU81.RES:Rescheduling_interrupts
868.00 ± 53% +80.0% 1562 ± 34% interrupts.CPU82.RES:Rescheduling_interrupts
499.33 ± 13% -21.6% 391.50 ± 2% interrupts.CPU83.RES:Rescheduling_interrupts
328.33 ± 4% +148.1% 814.50 ± 60% interrupts.CPU9.RES:Rescheduling_interrupts
355.67 ± 13% +10.5% 393.00 ± 12% interrupts.CPU93.RES:Rescheduling_interrupts
730440 ± 3% +8.0% 788906 interrupts.NMI:Non-maskable_interrupts
730440 ± 3% +8.0% 788906 interrupts.PMI:Performance_monitoring_interrupts
24238 ± 7% +14.6% 27769 ± 10% softirqs.CPU0.RCU
106850 ± 2% +25.2% 133780 ± 20% softirqs.CPU1.TIMER
27066 ± 11% +18.5% 32064 ± 15% softirqs.CPU10.RCU
25790 ± 10% +20.2% 30999 ± 13% softirqs.CPU100.RCU
25624 ± 9% +21.7% 31186 ± 14% softirqs.CPU101.RCU
25892 ± 12% +19.9% 31034 ± 14% softirqs.CPU102.RCU
26207 ± 10% +18.1% 30961 ± 14% softirqs.CPU103.RCU
26890 ± 10% +18.1% 31769 ± 15% softirqs.CPU11.RCU
26960 ± 10% +18.7% 31993 ± 14% softirqs.CPU12.RCU
27170 ± 11% +19.5% 32473 ± 15% softirqs.CPU13.RCU
27089 ± 10% +20.0% 32496 ± 14% softirqs.CPU14.RCU
27534 ± 16% +31.3% 36166 ± 13% softirqs.CPU15.RCU
29615 ± 10% +23.8% 36651 ± 13% softirqs.CPU16.RCU
29695 ± 10% +24.7% 37023 ± 14% softirqs.CPU17.RCU
29884 ± 10% +23.9% 37016 ± 14% softirqs.CPU18.RCU
29957 ± 10% +22.7% 36761 ± 14% softirqs.CPU19.RCU
27903 ± 11% +17.1% 32662 ± 15% softirqs.CPU2.RCU
29475 ± 10% +22.8% 36187 ± 13% softirqs.CPU20.RCU
29223 ± 10% +24.7% 36438 ± 14% softirqs.CPU21.RCU
29403 ± 10% +24.7% 36680 ± 14% softirqs.CPU22.RCU
29223 ± 10% +24.5% 36377 ± 14% softirqs.CPU23.RCU
29439 ± 11% +23.8% 36447 ± 14% softirqs.CPU24.RCU
29644 ± 11% +26.4% 37466 ± 15% softirqs.CPU25.RCU
27156 ± 13% +19.6% 32470 ± 14% softirqs.CPU26.RCU
27077 ± 12% +15.5% 31267 ± 13% softirqs.CPU27.RCU
27006 ± 12% +17.6% 31759 ± 14% softirqs.CPU29.RCU
27206 ± 10% +19.1% 32405 ± 15% softirqs.CPU3.RCU
26797 ± 9% +26.4% 33871 ± 12% softirqs.CPU30.RCU
27088 ± 9% +24.2% 33643 ± 13% softirqs.CPU31.RCU
27162 ± 9% +24.1% 33721 ± 14% softirqs.CPU32.RCU
25150 ± 14% +31.9% 33177 ± 13% softirqs.CPU33.RCU
26739 ± 8% +23.9% 33128 ± 13% softirqs.CPU34.RCU
26626 ± 8% +26.5% 33670 ± 10% softirqs.CPU35.RCU
27097 ± 7% +22.5% 33195 ± 13% softirqs.CPU36.RCU
27004 ± 9% +23.5% 33344 ± 14% softirqs.CPU37.RCU
25138 +31.9% 33163 ± 14% softirqs.CPU38.RCU
26487 ± 8% +25.4% 33217 ± 14% softirqs.CPU39.RCU
27006 ± 11% +19.7% 32321 ± 16% softirqs.CPU4.RCU
26603 ± 8% +24.6% 33136 ± 12% softirqs.CPU40.RCU
26385 ± 9% +24.7% 32911 ± 12% softirqs.CPU41.RCU
27351 ± 11% +23.2% 33684 ± 14% softirqs.CPU42.RCU
27019 ± 9% +22.0% 32960 ± 13% softirqs.CPU43.RCU
26723 ± 10% +25.9% 33654 ± 12% softirqs.CPU44.RCU
29057 ± 11% +16.6% 33876 ± 12% softirqs.CPU45.RCU
29413 ± 11% +15.8% 34050 ± 13% softirqs.CPU46.RCU
29157 ± 11% +15.7% 33739 ± 12% softirqs.CPU47.RCU
29199 ± 12% +16.1% 33895 ± 12% softirqs.CPU48.RCU
28857 ± 10% +16.4% 33578 ± 13% softirqs.CPU49.RCU
27418 ± 10% +17.4% 32199 ± 16% softirqs.CPU5.RCU
28971 ± 11% +16.9% 33869 ± 12% softirqs.CPU50.RCU
28954 ± 13% +17.1% 33912 ± 12% softirqs.CPU51.RCU
30469 ± 8% +22.4% 37280 ± 9% softirqs.CPU52.RCU
105919 ± 2% +27.1% 134605 ± 18% softirqs.CPU52.TIMER
31184 ± 9% +23.6% 38533 ± 13% softirqs.CPU54.RCU
30675 ± 8% +24.9% 38314 ± 13% softirqs.CPU55.RCU
30626 ± 7% +23.2% 37723 ± 13% softirqs.CPU56.RCU
31019 ± 8% +20.4% 37347 ± 12% softirqs.CPU57.RCU
30597 ± 9% +22.8% 37588 ± 13% softirqs.CPU58.RCU
30243 ± 8% +24.8% 37751 ± 13% softirqs.CPU59.RCU
26980 ± 10% +19.0% 32102 ± 16% softirqs.CPU6.RCU
26808 ± 10% +22.9% 32935 ± 14% softirqs.CPU60.RCU
27006 ± 10% +23.9% 33453 ± 16% softirqs.CPU61.RCU
27337 ± 11% +21.7% 33259 ± 14% softirqs.CPU62.RCU
27593 ± 11% +21.1% 33417 ± 15% softirqs.CPU63.RCU
27796 ± 10% +21.0% 33641 ± 14% softirqs.CPU64.RCU
27087 ± 11% +22.1% 33072 ± 13% softirqs.CPU65.RCU
27109 ± 10% +21.5% 32925 ± 13% softirqs.CPU66.RCU
25097 ± 21% +31.9% 33094 ± 13% softirqs.CPU67.RCU
27755 ± 10% +19.6% 33194 ± 13% softirqs.CPU68.RCU
27476 ± 10% +22.2% 33569 ± 14% softirqs.CPU69.RCU
27402 ± 11% +19.1% 32628 ± 16% softirqs.CPU7.RCU
27829 ± 10% +20.8% 33604 ± 14% softirqs.CPU70.RCU
28217 ± 11% +19.3% 33669 ± 13% softirqs.CPU71.RCU
27146 ± 10% +21.5% 32977 ± 13% softirqs.CPU72.RCU
27327 ± 9% +19.6% 32674 ± 14% softirqs.CPU73.RCU
27173 ± 11% +22.7% 33345 ± 13% softirqs.CPU74.RCU
26143 ± 9% +28.8% 33669 ± 13% softirqs.CPU75.RCU
25936 ± 8% +28.8% 33398 ± 14% softirqs.CPU76.RCU
25725 ± 7% +30.7% 33626 ± 13% softirqs.CPU77.RCU
30496 ± 10% +21.6% 37072 ± 12% softirqs.CPU78.RCU
30408 ± 10% +20.5% 36633 ± 13% softirqs.CPU79.RCU
27088 ± 10% +19.4% 32345 ± 15% softirqs.CPU8.RCU
29710 ± 10% +38.3% 41102 ± 11% softirqs.CPU80.RCU
29133 ± 10% +21.6% 35420 ± 12% softirqs.CPU81.RCU
28787 ± 11% +21.3% 34923 ± 12% softirqs.CPU83.RCU
28669 ± 10% +21.8% 34907 ± 12% softirqs.CPU84.RCU
28342 ± 12% +23.3% 34938 ± 12% softirqs.CPU85.RCU
28874 ± 11% +20.9% 34922 ± 14% softirqs.CPU86.RCU
28600 ± 10% +21.2% 34651 ± 13% softirqs.CPU87.RCU
28668 ± 11% +20.7% 34590 ± 13% softirqs.CPU88.RCU
28553 ± 11% +20.9% 34534 ± 14% softirqs.CPU89.RCU
27215 ± 10% +19.7% 32580 ± 15% softirqs.CPU9.RCU
23437 +31.1% 30737 ± 13% softirqs.CPU90.RCU
25261 ± 9% +22.8% 31030 ± 13% softirqs.CPU91.RCU
25566 ± 10% +22.8% 31395 ± 13% softirqs.CPU92.RCU
25677 ± 10% +21.0% 31073 ± 13% softirqs.CPU93.RCU
25746 ± 11% +22.1% 31435 ± 14% softirqs.CPU94.RCU
25492 ± 9% +20.9% 30816 ± 13% softirqs.CPU95.RCU
25219 ± 9% +22.8% 30958 ± 13% softirqs.CPU96.RCU
25537 ± 10% +22.3% 31221 ± 13% softirqs.CPU97.RCU
25615 ± 10% +22.2% 31306 ± 13% softirqs.CPU98.RCU
25599 ± 9% +21.8% 31169 ± 13% softirqs.CPU99.RCU
2884148 ± 10% +21.7% 3508969 ± 14% softirqs.RCU
6.21 -6.2 0.00 perf-profile.calltrace.cycles-pp.eventfd_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
50.14 -3.3 46.80 perf-profile.calltrace.cycles-pp.write
12.15 -1.7 10.41 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
13.59 -1.7 11.90 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
26.88 -1.5 25.42 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
9.36 -1.4 7.92 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.read
7.73 -1.4 6.31 perf-profile.calltrace.cycles-pp.eventfd_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
27.40 -1.4 25.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
4.15 -1.3 2.85 perf-profile.calltrace.cycles-pp._copy_from_user.eventfd_write.vfs_write.ksys_write.do_syscall_64
2.20 -1.0 1.22 perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.11 -0.6 11.48 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.write
1.22 -0.5 0.72 ± 2% perf-profile.calltrace.cycles-pp.copy_user_generic_unrolled._copy_from_user.eventfd_write.vfs_write.ksys_write
1.65 -0.5 1.19 perf-profile.calltrace.cycles-pp.__might_fault._copy_from_user.eventfd_write.vfs_write.ksys_write
8.29 -0.4 7.89 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
2.99 -0.3 2.73 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.76 -0.2 0.55 perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_from_user.eventfd_write.vfs_write
2.31 -0.2 2.12 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 -0.1 1.23 ± 2% perf-profile.calltrace.cycles-pp.common_file_perm.security_file_permission.vfs_write.ksys_write.do_syscall_64
1.28 -0.1 1.21 perf-profile.calltrace.cycles-pp.common_file_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.63 -0.0 0.58 perf-profile.calltrace.cycles-pp.fsnotify.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.74 -0.0 0.70 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.eventfd_write.vfs_write.ksys_write.do_syscall_64
0.73 ± 3% +0.2 0.91 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 +0.2 0.84 perf-profile.calltrace.cycles-pp.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.81 ± 2% +0.2 1.02 perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
13.25 +0.4 13.70 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.__might_sleep.__might_fault._copy_to_iter.eventfd_read.new_sync_read
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_iter.eventfd_read.new_sync_read
0.00 +0.7 0.73 perf-profile.calltrace.cycles-pp.copy_user_generic_unrolled.copyout._copy_to_iter.eventfd_read.new_sync_read
0.00 +0.8 0.84 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.eventfd_read.new_sync_read.vfs_read.ksys_read
14.68 +1.2 15.90 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.63 +1.4 2.08 perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +1.5 1.48 perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.eventfd_read.new_sync_read.vfs_read
0.00 +1.8 1.77 perf-profile.calltrace.cycles-pp.__might_fault._copy_to_iter.eventfd_read.new_sync_read.vfs_read
11.98 +1.8 13.82 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.read
27.66 +2.7 30.34 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
28.20 +3.1 31.31 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
51.23 +3.3 54.56 perf-profile.calltrace.cycles-pp.read
0.00 +4.5 4.50 perf-profile.calltrace.cycles-pp._copy_to_iter.eventfd_read.new_sync_read.vfs_read.ksys_read
0.00 +7.1 7.06 perf-profile.calltrace.cycles-pp.eventfd_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
0.00 +8.2 8.20 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
50.38 -3.3 47.03 perf-profile.children.cycles-pp.write
19.51 -1.9 17.61 perf-profile.children.cycles-pp.syscall_return_via_sysret
12.23 -1.7 10.48 perf-profile.children.cycles-pp.vfs_write
13.63 -1.7 11.94 perf-profile.children.cycles-pp.ksys_write
7.79 -1.4 6.37 perf-profile.children.cycles-pp.eventfd_write
4.24 -1.3 2.93 perf-profile.children.cycles-pp._copy_from_user
2.89 -1.0 1.86 perf-profile.children.cycles-pp.fsnotify
5.41 -0.5 4.94 perf-profile.children.cycles-pp.security_file_permission
1.91 -0.3 1.60 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.34 ± 2% -0.2 0.15 ± 3% perf-profile.children.cycles-pp.__vfs_read
2.62 -0.2 2.45 perf-profile.children.cycles-pp.common_file_perm
0.15 ± 5% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.read@plt
0.50 -0.1 0.43 perf-profile.children.cycles-pp.fpregs_assert_state_consistent
1.24 -0.1 1.18 perf-profile.children.cycles-pp.fsnotify_parent
1.08 -0.1 1.02 perf-profile.children.cycles-pp.apparmor_file_permission
0.16 ± 5% -0.1 0.11 perf-profile.children.cycles-pp.__vfs_write
0.45 -0.0 0.41 perf-profile.children.cycles-pp.aa_file_perm
0.45 -0.0 0.43 perf-profile.children.cycles-pp.hrtimer_interrupt
0.06 -0.0 0.05 perf-profile.children.cycles-pp.write@plt
1.45 ± 2% +0.2 1.61 perf-profile.children.cycles-pp.__fget_light
1.51 +0.2 1.68 perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.00 +0.2 0.17 ± 2% perf-profile.children.cycles-pp.iov_iter_init
1.74 ± 2% +0.2 1.92 perf-profile.children.cycles-pp.__fdget_pos
0.74 +0.2 0.93 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.66 +0.2 0.85 perf-profile.children.cycles-pp.__x64_sys_write
0.77 ± 2% +0.2 0.97 perf-profile.children.cycles-pp.__might_sleep
0.99 +0.3 1.28 perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
13.32 +0.5 13.77 perf-profile.children.cycles-pp.vfs_read
2.59 +0.5 3.07 perf-profile.children.cycles-pp.__might_fault
6.33 +0.8 7.16 perf-profile.children.cycles-pp.eventfd_read
14.70 +1.2 15.93 perf-profile.children.cycles-pp.ksys_read
22.24 +1.2 23.48 perf-profile.children.cycles-pp.entry_SYSCALL_64
0.64 ± 2% +1.5 2.12 perf-profile.children.cycles-pp.__x64_sys_read
0.00 +1.5 1.51 perf-profile.children.cycles-pp.copyout
54.64 +1.5 56.16 perf-profile.children.cycles-pp.do_syscall_64
55.67 +1.7 57.39 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
51.46 +3.3 54.77 perf-profile.children.cycles-pp.read
0.00 +4.6 4.60 perf-profile.children.cycles-pp._copy_to_iter
0.00 +8.2 8.24 perf-profile.children.cycles-pp.new_sync_read
19.48 -1.9 17.59 perf-profile.self.cycles-pp.syscall_return_via_sysret
2.98 -1.3 1.64 perf-profile.self.cycles-pp.eventfd_read
2.82 -1.0 1.80 perf-profile.self.cycles-pp.fsnotify
2.17 ± 2% -0.9 1.31 perf-profile.self.cycles-pp.write
1.84 -0.3 1.54 perf-profile.self.cycles-pp._raw_spin_lock_irq
0.31 ± 4% -0.2 0.14 ± 3% perf-profile.self.cycles-pp.__vfs_read
0.61 ± 2% -0.2 0.46 perf-profile.self.cycles-pp._copy_from_user
2.12 -0.1 1.98 perf-profile.self.cycles-pp.common_file_perm
1.49 -0.1 1.35 perf-profile.self.cycles-pp.read
1.18 -0.1 1.07 perf-profile.self.cycles-pp.security_file_permission
0.84 ± 2% -0.1 0.73 ± 2% perf-profile.self.cycles-pp.vfs_read
0.15 ± 5% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.read@plt
0.48 -0.1 0.41 perf-profile.self.cycles-pp.fpregs_assert_state_consistent
2.71 -0.1 2.65 perf-profile.self.cycles-pp.eventfd_write
0.85 -0.1 0.79 perf-profile.self.cycles-pp.vfs_write
1.18 -0.1 1.13 perf-profile.self.cycles-pp.fsnotify_parent
0.95 -0.0 0.90 perf-profile.self.cycles-pp.apparmor_file_permission
0.12 ± 3% -0.0 0.08 perf-profile.self.cycles-pp.__vfs_write
0.45 -0.0 0.41 perf-profile.self.cycles-pp.aa_file_perm
0.06 -0.0 0.05 perf-profile.self.cycles-pp.write@plt
0.34 +0.0 0.37 perf-profile.self.cycles-pp.__fdget_pos
0.56 +0.1 0.61 perf-profile.self.cycles-pp.ksys_write
0.00 +0.2 0.16 perf-profile.self.cycles-pp.iov_iter_init
1.36 ± 2% +0.2 1.52 perf-profile.self.cycles-pp.__fget_light
0.72 +0.2 0.89 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.58 +0.2 0.77 perf-profile.self.cycles-pp.__x64_sys_write
1.02 +0.2 1.21 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.70 ± 2% +0.2 0.91 perf-profile.self.cycles-pp.__might_sleep
0.64 +0.3 0.90 perf-profile.self.cycles-pp.__might_fault
0.00 +0.3 0.29 perf-profile.self.cycles-pp.copyout
1.19 +0.3 1.53 perf-profile.self.cycles-pp.copy_user_generic_unrolled
24.37 +0.4 24.79 perf-profile.self.cycles-pp.do_syscall_64
0.53 +0.5 1.04 perf-profile.self.cycles-pp.ksys_read
0.00 +0.9 0.92 perf-profile.self.cycles-pp.new_sync_read
20.35 +1.3 21.62 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.00 +1.3 1.32 perf-profile.self.cycles-pp._copy_to_iter
0.56 ± 2% +1.4 1.98 perf-profile.self.cycles-pp.__x64_sys_read



will-it-scale.per_process_ops

600000 +------------------------------------------------------------------+
| O O O O O O O : O : : O O : : : |
500000 |-+ : : : : : : |
| : : : : : : |
| : : : : : : : :|
400000 |-+ : : : : : : : :|
| : : : : : : : :|
300000 |-+ : : : : : : : :|
| : : : : : : : :|
200000 |-+ : : : : : : : :|
| : : : : : : : :|
| : : : : |
100000 |-+ : : : : |
| : : : : |
0 +------------------------------------------------------------------+


will-it-scale.workload

7e+07 +-------------------------------------------------------------------+
| |
6e+07 |.+.+.+.+.+.+.+.+.+.+.+.+ O + +.+.+.+.+.+.+ +.+.+.+.+.+.+.+.+ |
| : : : : : : |
5e+07 |-+ : : : : : : |
| : :: : : : : |
4e+07 |-+ : : : : : : : :|
| : : : : : : : :|
3e+07 |-+ : : : : : : : :|
| : : : : : : : :|
2e+07 |-+ : : : : : : : :|
| :: :: :: ::|
1e+07 |-+ : : : : |
| : : : : |
0 +-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


Attachments:
(No filename) (34.26 kB)
config-5.7.0-rc3-00122-ga4ef93e263d72 (209.80 kB)
job-script (7.44 kB)
job.yaml (5.01 kB)
reproduce (323.00 B)