2018-02-25 14:47:52

by kernel test robot

Subject: [lkp-robot] [mm, mlock, vmscan] 9c4e6b1a70: stress-ng.hdd.ops_per_sec -7.9% regression


Greetings,

FYI, we noticed a -7.9% regression of stress-ng.hdd.ops_per_sec due to commit:


commit: 9c4e6b1a7027f102990c0395296015a812525f4d ("mm, mlock, vmscan: no more skipping pagevecs")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

in testcase: stress-ng
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:

nr_threads: 100%
testtime: 1s
class: io
cpufreq_governor: performance


Details are below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
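
If you want to approximate the job without the lkp framework, the
parameters above (class io, nr_threads 100%, testtime 1s, performance
governor) roughly correspond to a direct stress-ng run along the
following lines. This is only a sketch, not the exact command line lkp
generates from job.yaml:

    # set the performance cpufreq governor, as the job does
    sudo cpupower frequency-set -g performance
    # run the io stressor class, one instance per CPU, 1s per stressor
    stress-ng --sequential 0 --class io -t 1s --times --metrics-brief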

=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
io/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s

commit:
c3cc39118c ("mm: memcontrol: fix NR_WRITEBACK leak in memcg and system stats")
9c4e6b1a70 ("mm, mlock, vmscan: no more skipping pagevecs")

c3cc39118c3610eb 9c4e6b1a7027f102990c039529
---------------- --------------------------
%stddev %change %stddev
\ | \
1242815 -14.4% 1063699 stress-ng.hdd.ops
645914 -7.9% 594931 stress-ng.hdd.ops_per_sec
473.69 -2.5% 461.99 stress-ng.time.system_time
12569476 -9.4% 11392192 vmstat.memory.cache
802679 ± 7% +40.2% 1125287 ± 12% cpuidle.C3.time
2423 ± 4% +38.7% 3360 ± 15% cpuidle.C3.usage
16584420 -9.1% 15071340 numa-numastat.node1.local_node
16594335 -9.1% 15081279 numa-numastat.node1.numa_hit
2164 ± 6% +42.7% 3087 ± 17% turbostat.C3
0.10 ± 8% +0.0 0.14 ± 15% turbostat.C3%
0.05 ± 8% +33.3% 0.07 ± 17% turbostat.CPU%c3
1.911e+09 -1.1% 1.89e+09 perf-stat.branch-misses
7.536e+09 -3.3% 7.29e+09 perf-stat.cache-references
1.633e+12 -1.6% 1.607e+12 perf-stat.cpu-cycles
0.61 +1.7% 0.62 perf-stat.ipc
2.576e+08 -2.5% 2.513e+08 perf-stat.node-loads
394861 ± 11% -53.2% 184617 ± 22% meminfo.Active
205118 ± 4% -100.0% 4.00 meminfo.Active(file)
14015193 ± 4% -99.8% 21126 ± 2% meminfo.Inactive
13995095 ± 4% -100.0% 403.25 ± 11% meminfo.Inactive(file)
1.303e+08 -10.1% 1.171e+08 meminfo.MemAvailable
3.00 +4.5e+08% 13428597 ± 4% meminfo.Unevictable
2816 ± 53% +561.9% 18643 ±134% sched_debug.cpu.avg_idle.min
3.68 ± 3% -13.5% 3.18 ± 3% sched_debug.cpu.clock.stddev
3.68 ± 3% -13.5% 3.18 ± 3% sched_debug.cpu.clock_task.stddev
0.12 ± 3% -19.3% 0.09 ± 21% sched_debug.rt_rq:/.rt_time.avg
10.04 ± 3% -21.5% 7.88 ± 22% sched_debug.rt_rq:/.rt_time.max
1.06 ± 3% -21.5% 0.84 ± 22% sched_debug.rt_rq:/.rt_time.stddev
204608 ± 11% -62.2% 77341 ± 29% numa-meminfo.node0.Active
102698 ± 4% -100.0% 0.00 numa-meminfo.node0.Active(file)
6913218 ± 5% -99.8% 13623 ± 55% numa-meminfo.node0.Inactive
6903513 ± 5% -100.0% 197.75 ± 24% numa-meminfo.node0.Inactive(file)
190923 ± 14% -43.5% 107793 ± 19% numa-meminfo.node1.Active
102681 ± 4% -100.0% 4.00 numa-meminfo.node1.Active(file)
7063606 ± 4% -99.9% 7736 ±100% numa-meminfo.node1.Inactive
7053168 ± 4% -100.0% 204.00 ± 45% numa-meminfo.node1.Inactive(file)
3.00 +2.2e+08% 6713147 ± 4% numa-meminfo.node1.Unevictable
25680 ± 4% -100.0% 0.00 numa-vmstat.node0.nr_active_file
1730227 ± 6% -100.0% 49.00 ± 24% numa-vmstat.node0.nr_inactive_file
25680 ± 4% -100.0% 0.00 numa-vmstat.node0.nr_zone_active_file
1730144 ± 6% -100.0% 49.00 ± 24% numa-vmstat.node0.nr_zone_inactive_file
7950112 ± 5% -13.4% 6886737 ± 6% numa-vmstat.node0.numa_hit
7943736 ± 5% -13.4% 6880367 ± 6% numa-vmstat.node0.numa_local
25670 ± 4% -100.0% 1.00 numa-vmstat.node1.nr_active_file
1765886 ± 5% -100.0% 50.50 ± 45% numa-vmstat.node1.nr_inactive_file
0.00 +Inf% 1670961 ± 4% numa-vmstat.node1.nr_unevictable
25670 ± 4% -100.0% 1.00 numa-vmstat.node1.nr_zone_active_file
1765833 ± 5% -100.0% 50.50 ± 45% numa-vmstat.node1.nr_zone_inactive_file
0.00 +Inf% 1670970 ± 4% numa-vmstat.node1.nr_zone_unevictable
8050425 ± 5% -13.9% 6933412 ± 5% numa-vmstat.node1.numa_hit
7878053 ± 6% -14.2% 6761044 ± 6% numa-vmstat.node1.numa_local
51353 ± 4% -100.0% 1.00 proc-vmstat.nr_active_file
3257694 -10.2% 2924036 proc-vmstat.nr_dirty_background_threshold
6523354 -10.2% 5855222 proc-vmstat.nr_dirty_threshold
3493404 ± 5% -100.0% 100.25 ± 11% proc-vmstat.nr_inactive_file
0.00 +Inf% 3344234 ± 4% proc-vmstat.nr_unevictable
51353 ± 4% -100.0% 1.00 proc-vmstat.nr_zone_active_file
3493404 ± 5% -100.0% 100.25 ± 11% proc-vmstat.nr_zone_inactive_file
0.00 +Inf% 3344234 ± 4% proc-vmstat.nr_zone_unevictable
1014 ± 32% -40.5% 603.75 ± 26% proc-vmstat.numa_hint_faults
33019247 -9.7% 29815458 proc-vmstat.numa_hit
33002235 -9.7% 29798428 proc-vmstat.numa_local
1298 ± 57% -60.9% 507.75 ± 31% proc-vmstat.numa_pages_migrated
213856 ± 38% -31.2% 147128 ± 12% proc-vmstat.numa_pte_updates
361381 -99.8% 659.50 ± 3% proc-vmstat.pgactivate
33235386 -9.4% 30127871 proc-vmstat.pgfree
1298 ± 57% -60.9% 507.75 ± 31% proc-vmstat.pgmigrate_success
1.00 +3e+09% 30018233 proc-vmstat.unevictable_pgs_culled
28.62 ± 41% -27.8 0.77 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
28.62 ± 41% -27.8 0.77 ±100% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
28.62 ± 41% -27.8 0.77 ±100% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
28.85 ± 42% -27.4 1.49 ±117% perf-profile.calltrace.cycles-pp.write
28.12 ± 44% -27.3 0.77 ±100% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
28.12 ± 44% -27.3 0.77 ±100% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
26.31 ± 53% -26.3 0.00 perf-profile.calltrace.cycles-pp.devkmsg_write.__vfs_write.vfs_write.sys_write.do_syscall_64
26.31 ± 53% -26.3 0.00 perf-profile.calltrace.cycles-pp.printk_emit.devkmsg_write.__vfs_write.vfs_write.sys_write
26.31 ± 53% -26.3 0.00 perf-profile.calltrace.cycles-pp.vprintk_emit.printk_emit.devkmsg_write.__vfs_write.vfs_write
26.31 ± 53% -26.3 0.00 perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk_emit.devkmsg_write.__vfs_write
22.00 ± 68% -22.0 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk_emit.devkmsg_write
21.07 ± 71% -21.1 0.00 perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk_emit
21.07 ± 71% -21.1 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
21.07 ± 71% -21.1 0.00 perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
14.27 ± 72% -14.3 0.00 perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
6.80 ± 73% -6.8 0.00 perf-profile.calltrace.cycles-pp.delay_tsc.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.81 ±173% +5.1 5.89 ± 40% perf-profile.calltrace.cycles-pp.link_path_walk.path_openat.do_filp_open.do_sys_open.do_syscall_64
33.02 ± 12% +16.8 49.84 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
34.45 ± 12% +17.0 51.41 ± 5% perf-profile.calltrace.cycles-pp.secondary_startup_64
33.25 ± 12% +17.7 51.00 ± 6% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
33.25 ± 12% +17.7 51.00 ± 6% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
33.25 ± 12% +17.7 51.00 ± 6% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
28.62 ± 41% -27.8 0.77 ±100% perf-profile.children.cycles-pp.sys_write
28.85 ± 42% -27.4 1.49 ±117% perf-profile.children.cycles-pp.write
28.12 ± 44% -27.3 0.77 ±100% perf-profile.children.cycles-pp.vfs_write
28.12 ± 44% -27.3 0.77 ±100% perf-profile.children.cycles-pp.__vfs_write
26.31 ± 53% -26.3 0.00 perf-profile.children.cycles-pp.devkmsg_write
26.31 ± 53% -26.3 0.00 perf-profile.children.cycles-pp.printk_emit
26.31 ± 53% -26.3 0.00 perf-profile.children.cycles-pp.vprintk_emit
26.31 ± 53% -26.3 0.00 perf-profile.children.cycles-pp.console_unlock
22.24 ± 68% -22.2 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
22.00 ± 68% -22.0 0.00 perf-profile.children.cycles-pp.serial8250_console_write
21.30 ± 71% -21.3 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
21.07 ± 71% -21.1 0.00 perf-profile.children.cycles-pp.uart_console_write
59.80 ± 10% -19.5 40.33 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
59.56 ± 10% -19.2 40.33 ± 5% perf-profile.children.cycles-pp.do_syscall_64
14.51 ± 71% -14.5 0.00 perf-profile.children.cycles-pp.io_serial_in
7.49 ± 63% -7.5 0.00 perf-profile.children.cycles-pp.delay_tsc
0.81 ±173% +5.5 6.31 ± 41% perf-profile.children.cycles-pp.link_path_walk
34.21 ± 12% +16.8 51.03 ± 5% perf-profile.children.cycles-pp.cpuidle_enter_state
34.45 ± 12% +17.0 51.41 ± 5% perf-profile.children.cycles-pp.secondary_startup_64
34.45 ± 12% +17.0 51.41 ± 5% perf-profile.children.cycles-pp.cpu_startup_entry
34.45 ± 12% +17.0 51.41 ± 5% perf-profile.children.cycles-pp.do_idle
33.25 ± 12% +17.7 51.00 ± 6% perf-profile.children.cycles-pp.start_secondary
14.51 ± 71% -14.5 0.00 perf-profile.self.cycles-pp.io_serial_in
7.49 ± 63% -7.5 0.00 perf-profile.self.cycles-pp.delay_tsc



stress-ng.hdd.ops

1.4e+06 +-+---------------------------------------------------------------+
| .+.. .+. .+.. .+.. .+.+..|
1.2e+06 +-+ +.+..+..+..+ +..+. +..+..+..+ +..+..+.+. +. |
O O O O : O : O O O O O O O O O O O O O O
1e+06 +-+ O O O O O O |
| : : |
800000 +-+ : : |
| : : |
600000 +-+ : : |
| : : |
400000 +-+ : : |
| :: |
200000 +-+ :: |
| : |
0 +-+---------------------------------------------------------------+


stress-ng.hdd.ops_per_sec

700000 +-+----------------------------------------------------------------+
|..+..+.+..+..+..+ +..+..+..+..+.+..+..+..+.+..+..+..+..+.+..+..|
600000 O-+O O O O O O O O O O O O O O O O O O O O O O O O
| : : |
500000 +-+ : : |
| : : |
400000 +-+ : : |
| : : |
300000 +-+ : : |
| : : |
200000 +-+ : : |
| :: |
100000 +-+ : |
| : |
0 +-+----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong


Attachments:
config-4.16.0-rc2-00069-g9c4e6b1 (168.81 kB)
job-script (6.64 kB)
job.yaml (4.35 kB)
reproduce (263.00 B)

2018-02-25 22:04:46

by Shakeel Butt

Subject: Re: [lkp-robot] [mm, mlock, vmscan] 9c4e6b1a70: stress-ng.hdd.ops_per_sec -7.9% regression

On Sun, Feb 25, 2018 at 6:44 AM, kernel test robot
<[email protected]> wrote:
>
> Greeting,
>
> FYI, we noticed a -7.9% regression of stress-ng.hdd.ops_per_sec due to commit:
>
>
> commit: 9c4e6b1a7027f102990c0395296015a812525f4d ("mm, mlock, vmscan: no more skipping pagevecs")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> in testcase: stress-ng
> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> with following parameters:
>

Hi Xiaolong,

Is there a way I can get the output of "perf record -a -g" running in
parallel to the actual test on this machine? As I have mentioned
before, I am not able to reproduce this issue. I am trying to repro it
on a VM with 4 vcpus and 4 GiB of memory and I don't see any
difference. I suspect that it may repro on a larger machine, but I
don't have access to one.

thanks,
Shakeel

2018-02-26 03:00:27

by kernel test robot

Subject: Re: [lkp-robot] [mm, mlock, vmscan] 9c4e6b1a70: stress-ng.hdd.ops_per_sec -7.9% regression

Hi, Shakeel

On 02/25, Shakeel Butt wrote:
>On Sun, Feb 25, 2018 at 6:44 AM, kernel test robot
><[email protected]> wrote:
>>
>> Greeting,
>>
>> FYI, we noticed a -7.9% regression of stress-ng.hdd.ops_per_sec due to commit:
>>
>>
>> commit: 9c4e6b1a7027f102990c0395296015a812525f4d ("mm, mlock, vmscan: no more skipping pagevecs")
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>
>> in testcase: stress-ng
>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
>> with following parameters:
>>
>
>Hi Xiaolong,
>
>Is there a way I can get the output of "perf record -a -g" running in
>parallel to the actual test on this machine. As I have mentioned
>before I am not able to reproduce this issue. However I am trying to
>repro on a VM with 4 vcpus and 4 GiB memory and I don't see any
>difference. I am suspecting that it may repro on a larger machine but
>I don't have access to one.
>

perf.data attached. It was generated via `perf record -q -ag --realtime=1 -m 256`

Thanks,
Xiaolong

>thanks,
>Shakeel

2018-02-26 04:59:53

by kernel test robot

Subject: Re: [LKP] [lkp-robot] [mm, mlock, vmscan] 9c4e6b1a70: stress-ng.hdd.ops_per_sec -7.9% regression

On 02/26, Ye Xiaolong wrote:
>Hi, Shakeel
>
>On 02/25, Shakeel Butt wrote:
>>On Sun, Feb 25, 2018 at 6:44 AM, kernel test robot
>><[email protected]> wrote:
>>>
>>> Greeting,
>>>
>>> FYI, we noticed a -7.9% regression of stress-ng.hdd.ops_per_sec due to commit:
>>>
>>>
>>> commit: 9c4e6b1a7027f102990c0395296015a812525f4d ("mm, mlock, vmscan: no more skipping pagevecs")
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>
>>> in testcase: stress-ng
>>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
>>> with following parameters:
>>>
>>
>>Hi Xiaolong,
>>
>>Is there a way I can get the output of "perf record -a -g" running in
>>parallel to the actual test on this machine. As I have mentioned
>>before I am not able to reproduce this issue. However I am trying to
>>repro on a VM with 4 vcpus and 4 GiB memory and I don't see any
>>difference. I am suspecting that it may repro on a larger machine but
>>I don't have access to one.
>>
>
>perf.data attached. It was generated via `perf record -q -ag --realtime=1 -m 256`

Attached perf-profile.gz is the output of `perf report`.
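
For reference, a text profile can be regenerated from the attached
perf.data with something along these lines (the exact options used to
produce perf-profile.gz may differ):

    # read the attached perf.data and write a plain-text report
    perf report -i perf.data --stdio > perf-profile.txt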

Thanks,
Xiaolong
>
>Thanks,
>Xiaolong
>
>>thanks,
>>Shakeel


Attachments:
perf.data (245.04 kB)
perf-profile.gz (2.74 kB)

2018-02-26 21:25:39

by Shakeel Butt

Subject: Re: [LKP] [lkp-robot] [mm, mlock, vmscan] 9c4e6b1a70: stress-ng.hdd.ops_per_sec -7.9% regression

On Sun, Feb 25, 2018 at 8:56 PM, Ye Xiaolong <[email protected]> wrote:
> On 02/26, Ye Xiaolong wrote:
>>Hi, Shakeel
>>
>>On 02/25, Shakeel Butt wrote:
>>>On Sun, Feb 25, 2018 at 6:44 AM, kernel test robot
>>><[email protected]> wrote:
>>>>
>>>> Greeting,
>>>>
>>>> FYI, we noticed a -7.9% regression of stress-ng.hdd.ops_per_sec due to commit:
>>>>
>>>>
>>>> commit: 9c4e6b1a7027f102990c0395296015a812525f4d ("mm, mlock, vmscan: no more skipping pagevecs")
>>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>>
>>>> in testcase: stress-ng
>>>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
>>>> with following parameters:
>>>>
>>>
>>>Hi Xiaolong,
>>>
>>>Is there a way I can get the output of "perf record -a -g" running in
>>>parallel to the actual test on this machine. As I have mentioned
>>>before I am not able to reproduce this issue. However I am trying to
>>>repro on a VM with 4 vcpus and 4 GiB memory and I don't see any
>>>difference. I am suspecting that it may repro on a larger machine but
>>>I don't have access to one.
>>>
>>
>>perf.data attached. It was generated via `perf record -q -ag --realtime=1 -m 256`
>
> Attached perf-profile.gz is the result of `perf report` result.
>

Hi Xiaolong,

Can you please give me the actual full stress-ng command used in this
test? I have run the following command on the linux tree (with the top
commit being 4c3579f6cadd5e), with and without my patch, on a 72-thread
machine.

$ stress-ng --sequential 0 --class io -t 10s --times --verify --metrics-brief

Result without the patch:

stress-ng: info: [16828] successful run completed in 97.90s (1 min, 37.90 secs)
stress-ng: info: [16828] stressor      bogo ops real time  usr time  sys time   bogo ops/s    bogo ops/s
stress-ng: info: [16828]                           (secs)    (secs)    (secs)  (real time) (usr+sys time)
stress-ng: info: [16828] aio              44928     10.00      0.00      0.00      4492.79          0.00
stress-ng: info: [16828] aiol               155     15.52      0.00      0.00         9.98          0.00
stress-ng: info: [16828] hdd              71136     14.25      0.00      2.78      4991.33      25588.49
stress-ng: info: [16828] rawdev           16681     10.07      0.00      0.00      1656.10          0.00
stress-ng: info: [16828] readahead    383446363     10.00    340.42    327.13  38330308.93     574408.45
stress-ng: info: [16828] revio         40008242     10.00     12.27    704.03   4000263.49      55854.03
stress-ng: info: [16828] seek           1807314     10.58      1.02     22.24    170899.04      77700.52
stress-ng: info: [16828] sync-file         1730     10.05      0.00      0.67       172.20       2582.09
stress-ng: info: [16828] for a 97.90s run time:
stress-ng: info: [16828] 7048.63s available CPU time
stress-ng: info: [16828] 355.18s user time ( 5.04%)
stress-ng: info: [16828] 1059.33s system time ( 15.03%)
stress-ng: info: [16828] 1414.51s total time ( 20.07%)
stress-ng: info: [16828] load average: 53.40 28.06 14.48


Result with the patch:

stress-ng: info: [31637] successful run completed in 94.40s (1 min, 34.40 secs)
stress-ng: info: [31637] stressor      bogo ops real time  usr time  sys time   bogo ops/s    bogo ops/s
stress-ng: info: [31637]                           (secs)    (secs)    (secs)  (real time) (usr+sys time)
stress-ng: info: [31637] aio              44928     10.00      0.00      0.00      4492.79          0.00
stress-ng: info: [31637] aiol               138     14.26      0.00      0.00         9.68          0.00
stress-ng: info: [31637] hdd              75305     13.77      0.00      2.82      5467.52      26703.90
stress-ng: info: [31637] rawdev           13309     10.05      0.00      1.29      1323.72      10317.05
stress-ng: info: [31637] readahead    373902555     10.00    323.52    316.95  37382265.90     583794.02
stress-ng: info: [31637] revio         45142381     10.00     13.73    702.51   4513648.49      63026.89
stress-ng: info: [31637] seek           3046010     10.32      1.83     23.92    295270.48     118291.65
stress-ng: info: [31637] sync-file         1858     10.03      0.00      0.65       185.17       2858.46
stress-ng: info: [31637] for a 94.40s run time:
stress-ng: info: [31637] 6796.83s available CPU time
stress-ng: info: [31637] 340.50s user time ( 5.01%)
stress-ng: info: [31637] 1050.70s system time ( 15.46%)
stress-ng: info: [31637] 1391.20s total time ( 20.47%)
stress-ng: info: [31637] load average: 51.64 26.97 14.28


What should I be looking at in the results? Also please note that I
have to compile stress-ng statically to run on these machines and skip
the lkp framework.
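
For anyone wanting to do the same, a static stress-ng binary can
typically be built roughly as follows. This is a sketch, assuming the
stress-ng Makefile of your version supports the STATIC=1 knob;
otherwise -static has to be forced through LDFLAGS by hand:

    # sketch: build a static stress-ng to copy onto the test machine
    git clone https://github.com/ColinIanKing/stress-ng.git
    cd stress-ng
    make STATIC=1 -j"$(nproc)"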

thanks,
Shakeel