Hello scheduler maintainers,
I got the following warning in Linux 5.18-rc1. I don't have a reproducer yet;
it happens randomly. Please shed some light.
dmesg output and config are attached below...
Thanks!
<4>[ 2845.651268][ T0] ------------[ cut here ]------------
<4>[ 2845.651274][ T0] cfs_rq->avg.load_avg || cfs_rq->avg.util_avg || cfs_rq->avg.runnable_avg
<4>[ 2845.651292][ T0] WARNING: CPU: 1 PID: 0 at kernel/sched/fair.c:3355 update_blocked_averages (kernel/sched/fair.c:3353 kernel/sched/fair.c:8214 kernel/sched/fair.c:8301)
<4>[ 2845.651298][ T0] Modules linked in: ccm xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp nft_compat nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 rfcomm nf_tables nfnetlink bridge stp llc overlay cmac algif_hash algif_skcipher af_alg bnep snd_soc_skl_hda_dsp snd_soc_hdac_hdmi snd_soc_intel_hda_dsp_common snd_sof_probes snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic intel_tcc_cooling i915 nls_iso8859_1 snd_soc_dmic snd_sof_pci_intel_tgl snd_sof_intel_hda_common snd_soc_hdac_hda rtw88_8822ce snd_sof_intel_hda soundwire_intel rtw88_8822c soundwire_generic_allocation soundwire_cadence snd_sof_pci rtw88_pci snd_sof_xtensa_dsp rtw88_core snd_sof snd_sof_utils snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi x86_pkg_temp_thermal mac80211 soundwire_bus ledtrig_audio snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi snd_hda_codec mei_hdcp intel_powerclamp
<4>[ 2845.651330][ T0] intel_rapl_msr snd_hda_core btusb snd_hwdep btrtl coretemp btmtk snd_pcm kvm_intel btintel btbcm kvm snd_seq_midi bluetooth snd_seq_midi_event uvcvideo cfg80211 snd_rawmidi snd_seq videobuf2_vmalloc crct10dif_pclmul snd_seq_device videobuf2_memops ghash_clmulni_intel snd_timer videobuf2_v4l2 aesni_intel hp_wmi videobuf2_common crypto_simd cryptd platform_profile drm_buddy ttm videodev snd sparse_keymap wmi_bmof joydev input_leds serio_raw ee1004 drm_dp_helper ecdh_generic mc efi_pstore libarc4 ecc hid_multitouch soundcore cec rc_core processor_thermal_device_pci_legacy drm_kms_helper processor_thermal_device processor_thermal_rfim i2c_algo_bit mei_me sysimgblt processor_thermal_mbox processor_thermal_rapl syscopyarea mei sysfillrect intel_rapl_common fb_sys_fops intel_soc_dts_iosf int3403_thermal int3400_thermal mac_hid int340x_thermal_zone acpi_thermal_rel dptf_pch_fivr acpi_pad sch_fq_codel drm msr parport_pc ppdev lp parport ip_tables x_tables autofs4
<4>[ 2845.651370][ T0] usbhid btrfs raid6_pq xor libcrc32c nvme hid_generic nvme_core i2c_i801 intel_lpss_pci xhci_pci crc32_pclmul vmd intel_lpss i2c_smbus xhci_pci_renesas idma64 wmi i2c_hid_acpi i2c_hid hid video pinctrl_tigerlake
<4>[ 2845.651381][ T0] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G W 5.18.0-rc1-superb-owl #5 f3938b7027e9f0a27179200b9e84ac711db665f6
<4>[ 2845.651384][ T0] Hardware name: HP HP Laptop 14s-dq2xxx/87FD, BIOS F.15 09/15/2021
<4>[ 2845.651385][ T0] RIP: 0010:update_blocked_averages (kernel/sched/fair.c:3353 kernel/sched/fair.c:8214 kernel/sched/fair.c:8301)
<4>[ 2845.651387][ T0] Code: 0f 0b 41 83 bd c0 0a 00 00 01 0f 86 ec fd ff ff e9 f4 fd ff ff c6 05 81 c7 72 01 01 48 c7 c7 9b 2a 57 82 31 c0 e8 38 7d fa ff <0f> 0b 8b 45 f8 85 c0 0f 85 53 ff ff ff e9 c6 fe ff ff 45 31 f6 83
All code
========
0: 0f 0b ud2
2: 41 83 bd c0 0a 00 00 cmpl $0x1,0xac0(%r13)
9: 01
a: 0f 86 ec fd ff ff jbe 0xfffffffffffffdfc
10: e9 f4 fd ff ff jmp 0xfffffffffffffe09
15: c6 05 81 c7 72 01 01 movb $0x1,0x172c781(%rip) # 0x172c79d
1c: 48 c7 c7 9b 2a 57 82 mov $0xffffffff82572a9b,%rdi
23: 31 c0 xor %eax,%eax
25: e8 38 7d fa ff call 0xfffffffffffa7d62
2a:* 0f 0b ud2 <-- trapping instruction
2c: 8b 45 f8 mov -0x8(%rbp),%eax
2f: 85 c0 test %eax,%eax
31: 0f 85 53 ff ff ff jne 0xffffffffffffff8a
37: e9 c6 fe ff ff jmp 0xffffffffffffff02
3c: 45 31 f6 xor %r14d,%r14d
3f: 83 .byte 0x83
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: 8b 45 f8 mov -0x8(%rbp),%eax
5: 85 c0 test %eax,%eax
7: 0f 85 53 ff ff ff jne 0xffffffffffffff60
d: e9 c6 fe ff ff jmp 0xfffffffffffffed8
12: 45 31 f6 xor %r14d,%r14d
15: 83 .byte 0x83
<4>[ 2845.651389][ T0] RSP: 0018:ffffc900001bfd40 EFLAGS: 00010086
<4>[ 2845.651391][ T0] RAX: 0000000000000048 RBX: ffff888466a73fc0 RCX: 0000000000000003
<4>[ 2845.651392][ T0] RDX: 0000000000000000 RSI: ffffffff8257b8d2 RDI: 00000000ffffffff
<4>[ 2845.651393][ T0] RBP: ffff888466a74140 R08: 0000000000000000 R09: ffff888476bfe000
<4>[ 2845.651394][ T0] R10: 00000000ffefffff R11: ffffc900001bfba0 R12: ffff888466a748f8
<4>[ 2845.651395][ T0] R13: ffff888466a73e80 R14: 0000000000000000 R15: ffff888466a748f8
<4>[ 2845.651397][ T0] FS: 0000000000000000(0000) GS:ffff888466a40000(0000) knlGS:0000000000000000
<4>[ 2845.651398][ T0] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 2845.651399][ T0] CR2: 00007efdb6f2b230 CR3: 0000000009826004 CR4: 0000000000770ee0
<4>[ 2845.651400][ T0] PKRU: 55555554
<4>[ 2845.651401][ T0] Call Trace:
<4>[ 2845.651403][ T0] <TASK>
<4>[ 2845.651407][ T0] newidle_balance (./include/linux/rcupdate.h:692 kernel/sched/fair.c:10957)
<4>[ 2845.651410][ T0] pick_next_task_fair (kernel/sched/fair.c:7394)
<4>[ 2845.651412][ T0] pick_next_task (kernel/sched/core.c:5696 kernel/sched/core.c:5768)
<4>[ 2845.651415][ T0] ? _raw_spin_lock_nested (kernel/locking/spinlock.c:378)
<4>[ 2845.651418][ T0] __schedule (kernel/sched/core.c:6346 ./include/asm-generic/bitops/instrumented-atomic.h:42 ./include/linux/thread_info.h:94 ./include/linux/sched.h:1988 ./include/linux/sched.h:2019 kernel/sched/core.c:6347)
<4>[ 2845.651421][ T0] schedule_idle (./arch/x86/include/asm/bitops.h:207 ./include/asm-generic/bitops/instrumented-non-atomic.h:135 ./include/linux/thread_info.h:118 ./include/linux/sched.h:2153 kernel/sched/core.c:6483)
<4>[ 2845.651423][ T0] do_idle+0x260/0x290
<4>[ 2845.651426][ T0] cpu_startup_entry (kernel/sched/idle.c:399)
<4>[ 2845.651428][ T0] start_secondary (smpboot.c:?)
<4>[ 2845.651430][ T0] secondary_startup_64_no_verify (??:?)
<4>[ 2845.651436][ T0] </TASK>
<4>[ 2845.651437][ T0] irq event stamp: 329722
<4>[ 2845.651438][ T0] hardirqs last enabled at (329721): tick_nohz_idle_enter (kernel/time/tick-sched.c:1161 kernel/time/tick-sched.c:1161)
<4>[ 2845.651441][ T0] hardirqs last disabled at (329722): do_idle+0x8b/0x290
<4>[ 2845.651443][ T0] softirqs last enabled at (329714): __irq_exit_rcu (kernel/softirq.c:617 kernel/softirq.c:639)
<4>[ 2845.651445][ T0] softirqs last disabled at (329701): __irq_exit_rcu (kernel/softirq.c:617 kernel/softirq.c:639)
<4>[ 2845.651447][ T0] ---[ end trace 0000000000000000 ]---
<6>[13420.623334][ C7] perf: interrupt took too long (2530 > 2500), lowering kernel.perf_event_max_sample_rate to 78900
--
Ammar Faizi
On 04/04/2022 08:19, Ammar Faizi wrote:
>
> Hello scheduler maintainers,
>
> I got the following warning in Linux 5.18-rc1. I don't have a
> reproducer yet; it happens randomly. Please shed some light.
Tried to recreate the issue but no success so far. I used your config
file, clang-14 and a Xeon CPU E5-2690 v2 (2 sockets, 40 CPUs) with 20
two-level cgroupv1 taskgroups '/X/Y' with 'hackbench (10 groups, 40 fds)
+ idling' running in all '/X/Y/'.
What userspace are you running?
Was there some pressure on your machine when it happened?
> <6>[13420.623334][ C7] perf: interrupt took too long (2530 > 2500),
> lowering kernel.perf_event_max_sample_rate to 78900
Maybe you could split the SCHED_WARN_ON so we know which signal causes this?
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299d67ab..0d45e09e5bfc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3350,9 +3350,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq
*cfs_rq)
* Make sure that rounding and/or propagation of PELT values never
* break this.
*/
- SCHED_WARN_ON(cfs_rq->avg.load_avg ||
- cfs_rq->avg.util_avg ||
- cfs_rq->avg.runnable_avg);
+ SCHED_WARN_ON(cfs_rq->avg.load_avg);
+ SCHED_WARN_ON(cfs_rq->avg.util_avg);
+ SCHED_WARN_ON(cfs_rq->avg.runnable_avg);
return true;
}
[...]
On 4/5/22 7:21 PM, Dietmar Eggemann wrote:
> Tried to recreate the issue but no success so far. I used your config
> file, clang-14 and a Xeon CPU E5-2690 v2 (2 sockets, 40 CPUs) with 20
> two-level cgroupv1 taskgroups '/X/Y' with 'hackbench (10 groups, 40 fds)
> + idling' running in all '/X/Y/'.
>
> What userspace are you running?
HP laptop, Intel i7-1165G7, 8 CPUs, with 16 GB of RAM, running Ubuntu 21.10.
Just a daily workstation: compiling kernels, browsing and coding stuff.
> Was there some pressure on your machine when it happened?
Yeah, it might have been. I don't fully remember what I was doing at the
time it happened, though.
>> <6>[13420.623334][ C7] perf: interrupt took too long (2530 > 2500),
>> lowering kernel.perf_event_max_sample_rate to 78900
>
> Maybe you could split the SCHED_WARN_ON so we know which signal causes this?
OK, I will apply the diff on top of 5.18-rc1 and start using it for my daily
routine tomorrow morning. Let's see if I can hit this bug again. I will send
an update later...
Thank you.
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299d67ab..0d45e09e5bfc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3350,9 +3350,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq
> *cfs_rq)
> * Make sure that rounding and/or propagation of PELT values never
> * break this.
> */
> - SCHED_WARN_ON(cfs_rq->avg.load_avg ||
> - cfs_rq->avg.util_avg ||
> - cfs_rq->avg.runnable_avg);
> + SCHED_WARN_ON(cfs_rq->avg.load_avg);
> + SCHED_WARN_ON(cfs_rq->avg.util_avg);
> + SCHED_WARN_ON(cfs_rq->avg.runnable_avg);
>
> return true;
> }
>
> [...]
--
Ammar Faizi
On 4/5/22 7:21 PM, Dietmar Eggemann wrote:
> Maybe you could split the SCHED_WARN_ON so we know which signal causes this?
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299d67ab..0d45e09e5bfc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3350,9 +3350,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq
> *cfs_rq)
> * Make sure that rounding and/or propagation of PELT values never
> * break this.
> */
> - SCHED_WARN_ON(cfs_rq->avg.load_avg ||
> - cfs_rq->avg.util_avg ||
> - cfs_rq->avg.runnable_avg);
> + SCHED_WARN_ON(cfs_rq->avg.load_avg);
> + SCHED_WARN_ON(cfs_rq->avg.util_avg);
> + SCHED_WARN_ON(cfs_rq->avg.runnable_avg);
>
> return true;
> }
error: corrupt patch at line 6
Trivial enough, fixed it. Compiling now...
ammarfaizi2@integral2:~/work/linux.work$ git log -n1
commit 3123109284176b1532874591f7c81f3837bbdc17 (HEAD -> master, tag: v5.18-rc1, @torvalds/linux/master)
Author: Linus Torvalds <[email protected]>
Date: Sun Apr 3 14:08:21 2022 -0700
Linux 5.18-rc1
ammarfaizi2@integral2:~/work/linux.work$ git diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299d67ab..0d45e09e5bfc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3350,9 +3350,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
* Make sure that rounding and/or propagation of PELT values never
* break this.
*/
- SCHED_WARN_ON(cfs_rq->avg.load_avg ||
- cfs_rq->avg.util_avg ||
- cfs_rq->avg.runnable_avg);
+ SCHED_WARN_ON(cfs_rq->avg.load_avg);
+ SCHED_WARN_ON(cfs_rq->avg.util_avg);
+ SCHED_WARN_ON(cfs_rq->avg.runnable_avg);
return true;
}
ammarfaizi2@integral2:~/work/linux.work$
--
Ammar Faizi
On 05/04/2022 15:13, Ammar Faizi wrote:
> On 4/5/22 7:21 PM, Dietmar Eggemann wrote:
>> Tried to recreate the issue but no success so far. I used your config
>> file, clang-14 and a Xeon CPU E5-2690 v2 (2 sockets, 40 CPUs) with 20
>> two-level cgroupv1 taskgroups '/X/Y' with 'hackbench (10 groups, 40 fds)
>> + idling' running in all '/X/Y/'.
>>
>> What userspace are you running?
>
> HP laptop, Intel i7-1165G7, 8 CPUs, with 16 GB of RAM, running Ubuntu 21.10.
> Just a daily workstation: compiling kernels, browsing and coding stuff.
Can you check that CFS Bandwidth control (CONFIG_CFS_BANDWIDTH=y) is
still not used on Ubuntu desktop 21.10?
It shouldn't but I can't verify since I'm still on 20.04 LTS Desktop:
$ mount | grep "cgroup2\|\bcpu\b"
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
The CPU controller is still used in cgroupv1, so cgroupv2 can't use it:
$ cat /sys/fs/cgroup/unified/cgroup.controllers
/* empty */
And there is no cgroupv1 hierarchy under /sys/fs/cgroup/cpu,cpuacct/.
No cpu.cfs_quota_us files with something other than -1.
So CFS Bandwidth control is not used.
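Something like the following (just a sketch, assuming the v1 mount point
above) should print nothing when bandwidth control is unused:

$ find /sys/fs/cgroup/cpu,cpuacct -name cpu.cfs_quota_us -exec grep -L '^-1$' {} +

grep -L lists the files that do *not* contain -1, i.e. the cgroups that
actually have a CFS quota configured.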
[...]
On 4/6/22 7:21 PM, Dietmar Eggemann wrote:
> On 05/04/2022 15:13, Ammar Faizi wrote:
>> On 4/5/22 7:21 PM, Dietmar Eggemann wrote:
>>> Tried to recreate the issue but no success so far. I used your config
>>> file, clang-14 and a Xeon CPU E5-2690 v2 (2 sockets, 40 CPUs) with 20
>>> two-level cgroupv1 taskgroups '/X/Y' with 'hackbench (10 groups, 40 fds)
>>> + idling' running in all '/X/Y/'.
>>>
>>> What userspace are you running?
>>
>> HP laptop, Intel i7-1165G7, 8 CPUs, with 16 GB of RAM, running Ubuntu 21.10.
>> Just a daily workstation: compiling kernels, browsing and coding stuff.
>
> Can you check that CFS Bandwidth control (CONFIG_CFS_BANDWIDTH=y) is
> still not used on Ubuntu desktop 21.10?
>
> It shouldn't but I can't verify since I'm still on 20.04 LTS Desktop:
>
> $ mount | grep "cgroup2\|\bcpu\b"
> cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
>
> The CPU controller is still used in cgroupv1, so cgroupv2 can't use it:
>
> $ cat /sys/fs/cgroup/unified/cgroup.controllers
> /* empty */
>
> And there is no cgroupv1 hierarchy under /sys/fs/cgroup/cpu,cpuacct/.
>
> No cpu.cfs_quota_us files with something other than -1.
>
> So CFS Bandwidth control is not used.
Not familiar with CFS stuff, but here...
===============
ammarfaizi2@integral2:~$ mount | grep "cgroup2\|\bcpu\b"
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
ammarfaizi2@integral2:~$ cat /sys/fs/cgroup/unified/cgroup.controllers
cat: /sys/fs/cgroup/unified/cgroup.controllers: No such file or directory
ammarfaizi2@integral2:~$ ls /sys/fs/cgroup/{cpu,cpuacct}
ls: cannot access '/sys/fs/cgroup/cpu': No such file or directory
ls: cannot access '/sys/fs/cgroup/cpuacct': No such file or directory
ammarfaizi2@integral2:~$
ammarfaizi2@integral2:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 21.10"
NAME="Ubuntu"
VERSION_ID="21.10"
VERSION="21.10 (Impish Indri)"
VERSION_CODENAME=impish
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=impish
ammarfaizi2@integral2:~$
===============
Update:
So far I have been using and torturing my machine for a day, but
still couldn't reproduce the issue. It seems I hit a bug that
only happens rarely. I will continue using this kernel until 5.18-rc2
before recompiling it.
--
Ammar Faizi
On 06/04/2022 22:34, Ammar Faizi wrote:
> On 4/6/22 7:21 PM, Dietmar Eggemann wrote:
>> On 05/04/2022 15:13, Ammar Faizi wrote:
>>> On 4/5/22 7:21 PM, Dietmar Eggemann wrote:
[...]
> Not familiar with CFS stuff, but here...
>
> ===============
> ammarfaizi2@integral2:~$ mount | grep "cgroup2\|\bcpu\b"
> cgroup2 on /sys/fs/cgroup type cgroup2
> (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
> ammarfaizi2@integral2:~$ cat /sys/fs/cgroup/unified/cgroup.controllers
> cat: /sys/fs/cgroup/unified/cgroup.controllers: No such file or directory
> ammarfaizi2@integral2:~$ ls /sys/fs/cgroup/{cpu,cpuacct}
> ls: cannot access '/sys/fs/cgroup/cpu': No such file or directory
> ls: cannot access '/sys/fs/cgroup/cpuacct': No such file or directory
[...]
Looks like 21.10 finally abandoned legacy cgroup v1 and switched to v2
completely, which is now mounted under /sys/fs/cgroup.
So your /sys/fs/cgroup/cgroup.controllers should contain `cpu`.
Can you check if any of the cpu.max files under /sys/fs/cgroup has
something other than `max 100000`?
The background is that if this is the case, cgroups (i.e. cfs_rqs) might be
throttled and this could be related to what you see. I haven't
stress-tested it so far with active CFS BW ctrl (cfs_rq throttling).
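As a rough sketch (assuming the pure-v2 mount point /sys/fs/cgroup as on
your system), something like this would list only the cgroups that set a
non-default limit:

$ find /sys/fs/cgroup -name cpu.max -exec grep -L '^max 100000$' {} +

No output would mean every cpu.max is still at its default, i.e. no cfs_rq
throttling in play.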
> Update:
> So far I have been using and torturing my machine for a day, but
> still couldn't reproduce the issue. It seems I hit a bug that
> only happens rarely. I will continue using this kernel until 5.18-rc2
> before recompiling it.
Thanks.
On 4/7/22 5:52 PM, Dietmar Eggemann wrote:
[...]
> Looks like 21.10 finally abandoned legacy cgroup v1 and switched to v2
> completely, which is now mounted under /sys/fs/cgroup.
>
> So your /sys/fs/cgroup/cgroup.controllers should contain `cpu`.
>
> Can you check if any of the cpu.max files under /sys/fs/cgroup has
> something other than `max 100000`?
I only see "max 100000" at the moment. I'm not sure whether it changes when I
do other activities, though. If you need more information, I can always
send it, so feel free to ask.
> The background is that if this is the case, cgroups (i.e. cfs_rqs) might be
> throttled and this could be related to what you see. I haven't
> stress-tested it so far with active CFS BW ctrl (cfs_rq throttling).
root@integral2:~#
root@integral2:~# cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory hugetlb pids rdma
root@integral2:~#
root@integral2:~# ls /sys/fs/cgroup
cgroup.controllers cgroup.subtree_control cpu.stat io.cost.qos memory.pressure sys-kernel-config.mount
cgroup.max.depth cgroup.threads dev-hugepages.mount io.pressure memory.stat sys-kernel-debug.mount
cgroup.max.descendants cpu.pressure dev-mqueue.mount io.stat -.mount sys-kernel-tracing.mount
cgroup.procs cpuset.cpus.effective init.scope machine.slice proc-sys-fs-binfmt_misc.mount system.slice
cgroup.stat cpuset.mems.effective io.cost.model memory.numa_stat sys-fs-fuse-connections.mount user.slice
root@integral2:~#
root@integral2:~# cd /sys/fs/cgroup/
root@integral2:/sys/fs/cgroup#
root@integral2:/sys/fs/cgroup# more $(find | grep cpu.max) | cat
::::::::::::::
./sys-fs-fuse-connections.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./sys-fs-fuse-connections.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./sys-kernel-config.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./sys-kernel-config.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./sys-kernel-debug.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./sys-kernel-debug.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./-.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./-.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./dev-mqueue.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./dev-mqueue.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./user.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./user.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./sys-kernel-tracing.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./sys-kernel-tracing.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./init.scope/cpu.max.burst
::::::::::::::
0
::::::::::::::
./init.scope/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/irqbalance.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/irqbalance.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-update-utmp.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-update-utmp.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-sysusers.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-sysusers.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/system-systemd\x2dfsck.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/system-systemd\x2dfsck.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/run-user-1000-doc.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/run-user-1000-doc.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-bare-5.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-bare-5.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-udevd-control.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-udevd-control.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/lvm2-monitor.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/lvm2-monitor.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/run-snapd-ns-snap\x2dstore.mnt.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/run-snapd-ns-snap\x2dstore.mnt.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-journal-flush.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-journal-flush.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/run-qemu.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/run-qemu.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/containerd.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/containerd.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-sysctl.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-sysctl.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/packagekit.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/packagekit.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snapd.apparmor.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snapd.apparmor.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-udevd.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-udevd.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-udevd-kernel.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-udevd-kernel.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/whoopsie.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/whoopsie.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/cron.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/cron.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/acpid.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/acpid.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/thermald.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/thermald.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dev-nvme0n1p6.swap/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-nvme0n1p6.swap/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/docker.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/docker.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/polkit.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/polkit.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-remount-fs.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-remount-fs.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/networkd-dispatcher.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/networkd-dispatcher.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/rtkit-daemon.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/rtkit-daemon.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-snap\x2dstore-558.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-snap\x2dstore-558.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-gtk\x2dcommon\x2dthemes-1519.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-gtk\x2dcommon\x2dthemes-1519.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/libvirtd-admin.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/libvirtd-admin.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/bluetooth.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/bluetooth.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/lvm2-lvmpolld.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/lvm2-lvmpolld.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/home.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/home.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-core-12941.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-core-12941.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/accounts-daemon.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/accounts-daemon.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-tmpfiles-setup.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-tmpfiles-setup.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dev-disk-by\x2did-nvme\x2deui.0000000001000000e4d25c928d835401\x2dpart6.swap/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-disk-by\x2did-nvme\x2deui.0000000001000000e4d25c928d835401\x2dpart6.swap/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dev-disk-by\x2duuid-37a29cd2\x2d7d0f\x2d4a23\x2d9ffc\x2dac7768fd19c7.swap/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-disk-by\x2duuid-37a29cd2\x2d7d0f\x2d4a23\x2d9ffc\x2dac7768fd19c7.swap/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/wpa_supplicant.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/wpa_supplicant.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/system-modprobe.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/system-modprobe.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/libvirtd.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/libvirtd.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-journald-dev-log.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-journald-dev-log.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/ModemManager.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/ModemManager.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/console-setup.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/console-setup.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/virtlockd.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/virtlockd.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-firefox-1154.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-firefox-1154.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/alsa-restore.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/alsa-restore.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-journald.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-journald.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/qemu-kvm.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/qemu-kvm.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/php8.0-fpm.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/php8.0-fpm.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-udev-trigger.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-udev-trigger.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/system-systemd\x2dbacklight.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/system-systemd\x2dbacklight.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/power-profiles-daemon.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/power-profiles-daemon.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/unattended-upgrades.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/unattended-upgrades.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/plymouth-quit-wait.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/plymouth-quit-wait.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-rfkill.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-rfkill.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/ssh.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/ssh.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/syslog.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/syslog.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/virtlockd-admin.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/virtlockd-admin.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/colord.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/colord.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/plymouth-read-write.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/plymouth-read-write.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/run-user-1000-gvfs.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/run-user-1000-gvfs.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/virtlogd-admin.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/virtlogd-admin.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/tmp-.tmpfs.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/tmp-.tmpfs.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/libvirt-guests.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/libvirt-guests.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-initctl.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-initctl.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/NetworkManager.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/NetworkManager.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/lm-sensors.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/lm-sensors.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snapd.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snapd.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/gdm.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/gdm.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-core-12834.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-core-12834.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-machined.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-machined.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/switcheroo-control.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/switcheroo-control.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/ufw.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/ufw.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-random-seed.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-random-seed.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dbus.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dbus.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-journald-audit.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-journald-audit.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/libvirtd-ro.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/libvirtd-ro.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snapd.seeded.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snapd.seeded.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-snapcraft-7010.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-snapcraft-7010.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dev-disk-by\x2dpartuuid-7f986e8c\x2d0f7f\x2d4f9f\x2db792\x2d8207b9dcd5de.swap/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-disk-by\x2dpartuuid-7f986e8c\x2d0f7f\x2d4f9f\x2db792\x2d8207b9dcd5de.swap/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/uuidd.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/uuidd.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/run-snapd-ns.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/run-snapd-ns.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/rsyslog.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/rsyslog.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-modules-load.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-modules-load.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-gnome\x2d3\x2d38\x2d2004-99.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-gnome\x2d3\x2d38\x2d2004-99.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/blk-availability.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/blk-availability.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-firefox-1188.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-firefox-1188.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/boot-efi.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/boot-efi.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-tmpfiles-setup-dev.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-tmpfiles-setup-dev.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/run-user-1000.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/run-user-1000.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-fsckd.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-fsckd.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/kerneloops.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/kerneloops.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/avahi-daemon.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/avahi-daemon.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-disk-by\x2dpath-pci\x2d0000:00:0e.0\x2dpci\x2d10000:e1:00.0\x2dnvme\x2d1\x2dpart6.swap/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-disk-by\x2dpath-pci\x2d0000:00:0e.0\x2dpci\x2d10000:e1:00.0\x2dnvme\x2d1\x2dpart6.swap/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/libvirtd.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/libvirtd.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-journald.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-journald.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snapd.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snapd.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-core20-1405.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-core20-1405.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/kmod-static-nodes.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/kmod-static-nodes.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/cups-browsed.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/cups-browsed.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/openvpn.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/openvpn.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/docker.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/docker.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dev-disk-by\x2did-nvme\x2dINTEL_SSDPEKNW512G8H_BTNH128109LJ512A\x2dpart6.swap/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dev-disk-by\x2did-nvme\x2dINTEL_SSDPEKNW512G8H_BTNH128109LJ512A\x2dpart6.swap/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/apport.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/apport.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/cups.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/cups.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/upower.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/upower.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-core20-1376.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-core20-1376.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/apparmor.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/apparmor.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-resolved.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-resolved.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/binfmt-support.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/binfmt-support.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/udisks2.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/udisks2.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/acpid.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/acpid.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-snapcraft-7201.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-snapcraft-7201.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dbus.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dbus.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-timesyncd.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-timesyncd.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/NetworkManager-wait-online.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/NetworkManager-wait-online.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/system-getty.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/system-getty.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-snap\x2dstore-557.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-snap\x2dstore-557.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/keyboard-setup.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/keyboard-setup.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/virtlogd.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/virtlogd.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-user-sessions.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-user-sessions.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/dm-event.socket/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/dm-event.socket/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/avahi-daemon.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/avahi-daemon.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/systemd-logind.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/systemd-logind.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/snap-gnome\x2d3\x2d38\x2d2004-87.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/snap-gnome\x2d3\x2d38\x2d2004-87.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./system.slice/setvtrgb.service/cpu.max.burst
::::::::::::::
0
::::::::::::::
./system.slice/setvtrgb.service/cpu.max
::::::::::::::
max 100000
::::::::::::::
./proc-sys-fs-binfmt_misc.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./proc-sys-fs-binfmt_misc.mount/cpu.max
::::::::::::::
max 100000
::::::::::::::
./machine.slice/cpu.max.burst
::::::::::::::
0
::::::::::::::
./machine.slice/cpu.max
::::::::::::::
max 100000
::::::::::::::
./dev-hugepages.mount/cpu.max.burst
::::::::::::::
0
::::::::::::::
./dev-hugepages.mount/cpu.max
::::::::::::::
max 100000
root@integral2:/sys/fs/cgroup#
--
Ammar Faizi
On 08/04/2022 08:03, Ammar Faizi wrote:
> On 4/7/22 5:52 PM, Dietmar Eggemann wrote:
>
> [...]
>
>> Looks like 21.10 finally abandoned legacy cgroup v1 and switched to v2
>> completely, which is now mounted under /sys/fs/cgroup .
>>
>> So your /sys/fs/cgroup/cgroup.controllers should contain `cpu`.
>>
>> Can you check if any of the cpu.max files under /sys/fs/cgroup has
>> something else then `max 100000` ?
>
> I only see "max 100000" at the moment. I'm not sure whether it changes when I
> do other activities, though. If you need more information, I can always
> send it, so feel free to ask.
Looks like you saw the same issue which got fixed here:
https://lkml.kernel.org/r/[email protected]
So it has nothing to do with CFS BW control. It's triggered by a task with a
very low nice value and load_avg=1 during cfs_rq attach.
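If I read that fix right, a rough illustration of the corner case (the
weight and divider values below are assumptions on my side, not taken from
your trace): a nice -20 task has an se weight of 88761 while the PELT
divider is at most ~47742 (LOAD_AVG_MAX), so attaching an entity with
load_avg=1 computes

$ echo $(( 1 * 47742 / 88761 ))
0

i.e. load_sum rounds down to 0 while load_avg stays 1, and that residue can
leave a cfs_rq with a zero *_sum but non-zero *_avg, which is the invariant
the SCHED_WARN_ON() in cfs_rq_is_decayed() checks.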
On 4/14/22 4:38 PM, Dietmar Eggemann wrote:
> Looks like you saw the same issue which got fixed here:
>
> https://lkml.kernel.org/r/[email protected]
>
> So it has nothing to do with CFS BW control. It's triggered by a task with a
> very low nice value and load_avg=1 during cfs_rq attach.
Yeah, it looks like I hit the same SCHED_WARN_ON() as the one explained
in that patch, so I assume it's fixed now. I never managed to
reproduce the bug myself, but I am moving on from this now.
Thanks for the update!
--
Ammar Faizi