A customer using nohz_full has experienced the following interruption:
oslat-1004510 [018] timer_cancel: timer=0xffff90a7ca663cf8
oslat-1004510 [018] timer_expire_entry: timer=0xffff90a7ca663cf8 function=delayed_work_timer_fn now=4709188240 baseclk=4709188240
oslat-1004510 [018] workqueue_queue_work: work struct=0xffff90a7ca663cd8 function=fb_flashcursor workqueue=events_power_efficient req_cpu=8192 cpu=18
oslat-1004510 [018] workqueue_activate_work: work struct 0xffff90a7ca663cd8
oslat-1004510 [018] sched_wakeup: kworker/18:1:326 [120] CPU:018
oslat-1004510 [018] timer_expire_exit: timer=0xffff90a7ca663cf8
oslat-1004510 [018] irq_work_entry: vector=246
oslat-1004510 [018] irq_work_exit: vector=246
oslat-1004510 [018] tick_stop: success=0 dependency=SCHED
oslat-1004510 [018] hrtimer_start: hrtimer=0xffff90a70009cb00 function=tick_sched_timer/0x0 ...
oslat-1004510 [018] softirq_exit: vec=1 [action=TIMER]
oslat-1004510 [018] softirq_entry: vec=7 [action=SCHED]
oslat-1004510 [018] softirq_exit: vec=7 [action=SCHED]
oslat-1004510 [018] tick_stop: success=0 dependency=SCHED
oslat-1004510 [018] sched_switch: oslat:1004510 [120] R ==> kworker/18:1:326 [120]
kworker/18:1-326 [018] workqueue_execute_start: work struct 0xffff90a7ca663cd8: function fb_flashcursor
kworker/18:1-326 [018] workqueue_queue_work: work struct=0xffff9078f119eed0 function=drm_fb_helper_damage_work workqueue=events req_cpu=8192 cpu=18
kworker/18:1-326 [018] workqueue_activate_work: work struct 0xffff9078f119eed0
kworker/18:1-326 [018] timer_start: timer=0xffff90a7ca663cf8 function=delayed_work_timer_fn ...
Set wq_power_efficient to true when nohz_full is enabled.
This makes the power-efficient workqueues unbound, which allows
the work items queued on them to be moved to housekeeping (HK) CPUs.
Signed-off-by: Marcelo Tosatti <[email protected]>
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 76e60faed892..45b3a63954a9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -6630,6 +6630,13 @@ void __init workqueue_init_early(void)
 	wq_update_pod_attrs_buf = alloc_workqueue_attrs();
 	BUG_ON(!wq_update_pod_attrs_buf);
 
+	/*
+	 * If nohz_full is enabled, set power efficient workqueue as unbound.
+	 * This allows workqueue items to be moved to HK CPUs.
+	 */
+	if (housekeeping_enabled(HK_TYPE_TICK))
+		wq_power_efficient = true;
+
 	/* initialize WQ_AFFN_SYSTEM pods */
 	pt->pod_cpus = kcalloc(1, sizeof(pt->pod_cpus[0]), GFP_KERNEL);
 	pt->pod_node = kcalloc(1, sizeof(pt->pod_node[0]), GFP_KERNEL);
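For context on why a single global flag is enough: events_power_efficient is created with WQ_POWER_EFFICIENT, and alloc_workqueue() turns such workqueues into unbound ones when wq_power_efficient is set; since nohz_full also removes the isolated CPUs from the default unbound-workqueue cpumask, the now-unbound queue keeps its work off CPUs like CPU 18 above. The sketch below is illustrative only and is not taken from fbcon - the module, the work function name (blink_fn) and the period are made up - but it reproduces the usage pattern seen in the trace: a self-rearming delayed work item on system_power_efficient_wq.

#include <linux/module.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

static void blink_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(blink_work, blink_fn);

static void blink_fn(struct work_struct *work)
{
        /*
         * With wq_power_efficient off this prints the queueing CPU;
         * with it on (now implied by nohz_full) it prints whatever CPU
         * the unbound worker pool picked, normally a housekeeping CPU.
         */
        pr_info("blink ran on CPU %d\n", raw_smp_processor_id());

        /* Re-arm, mirroring how cursor blinking keeps re-queueing itself. */
        queue_delayed_work(system_power_efficient_wq, &blink_work, HZ / 5);
}

static int __init blink_init(void)
{
        queue_delayed_work(system_power_efficient_wq, &blink_work, HZ / 5);
        return 0;
}

static void __exit blink_exit(void)
{
        cancel_delayed_work_sync(&blink_work);
}

module_init(blink_init);
module_exit(blink_exit);
MODULE_LICENSE("GPL");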
On Fri, Jan 19, 2024 at 12:54:39PM -0300, Marcelo Tosatti wrote:
>
> A customer using nohz_full has experienced the following interruption:
>
> [...]
>
> Set wq_power_efficient to true when nohz_full is enabled.
> This makes the power-efficient workqueues unbound, which allows
> the work items queued on them to be moved to housekeeping (HK) CPUs.
>
> Signed-off-by: Marcelo Tosatti <[email protected]>
Applied to wq/for-6.9.
A side note: with the recent affinity improvements to unbound workqueues, I
wonder whether we'd be able to drop wq_power_efficient and just use
system_unbound_wq instead without noticeable perf difference.
Thanks.
--
tejun
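As background for the side note: for a caller, dropping the power-efficient queue would mostly be a substitution along the lines of the sketch below (stats_work and its callers are hypothetical, not a proposed patch); the open question is whether any current user still measurably benefits from per-CPU execution.

#include <linux/workqueue.h>

static void stats_fn(struct work_struct *work)
{
        /* ...periodic, non-latency-critical bookkeeping... */
}
static DECLARE_DELAYED_WORK(stats_work, stats_fn);

static void queue_stats(void)
{
        /*
         * Today: per-CPU unless wq_power_efficient is set (via the
         * workqueue.power_efficient boot parameter,
         * CONFIG_WQ_POWER_EFFICIENT_DEFAULT or, with the patch above,
         * nohz_full).
         */
        queue_delayed_work(system_power_efficient_wq, &stats_work, HZ);
}

static void queue_stats_unbound(void)
{
        /*
         * What the side note suggests instead: always unbound, with
         * placement governed by the unbound cpumask and affinity scopes.
         */
        queue_delayed_work(system_unbound_wq, &stats_work, HZ);
}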
On Fri, Jan 19, 2024 at 01:57:48PM -1000, Tejun Heo wrote:
> On Fri, Jan 19, 2024 at 12:54:39PM -0300, Marcelo Tosatti wrote:
> > [...]
>
> Applied to wq/for-6.9.
>
> A side note: with the recent affinity improvements to unbound workqueues, I
> wonder whether we'd be able to drop wq_power_efficient and just use
> system_unbound_wq instead without noticeable perf difference.
>
> Thanks.
>
> --
> tejun
Tejun,
About the performance difference (of running locally vs. running
remotely), can you list a few performance-sensitive workqueues
(where per-CPU execution makes a significant difference)?

Because I suppose it would be safe (from a performance-regression
perspective) to move all delayed work items to housekeeping CPUs.

And also, being more extreme, why not add an option to mark all
workqueues as unbound (or perhaps allow userspace control of binding,
even for workqueues marked as "per-CPU")?
Thanks.
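As background for the binding question: for unbound workqueues the kernel already offers a userspace knob, but only when the workqueue is created with WQ_SYSFS, and there is no equivalent for per-CPU workqueues. A hypothetical sketch (the workqueue name, work function and setup helper are all made up):

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static void example_fn(struct work_struct *work)
{
        /* ...deferrable housekeeping work... */
}
static DECLARE_WORK(example_work, example_fn);

static int example_setup(void)
{
        /*
         * WQ_SYSFS exposes /sys/devices/virtual/workqueue/example_unbound/
         * (cpumask, nice, ...), so an admin can steer these workers onto
         * housekeeping CPUs.  Per-CPU workqueues have no such knob.
         */
        example_wq = alloc_workqueue("example_unbound",
                                     WQ_UNBOUND | WQ_SYSFS, 0);
        if (!example_wq)
                return -ENOMEM;

        queue_work(example_wq, &example_work);
        return 0;
}

The placement of all unbound workers can also be restricted globally by writing to /sys/devices/virtual/workqueue/cpumask.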
Hello, Marcelo.
On Mon, Jan 22, 2024 at 11:22:10AM -0300, Marcelo Tosatti wrote:
> About the performance difference (of running locally vs. running
> remotely), can you list a few performance-sensitive workqueues
> (where per-CPU execution makes a significant difference)?
Unfortunately, I have no idea. It goes way back, and I'm not sure anyone
has actually tested the difference in a long time. We'd have to dig through
the history to gather some context, set up a benchmark that exercises the
path heavily, and see whether the difference is still there.
> Because I suppose it would be safe (from a performance-regression
> perspective) to move all delayed work items to housekeeping CPUs.
Yeah, replacing power_efficient with unbound should be safe.
> And also, being more extreme, why not add an option to mark all
> workqueues as unbound (or perhaps allow userspace control of binding,
> even for workqueues marked as "per-CPU")?
There are correctness issues with per-CPU workqueues - e.g. accessing local
atomic counters, CPU states and whatnot. Also, many per-CPU users already
know that the CPU is cache-hot as they're queueing on the local CPU. I'm not
against moving more users towards unbound workqueues, but that'd have to be
done case by case, unfortunately.
Thanks.
--
tejun
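To make the correctness point concrete, here is a hypothetical pattern (not taken from any real driver; the counter, work items and helper names are made up) that would break if its workqueue were silently switched to unbound: the work function folds a per-CPU counter and relies on running on the CPU that queued it.

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

static DEFINE_PER_CPU(u64, pkt_count);
static DEFINE_PER_CPU(struct work_struct, flush_work);

static void flush_fn(struct work_struct *work)
{
        /*
         * Correct only if the worker runs on the CPU that queued it:
         * on an unbound workqueue this could fold another CPU's counter.
         */
        u64 n = this_cpu_read(pkt_count);

        this_cpu_write(pkt_count, 0);
        pr_info("flushed %llu packets\n", n);
}

static void flush_setup(void)
{
        int cpu;

        for_each_possible_cpu(cpu)
                INIT_WORK(per_cpu_ptr(&flush_work, cpu), flush_fn);
}

static void kick_local_flush(void)
{
        int cpu = get_cpu();    /* stay on this CPU while queueing */

        /* system_wq is per-CPU, so the callback normally runs on @cpu. */
        schedule_work_on(cpu, per_cpu_ptr(&flush_work, cpu));
        put_cpu();
}

Converting such a user to an unbound workqueue means passing the target CPU (or the data) explicitly, which is why the conversion has to be audited case by case.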