2019-12-11 23:08:39

by Daniel Jordan

Subject: Re: [RFC 1/4] workqueue: fix selecting cpu for queuing work

[please cc maintainers]

On Wed, Dec 11, 2019 at 06:59:19PM +0800, Hillf Danton wrote:
> Round robin is needed only for unbound workqueues, and wq_unbound_cpumask
> has nothing to do with standard per-cpu workqueues, so in the
> WORK_CPU_UNBOUND case we have to select the cpu with the workqueue type
> taken into account.

Good catch. I'd include something like this in the changelog.

Otherwise, work queued on a bound workqueue with WORK_CPU_UNBOUND might
not prefer the local CPU if wq_unbound_cpumask is non-empty and doesn't
include that CPU.

With that you can add

Reviewed-by: Daniel Jordan <[email protected]>


2020-01-23 23:20:28

by Daniel Jordan

Subject: Re: [RFC 1/4] workqueue: fix selecting cpu for queuing work

On Wed, Dec 11, 2019 at 06:07:35PM -0500, Daniel Jordan wrote:
> [please cc maintainers]
>
> On Wed, Dec 11, 2019 at 06:59:19PM +0800, Hillf Danton wrote:
> > Round robin is needed only for unbound workqueues, and wq_unbound_cpumask
> > has nothing to do with standard per-cpu workqueues, so in the
> > WORK_CPU_UNBOUND case we have to select the cpu with the workqueue type
> > taken into account.
>
> Good catch. I'd include something like this in the changelog.
>
> Otherwise, work queued on a bound workqueue with WORK_CPU_UNBOUND might
> not prefer the local CPU if wq_unbound_cpumask is non-empty and doesn't
> include that CPU.
>
> With that you can add
>
> Reviewed-by: Daniel Jordan <[email protected]>

Any plans to repost this patch, Hillf? If not, I can do it while retaining
your authorship.

Adding back the context, which I forgot to keep when adding the maintainers.

> > Fixes: ef557180447f ("workqueue: schedule WORK_CPU_UNBOUND work on wq_unbound_cpumask CPUs")
> > Signed-off-by: Hillf Danton <[email protected]>
> > ---
> >
> > --- a/kernel/workqueue.c
> > +++ c/kernel/workqueue.c
> > @@ -1409,16 +1409,19 @@ static void __queue_work(int cpu, struct
> >  	if (unlikely(wq->flags & __WQ_DRAINING) &&
> >  	    WARN_ON_ONCE(!is_chained_work(wq)))
> >  		return;
> > +
> >  	rcu_read_lock();
> >  retry:
> > -	if (req_cpu == WORK_CPU_UNBOUND)
> > -		cpu = wq_select_unbound_cpu(raw_smp_processor_id());
> > -
> >  	/* pwq which will be used unless @work is executing elsewhere */
> > -	if (!(wq->flags & WQ_UNBOUND))
> > -		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
> > -	else
> > +	if (wq->flags & WQ_UNBOUND) {
> > +		if (req_cpu == WORK_CPU_UNBOUND)
> > +			cpu = wq_select_unbound_cpu(raw_smp_processor_id());
> >  		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
> > +	} else {
> > +		if (req_cpu == WORK_CPU_UNBOUND)
> > +			cpu = raw_smp_processor_id();
> > +		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
> > +	}
> >
> >  	/*
> >  	 * If @work was previously on a different pool, it might still be
> >
> >