2019-08-15 05:46:22

by Herbert Xu

Subject: Re: [PATCH 1/2] padata: always acquire cpu_hotplug_lock before pinst->lock

On Fri, Aug 09, 2019 at 03:28:56PM -0400, Daniel Jordan wrote:
> On a 5.2 kernel, lockdep complains when offlining a CPU and writing to a
> parallel_cpumask sysfs file.
>
> echo 0 > /sys/devices/system/cpu/cpu1/online
> echo ff > /sys/kernel/pcrypt/pencrypt/parallel_cpumask
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.2.0-padata-base+ #19 Not tainted
> ------------------------------------------------------
> cpuhp/1/13 is trying to acquire lock:
> ... (&pinst->lock){+.+.}, at: padata_cpu_prep_down+0x37/0x70
>
> but task is already holding lock:
> ... (cpuhp_state-down){+.+.}, at: cpuhp_thread_fun+0x34/0x240
>
> which lock already depends on the new lock.
>
> padata doesn't take cpu_hotplug_lock and pinst->lock in a consistent
> order. Which should be first? CPU hotplug calls into padata with
> cpu_hotplug_lock already held, so it should have priority.

Yeah this is clearly a bug, but I think we need to tackle something
else first.

> diff --git a/kernel/padata.c b/kernel/padata.c
> index b60cc3dcee58..d056276a96ce 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -487,9 +487,7 @@ static void __padata_stop(struct padata_instance *pinst)
>
> synchronize_rcu();
>
> - get_online_cpus();
> padata_flush_queues(pinst->pd);
> - put_online_cpus();
> }

As I pointed out earlier, the whole concept of flushing the queues is
suspect. So we should tackle that first, and it may obviate the need
for get_online_cpus() entirely if the flush call disappears.

My main worry is that you're adding an extra lock around synchronize_rcu,
and that is something that should only be done after careful
investigation.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


2019-08-21 04:15:47

by Daniel Jordan

Subject: Re: [PATCH 1/2] padata: always acquire cpu_hotplug_lock before pinst->lock

[sorry for late reply, moved to new place in past week]

On 8/15/19 1:15 AM, Herbert Xu wrote:
> On Fri, Aug 09, 2019 at 03:28:56PM -0400, Daniel Jordan wrote:
>> padata doesn't take cpu_hotplug_lock and pinst->lock in a consistent
>> order. Which should be first? CPU hotplug calls into padata with
>> cpu_hotplug_lock already held, so it should have priority.
>
> Yeah this is clearly a bug, but I think we need to tackle something
> else first.
>
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index b60cc3dcee58..d056276a96ce 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -487,9 +487,7 @@ static void __padata_stop(struct padata_instance *pinst)
>>
>> synchronize_rcu();
>>
>> - get_online_cpus();
>> padata_flush_queues(pinst->pd);
>> - put_online_cpus();
>> }
>
> As I pointed out earlier, the whole concept of flushing the queues is
> suspect. So we should tackle that first, and it may obviate the need
> for get_online_cpus() entirely if the flush call disappears.
>
> My main worry is that you're adding an extra lock around synchronize_rcu,
> and that is something that should only be done after careful
> investigation.

Agreed, padata_stop may not need to do get_online_cpus() if we stop an instance in a way that plays well with async crypto.

I'll try fixing the flushing with Steffen's refcounting idea, assuming he hasn't already started on that. So we're on the same page: the problem is that if padata's ->parallel() punts to a cryptd thread, flushing the parallel work returns immediately without necessarily indicating the parallel job is finished. Flushing is therefore pointless, and padata_replace needs to wait until the instance's refcount drops to 0. Did I get it right?

Daniel

2019-08-21 06:44:23

by Herbert Xu

Subject: Re: [PATCH 1/2] padata: always acquire cpu_hotplug_lock before pinst->lock

On Wed, Aug 21, 2019 at 12:14:19AM -0400, Daniel Jordan wrote:
>
> I'll try fixing the flushing with Steffen's refcounting idea, assuming he hasn't already started on that. So we're on the same page: the problem is that if padata's ->parallel() punts to a cryptd thread, flushing the parallel work returns immediately without necessarily indicating the parallel job is finished. Flushing is therefore pointless, and padata_replace needs to wait until the instance's refcount drops to 0. Did I get it right?

Yeah you can never flush an async crypto job. You have to wait
for it to finish.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt