Date: Thu, 15 Aug 2019 15:15:18 +1000
From: Herbert Xu
To: Daniel Jordan
Cc: Steffen Klassert, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] padata: always acquire cpu_hotplug_lock before pinst->lock
Message-ID: <20190815051518.GB24982@gondor.apana.org.au>
References: <20190809192857.26585-1-daniel.m.jordan@oracle.com>
In-Reply-To: <20190809192857.26585-1-daniel.m.jordan@oracle.com>

On Fri, Aug 09, 2019 at 03:28:56PM -0400, Daniel Jordan wrote:
> On a 5.2 kernel, lockdep complains when offlining a CPU and writing to a
> parallel_cpumask sysfs file.
>
>     echo 0 > /sys/devices/system/cpu/cpu1/online
>     echo ff > /sys/kernel/pcrypt/pencrypt/parallel_cpumask
>
>     ======================================================
>     WARNING: possible circular locking dependency detected
>     5.2.0-padata-base+ #19 Not tainted
>     ------------------------------------------------------
>     cpuhp/1/13 is trying to acquire lock:
>     ... (&pinst->lock){+.+.}, at: padata_cpu_prep_down+0x37/0x70
>
>     but task is already holding lock:
>     ... (cpuhp_state-down){+.+.}, at: cpuhp_thread_fun+0x34/0x240
>
>     which lock already depends on the new lock.
>
> padata doesn't take cpu_hotplug_lock and pinst->lock in a consistent
> order.  Which should be first?  CPU hotplug calls into padata with
> cpu_hotplug_lock already held, so it should have priority.
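The report above is the classic AB-BA inversion: the hotplug path takes
cpu_hotplug_lock and then pinst->lock, while the sysfs path takes them in
the opposite order. As a minimal userspace sketch of the pattern -- the
pthread mutexes and function names below are illustrative stand-ins for
cpu_hotplug_lock, pinst->lock, and the padata callbacks, not the actual
kernel code:

  /* Build with: cc -pthread abba.c (hypothetical file name) */
  #include <pthread.h>

  static pthread_mutex_t hotplug_lock  = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t instance_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Analogue of the hotplug callback: the hotplug lock is already
   * held by the caller, then the instance lock is taken. */
  static void *hotplug_path(void *arg)
  {
          pthread_mutex_lock(&hotplug_lock);
          pthread_mutex_lock(&instance_lock);
          /* ... drop the CPU from the instance's cpumasks ... */
          pthread_mutex_unlock(&instance_lock);
          pthread_mutex_unlock(&hotplug_lock);
          return NULL;
  }

  /* Analogue of the sysfs store: instance lock first, hotplug lock
   * second -- the inverted order lockdep is complaining about. */
  static void *sysfs_path(void *arg)
  {
          pthread_mutex_lock(&instance_lock);
          pthread_mutex_lock(&hotplug_lock);
          /* ... rebind queues to the new cpumask ... */
          pthread_mutex_unlock(&hotplug_lock);
          pthread_mutex_unlock(&instance_lock);
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, hotplug_path, NULL);
          pthread_create(&b, NULL, sysfs_path, NULL);
          pthread_join(a, NULL);  /* may never return: each thread can
                                   * end up waiting for the other's lock */
          pthread_join(b, NULL);
          return 0;
  }

If each thread takes its first lock before either takes its second, both
block forever; lockdep flags the dependency cycle even on runs where the
timing happens not to deadlock.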
Yeah, this is clearly a bug, but I think we need to tackle something
else first.

> diff --git a/kernel/padata.c b/kernel/padata.c
> index b60cc3dcee58..d056276a96ce 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -487,9 +487,7 @@ static void __padata_stop(struct padata_instance *pinst)
>
>  	synchronize_rcu();
>
> -	get_online_cpus();
>  	padata_flush_queues(pinst->pd);
> -	put_online_cpus();
>  }

As I pointed out earlier, the whole concept of flushing the queues
is suspect, so we should tackle that first; it may obviate the need
for get_online_cpus() entirely if the flush call disappears.

My main worry is that you're adding an extra lock around
synchronize_rcu(), and that is always something that should be done
only after careful investigation.

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt