From: Qi Zheng
To: tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, muchun.song@linux.dev, Qi Zheng, Zhao Gongyi
Subject: [PATCH] cgroup: fix missing cpus_read_{lock,unlock}() in cgroup_transfer_tasks()
Date: Wed, 17 May 2023 07:45:45 +0000
Message-Id: <20230517074545.2045035-1-qi.zheng@linux.dev>

From: Qi Zheng

Commit 4f7e7236435c ("cgroup: Fix threadgroup_rwsem <-> cpus_read_lock() deadlock") fixed the deadlock between cgroup_threadgroup_rwsem and cpus_read_lock() by introducing cgroup_attach_{lock,unlock}() and removing cpus_read_{lock,unlock}() from cpuset_attach().
But cgroup_transfer_tasks() was missed and not handled, which will cause the following warning:

 WARNING: CPU: 0 PID: 589 at kernel/cpu.c:526 lockdep_assert_cpus_held+0x32/0x40
 CPU: 0 PID: 589 Comm: kworker/1:4 Not tainted 6.4.0-rc2-next-20230517 #50
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 Workqueue: events cpuset_hotplug_workfn
 RIP: 0010:lockdep_assert_cpus_held+0x32/0x40
 <...>
 Call Trace:
  <TASK>
  cpuset_attach+0x40/0x240
  cgroup_migrate_execute+0x452/0x5e0
  ? _raw_spin_unlock_irq+0x28/0x40
  cgroup_transfer_tasks+0x1f3/0x360
  ? find_held_lock+0x32/0x90
  ? cpuset_hotplug_workfn+0xc81/0xed0
  cpuset_hotplug_workfn+0xcb1/0xed0
  ? process_one_work+0x248/0x5b0
  process_one_work+0x2b9/0x5b0
  worker_thread+0x56/0x3b0
  ? process_one_work+0x5b0/0x5b0
  kthread+0xf1/0x120
  ? kthread_complete_and_exit+0x20/0x20
  ret_from_fork+0x1f/0x30
  </TASK>

So just use the cgroup_attach_{lock,unlock}() helpers to fix it.

Fixes: 4f7e7236435c ("cgroup: Fix threadgroup_rwsem <-> cpus_read_lock() deadlock")
Reported-by: Zhao Gongyi
Signed-off-by: Qi Zheng
---
 kernel/cgroup/cgroup-v1.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
index aeef06c465ef..5407241dbb45 100644
--- a/kernel/cgroup/cgroup-v1.c
+++ b/kernel/cgroup/cgroup-v1.c
@@ -108,7 +108,7 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
 
 	cgroup_lock();
 
-	percpu_down_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_lock(true);
 
 	/* all tasks in @from are being moved, all csets are source */
 	spin_lock_irq(&css_set_lock);
@@ -144,7 +144,7 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
 	} while (task && !ret);
 out_err:
 	cgroup_migrate_finish(&mgctx);
-	percpu_up_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_unlock(true);
 	cgroup_unlock();
 	return ret;
 }
-- 
2.30.2