From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tejun Heo, Sasha Levin, Imran Khan, Xuewen Yan
Subject: [PATCH 5.10 34/79] cgroup: Fix threadgroup_rwsem <-> cpus_read_lock() deadlock
Date: Tue, 13 Sep 2022 16:04:39 +0200
Message-Id: <20220913140351.945201247@linuxfoundation.org>
In-Reply-To: <20220913140350.291927556@linuxfoundation.org>
References: <20220913140350.291927556@linuxfoundation.org>

From: Tejun Heo

[ Upstream commit 4f7e7236435ca0abe005c674ebd6892c6e83aeb3 ]

Bringing up a CPU may involve creating and destroying tasks which requires
read-locking threadgroup_rwsem, so threadgroup_rwsem nests inside
cpus_read_lock(). However, cpuset's ->attach(), which may be called with
threadgroup_rwsem write-locked, also wants to disable CPU hotplug and
acquires cpus_read_lock(), leading to a deadlock.

Fix it by guaranteeing that ->attach() is always called with CPU hotplug
disabled and removing the cpus_read_lock() call from cpuset_attach().
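To make the enforced ordering concrete, here is a minimal user-space sketch
of the pattern the patch introduces, with pthread rwlocks standing in for
cpus_read_lock() and cgroup_threadgroup_rwsem; every identifier below is
illustrative only and is not kernel code:

/* Illustrative only: user-space analogue of cgroup_attach_lock()/unlock(). */
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for the CPU-hotplug lock and cgroup_threadgroup_rwsem. */
static pthread_rwlock_t fake_hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t fake_threadgroup_rwsem = PTHREAD_RWLOCK_INITIALIZER;

/* The outer lock is always taken first; the inner lock only when requested. */
static void attach_lock(bool lock_threadgroup)
{
	pthread_rwlock_rdlock(&fake_hotplug_lock);	/* ~ cpus_read_lock() */
	if (lock_threadgroup)
		pthread_rwlock_wrlock(&fake_threadgroup_rwsem);
}

/* Release in the reverse order of acquisition. */
static void attach_unlock(bool lock_threadgroup)
{
	if (lock_threadgroup)
		pthread_rwlock_unlock(&fake_threadgroup_rwsem);
	pthread_rwlock_unlock(&fake_hotplug_lock);	/* ~ cpus_read_unlock() */
}

/* An ->attach()-style callback may now assume the outer lock is already held
 * by its caller instead of taking it itself, which is what deadlocked. */
static void fake_attach(void)
{
	/* would assert "hotplug disabled" here rather than lock it */
}

int main(void)
{
	attach_lock(true);
	fake_attach();
	attach_unlock(true);
	return 0;
}

Because every attach path funnels through the same pair of helpers, the inner
lock can never be requested before the outer one, which is the ordering the
CPU-hotplug path relies on.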
Signed-off-by: Tejun Heo
Reviewed-and-tested-by: Imran Khan
Reported-and-tested-by: Xuewen Yan
Fixes: 05c7b7a92cc8 ("cgroup/cpuset: Fix a race between cpuset_attach() and cpu hotplug")
Cc: stable@vger.kernel.org # v5.17+
Signed-off-by: Sasha Levin
---
 kernel/cgroup/cgroup.c | 77 +++++++++++++++++++++++++++++-------------
 kernel/cgroup/cpuset.c |  3 +-
 2 files changed, 55 insertions(+), 25 deletions(-)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 1072843b25709..684c16849eff3 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2304,6 +2304,47 @@ int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
 }
 EXPORT_SYMBOL_GPL(task_cgroup_path);
 
+/**
+ * cgroup_attach_lock - Lock for ->attach()
+ * @lock_threadgroup: whether to down_write cgroup_threadgroup_rwsem
+ *
+ * cgroup migration sometimes needs to stabilize threadgroups against forks and
+ * exits by write-locking cgroup_threadgroup_rwsem. However, some ->attach()
+ * implementations (e.g. cpuset), also need to disable CPU hotplug.
+ * Unfortunately, letting ->attach() operations acquire cpus_read_lock() can
+ * lead to deadlocks.
+ *
+ * Bringing up a CPU may involve creating and destroying tasks which requires
+ * read-locking threadgroup_rwsem, so threadgroup_rwsem nests inside
+ * cpus_read_lock(). If we call an ->attach() which acquires the cpus lock while
+ * write-locking threadgroup_rwsem, the locking order is reversed and we end up
+ * waiting for an on-going CPU hotplug operation which in turn is waiting for
+ * the threadgroup_rwsem to be released to create new tasks. For more details:
+ *
+ *   http://lkml.kernel.org/r/20220711174629.uehfmqegcwn2lqzu@wubuntu
+ *
+ * Resolve the situation by always acquiring cpus_read_lock() before optionally
+ * write-locking cgroup_threadgroup_rwsem. This allows ->attach() to assume that
+ * CPU hotplug is disabled on entry.
+ */
+static void cgroup_attach_lock(bool lock_threadgroup)
+{
+	cpus_read_lock();
+	if (lock_threadgroup)
+		percpu_down_write(&cgroup_threadgroup_rwsem);
+}
+
+/**
+ * cgroup_attach_unlock - Undo cgroup_attach_lock()
+ * @lock_threadgroup: whether to up_write cgroup_threadgroup_rwsem
+ */
+static void cgroup_attach_unlock(bool lock_threadgroup)
+{
+	if (lock_threadgroup)
+		percpu_up_write(&cgroup_threadgroup_rwsem);
+	cpus_read_unlock();
+}
+
 /**
  * cgroup_migrate_add_task - add a migration target task to a migration context
  * @task: target task
@@ -2780,8 +2821,7 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
 }
 
 struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
-					     bool *locked)
-	__acquires(&cgroup_threadgroup_rwsem)
+					     bool *threadgroup_locked)
 {
 	struct task_struct *tsk;
 	pid_t pid;
@@ -2798,12 +2838,8 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
 	 * Therefore, we can skip the global lock.
 	 */
 	lockdep_assert_held(&cgroup_mutex);
-	if (pid || threadgroup) {
-		percpu_down_write(&cgroup_threadgroup_rwsem);
-		*locked = true;
-	} else {
-		*locked = false;
-	}
+	*threadgroup_locked = pid || threadgroup;
+	cgroup_attach_lock(*threadgroup_locked);
 
 	rcu_read_lock();
 	if (pid) {
@@ -2834,17 +2870,14 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
 	goto out_unlock_rcu;
 
 out_unlock_threadgroup:
-	if (*locked) {
-		percpu_up_write(&cgroup_threadgroup_rwsem);
-		*locked = false;
-	}
+	cgroup_attach_unlock(*threadgroup_locked);
+	*threadgroup_locked = false;
 out_unlock_rcu:
 	rcu_read_unlock();
 	return tsk;
 }
 
-void cgroup_procs_write_finish(struct task_struct *task, bool locked)
-	__releases(&cgroup_threadgroup_rwsem)
+void cgroup_procs_write_finish(struct task_struct *task, bool threadgroup_locked)
 {
 	struct cgroup_subsys *ss;
 	int ssid;
@@ -2852,8 +2885,8 @@ void cgroup_procs_write_finish(struct task_struct *task, bool locked)
 	/* release reference from cgroup_procs_write_start() */
 	put_task_struct(task);
 
-	if (locked)
-		percpu_up_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_unlock(threadgroup_locked);
+
 	for_each_subsys(ss, ssid)
 		if (ss->post_attach)
 			ss->post_attach();
@@ -2930,8 +2963,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
 	 * write-locking can be skipped safely.
 	 */
 	has_tasks = !list_empty(&mgctx.preloaded_src_csets);
-	if (has_tasks)
-		percpu_down_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_lock(has_tasks);
 
 	/* NULL dst indicates self on default hierarchy */
 	ret = cgroup_migrate_prepare_dst(&mgctx);
@@ -2952,8 +2984,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
 	ret = cgroup_migrate_execute(&mgctx);
 out_finish:
 	cgroup_migrate_finish(&mgctx);
-	if (has_tasks)
-		percpu_up_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_unlock(has_tasks);
 	return ret;
 }
 
@@ -4809,13 +4840,13 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
 	struct task_struct *task;
 	const struct cred *saved_cred;
 	ssize_t ret;
-	bool locked;
+	bool threadgroup_locked;
 
 	dst_cgrp = cgroup_kn_lock_live(of->kn, false);
 	if (!dst_cgrp)
 		return -ENODEV;
 
-	task = cgroup_procs_write_start(buf, true, &locked);
+	task = cgroup_procs_write_start(buf, true, &threadgroup_locked);
 	ret = PTR_ERR_OR_ZERO(task);
 	if (ret)
 		goto out_unlock;
@@ -4841,7 +4872,7 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
 	ret = cgroup_attach_task(dst_cgrp, task, true);
 
 out_finish:
-	cgroup_procs_write_finish(task, locked);
+	cgroup_procs_write_finish(task, threadgroup_locked);
 out_unlock:
 	cgroup_kn_unlock(of->kn);
 
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index c51863b63f93a..b7830f1f1f3a5 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2212,7 +2212,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	cgroup_taskset_first(tset, &css);
 	cs = css_cs(css);
 
-	cpus_read_lock();
+	lockdep_assert_cpus_held();	/* see cgroup_attach_lock() */
 	percpu_down_write(&cpuset_rwsem);
 
 	/* prepare for attach */
@@ -2268,7 +2268,6 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		wake_up(&cpuset_attach_wq);
 
 	percpu_up_write(&cpuset_rwsem);
-	cpus_read_unlock();
 }
 
 /* The various types of files and directories in a cpuset file system */
-- 
2.35.1