Sender: Tejun Heo
Date: Mon, 15 Aug 2022 13:27:38 -1000
From: Tejun Heo
To: Mukesh Ojha
Cc: Michal Koutný, Xuewen Yan, Imran Khan, lizefan.x@bytedance.com,
	hannes@cmpxchg.org, tglx@linutronix.de, steven.price@arm.com,
	peterz@infradead.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zhao Gongyi, Zhang Qiao
Subject: [PATCH cgroup/for-6.0-fixes] cgroup: Fix threadgroup_rwsem <-> cpus_read_lock() deadlock
Message-ID:
References:
	<224b19f3-912d-b858-7af4-185b8e55bc66@quicinc.com>
	<20220815090556.GB27407@blackbody.suse.cz>
	<20220815093934.GA29323@blackbody.suse.cz>

Bringing up a CPU may involve creating new tasks, which requires
read-locking threadgroup_rwsem, so threadgroup_rwsem nests inside
cpus_read_lock(). However, cpuset's ->attach(), which may be called with
threadgroup_rwsem write-locked, also wants to disable CPU hotplug and
acquires cpus_read_lock(), leading to a deadlock.

Fix it by guaranteeing that ->attach() is always called with CPU hotplug
disabled and removing the cpus_read_lock() call from cpuset_attach().

Signed-off-by: Tejun Heo
---
Hello, sorry about the delay.

So, the previous patch + the revert isn't quite correct because we
sometimes elide both cpus_read_lock() and threadgroup_rwsem together,
and cpuset_attach() would then end up running without CPU hotplug
disabled. Can you please test whether this patch fixes the problem?

Thanks.

 kernel/cgroup/cgroup.c |   77 ++++++++++++++++++++++++++++++++++---------------
 kernel/cgroup/cpuset.c |    3 -
 2 files changed, 55 insertions(+), 25 deletions(-)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index ffaccd6373f1e..52502f34fae8c 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2369,6 +2369,47 @@ int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
 }
 EXPORT_SYMBOL_GPL(task_cgroup_path);
 
+/**
+ * cgroup_attach_lock - Lock for ->attach()
+ * @lock_threadgroup: whether to down_write cgroup_threadgroup_rwsem
+ *
+ * cgroup migration sometimes needs to stabilize threadgroups against forks and
+ * exits by write-locking cgroup_threadgroup_rwsem. However, some ->attach()
+ * implementations (e.g. cpuset), also need to disable CPU hotplug.
+ * Unfortunately, letting ->attach() operations acquire cpus_read_lock() can
+ * lead to deadlocks.
+ *
+ * Bringing up a CPU may involve creating new tasks which requires read-locking
+ * threadgroup_rwsem, so threadgroup_rwsem nests inside cpus_read_lock(). If we
+ * call an ->attach() which acquires the cpus lock while write-locking
+ * threadgroup_rwsem, the locking order is reversed and we end up waiting for an
+ * on-going CPU hotplug operation which in turn is waiting for the
+ * threadgroup_rwsem to be released to create new tasks. For more details:
+ *
+ * http://lkml.kernel.org/r/20220711174629.uehfmqegcwn2lqzu@wubuntu
+ *
+ * Resolve the situation by always acquiring cpus_read_lock() before optionally
+ * write-locking cgroup_threadgroup_rwsem. This allows ->attach() to assume that
+ * CPU hotplug is disabled on entry.
+ */
+static void cgroup_attach_lock(bool lock_threadgroup)
+{
+	cpus_read_lock();
+	if (lock_threadgroup)
+		percpu_down_write(&cgroup_threadgroup_rwsem);
+}
+
+/**
+ * cgroup_attach_unlock - Undo cgroup_attach_lock()
+ * @lock_threadgroup: whether to up_write cgroup_threadgroup_rwsem
+ */
+static void cgroup_attach_unlock(bool lock_threadgroup)
+{
+	if (lock_threadgroup)
+		percpu_up_write(&cgroup_threadgroup_rwsem);
+	cpus_read_unlock();
+}
+
 /**
  * cgroup_migrate_add_task - add a migration target task to a migration context
  * @task: target task
@@ -2841,8 +2882,7 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
 }
 
 struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
-					     bool *locked)
-	__acquires(&cgroup_threadgroup_rwsem)
+					     bool *threadgroup_locked)
 {
 	struct task_struct *tsk;
 	pid_t pid;
@@ -2859,12 +2899,8 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
 	 * Therefore, we can skip the global lock.
 	 */
 	lockdep_assert_held(&cgroup_mutex);
-	if (pid || threadgroup) {
-		percpu_down_write(&cgroup_threadgroup_rwsem);
-		*locked = true;
-	} else {
-		*locked = false;
-	}
+	*threadgroup_locked = pid || threadgroup;
+	cgroup_attach_lock(*threadgroup_locked);
 
 	rcu_read_lock();
 	if (pid) {
@@ -2895,17 +2931,14 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
 		goto out_unlock_rcu;
 
 out_unlock_threadgroup:
-	if (*locked) {
-		percpu_up_write(&cgroup_threadgroup_rwsem);
-		*locked = false;
-	}
+	cgroup_attach_unlock(*threadgroup_locked);
+	*threadgroup_locked = false;
 out_unlock_rcu:
 	rcu_read_unlock();
 	return tsk;
 }
 
-void cgroup_procs_write_finish(struct task_struct *task, bool locked)
-	__releases(&cgroup_threadgroup_rwsem)
+void cgroup_procs_write_finish(struct task_struct *task, bool threadgroup_locked)
 {
 	struct cgroup_subsys *ss;
 	int ssid;
@@ -2913,8 +2946,8 @@ void cgroup_procs_write_finish(struct task_struct *task, bool locked)
 	/* release reference from cgroup_procs_write_start() */
 	put_task_struct(task);
 
-	if (locked)
-		percpu_up_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_unlock(threadgroup_locked);
+
 	for_each_subsys(ss, ssid)
 		if (ss->post_attach)
 			ss->post_attach();
@@ -3000,8 +3033,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
 	 * write-locking can be skipped safely.
 	 */
 	has_tasks = !list_empty(&mgctx.preloaded_src_csets);
-	if (has_tasks)
-		percpu_down_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_lock(has_tasks);
 
 	/* NULL dst indicates self on default hierarchy */
 	ret = cgroup_migrate_prepare_dst(&mgctx);
@@ -3022,8 +3054,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
 		ret = cgroup_migrate_execute(&mgctx);
 out_finish:
 	cgroup_migrate_finish(&mgctx);
-	if (has_tasks)
-		percpu_up_write(&cgroup_threadgroup_rwsem);
+	cgroup_attach_unlock(has_tasks);
 
 	return ret;
 }
@@ -4971,13 +5002,13 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
 	struct task_struct *task;
 	const struct cred *saved_cred;
 	ssize_t ret;
-	bool locked;
+	bool threadgroup_locked;
 
 	dst_cgrp = cgroup_kn_lock_live(of->kn, false);
 	if (!dst_cgrp)
 		return -ENODEV;
 
-	task = cgroup_procs_write_start(buf, threadgroup, &locked);
+	task = cgroup_procs_write_start(buf, threadgroup, &threadgroup_locked);
 	ret = PTR_ERR_OR_ZERO(task);
 	if (ret)
 		goto out_unlock;
@@ -5003,7 +5034,7 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
 	ret = cgroup_attach_task(dst_cgrp, task, threadgroup);
 
 out_finish:
-	cgroup_procs_write_finish(task, locked);
+	cgroup_procs_write_finish(task, threadgroup_locked);
 out_unlock:
 	cgroup_kn_unlock(of->kn);
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 58aadfda9b8b3..1f3a55297f39d 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2289,7 +2289,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	cgroup_taskset_first(tset, &css);
 	cs = css_cs(css);
 
-	cpus_read_lock();
+	lockdep_assert_cpus_held();	/* see cgroup_attach_lock() */
 	percpu_down_write(&cpuset_rwsem);
 
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
@@ -2343,7 +2343,6 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		wake_up(&cpuset_attach_wq);
 
 	percpu_up_write(&cpuset_rwsem);
-	cpus_read_unlock();
 }
 
 /* The various types of files and directories in a cpuset file system */
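
For readers following the locking argument outside the kernel tree, here
is a minimal userspace sketch of the same ordering rule. It is an
illustration only, not kernel code: the names hotplug_lock,
threadgroup_lock, attach_lock(), hotplug_path() and migration_path() are
invented stand-ins, with hotplug_lock modeling cpus_read_lock() /
cpus_write_lock() and threadgroup_lock modeling cgroup_threadgroup_rwsem.
Both threads take the locks in the same hotplug -> threadgroup order that
cgroup_attach_lock() now enforces, so the program always terminates.

/*
 * Userspace sketch of the lock-ordering fix (illustrative only; these
 * names are stand-ins, not kernel APIs). Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t threadgroup_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Mirrors the fixed cgroup_attach_lock(): hotplug lock always first. */
static void attach_lock(int lock_threadgroup)
{
	pthread_rwlock_rdlock(&hotplug_lock);	/* "cpus_read_lock()" */
	if (lock_threadgroup)
		pthread_rwlock_wrlock(&threadgroup_lock);
}

static void attach_unlock(int lock_threadgroup)
{
	if (lock_threadgroup)
		pthread_rwlock_unlock(&threadgroup_lock);
	pthread_rwlock_unlock(&hotplug_lock);
}

/* "CPU bring-up": creating new tasks read-locks the threadgroup lock
 * while the hotplug lock is held, so threadgroup nests inside hotplug. */
static void *hotplug_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&hotplug_lock);
	pthread_rwlock_rdlock(&threadgroup_lock);
	pthread_rwlock_unlock(&threadgroup_lock);
	pthread_rwlock_unlock(&hotplug_lock);
	return NULL;
}

/* "cgroup migration": ->attach() may now assume hotplug is held off. */
static void *migration_path(void *arg)
{
	(void)arg;
	attach_lock(1);
	/* cpuset_attach()-like work would run here */
	attach_unlock(1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, hotplug_path, NULL);
	pthread_create(&b, NULL, migration_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("done: both paths used the hotplug -> threadgroup order");
	return 0;
}

If attach_lock() instead took threadgroup_lock first and the hotplug lock
second, as the pre-patch code effectively did, hotplug_path() could hold
hotplug_lock while waiting for threadgroup_lock and migration_path() the
reverse: exactly the ABBA cycle the cgroup_attach_lock() comment above
describes.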