From: Waiman Long <longman@redhat.com>
To: Tejun Heo, Zefan Li, Johannes Weiner, Christian Brauner
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli,
	Dietmar Eggemann, Michal Koutný, gscrivan@redhat.com, Waiman Long
Subject: [PATCH v2 2/4] cgroup/cpuset: Make cpuset_fork() handle CLONE_INTO_CGROUP properly
Date: Tue, 4 Apr 2023 13:05:44 -0400
Message-Id: <20230404170546.2585050-3-longman@redhat.com>
In-Reply-To: <20230404170546.2585050-1-longman@redhat.com>
References: <20230404170546.2585050-1-longman@redhat.com>

By default, the
clone(2) syscall spawns a child process into the same cgroup as its
parent. With the use of the CLONE_INTO_CGROUP flag introduced by commit
ef2c41cf38a7 ("clone3: allow spawning processes into cgroups"), the
child will be spawned into a different cgroup, which is somewhat
similar to writing the child's tid into "cgroup.threads".

The current cpuset_fork() method does not properly handle the
CLONE_INTO_CGROUP case, where the cpuset of the child may be different
from that of its parent. Update the cpuset_fork() method to treat the
CLONE_INTO_CGROUP case similarly to cpuset_attach().

Since the newly cloned task has not run yet, its actual memory usage
isn't known, so it is not necessary to change the mm in cpuset_fork().

Fixes: ef2c41cf38a7 ("clone3: allow spawning processes into cgroups")
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 56 +++++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 20 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 066689a7dcc3..dc82f753373e 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2520,16 +2520,33 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 }
 
 /*
- * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach_task()
  * but we can't allocate it dynamically there. Define it global and
  * allocate from cpuset_init().
  */
 static cpumask_var_t cpus_attach;
+static nodemask_t cpuset_attach_nodemask_to;
+
+static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
+{
+	percpu_rwsem_assert_held(&cpuset_rwsem);
+
+	if (cs != &top_cpuset)
+		guarantee_online_cpus(task, cpus_attach);
+	else
+		cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
+	/*
+	 * can_attach beforehand should guarantee that this doesn't
+	 * fail. TODO: have a better way to handle failure here
+	 */
+	WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+
+	cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
+	cpuset_update_task_spread_flags(cs, task);
+}
 
 static void cpuset_attach(struct cgroup_taskset *tset)
 {
-	/* static buf protected by cpuset_rwsem */
-	static nodemask_t cpuset_attach_nodemask_to;
 	struct task_struct *task;
 	struct task_struct *leader;
 	struct cgroup_subsys_state *css;
@@ -2560,20 +2577,8 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 
-	cgroup_taskset_for_each(task, css, tset) {
-		if (cs != &top_cpuset)
-			guarantee_online_cpus(task, cpus_attach);
-		else
-			cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
-		/*
-		 * can_attach beforehand should guarantee that this doesn't
-		 * fail. TODO: have a better way to handle failure here
-		 */
-		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
-
-		cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
-		cpuset_update_task_spread_flags(cs, task);
-	}
+	cgroup_taskset_for_each(task, css, tset)
+		cpuset_attach_task(cs, task);
 
 	/*
 	 * Change mm for all threadgroup leaders. This is expensive and may
@@ -3271,11 +3276,22 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
  */
 static void cpuset_fork(struct task_struct *task)
 {
-	if (task_css_is_root(task, cpuset_cgrp_id))
+	struct cpuset *cs = task_cs(task);
+
+	if (cs == task_cs(current)) {
+		if (cs == &top_cpuset)
+			return;
+
+		set_cpus_allowed_ptr(task, current->cpus_ptr);
+		task->mems_allowed = current->mems_allowed;
 		return;
+	}
 
-	set_cpus_allowed_ptr(task, current->cpus_ptr);
-	task->mems_allowed = current->mems_allowed;
+	/* CLONE_INTO_CGROUP */
+	percpu_down_write(&cpuset_rwsem);
+	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+	cpuset_attach_task(cs, task);
+	percpu_up_write(&cpuset_rwsem);
 }
 
 struct cgroup_subsys cpuset_cgrp_subsys = {
-- 
2.31.1
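
For readers who have not used CLONE_INTO_CGROUP before, below is a
minimal userspace sketch of the scenario the patch addresses: a child
spawned with clone3() directly into a target cgroup, so that it starts
life in a cpuset different from its parent's. This is an illustrative
example, not part of the patch; the helper name clone_into_cgroup()
and the cgroup path /sys/fs/cgroup/test are assumptions, and it needs
a cgroup v2 hierarchy on a kernel >= 5.7 with a pre-created child
cgroup that has the cpuset controller enabled.

/*
 * Illustrative only: spawn a child into a cgroup v2 directory via
 * clone3(CLONE_INTO_CGROUP). The target path is an assumption for
 * this sketch; adjust it for the system under test.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/sched.h>	/* struct clone_args, CLONE_INTO_CGROUP */
#include <linux/types.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t clone_into_cgroup(int cgroup_fd)
{
	struct clone_args args = {
		.flags		= CLONE_INTO_CGROUP,
		.exit_signal	= SIGCHLD,
		.cgroup		= (__u64)cgroup_fd,
	};

	/* clone3 has no glibc wrapper; invoke the syscall directly. */
	return (pid_t)syscall(__NR_clone3, &args, sizeof(args));
}

int main(void)
{
	/* Assumed pre-created target cgroup with cpuset enabled. */
	int cgfd = open("/sys/fs/cgroup/test", O_RDONLY | O_DIRECTORY);
	pid_t pid;

	if (cgfd < 0) {
		perror("open");
		return 1;
	}

	pid = clone_into_cgroup(cgfd);
	if (pid < 0) {
		perror("clone3");
		return 1;
	}
	if (pid == 0) {
		/*
		 * Child: with the fix, Cpus_allowed/Mems_allowed here
		 * should reflect the target cpuset, not the parent's.
		 */
		execlp("grep", "grep", "allowed", "/proc/self/status",
		       (char *)NULL);
		_exit(127);
	}
	waitpid(pid, NULL, 0);
	close(cgfd);
	return 0;
}

Before this patch, the child's CPU and memory-node masks in such a run
could still reflect the parent's cpuset, since cpuset_fork() only
handled the same-cgroup fork path.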