From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Christian Brauner
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli, Dietmar Eggemann, Michal Koutný, gscrivan@redhat.com, Waiman Long
Subject: [PATCH v3 2/4] cgroup/cpuset: Make cpuset_fork() handle CLONE_INTO_CGROUP properly
Date: Thu, 6 Apr 2023 10:04:02 -0400
Message-Id: <20230406140404.2718574-3-longman@redhat.com>
In-Reply-To: <20230406140404.2718574-1-longman@redhat.com>
References: <20230406140404.2718574-1-longman@redhat.com>

By default, the clone(2) syscall spawns a child process into the same cgroup as its parent. With the CLONE_INTO_CGROUP flag introduced by commit ef2c41cf38a7 ("clone3: allow spawning processes into cgroups"), the child can instead be spawned into a different cgroup, which is somewhat similar to writing the child's tid into "cgroup.threads".
The current cpuset_fork() method does not properly handle the CLONE_INTO_CGROUP case, where the cpuset of the child may differ from that of its parent. Update the cpuset_fork() method to treat the CLONE_INTO_CGROUP case similarly to cpuset_attach(). Since the newly cloned task has not run yet, its actual memory usage is not known, so there is no need to change the mm in cpuset_fork().

Fixes: ef2c41cf38a7 ("clone3: allow spawning processes into cgroups")
Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 62 ++++++++++++++++++++++++++++--------------
 1 file changed, 42 insertions(+), 20 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 066689a7dcc3..e954d5abb784 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2520,16 +2520,33 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 }
 
 /*
- * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach_task()
  * but we can't allocate it dynamically there. Define it global and
  * allocate from cpuset_init().
  */
 static cpumask_var_t cpus_attach;
+static nodemask_t cpuset_attach_nodemask_to;
+
+static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
+{
+	percpu_rwsem_assert_held(&cpuset_rwsem);
+
+	if (cs != &top_cpuset)
+		guarantee_online_cpus(task, cpus_attach);
+	else
+		cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
+	/*
+	 * can_attach beforehand should guarantee that this doesn't
+	 * fail. TODO: have a better way to handle failure here
+	 */
+	WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+
+	cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
+	cpuset_update_task_spread_flags(cs, task);
+}
 
 static void cpuset_attach(struct cgroup_taskset *tset)
 {
-	/* static buf protected by cpuset_rwsem */
-	static nodemask_t cpuset_attach_nodemask_to;
 	struct task_struct *task;
 	struct task_struct *leader;
 	struct cgroup_subsys_state *css;
@@ -2560,20 +2577,8 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 
-	cgroup_taskset_for_each(task, css, tset) {
-		if (cs != &top_cpuset)
-			guarantee_online_cpus(task, cpus_attach);
-		else
-			cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
-		/*
-		 * can_attach beforehand should guarantee that this doesn't
-		 * fail. TODO: have a better way to handle failure here
-		 */
-		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
-
-		cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
-		cpuset_update_task_spread_flags(cs, task);
-	}
+	cgroup_taskset_for_each(task, css, tset)
+		cpuset_attach_task(cs, task);
 
 	/*
 	 * Change mm for all threadgroup leaders. This is expensive and may
@@ -3271,11 +3276,28 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
  */
 static void cpuset_fork(struct task_struct *task)
 {
-	if (task_css_is_root(task, cpuset_cgrp_id))
+	struct cpuset *cs;
+	bool same_cs;
+
+	rcu_read_lock();
+	cs = task_cs(task);
+	same_cs = (cs == task_cs(current));
+	rcu_read_unlock();
+
+	if (same_cs) {
+		if (cs == &top_cpuset)
+			return;
+
+		set_cpus_allowed_ptr(task, current->cpus_ptr);
+		task->mems_allowed = current->mems_allowed;
 		return;
+	}
 
-	set_cpus_allowed_ptr(task, current->cpus_ptr);
-	task->mems_allowed = current->mems_allowed;
+	/* CLONE_INTO_CGROUP */
+	percpu_down_write(&cpuset_rwsem);
+	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+	cpuset_attach_task(cs, task);
+	percpu_up_write(&cpuset_rwsem);
 }
 
 struct cgroup_subsys cpuset_cgrp_subsys = {
-- 
2.31.1