From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Christian Brauner
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli,
    Dietmar Eggemann, gscrivan@redhat.com, Waiman Long
Subject: [PATCH 1/3] cgroup/cpuset: Make cpuset_fork() handle CLONE_INTO_CGROUP properly
Date: Fri, 31 Mar 2023 10:50:43 -0400
Message-Id: <20230331145045.2251683-2-longman@redhat.com>
In-Reply-To: <20230331145045.2251683-1-longman@redhat.com>
References: <20230331145045.2251683-1-longman@redhat.com>

By default, the clone(2) syscall spawns a child process into the same
cgroup as its parent. With the CLONE_INTO_CGROUP flag introduced by
commit ef2c41cf38a7 ("clone3: allow spawning processes into cgroups"),
the child can instead be spawned into a different cgroup, which is
somewhat similar to writing the child's tid into "cgroup.threads".
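For illustration only (not part of this patch), a minimal userspace
sketch of spawning a child directly into a target cgroup might look as
follows; the cgroup path "/sys/fs/cgroup/mygroup" and the helper
clone3_into_cgroup() are hypothetical:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/sched.h>	/* struct clone_args, CLONE_INTO_CGROUP */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

/* clone3(2) has no glibc wrapper; invoke the syscall directly. */
static pid_t clone3_into_cgroup(int cgroup_fd)
{
	struct clone_args args = {
		.flags       = CLONE_INTO_CGROUP,
		.exit_signal = SIGCHLD,
		.cgroup      = (unsigned long)cgroup_fd,
	};

	return (pid_t)syscall(SYS_clone3, &args, sizeof(args));
}

int main(void)
{
	/* Hypothetical target; must be a cgroup v2 directory. */
	int cgroup_fd = open("/sys/fs/cgroup/mygroup",
			     O_RDONLY | O_DIRECTORY | O_CLOEXEC);
	pid_t pid;

	if (cgroup_fd < 0) {
		perror("open cgroup");
		return EXIT_FAILURE;
	}

	pid = clone3_into_cgroup(cgroup_fd);
	if (pid < 0) {
		perror("clone3");
		return EXIT_FAILURE;
	}
	if (pid == 0) {
		/* Child: already a member of the target cgroup. */
		execlp("cat", "cat", "/proc/self/cgroup", (char *)NULL);
		_exit(127);
	}
	close(cgroup_fd);
	waitpid(pid, NULL, 0);
	return EXIT_SUCCESS;
}

With this patch applied, a child created this way also picks up the
target cpuset's CPU and memory masks at fork time instead of keeping
its parent's.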
The current cpuset_fork() method does not properly handle the
CLONE_INTO_CGROUP case, where the cpuset of the child may be different
from that of its parent. Update the cpuset_fork() method to treat the
CLONE_INTO_CGROUP case similarly to cpuset_attach().

Since the newly cloned task has not run yet, its actual memory usage
is not known, so it is not necessary to change its mm in cpuset_fork().

Fixes: ef2c41cf38a7 ("clone3: allow spawning processes into cgroups")
Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 56 +++++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 20 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index bc4dcfd7bee5..f6d5614982d7 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2516,16 +2516,33 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 }
 
 /*
- * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach_task()
  * but we can't allocate it dynamically there. Define it global and
  * allocate from cpuset_init().
  */
 static cpumask_var_t cpus_attach;
+static nodemask_t cpuset_attach_nodemask_to;
+
+static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
+{
+	percpu_rwsem_assert_held(&cpuset_rwsem);
+
+	if (cs != &top_cpuset)
+		guarantee_online_cpus(task, cpus_attach);
+	else
+		cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
+	/*
+	 * can_attach beforehand should guarantee that this doesn't
+	 * fail. TODO: have a better way to handle failure here
+	 */
+	WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+
+	cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
+	cpuset_update_task_spread_flags(cs, task);
+}
 
 static void cpuset_attach(struct cgroup_taskset *tset)
 {
-	/* static buf protected by cpuset_rwsem */
-	static nodemask_t cpuset_attach_nodemask_to;
 	struct task_struct *task;
 	struct task_struct *leader;
 	struct cgroup_subsys_state *css;
@@ -2556,20 +2573,8 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 
-	cgroup_taskset_for_each(task, css, tset) {
-		if (cs != &top_cpuset)
-			guarantee_online_cpus(task, cpus_attach);
-		else
-			cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
-		/*
-		 * can_attach beforehand should guarantee that this doesn't
-		 * fail. TODO: have a better way to handle failure here
-		 */
-		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
-
-		cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
-		cpuset_update_task_spread_flags(cs, task);
-	}
+	cgroup_taskset_for_each(task, css, tset)
+		cpuset_attach_task(cs, task);
 
 	/*
 	 * Change mm for all threadgroup leaders. This is expensive and may
@@ -3267,11 +3272,22 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
  */
 static void cpuset_fork(struct task_struct *task)
 {
-	if (task_css_is_root(task, cpuset_cgrp_id))
+	struct cpuset *cs = task_cs(task);
+
+	if (cs == task_cs(current)) {
+		if (cs == &top_cpuset)
+			return;
+
+		set_cpus_allowed_ptr(task, current->cpus_ptr);
+		task->mems_allowed = current->mems_allowed;
 		return;
+	}
 
-	set_cpus_allowed_ptr(task, current->cpus_ptr);
-	task->mems_allowed = current->mems_allowed;
+	/* CLONE_INTO_CGROUP */
+	percpu_down_write(&cpuset_rwsem);
+	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+	cpuset_attach_task(cs, task);
+	percpu_up_write(&cpuset_rwsem);
 }
 
 struct cgroup_subsys cpuset_cgrp_subsys = {
-- 
2.31.1