From: Waiman Long
Date: Mon, 18 Sep 2023 09:47:02 -0400
Subject: Re: [PATCH 1/1] cgroup/cpuset: Rebuild sched domains if isolated partition changed
To: Pierre Gondois, linux-kernel@vger.kernel.org
Cc: rui.zhang@intel.com, aaron.lu@intel.com, Zefan Li, Tejun Heo, Johannes Weiner, cgroups@vger.kernel.org
References: <20230915154505.363754-1-pierre.gondois@arm.com> <20230915154505.363754-2-pierre.gondois@arm.com> <5e513c72-7198-a55e-6e2b-a811d5529284@redhat.com> <9777f0d2-2fdf-41cb-bd01-19c52939ef42@arm.com>
In-Reply-To: <9777f0d2-2fdf-41cb-bd01-19c52939ef42@arm.com>
List-ID: linux-kernel@vger.kernel.org
On 9/18/23 03:07, Pierre Gondois wrote:
> Hello Longman,
>
> On 9/17/23 18:51, Waiman Long wrote:
>> On 9/15/23 11:45, Pierre Gondois wrote:
>>> When an isolated partition is created, the sched domains (sds) are
>>> rebuilt and the sds of the isolated CPUs are detached. This only
>>> happens at the creation of the isolated partition. Updating the
>>> cpuset of the partition doesn't rebuild the sds.
>>>
>>> To reproduce:
>>>     # ls /sys/kernel/debug/sched/domains/cpu0/
>>>     domain0
>>>     # ls /sys/kernel/debug/sched/domains/cpu1/
>>>     domain0
>>>     #
>>>     # mkdir cgroup
>>>     # mount -t cgroup2 none cgroup/
>>>     # mkdir cgroup/A1/
>>>     # echo "+cpuset" > cgroup/cgroup.subtree_control
>>>     # echo 0-3 > cgroup/A1/cpuset.cpus
>>>     # echo isolated > cgroup/A1/cpuset.cpus.partition
>>>     #
>>>     # ls /sys/kernel/debug/sched/domains/cpu0/
>>>     # ls /sys/kernel/debug/sched/domains/cpu1/
>>>     #
>>>     # echo 0 > cgroup/A1/cpuset.cpus
>>>     # ls /sys/kernel/debug/sched/domains/cpu0/
>>>     # ls /sys/kernel/debug/sched/domains/cpu1/
>>>     #
>>>
>>> Here CPU1 should have a sched domain re-attached.
>>>
>>> Signed-off-by: Pierre Gondois
>>> ---
>>>  kernel/cgroup/cpuset.c | 12 ++++++++----
>>>  1 file changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>> index 58e6f18f01c1..e3eb27ff9b68 100644
>>> --- a/kernel/cgroup/cpuset.c
>>> +++ b/kernel/cgroup/cpuset.c
>>> @@ -1680,11 +1680,15 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
>>>            * empty cpuset is changed, we need to rebuild sched domains.
>>>            * On default hierarchy, the cpuset needs to be a partition
>>>            * root as well.
>>> +          * Also rebuild sched domains if the cpuset of an isolated
>>> +          * partition changed.
>>>            */
>>> -         if (!cpumask_empty(cp->cpus_allowed) &&
>>> -             is_sched_load_balance(cp) &&
>>> -            (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
>>> -             is_partition_valid(cp)))
>>> +         if ((!cpumask_empty(cp->cpus_allowed) &&
>>> +              is_sched_load_balance(cp) &&
>>> +              (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
>>> +               is_partition_valid(cp))) ||
>>> +             (cp->partition_root_state == PRS_ISOLATED ||
>>> +              cp->partition_root_state == PRS_INVALID_ISOLATED))
>>>               need_rebuild_sched_domains = true;
>>>
>>>           rcu_read_lock();
>>
>> Thanks for spotting the problem and sending out a patch to fix it.
>> However, it should be done in update_cpumask(). The sched_domain
>> rebuild in update_cpumasks_hier() is supposed to be used for impacted
>> descendant cpusets lower down in the hierarchy.
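The split described in the quoted paragraph above can be sketched as follows. This is an illustration only, not the actual kernel code and not the fix that landed in commit a86ce68078b2: the helper name update_cpumask_check_rebuild() is hypothetical, while the partition_root_state field and the PRS_* states come from the quoted patch, and rebuild_sched_domains_locked() is the existing rebuild helper in kernel/cgroup/cpuset.c.

/*
 * Hypothetical sketch: the kind of check being described, done
 * directly in update_cpumask() for the cpuset whose cpumask just
 * changed, leaving update_cpumasks_hier() responsible only for the
 * impacted descendant cpusets.
 */
static void update_cpumask_check_rebuild(struct cpuset *cs)
{
	/*
	 * An isolated partition (valid or invalid) has its CPUs
	 * detached from the root sched domains, so any change to its
	 * cpumask requires the sched domains to be rebuilt.
	 */
	if (cs->partition_root_state == PRS_ISOLATED ||
	    cs->partition_root_state == PRS_INVALID_ISOLATED)
		rebuild_sched_domains_locked();
}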
>> Anyway, I believe this problem should have been fixed by commit
>> a86ce68078b2 ("cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE &
>> CS_SCHED_LOAD_BALANCE handling"), recently merged into the v6.6
>> kernel. Would you mind testing the latest upstream kernel to see if
>> this problem is gone or not?
>
> Yes, right, the v6.6-rc2 kernel doesn't have this issue. My bad for
> not updating it ...
>
> However, I think the second issue described in the cover letter can
> still be reproduced on v6.6-rc2. The present patch was not aiming to
> fix it, though.
>
> Commands:
> # mkdir cgroup
> # mount -t cgroup2 none cgroup/
> # mkdir cgroup/A1 cgroup/B1
> # echo "+cpuset" > cgroup/cgroup.subtree_control
> # echo 0-3 > cgroup/A1/cpuset.cpus
> # echo isolated > cgroup/A1/cpuset.cpus.partition
> # echo 4-6 > cgroup/B1/cpuset.cpus
> # cat cgroup/A1/cpuset.cpus.partition
> isolated
> # echo 0-4 > cgroup/A1/cpuset.cpus
> # cat cgroup/A1/cpuset.cpus.partition
> isolated invalid (Cpu list in cpuset.cpus not exclusive)
> # echo 0-3 > cgroup/A1/cpuset.cpus
> # cat cgroup/A1/cpuset.cpus.partition
> isolated invalid (Cpu list in cpuset.cpus not exclusive)
>
> I believe cgroup A1 should become a valid isolated partition once its
> CPUs are exclusive again.

Yes, the current code doesn't always detect that an invalid partition
can become valid again, so the partition isn't turned back on. I will
try to fix that. In the meantime, a workaround is to do:

# echo member > cgroup/A1/cpuset.cpus.partition
# echo isolated > cgroup/A1/cpuset.cpus.partition

Thanks,
Longman
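For reference, the workaround spelled out end to end: cycling the partition through "member" forces a fresh validation when "isolated" is written again. The final "isolated" output assumes A1's CPU list (0-3) no longer overlaps B1's, as in the reproduction above; it reflects the behavior described in the reply rather than a verified run.

# cat cgroup/A1/cpuset.cpus.partition
isolated invalid (Cpu list in cpuset.cpus not exclusive)
# echo member > cgroup/A1/cpuset.cpus.partition
# echo isolated > cgroup/A1/cpuset.cpus.partition
# cat cgroup/A1/cpuset.cpus.partition
isolated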