Subject: Re: [PATCH v2 3/9] sched: Remove checks against SD_LOAD_BALANCE
From: Dietmar Eggemann
To: Valentin Schneider
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, peterz@infradead.org, vincent.guittot@linaro.org
Date: Mon, 23 Mar 2020 15:26:35 +0100
Message-ID: <5decd96b-6fe0-3c35-4609-59378a0c8621@arm.com>
References: <20200311181601.18314-1-valentin.schneider@arm.com> <20200311181601.18314-4-valentin.schneider@arm.com>

On 19.03.20 13:05, Valentin
Schneider wrote:
>
> On Thu, Mar 19 2020, Dietmar Eggemann wrote:
>> On 11.03.20 19:15, Valentin Schneider wrote:

[...]

> Your comments make me realize that changelog isn't great, what about the
> following?
>
> ---
>
> The SD_LOAD_BALANCE flag is set unconditionally for all domains in
> sd_init(). By making the sched_domain->flags syctl interface read-only, we
> have removed the last piece of code that could clear that flag - as such,
> it will now be always present. Rather than to keep carrying it along, we
> can work towards getting rid of it entirely.
>
> cpusets don't need it because they can make CPUs be attached to the NULL
> domain (e.g. cpuset with sched_load_balance=0), or to a partitionned

s/partitionned/partitioned

> root_domain, i.e. a sched_domain hierarchy that doesn't span the entire
> system (e.g. root cpuset with sched_load_balance=0 and sibling cpusets with
> sched_load_balance=1).
>
> isolcpus apply the same "trick": isolated CPUs are explicitly taken out of
> the sched_domain rebuild (using housekeeping_cpumask()), so they get the
> NULL domain treatment as well.
>
> Remove the checks against SD_LOAD_BALANCE.
Sounds better to me. Essentially, I was referring to examples like:

Hikey960 - 2x4

(A) exclusive cpusets:

root@h960:/sys/fs/cgroup/cpuset# mkdir cs1
root@h960:/sys/fs/cgroup/cpuset# echo 1 > cs1/cpuset.cpu_exclusive
root@h960:/sys/fs/cgroup/cpuset# echo 0 > cs1/cpuset.mems
root@h960:/sys/fs/cgroup/cpuset# echo 0-2 > cs1/cpuset.cpus
root@h960:/sys/fs/cgroup/cpuset# mkdir cs2
root@h960:/sys/fs/cgroup/cpuset# echo 1 > cs2/cpuset.cpu_exclusive
root@h960:/sys/fs/cgroup/cpuset# echo 0 > cs2/cpuset.mems
root@h960:/sys/fs/cgroup/cpuset# echo 3-5 > cs2/cpuset.cpus
root@h960:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance

root@h960:/proc/sys/kernel# tree -d sched_domain
├── cpu0
│   └── domain0
├── cpu1
│   └── domain0
├── cpu2
│   └── domain0
├── cpu3
│   └── domain0
├── cpu4
│   ├── domain0
│   └── domain1
├── cpu5
│   ├── domain0
│   └── domain1
├── cpu6
└── cpu7

(B) non-exclusive cpuset:

root@h960:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance

[ 8661.240385] CPU1 attaching NULL sched-domain.
[ 8661.244802] CPU2 attaching NULL sched-domain.
[ 8661.249255] CPU3 attaching NULL sched-domain.
[ 8661.253623] CPU4 attaching NULL sched-domain.
[ 8661.257989] CPU5 attaching NULL sched-domain.
[ 8661.262363] CPU6 attaching NULL sched-domain.
[ 8661.266730] CPU7 attaching NULL sched-domain.

root@h960:/sys/fs/cgroup/cpuset# mkdir cs1
root@h960:/sys/fs/cgroup/cpuset# echo 0-5 > cs1/cpuset.cpus

root@h960:/proc/sys/kernel# tree -d sched_domain
├── cpu0
│   ├── domain0
│   └── domain1
├── cpu1
│   ├── domain0
│   └── domain1
├── cpu2
│   ├── domain0
│   └── domain1
├── cpu3
│   ├── domain0
│   └── domain1
├── cpu4
│   ├── domain0
│   └── domain1
├── cpu5
│   ├── domain0
│   └── domain1
├── cpu6
└── cpu7
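The isolcpus case from the changelog can be pictured the same way. Below is a toy shell sketch, not kernel code: the "housekeeping" list is a stand-in for what housekeeping_cpumask() would return after booting with e.g. isolcpus=6,7. CPUs outside that list are simply skipped by the domain rebuild loop and so keep the NULL domain, just like the dmesg output above:

```shell
# Toy model of the rebuild: only housekeeping CPUs get a sched domain
# attached; isolated CPUs (here 6 and 7) are never visited and stay on
# the NULL domain. Purely illustrative, no kernel interfaces involved.
housekeeping="0 1 2 3 4 5"

for cpu in 0 1 2 3 4 5 6 7; do
    case " $housekeeping " in
    *" $cpu "*)
        echo "CPU$cpu attached to sched-domain" ;;
    *)
        echo "CPU$cpu attaching NULL sched-domain." ;;
    esac
done
```

This is only meant to make the "same trick" claim concrete: both cpuset partitioning and isolcpus end up expressing "don't balance" by leaving CPUs with no domain at all, which is why an SD_LOAD_BALANCE flag on existing domains carries no information.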