References: <20200311181601.18314-1-valentin.schneider@arm.com> <20200311181601.18314-4-valentin.schneider@arm.com> <5decd96b-6fe0-3c35-4609-59378a0c8621@arm.com>
User-agent: mu4e 0.9.17; emacs 26.3
From: Valentin Schneider
To: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, peterz@infradead.org, vincent.guittot@linaro.org
Subject: Re: [PATCH v2 3/9] sched: Remove checks against SD_LOAD_BALANCE
In-reply-to: <5decd96b-6fe0-3c35-4609-59378a0c8621@arm.com>
Date: Mon, 23 Mar 2020 17:17:42 +0000
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 23 2020,
Dietmar Eggemann wrote:
> On 19.03.20 13:05, Valentin Schneider wrote:
>>
>> On Thu, Mar 19 2020, Dietmar Eggemann wrote:
>>> On 11.03.20 19:15, Valentin Schneider wrote:
>
> [...]
>
>> Your comments make me realize that changelog isn't great, what about the
>> following?
>>
>> ---
>>
>> The SD_LOAD_BALANCE flag is set unconditionally for all domains in
>> sd_init(). By making the sched_domain->flags sysctl interface read-only, we
>> have removed the last piece of code that could clear that flag - as such,
>> it will now always be present. Rather than keep carrying it along, we
>> can work towards getting rid of it entirely.
>>
>> cpusets don't need it because they can make CPUs be attached to the NULL
>> domain (e.g. cpuset with sched_load_balance=0), or to a partitionned
>
> s/partitionned/partitioned
>
>> root_domain, i.e. a sched_domain hierarchy that doesn't span the entire
>> system (e.g. root cpuset with sched_load_balance=0 and sibling cpusets with
>> sched_load_balance=1).
>>
>> isolcpus applies the same "trick": isolated CPUs are explicitly taken out of
>> the sched_domain rebuild (using housekeeping_cpumask()), so they get the
>> NULL domain treatment as well.
>>
>> Remove the checks against SD_LOAD_BALANCE.
>
> Sounds better to me:
>
> Essentially, I was referring to examples like:
>
> Hikey960 - 2x4
>
> (A) exclusive cpusets:
>
> root@h960:/sys/fs/cgroup/cpuset# mkdir cs1
> root@h960:/sys/fs/cgroup/cpuset# echo 1 > cs1/cpuset.cpu_exclusive
> root@h960:/sys/fs/cgroup/cpuset# echo 0 > cs1/cpuset.mems
> root@h960:/sys/fs/cgroup/cpuset# echo 0-2 > cs1/cpuset.cpus
> root@h960:/sys/fs/cgroup/cpuset# mkdir cs2
> root@h960:/sys/fs/cgroup/cpuset# echo 1 > cs2/cpuset.cpu_exclusive
> root@h960:/sys/fs/cgroup/cpuset# echo 0 > cs2/cpuset.mems
> root@h960:/sys/fs/cgroup/cpuset# echo 3-5 > cs2/cpuset.cpus
> root@h960:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance
>

AFAICT you don't even have to bother with cpuset.cpu_exclusive if you
only care about the end-result wrt sched_domains.

> root@h960:/proc/sys/kernel# tree -d sched_domain
>
> ├── cpu0
> │   └── domain0
> ├── cpu1
> │   └── domain0
> ├── cpu2
> │   └── domain0
> ├── cpu3
> │   └── domain0
> ├── cpu4
> │   ├── domain0
> │   └── domain1
> ├── cpu5
> │   ├── domain0
> │   └── domain1
> ├── cpu6
> └── cpu7
>
> (B) non-exclusive cpuset:
>
> root@h960:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance
>
> [ 8661.240385] CPU1 attaching NULL sched-domain.
> [ 8661.244802] CPU2 attaching NULL sched-domain.
> [ 8661.249255] CPU3 attaching NULL sched-domain.
> [ 8661.253623] CPU4 attaching NULL sched-domain.
> [ 8661.257989] CPU5 attaching NULL sched-domain.
> [ 8661.262363] CPU6 attaching NULL sched-domain.
> [ 8661.266730] CPU7 attaching NULL sched-domain.
>
> root@h960:/sys/fs/cgroup/cpuset# mkdir cs1
> root@h960:/sys/fs/cgroup/cpuset# echo 0-5 > cs1/cpuset.cpus
>
> root@h960:/proc/sys/kernel# tree -d sched_domain
>
> ├── cpu0
> │   ├── domain0
> │   └── domain1
> ├── cpu1
> │   ├── domain0
> │   └── domain1
> ├── cpu2
> │   ├── domain0
> │   └── domain1
> ├── cpu3
> │   ├── domain0
> │   └── domain1
> ├── cpu4
> │   ├── domain0
> │   └── domain1
> ├── cpu5
> │   ├── domain0
> │   └── domain1
> ├── cpu6
> └── cpu7

I think my updated changelog covers those cases, right?
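
For reference, the non-exclusive-cpuset sequence above can be scripted end to end. This is a minimal sketch, not part of the thread: it assumes a cgroup v1 cpuset hierarchy mounted at /sys/fs/cgroup/cpuset and the sched_domain sysctl tree at /proc/sys/kernel/sched_domain (both as in the transcripts above), and it must run as root on a live kernel:

```shell
#!/bin/sh
# Sketch: detach all CPUs from sched domains at the cpuset root, then
# create a child cpuset spanning CPUs 0-5 so domains are rebuilt for
# those CPUs only; CPUs 6-7 keep the NULL domain.
CPUSET=/sys/fs/cgroup/cpuset

echo 0 > "$CPUSET/cpuset.sched_load_balance"   # NULL domain for every CPU
mkdir -p "$CPUSET/cs1"
echo 0   > "$CPUSET/cs1/cpuset.mems"
echo 0-5 > "$CPUSET/cs1/cpuset.cpus"           # domains rebuilt for CPUs 0-5

# A cpuN directory with no domain* subdirectories means that CPU is
# attached to the NULL domain.
for cpu in /proc/sys/kernel/sched_domain/cpu*; do
    printf '%s: %s domain(s)\n' "$(basename "$cpu")" \
        "$(find "$cpu" -mindepth 1 -maxdepth 1 -type d | wc -l)"
done
```

On the Hikey960 layout Dietmar shows, this should report two domains each for cpu0-cpu5 and zero for cpu6 and cpu7, matching the second `tree -d sched_domain` output.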