From: Valentin Schneider
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Russell King, Morten Rasmussen, Dietmar Eggemann, mingo@kernel.org,
    peterz@infradead.org, vincent.guittot@linaro.org, Quentin Perret
Subject: [PATCH v5 01/17] ARM, sched/topology: Remove SD_SHARE_POWERDOMAIN
Date: Wed, 12 Aug 2020 13:52:44 +0100
Message-Id: <20200812125300.11889-2-valentin.schneider@arm.com>
In-Reply-To: <20200812125300.11889-1-valentin.schneider@arm.com>
References: <20200812125300.11889-1-valentin.schneider@arm.com>
X-Mailer: git-send-email 2.27.0
This flag was introduced in 2014 by commit d77b3ed5c9f8 ("sched: Add a new
SD_SHARE_POWERDOMAIN for sched_domain") but AFAIA it was never leveraged by
the scheduler. The closest thing I can think of is EAS caring about frequency
domains, and it does that by leveraging performance domains.

Remove the flag.

Cc: Russell King
Suggested-by: Morten Rasmussen
Reviewed-by: Dietmar Eggemann
Signed-off-by: Valentin Schneider
---
 arch/arm/kernel/topology.c     |  2 +-
 include/linux/sched/topology.h | 13 ++++++-------
 kernel/sched/topology.c        | 10 +++-------
 3 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index b5adaf744630..353f3ee660e4 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -243,7 +243,7 @@ void store_cpu_topology(unsigned int cpuid)
 
 static inline int cpu_corepower_flags(void)
 {
-	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN;
+	return SD_SHARE_PKG_RESOURCES;
 }
 
 static struct sched_domain_topology_level arm_topology[] = {
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 820511289857..6ec7d7c1d1e3 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -18,13 +18,12 @@
 #define SD_WAKE_AFFINE		0x0010	/* Wake task to waking CPU */
 #define SD_ASYM_CPUCAPACITY	0x0020	/* Domain members have different CPU capacities */
 #define SD_SHARE_CPUCAPACITY	0x0040	/* Domain members share CPU capacity */
-#define SD_SHARE_POWERDOMAIN	0x0080	/* Domain members share power domain */
-#define SD_SHARE_PKG_RESOURCES	0x0100	/* Domain members share CPU pkg resources */
-#define SD_SERIALIZE		0x0200	/* Only a single load balancing instance */
-#define SD_ASYM_PACKING		0x0400	/* Place busy groups earlier in the domain */
-#define SD_PREFER_SIBLING	0x0800	/* Prefer to place tasks in a sibling domain */
-#define SD_OVERLAP		0x1000	/* sched_domains of this level overlap */
-#define SD_NUMA			0x2000	/* cross-node balancing */
+#define SD_SHARE_PKG_RESOURCES	0x0080	/* Domain members share CPU pkg resources */
+#define SD_SERIALIZE		0x0100	/* Only a single load balancing instance */
+#define SD_ASYM_PACKING		0x0200	/* Place busy groups earlier in the domain */
+#define SD_PREFER_SIBLING	0x0400	/* Prefer to place tasks in a sibling domain */
+#define SD_OVERLAP		0x0800	/* sched_domains of this level overlap */
+#define SD_NUMA			0x1000	/* cross-node balancing */
 
 #ifdef CONFIG_SCHED_SMT
 static inline int cpu_smt_flags(void)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9079d865a935..865fff3ef20a 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -148,8 +148,7 @@ static int sd_degenerate(struct sched_domain *sd)
 			 SD_BALANCE_EXEC |
 			 SD_SHARE_CPUCAPACITY |
 			 SD_ASYM_CPUCAPACITY |
-			 SD_SHARE_PKG_RESOURCES |
-			 SD_SHARE_POWERDOMAIN)) {
+			 SD_SHARE_PKG_RESOURCES)) {
 		if (sd->groups != sd->groups->next)
 			return 0;
 	}
@@ -180,8 +179,7 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
 				SD_ASYM_CPUCAPACITY |
 				SD_SHARE_CPUCAPACITY |
 				SD_SHARE_PKG_RESOURCES |
-				SD_PREFER_SIBLING |
-				SD_SHARE_POWERDOMAIN);
+				SD_PREFER_SIBLING);
 		if (nr_node_ids == 1)
 			pflags &= ~SD_SERIALIZE;
 	}
@@ -1292,7 +1290,6 @@ int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE;
  *   SD_SHARE_CPUCAPACITY   - describes SMT topologies
  *   SD_SHARE_PKG_RESOURCES - describes shared caches
  *   SD_NUMA                - describes NUMA topologies
- *   SD_SHARE_POWERDOMAIN   - describes shared power domain
  *
  * Odd one out, which beside describing the topology has a quirk also
 * prescribes the desired behaviour that goes along with it:
@@ -1303,8 +1300,7 @@ int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE;
 	(SD_SHARE_CPUCAPACITY	|	\
 	 SD_SHARE_PKG_RESOURCES	|	\
 	 SD_NUMA		|	\
-	 SD_ASYM_PACKING	|	\
-	 SD_SHARE_POWERDOMAIN)
+	 SD_ASYM_PACKING)
 
 static struct sched_domain *
 sd_init(struct sched_domain_topology_level *tl,
-- 
2.27.0
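
Not part of the patch itself: below is a minimal, self-contained user-space C
sketch, using only the SD_* values and masks copied from the hunks above, to
illustrate the resulting bit layout once SD_SHARE_POWERDOMAIN's 0x0080 slot is
reclaimed and the later flags each shift down by one bit. The program and its
names are illustrative only, not kernel code.

/*
 * Illustration only (not kernel code): SD_* values below are copied from
 * the patched include/linux/sched/topology.h hunk. Dropping
 * SD_SHARE_POWERDOMAIN (previously 0x0080) leaves no gap in the flag mask.
 */
#include <stdio.h>

#define SD_SHARE_CPUCAPACITY	0x0040	/* Domain members share CPU capacity */
#define SD_SHARE_PKG_RESOURCES	0x0080	/* Domain members share CPU pkg resources */
#define SD_SERIALIZE		0x0100	/* Only a single load balancing instance */
#define SD_ASYM_PACKING		0x0200	/* Place busy groups earlier in the domain */
#define SD_NUMA			0x1000	/* cross-node balancing */

int main(void)
{
	/* Mirrors the patched TOPOLOGY_SD_FLAGS mask in kernel/sched/topology.c */
	unsigned int topology_sd_flags = SD_SHARE_CPUCAPACITY |
					 SD_SHARE_PKG_RESOURCES |
					 SD_NUMA |
					 SD_ASYM_PACKING;

	/* Mirrors the patched cpu_corepower_flags() in arch/arm/kernel/topology.c */
	unsigned int corepower_flags = SD_SHARE_PKG_RESOURCES;

	printf("TOPOLOGY_SD_FLAGS   = 0x%04x\n", topology_sd_flags); /* 0x12c0 */
	printf("cpu_corepower_flags = 0x%04x\n", corepower_flags);   /* 0x0080 */
	return 0;
}

Built with any C compiler, this prints 0x12c0 for the topology mask and 0x0080
for the ARM corepower level, matching the renumbered #define values in the
patched header.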