From: alexs@kernel.org
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot
	de Oliveira, Valentin Schneider, linux-kernel@vger.kernel.org (open list:SCHEDULER),
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V",
	"Naveen N. Rao", linuxppc-dev@lists.ozlabs.org (open list:LINUX FOR POWERPC (32-BIT AND 64-BIT))
Cc: Alex Shi, Ricardo Neri, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	Miaohe Lin, Barry Song, Mark Rutland, Frederic Weisbecker,
	"Gautham R. Shenoy", Yicong Yang, Josh Poimboeuf, Srikar Dronamraju
Subject: [PATCH v5 5/5] sched: rename SD_SHARE_PKG_RESOURCES to SD_SHARE_LLC
Date: Sat, 10 Feb 2024 19:39:23 +0800
Message-ID: <20240210113924.1130448-5-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240210113924.1130448-1-alexs@kernel.org>
References: <20240210113924.1130448-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Alex Shi

SD_CLUSTER shares CPU resources such as LLC tags or the L2 cache, which is
easy to confuse with SD_SHARE_PKG_RESOURCES. So let's state specifically what
the latter shares: the LLC. That should reduce some of the confusion.

Suggested-by: Valentin Schneider
Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Miaohe Lin
Cc: Barry Song
Cc: Mark Rutland
Cc: Frederic Weisbecker
Cc: Daniel Bristot de Oliveira
Cc: Ben Segall
Cc: Steven Rostedt
Cc: Dietmar Eggemann
Cc: Juri Lelli
Cc: Ingo Molnar
Cc: "Naveen N. Rao"
Cc: "Aneesh Kumar K.V"
Cc: Christophe Leroy
Cc: "Gautham R. Shenoy"
Cc: Yicong Yang
Cc: Ricardo Neri
Cc: Josh Poimboeuf
Cc: Srikar Dronamraju
Cc: Valentin Schneider
Cc: Nicholas Piggin
Cc: Michael Ellerman
Reviewed-by: Valentin Schneider
Reviewed-by: Ricardo Neri
---
 arch/powerpc/kernel/smp.c      |  6 +++---
 include/linux/sched/sd_flags.h |  4 ++--
 include/linux/sched/topology.h |  6 +++---
 kernel/sched/fair.c            |  2 +-
 kernel/sched/topology.c        | 28 ++++++++++++++--------------
 5 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 693334c20d07..a60e4139214b 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -984,7 +984,7 @@ static bool shared_caches __ro_after_init;
 /* cpumask of CPUs with asymmetric SMT dependency */
 static int powerpc_smt_flags(void)
 {
-        int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
+        int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_LLC;
 
         if (cpu_has_feature(CPU_FTR_ASYM_SMT)) {
                 printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n");
@@ -1010,9 +1010,9 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(splpar_asym_pack);
 static int powerpc_shared_cache_flags(void)
 {
         if (static_branch_unlikely(&splpar_asym_pack))
-                return SD_SHARE_PKG_RESOURCES | SD_ASYM_PACKING;
+                return SD_SHARE_LLC | SD_ASYM_PACKING;
 
-        return SD_SHARE_PKG_RESOURCES;
+        return SD_SHARE_LLC;
 }
 
 static int powerpc_shared_proc_flags(void)
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index a8b28647aafc..b04a5d04dee9 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -117,13 +117,13 @@ SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 SD_FLAG(SD_CLUSTER, SDF_NEEDS_GROUPS)
 
 /*
- * Domain members share CPU package resources (i.e. caches)
+ * Domain members share CPU Last Level Caches
 *
 * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
 *               the same cache(s).
 * NEEDS_GROUPS: Caches are shared between groups.
 */
-SD_FLAG(SD_SHARE_PKG_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+SD_FLAG(SD_SHARE_LLC, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
  * Only a single load balancing instance
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index a6e04b4a21d7..191b122158fb 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -38,21 +38,21 @@ extern const struct sd_flag_debug sd_flag_debug[];
 #ifdef CONFIG_SCHED_SMT
 static inline int cpu_smt_flags(void)
 {
-        return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
+        return SD_SHARE_CPUCAPACITY | SD_SHARE_LLC;
 }
 #endif
 
 #ifdef CONFIG_SCHED_CLUSTER
 static inline int cpu_cluster_flags(void)
 {
-        return SD_CLUSTER | SD_SHARE_PKG_RESOURCES;
+        return SD_CLUSTER | SD_SHARE_LLC;
 }
 #endif
 
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
-        return SD_SHARE_PKG_RESOURCES;
+        return SD_SHARE_LLC;
 }
 #endif
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cd1ec57c0b7b..da6c77d05d07 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10687,7 +10687,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
          */
         if (local->group_type == group_has_spare) {
                 if ((busiest->group_type > group_fully_busy) &&
-                    !(env->sd->flags & SD_SHARE_PKG_RESOURCES)) {
+                    !(env->sd->flags & SD_SHARE_LLC)) {
                         /*
                          * If busiest is overloaded, try to fill spare
                          * capacity. This might end up creating spare capacity
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 0b33f7b05d21..99ea5986038c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -657,13 +657,13 @@ static void destroy_sched_domains(struct sched_domain *sd)
 }
 
 /*
- * Keep a special pointer to the highest sched_domain that has
- * SD_SHARE_PKG_RESOURCE set (Last Level Cache Domain) for this
- * allows us to avoid some pointer chasing select_idle_sibling().
+ * Keep a special pointer to the highest sched_domain that has SD_SHARE_LLC set
+ * (Last Level Cache Domain) for this allows us to avoid some pointer chasing
+ * select_idle_sibling().
 *
- * Also keep a unique ID per domain (we use the first CPU number in
- * the cpumask of the domain), this allows us to quickly tell if
- * two CPUs are in the same cache domain, see cpus_share_cache().
+ * Also keep a unique ID per domain (we use the first CPU number in the cpumask
+ * of the domain), this allows us to quickly tell if two CPUs are in the same
+ * cache domain, see cpus_share_cache().
 */
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
@@ -684,7 +684,7 @@ static void update_top_cache_domain(int cpu)
         int id = cpu;
         int size = 1;
 
-        sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
+        sd = highest_flag_domain(cpu, SD_SHARE_LLC);
         if (sd) {
                 id = cpumask_first(sched_domain_span(sd));
                 size = cpumask_weight(sched_domain_span(sd));
@@ -1554,7 +1554,7 @@ static struct cpumask ***sched_domains_numa_masks;
 * function. For details, see include/linux/sched/sd_flags.h.
 *
 * SD_SHARE_CPUCAPACITY
- * SD_SHARE_PKG_RESOURCES
+ * SD_SHARE_LLC
 * SD_CLUSTER
 * SD_NUMA
 *
@@ -1566,7 +1566,7 @@ static struct cpumask ***sched_domains_numa_masks;
 #define TOPOLOGY_SD_FLAGS              \
         (SD_SHARE_CPUCAPACITY  |       \
          SD_CLUSTER            |       \
-         SD_SHARE_PKG_RESOURCES |      \
+         SD_SHARE_LLC          |       \
          SD_NUMA               |       \
          SD_ASYM_PACKING)
 
@@ -1609,7 +1609,7 @@ sd_init(struct sched_domain_topology_level *tl,
                                 | 0*SD_BALANCE_WAKE
                                 | 1*SD_WAKE_AFFINE
                                 | 0*SD_SHARE_CPUCAPACITY
-                                | 0*SD_SHARE_PKG_RESOURCES
+                                | 0*SD_SHARE_LLC
                                 | 0*SD_SERIALIZE
                                 | 1*SD_PREFER_SIBLING
                                 | 0*SD_NUMA
@@ -1646,7 +1646,7 @@ sd_init(struct sched_domain_topology_level *tl,
         if (sd->flags & SD_SHARE_CPUCAPACITY) {
                 sd->imbalance_pct = 110;
 
-        } else if (sd->flags & SD_SHARE_PKG_RESOURCES) {
+        } else if (sd->flags & SD_SHARE_LLC) {
                 sd->imbalance_pct = 117;
                 sd->cache_nice_tries = 1;
 
@@ -1671,7 +1671,7 @@ sd_init(struct sched_domain_topology_level *tl,
          * For all levels sharing cache; connect a sched_domain_shared
          * instance.
          */
-        if (sd->flags & SD_SHARE_PKG_RESOURCES) {
+        if (sd->flags & SD_SHARE_LLC) {
                 sd->shared = *per_cpu_ptr(sdd->sds, sd_id);
                 atomic_inc(&sd->shared->ref);
                 atomic_set(&sd->shared->nr_busy_cpus, sd_weight);
@@ -2446,8 +2446,8 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
                 for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
                         struct sched_domain *child = sd->child;
 
-                        if (!(sd->flags & SD_SHARE_PKG_RESOURCES) && child &&
-                            (child->flags & SD_SHARE_PKG_RESOURCES)) {
+                        if (!(sd->flags & SD_SHARE_LLC) && child &&
+                            (child->flags & SD_SHARE_LLC)) {
                                 struct sched_domain __rcu *top_p;
                                 unsigned int nr_llcs;
 
-- 
2.43.0
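
P.S. As an aside for readers new to these flags (illustration only, not part of
the patch above): the self-contained C sketch below shows how topology-style
code can tell the renamed SD_SHARE_LLC apart from SD_CLUSTER when classifying a
domain level. The flag values and the fake_sched_domain type are made-up
stand-ins for this sketch, not the real definitions from
include/linux/sched/sd_flags.h.

/*
 * Hypothetical stand-alone illustration of the SD_SHARE_LLC vs. SD_CLUSTER
 * distinction. Flag values and the struct are stand-ins, not kernel code.
 */
#include <stdio.h>

#define SD_SHARE_CPUCAPACITY 0x0020 /* stand-in: SMT siblings share capacity */
#define SD_CLUSTER           0x0040 /* stand-in: cluster, e.g. shared L2 */
#define SD_SHARE_LLC         0x0080 /* stand-in: was SD_SHARE_PKG_RESOURCES */

struct fake_sched_domain {
	const char *name;
	int flags;
};

static const char *classify(const struct fake_sched_domain *sd)
{
	if (sd->flags & SD_SHARE_CPUCAPACITY)
		return "SMT: siblings share core capacity (and the LLC)";
	if (sd->flags & SD_CLUSTER)
		return "CLS: cluster sharing e.g. an L2 below the LLC";
	if (sd->flags & SD_SHARE_LLC)
		return "MC: CPUs share the last-level cache";
	return "higher-level domain: no shared LLC";
}

int main(void)
{
	const struct fake_sched_domain domains[] = {
		{ "SMT", SD_SHARE_CPUCAPACITY | SD_SHARE_LLC },
		{ "CLS", SD_CLUSTER | SD_SHARE_LLC },
		{ "MC",  SD_SHARE_LLC },
		{ "PKG", 0 },
	};

	for (unsigned int i = 0; i < sizeof(domains) / sizeof(domains[0]); i++)
		printf("%-3s -> %s\n", domains[i].name, classify(&domains[i]));
	return 0;
}

Running it prints one classification per hypothetical level (SMT, CLS, MC,
PKG), which mirrors the point of the rename: SD_SHARE_LLC now says exactly
what is shared, the last-level cache, while SD_CLUSTER keeps describing the
sub-LLC grouping.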