ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 cpus. All clusters share L3 cache data, while each cluster
has a local L3 tag. In addition, the cpus within one cluster share some
internal system bus. This means the cache is much more affine inside one
cluster than across clusters.
+-----------------------------------+ +---------+
| +------+ +------+ +---------------------------+ |
| | CPU0 | | cpu1 | | +-----------+ | |
| +------+ +------+ | | | | |
| +----+ L3 | | |
| +------+ +------+ cluster | | tag | | |
| | CPU2 | | CPU3 | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ | |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +----+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | L3 |
| data |
+-----------------------------------+ | |
| +------+ +------+ | +-----------+ | |
| | | | | | | | | |
| +------+ +------+ +----+ L3 | | |
| | | tag | | |
| +------+ +------+ | | | | |
| | | | | ++ +-----------+ | |
| +------+ +------+ |---------------------------+ |
+-----------------------------------| | |
+-----------------------------------| | |
| +------+ +------+ +---------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| +----+ L3 | | |
| +------+ +------+ | | tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ | |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +---+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ ++ |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +--+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | +---------+
+-----------------------------------+
The following small program shows the performance impact of running
it in one cluster versus across two clusters:
#include <pthread.h>

struct foo {
	int x;
	int y;
} f;

/* reader: repeatedly loads f.x */
void *thread1_fun(void *param)
{
	int s = 0;
	for (int i = 0; i < 0xfffffff; i++)
		s += f.x;
	return NULL;
}

/* writer: repeatedly stores to f.y, which shares a cacheline with f.x */
void *thread2_fun(void *param)
{
	for (int i = 0; i < 0xfffffff; i++)
		f.y++;
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid1, tid2;

	pthread_create(&tid1, NULL, thread1_fun, NULL);
	pthread_create(&tid2, NULL, thread2_fun, NULL);
	pthread_join(tid1, NULL);
	pthread_join(tid2, NULL);
	return 0;
}
Running this program within one cluster takes:
$ time taskset -c 0,1 ./a.out
real 0m0.832s
user 0m1.649s
sys 0m0.004s
By contrast, it takes much longer when we run the same program
across two clusters:
$ time taskset -c 0,4 ./a.out
real 0m1.133s
user 0m1.960s
sys 0m0.000s
0.832/1.133 ≈ 73%; the cross-cluster run takes about 36% longer, which is
a huge difference.
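The gap comes from false sharing: f.x and f.y sit on the same cacheline,
so its ownership ping-pongs between the two L3-tag domains when the threads
run in different clusters. A quick way to confirm this (a sketch, assuming
64-byte cachelines) is padding the two fields onto separate cachelines,
which should largely close the gap between the two taskset runs:

struct foo {
	int x;
	char pad[60];	/* push y onto the next 64-byte cacheline */
	int y;
} f;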
Also, hackbench running on 4 cpus within a single cluster versus on 4 cpus
spread across different clusters shows a large contrast:
* inside a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285
* across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524
The result is 4.285 vs 5.524; shorter time means better performance.
All this testing implies that we should let the Linux scheduler use
this topology to make better load balancing and WAKE_AFFINE decisions.
However, the current scheduler has no idea of clusters at all.
This patchset first exposes the cluster topology, then adds a sched
domain for clusters. While it is named "cluster", architectures and
machines can define the exact meaning of cluster as long as they have
some resources shared below the LLC and can leverage the affinity
of those resources to achieve better scheduling performance.
-v3:
- rebased against 5.11-rc2
- addressed the comments of Valentin Schneider, Peter Zijlstra,
Vincent Guittot, Mel Gorman and others:
* moved the scheduler changes from arm64 to the common place for all
architectures.
* added the SD_SHARE_CLS_RESOURCES sd_flag specifying the sched_domain
from which select_idle_cpu() should begin to scan
* removed the redundant select_idle_cluster() function since all the code
is in select_idle_cpu() now; this also avoids scanning cluster cpus
twice, as the v2 code did
* redid the hackbench runs within one numa node after the above changes
Valentin suggested that select_idle_cpu() could begin to scan from the
domain with SD_SHARE_PKG_RESOURCES. Changing it like that might be too
aggressive and limit the spreading of tasks. Thus, this patch lets
architectures and machines decide where to start by adding
a new flag, SD_SHARE_CLS_RESOURCES.
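For reference, the rough shape of this wiring, using the names from this
series, is a new level below MC in the default topology table, tagged with
the new flag (an illustrative sketch only; see the actual patches for the
real code):

#ifdef CONFIG_SCHED_CLUSTER
static inline int cpu_cluster_flags(void)
{
	/* cluster cpus share resources below the LLC */
	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
}
#endif

	/* in default_topology[], between the SMT and MC entries: */
#ifdef CONFIG_SCHED_CLUSTER
	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
#endif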
Barry Song (1):
scheduler: add scheduler level for clusters
Jonathan Cameron (1):
topology: Represent clusters of CPUs within a die.
Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
arch/arm64/Kconfig | 7 ++++
arch/arm64/kernel/topology.c | 2 ++
drivers/acpi/pptt.c | 60 +++++++++++++++++++++++++++++++
drivers/base/arch_topology.c | 14 ++++++++
drivers/base/topology.c | 10 ++++++
include/linux/acpi.h | 5 +++
include/linux/arch_topology.h | 5 +++
include/linux/sched/sd_flags.h | 9 +++++
include/linux/sched/topology.h | 7 ++++
include/linux/topology.h | 13 +++++++
kernel/sched/fair.c | 27 ++++++++++----
kernel/sched/topology.c | 6 ++++
13 files changed, 181 insertions(+), 10 deletions(-)
--
2.7.4
From: Jonathan Cameron <[email protected]>
Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in the PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.
For example, Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 cpus. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the cpus within one cluster share some
internal system bus.
+-----------------------------------+ +---------+
| +------+ +------+ +---------------------------+ |
| | CPU0 | | cpu1 | | +-----------+ | |
| +------+ +------+ | | | | |
| +----+ L3 | | |
| +------+ +------+ cluster | | tag | | |
| | CPU2 | | CPU3 | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ | |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +----+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | L3 |
| data |
+-----------------------------------+ | |
| +------+ +------+ | +-----------+ | |
| | | | | | | | | |
| +------+ +------+ +----+ L3 | | |
| | | tag | | |
| +------+ +------+ | | | | |
| | | | | ++ +-----------+ | |
| +------+ +------+ |---------------------------+ |
+-----------------------------------| | |
+-----------------------------------| | |
| +------+ +------+ +---------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| +----+ L3 | | |
| +------+ +------+ | | tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ | |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +---+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ ++ |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +--+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | +---------+
+-----------------------------------+
That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.
This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].
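For a rough feel of the userspace view, the new attributes read like any
other topology file; a minimal sketch (path per the cputopology.rst change
below):

#include <stdio.h>

int main(void)
{
	FILE *fp;
	char buf[256];

	fp = fopen("/sys/devices/system/cpu/cpu0/topology/cluster_cpus_list", "r");
	if (fp && fgets(buf, sizeof(buf), fp))
		printf("cpu0 cluster siblings: %s", buf);	/* e.g. "0-3" */
	if (fp)
		fclose(fp);
	return 0;
}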
Note this patch only handles the ACPI case.
Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).
Currently the ID provided is the offset of the Processor
Hierarchy Node Structure within the PPTT. Whilst this is unique,
it is not terribly elegant, so alternative suggestions are welcome.
Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.
[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster
Signed-off-by: Jonathan Cameron <[email protected]>
Signed-off-by: Barry Song <[email protected]>
---
-v3:
* no real change (v2 comments not addressed);
* rebased against 5.11-rc2;
Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
arch/arm64/kernel/topology.c | 2 ++
drivers/acpi/pptt.c | 60 +++++++++++++++++++++++++++++++
drivers/base/arch_topology.c | 14 ++++++++
drivers/base/topology.c | 10 ++++++
include/linux/acpi.h | 5 +++
include/linux/arch_topology.h | 5 +++
include/linux/topology.h | 6 ++++
8 files changed, 124 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
identifier (rather than the kernel's). The actual value is
architecture and platform dependent.
+cluster_id:
+
+ the Cluster ID of cpuX. Typically it is the hardware platform's
+ identifier (rather than the kernel's). The actual value is
+ architecture and platform dependent.
+
book_id:
the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
human-readable list of CPUs sharing the same physical_package_id.
(deprecated name: "core_siblings_list")
+cluster_cpus:
+
+ internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+ human-readable list of CPUs within the same cluster.
+
die_cpus:
internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
#define topology_physical_package_id(cpu)
#define topology_die_id(cpu)
+ #define topology_cluster_id(cpu)
#define topology_core_id(cpu)
#define topology_book_id(cpu)
#define topology_drawer_id(cpu)
#define topology_sibling_cpumask(cpu)
#define topology_core_cpumask(cpu)
+ #define topology_cluster_cpumask(cpu)
#define topology_die_cpumask(cpu)
#define topology_book_cpumask(cpu)
#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
1) topology_physical_package_id: -1
2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index f6faa69..fe076b3 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
cpu_topology[cpu].thread_id = -1;
cpu_topology[cpu].core_id = topology_id;
}
+ topology_id = find_acpi_cpu_topology_cluster(cpu);
+ cpu_topology[cpu].cluster_id = topology_id;
topology_id = find_acpi_cpu_topology_package(cpu);
cpu_topology[cpu].package_id = topology_id;
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..8646a93 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
}
/**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found,
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+ struct acpi_table_header *table;
+ acpi_status status;
+ struct acpi_pptt_processor *cpu_node, *cluster_node;
+ int retval;
+ int is_thread;
+
+ status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+ if (ACPI_FAILURE(status)) {
+ acpi_pptt_warn_missing();
+ return -ENOENT;
+ }
+ cpu_node = acpi_find_processor_node(table, cpu);
+ if (cpu_node == NULL || !cpu_node->parent) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+
+ is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+ cluster_node = fetch_pptt_node(table, cpu_node->parent);
+ if (cluster_node == NULL) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ if (is_thread) {
+ if (!cluster_node->parent) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ cluster_node = fetch_pptt_node(table, cluster_node->parent);
+ if (cluster_node == NULL) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ }
+ retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+ acpi_put_table(table);
+
+ return retval;
+}
+
+/**
* find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
* @cpu: Kernel logical CPU number
*
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..3079232 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
return core_mask;
}
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].cluster_sibling;
+}
+
void update_siblings_masks(unsigned int cpuid)
{
struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
if (cpuid_topo->package_id != cpu_topo->package_id)
continue;
+ if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+ cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+ cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+ }
+
cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
cpumask_clear(&cpu_topo->llc_sibling);
cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
+ cpumask_clear(&cpu_topo->cluster_sibling);
+ cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
cpumask_clear(&cpu_topo->core_sibling);
cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
cpumask_clear(&cpu_topo->thread_sibling);
@@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu)
cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
for_each_cpu(sibling, topology_sibling_cpumask(cpu))
cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
for_each_cpu(sibling, topology_llc_cpumask(cpu))
cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id);
define_id_show_func(die_id);
static DEVICE_ATTR_RO(die_id);
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
define_id_show_func(core_id);
static DEVICE_ATTR_RO(core_id);
@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask);
static DEVICE_ATTR_RO(core_siblings);
static DEVICE_ATTR_RO(core_siblings_list);
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
define_siblings_show_func(die_cpus, die_cpumask);
static DEVICE_ATTR_RO(die_cpus);
static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list);
static struct attribute *default_attrs[] = {
&dev_attr_physical_package_id.attr,
&dev_attr_die_id.attr,
+ &dev_attr_cluster_id.attr,
&dev_attr_core_id.attr,
&dev_attr_thread_siblings.attr,
&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = {
&dev_attr_core_cpus_list.attr,
&dev_attr_core_siblings.attr,
&dev_attr_core_siblings_list.attr,
+ &dev_attr_cluster_cpus.attr,
+ &dev_attr_cluster_cpus_list.attr,
&dev_attr_die_cpus.attr,
&dev_attr_die_cpus_list.attr,
&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 2630c2e..8f603cd 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1325,6 +1325,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
#ifdef CONFIG_ACPI_PPTT
int acpi_pptt_cpu_is_thread(unsigned int cpu);
int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
int find_acpi_cpu_topology_package(unsigned int cpu);
int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1337,6 +1338,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
{
return -EINVAL;
}
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+ return -EINVAL;
+}
static inline int find_acpi_cpu_topology_package(unsigned int cpu)
{
return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
struct cpu_topology {
int thread_id;
int core_id;
+ int cluster_id;
int package_id;
int llc_id;
cpumask_t thread_sibling;
cpumask_t core_sibling;
+ cpumask_t cluster_sibling;
cpumask_t llc_sibling;
};
@@ -60,13 +62,16 @@ struct cpu_topology {
extern struct cpu_topology cpu_topology[NR_CPUS];
#define topology_physical_package_id(cpu) (cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu) (cpu_topology[cpu].cluster_id)
#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
#define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu) (&cpu_topology[cpu].cluster_sibling)
#define topology_llc_cpumask(cpu) (&cpu_topology[cpu].llc_sibling)
void init_cpu_topology(void);
void store_cpu_topology(unsigned int cpuid);
const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
void update_siblings_masks(unsigned int cpu);
void remove_cpu_topology(unsigned int cpuid);
void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index ad03df1..bf2cc3c 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu)
#ifndef topology_die_id
#define topology_die_id(cpu) ((void)(cpu), -1)
#endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu) ((void)(cpu), -1)
+#endif
#ifndef topology_core_id
#define topology_core_id(cpu) ((void)(cpu), 0)
#endif
@@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu)
#ifndef topology_core_cpumask
#define topology_core_cpumask(cpu) cpumask_of(cpu)
#endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu) cpumask_of(cpu)
+#endif
#ifndef topology_die_cpumask
#define topology_die_cpumask(cpu) cpumask_of(cpu)
#endif
--
2.7.4
On 1/6/21 12:30 AM, Barry Song wrote:
> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> cluster has 4 cpus. All clusters share L3 cache data while each cluster
> has local L3 tag. On the other hand, each cluster will share some
> internal system bus. This means cache is much more affine inside one cluster
> than across clusters.
>
> +-----------------------------------+ +---------+
> | +------+ +------+ +---------------------------+ |
> | | CPU0 | | cpu1 | | +-----------+ | |
> | +------+ +------+ | | | | |
> | +----+ L3 | | |
> | +------+ +------+ cluster | | tag | | |
> | | CPU2 | | CPU3 | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | |
> +-----------------------------------+ | |
> | +------+ +------+ +--------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
> | | | L3 | | |
> | +------+ +------+ +----+ tag | | |
> | | | | | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | L3 |
> | data |
> +-----------------------------------+ | |
> | +------+ +------+ | +-----------+ | |
> | | | | | | | | | |
> | +------+ +------+ +----+ L3 | | |
> | | | tag | | |
> | +------+ +------+ | | | | |
> | | | | | ++ +-----------+ | |
> | +------+ +------+ |---------------------------+ |
> +-----------------------------------| | |
> +-----------------------------------| | |
> | +------+ +------+ +---------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
> | +----+ L3 | | |
> | +------+ +------+ | | tag | | |
> | | | | | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | |
> +-----------------------------------+ | |
> | +------+ +------+ +--------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
>
>
There is a similar need for clustering in x86. Some x86 cores share L2
caches in a way similar to the cluster in Kunpeng 920 (e.g. on Jacobsville
there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2,
and 24 cores sharing L3).
Having a sched domain at the L2 cluster level helps spread load among
L2 domains. This will reduce L2 cache contention and help with
performance for low to moderate load scenarios.
The cluster detection mechanism will need
to be based on L2 cache sharing in this case. I suggest making
cluster detection CPU-architecture dependent so both the ARM64 and x86
use cases can be accommodated.
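For what it's worth, the L2 sharing map such detection would be based on is
already visible to userspace via cacheinfo sysfs; a quick sketch for
inspecting it on a given system (assumes cacheinfo is populated, as on
typical x86 parts):

#include <stdio.h>

int main(void)
{
	char path[128], level[8], shared[256];
	FILE *fp;

	/* walk cpu0's cache leaves and print who shares its L2 */
	for (int idx = 0; idx < 10; idx++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/cache/index%d/level", idx);
		fp = fopen(path, "r");
		if (!fp)
			break;
		level[0] = 0;
		fgets(level, sizeof(level), fp);
		fclose(fp);
		if (level[0] != '2')
			continue;
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/cache/index%d/shared_cpu_list",
			 idx);
		fp = fopen(path, "r");
		if (fp && fgets(shared, sizeof(shared), fp))
			printf("cpus sharing cpu0's L2: %s", shared);
		if (fp)
			fclose(fp);
		break;
	}
	return 0;
}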
Attached below are two RFC patches for creating an x86 L2
cache sched domain, sans the idle cpu selection on wakeup code. They are
similar enough in concept to Barry's patch that we should have a
single patchset that accommodates both use cases.
Thanks.
Tim
From e0e7e42e1a033c9634723ff1dc80b426deeec1e9 Mon Sep 17 00:00:00 2001
Message-Id: <e0e7e42e1a033c9634723ff1dc80b426deeec1e9.1609970726.git.tim.c.chen@linux.intel.com>
In-Reply-To: <[email protected]>
References: <[email protected]>
From: Tim Chen <[email protected]>
Date: Wed, 19 Aug 2020 16:22:35 -0700
Subject: [RFC PATCH 1/2] sched: Add L2 cache cpu mask
There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache
is shared among a group of cores instead of being exclusive
to one single core.
To prevent oversubscription of the L2 cache, load could be
balanced between such L2 domains.
Add CPU masks of CPUs sharing the L2 cache so we can build such
an L2 scheduler domain for load balancing at the L2 level.
Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/include/asm/topology.h | 1 +
arch/x86/kernel/smpboot.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index f4234575f3fd..e35f5f55cb15 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
#include <asm-generic/topology.h>
extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_l2group_mask(int cpu);
#define topology_logical_package_id(cpu) (cpu_data(cpu).logical_proc_id)
#define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 27aa04a95702..8ba0b505f020 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -56,6 +56,7 @@
#include <linux/numa.h>
#include <linux/pgtable.h>
#include <linux/overflow.h>
+#include <linux/cacheinfo.h>
#include <asm/acpi.h>
#include <asm/desc.h>
@@ -643,6 +644,17 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
return cpu_llc_shared_mask(cpu);
}
+const struct cpumask *cpu_l2group_mask(int cpu)
+{
+ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
+
+ /* Sanity check for presence of L2, leaf index 2 */
+ if (ci->num_leaves < 3)
+ return topology_sibling_cpumask(cpu);
+
+ return &ci->info_list[2].shared_cpu_map;
+}
+
static void impress_friends(void)
{
int cpu;
--
2.20.1
From bdc17e2c46bfa5a96edeafde06ead46308bf73e3 Mon Sep 17 00:00:00 2001
Message-Id: <bdc17e2c46bfa5a96edeafde06ead46308bf73e3.1609970726.git.tim.c.chen@linux.intel.com>
In-Reply-To: <[email protected]>
References: <[email protected]>
From: Tim Chen <[email protected]>
Date: Fri, 21 Aug 2020 17:01:22 -0700
Subject: [RFC PATCH 2/2] sched: Build L2 cache scheduler domain for x86
To prevent oversubscription of the L2 cache, load should be balanced
between L2 cache domains.
Add new scheduler domain at the L2 cache level for x86.
On benchmarks such as the SPECrate mcf test, this change provides a
performance boost on a medium-load Jacobsville system.
Note that this added domain level will increase migrations
between CPUs. So it is not necessarily a universal win if
the migration cost of balancing L2 load outweighs the benefit
from reduced L2 contention. This change tends to benefit CPU-bound
threads, which get moved around much less.
Note also that if the L2 sched domain is the same as the SMT sched domain
(i.e. 1 core), it will be degenerate and will not be added unnecessarily
when sched domains are built in the cpu_attach_domain phase. This new
sched domain will only be added when L2 is shared among CPU cores.
The L2 cache information is detected after the initial build of scheduler
domains during boot, so it is necessary to rebuild the sched domains
after all the CPUs have been fully brought up.
Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/Kconfig | 15 +++++++++++++++
arch/x86/kernel/cpu/cacheinfo.c | 3 +++
arch/x86/kernel/smpboot.c | 14 ++++++++++++++
init/main.c | 3 +++
4 files changed, 35 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7101ac64bb20..97775ec16e72 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1014,6 +1014,21 @@ config SCHED_MC
making when dealing with multi-core CPU chips at a cost of slightly
increased overhead in some places. If unsure say N here.
+config SCHED_MC_L2
+ def_bool n
+ prompt "Multi-core scheduler L2 scheduler domain support"
+ depends on SCHED_MC && SMP
+ help
+ Adding a level 2 cache scheduler domain lets the CPU scheduler
+ balance load between L2 caches. This reduces oversubscription
+ of the L2 cache on systems that have multiple CPU cores sharing
+ an L2 cache. This option benefits systems with mostly CPU
+ bound tasks. For tasks that wake up and sleep frequently,
+ this option increases the frequency of task migrations and
+ the load balancing latency.
+
+ If unsure say N here.
+
config SCHED_MC_PRIO
bool "CPU core priorities scheduler support"
depends on SCHED_MC && CPU_SUP_INTEL
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index c7503be92f35..fb3facab58d0 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -1030,6 +1030,9 @@ static int __populate_cache_leaves(unsigned int cpu)
__cache_cpumap_setup(cpu, idx, &id4_regs);
}
this_cpu_ci->cpu_map_populated = true;
+#ifdef CONFIG_SCHED_MC_L2
+ x86_topology_update = true;
+#endif
return 0;
}
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 8ba0b505f020..80cdccd1bcab 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -528,6 +528,14 @@ static int x86_core_flags(void)
{
return cpu_core_flags() | x86_sched_itmt_flags();
}
+
+#ifdef CONFIG_SCHED_MC_L2
+static int x86_l2mc_flags(void)
+{
+ return cpu_core_flags() | x86_sched_itmt_flags();
+}
+#endif
+
#endif
#ifdef CONFIG_SCHED_SMT
static int x86_smt_flags(void)
@@ -542,6 +550,9 @@ static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_MC
+#ifdef CONFIG_SCHED_MC_L2
+ { cpu_l2group_mask, x86_l2mc_flags, SD_INIT_NAME(L2MC) },
+#endif
{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
#endif
{ NULL, },
@@ -552,6 +563,9 @@ static struct sched_domain_topology_level x86_topology[] = {
{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_MC
+#ifdef CONFIG_SCHED_MC_L2
+ { cpu_l2group_mask, x86_l2mc_flags, SD_INIT_NAME(L2MC) },
+#endif
{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
#endif
{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
diff --git a/init/main.c b/init/main.c
index ae78fb68d231..f4f814f8a127 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1405,6 +1405,9 @@ static int __ref kernel_init(void *unused)
ftrace_free_init_mem();
free_initmem();
mark_readonly();
+#ifdef CONFIG_SCHED_MC_L2
+ rebuild_sched_domains();
+#endif
/*
* Kernel mappings are now finalized - update the userspace page-table
--
2.20.1
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> On 1/6/21 12:30 AM, Barry Song wrote:
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > has local L3 tag. On the other hand, each cluster will share some
> > internal system bus. This means cache is much more affine inside one cluster
> > than across clusters.
>
> There is a similar need for clustering in x86. Some x86 cores could share L2 caches that
> is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6 clusters
> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3).
> Having a sched domain at the L2 cluster helps spread load among
> L2 domains. This will reduce L2 cache contention and help with
> performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
between L2 caches while Barry is after consolidating tasks within the
boundaries of a L3 tag cache. One helps cache utilization, the other
communication latency between tasks. Am I missing something?
IMHO, we need some numbers on the table to say which way to go. Looking
at just benchmarks of one type doesn't show that this is a good idea in
general.
Morten
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>> On 1/6/21 12:30 AM, Barry Song wrote:
>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
>>> has local L3 tag. On the other hand, each cluster will share some
>>> internal system bus. This means cache is much more affine inside one cluster
>>> than across clusters.
>>
>> There is a similar need for clustering in x86. Some x86 cores could share L2 caches that
> >> is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6 clusters
>> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3).
>> Having a sched domain at the L2 cluster helps spread load among
>> L2 domains. This will reduce L2 cache contention and help with
>> performance for low to moderate load scenarios.
>
> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> between L2 caches while Barry is after consolidating tasks within the
> boundaries of a L3 tag cache. One helps cache utilization, the other
> communication latency between tasks. Am I missing something?
>
> IMHO, we need some numbers on the table to say which way to go. Looking
> at just benchmarks of one type doesn't show that this is a good idea in
> general.
>
I think it is going to depend on the workload. If there are dependent
tasks that communicate with one another, putting them together
in the same cluster will be the right thing to do to reduce communication
costs. On the other hand, if the tasks are independent, putting them together on the same cluster
will increase resource contention and spreading them out will be better.
Any thoughts on what is the right clustering "tag" to use to clump related tasks together?
Cgroup? Pid? Tasks with same mm?
Tim
> -----Original Message-----
> From: Morten Rasmussen [mailto:[email protected]]
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
>
> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> > On 1/6/21 12:30 AM, Barry Song wrote:
> > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > > has local L3 tag. On the other hand, each cluster will share some
> > > internal system bus. This means cache is much more affine inside one cluster
> > > than across clusters.
> >
> > There is a similar need for clustering in x86. Some x86 cores could share
> L2 caches that
> > is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6 clusters
> > of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
> L3).
> > Having a sched domain at the L2 cluster helps spread load among
> > L2 domains. This will reduce L2 cache contention and help with
> > performance for low to moderate load scenarios.
>
> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> between L2 caches while Barry is after consolidating tasks within the
> boundaries of a L3 tag cache. One helps cache utilization, the other
> communication latency between tasks. Am I missing something?
Morten, this is not true.
We are both actually looking for the same behavior. My patch does
the exact same spreading that Tim's patch does.
Consider the below two cases:
Case 1. we have two tasks without any relationship running in a system with 2 clusters and 8 cpus.
Without the sched_domain of cluster, these two tasks might be placed as below:
+-------------------+ +-----------------+
| +----+ +----+ | | |
| |task| |task| | | |
| |1 | |2 | | | |
| +----+ +----+ | | |
| | | |
| cluster1 | | cluster2 |
+-------------------+ +-----------------+
With the sched_domain of cluster, load balance will spread them as below:
+-------------------+ +-----------------+
| +----+ | | +----+ |
| |task| | | |task| |
| |1 | | | |2 | |
| +----+ | | +----+ |
| | | |
| cluster1 | | cluster2 |
+-------------------+ +-----------------+
Then task1 and task2 get more cache, and cache contention decreases.
They will get better performance.
That is what my original patch achieves, and it is what Tim's patch
does too. Once we add a sched_domain, load balance gets involved.
Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus,
but they are working in 4 pairs:
Task1 wakes up task4
Task2 wakes up task5
Task3 wakes up task6
Task7 wakes up task8
With my change in select_idle_sibling, the WAKE_AFFINE mechanism will
try to put task1 and 4, task2 and 5, task3 and 6, and task7 and 8 in the
same clusters rather than on a random one of the 8 cpus. However, the 8
tasks are still spread among the 8 cpus with my change in
select_idle_sibling, as load balance is still working.
+---------------------------+ +----------------------+
| +----+ +-----+ | | +----+ +-----+ |
| |task| |task | | | |task| |task | |
| |1 | | 4 | | | |2 | |5 | |
| +----+ +-----+ | | +----+ +-----+ |
| | | |
| cluster1 | | cluster2 |
| | | |
| | | |
| +-----+ +------+ | | +-----+ +------+ |
| |task | | task | | | |task | |task | |
| |3 | | 6 | | | |7 | |8 | |
| +-----+ +------+ | | +-----+ +------+ |
+---------------------------+ +----------------------+
Let's consider the 3rd case; that one is more tricky:
task1 and task2 have a close relationship and are a waker-wakee pair.
With my current patch, select_idle_sibling() wants to put them in one
cluster, while load balance wants to put them in two clusters. Load
balance will win.
Then maybe we need a mechanism similar to the one adjusting NUMA imbalance:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/kernel/sched/fair.c?id=b396f52326de20
If we permit a light imbalance between clusters, select_idle_sibling()
will win, and task1 and task2 get better cache affinity.
The 3rd case could be our goal for the next step.
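To make the waker-wakee case concrete, the communication cost can be
measured with a simple pipe ping-pong pair, pinned once inside a cluster
and once across clusters (a sketch; the cpu numbers assume the Kunpeng 920
layout above, where cpus 0-3 form one cluster):

#include <unistd.h>

#define ROUNDS 100000

int main(void)
{
	int ping[2], pong[2];
	char c = 0;

	if (pipe(ping) || pipe(pong))
		return 1;
	if (fork() == 0) {
		/* wakee: each read sleeps until the waker writes */
		for (int i = 0; i < ROUNDS; i++) {
			read(ping[0], &c, 1);
			write(pong[1], &c, 1);
		}
		return 0;
	}
	/* waker: wake the wakee, then wait to be woken back */
	for (int i = 0; i < ROUNDS; i++) {
		write(ping[1], &c, 1);
		read(pong[0], &c, 1);
	}
	return 0;
}

Comparing "time taskset -c 0,1 ./a.out" with "time taskset -c 0,4 ./a.out"
would show how much same-cluster placement helps this wakeup pattern.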
Thanks
Barry
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>
>
> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> > On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >> On 1/6/21 12:30 AM, Barry Song wrote:
> >>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> >>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
> >>> has local L3 tag. On the other hand, each cluster will share some
> >>> internal system bus. This means cache is much more affine inside one cluster
> >>> than across clusters.
> >>
> >> There is a similar need for clustering in x86. Some x86 cores could share L2 caches that
> >> is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6 clusters
> >> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3).
> >> Having a sched domain at the L2 cluster helps spread load among
> >> L2 domains. This will reduce L2 cache contention and help with
> >> performance for low to moderate load scenarios.
> >
> > IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> > between L2 caches while Barry is after consolidating tasks within the
> > boundaries of a L3 tag cache. One helps cache utilization, the other
> > communication latency between tasks. Am I missing something?
> >
> > IMHO, we need some numbers on the table to say which way to go. Looking
> > at just benchmarks of one type doesn't show that this is a good idea in
> > general.
> >
>
> I think it is going to depend on the workload. If there are dependent
> tasks that communicate with one another, putting them together
> in the same cluster will be the right thing to do to reduce communication
> costs. On the other hand, if the tasks are independent, putting them together on the same cluster
> will increase resource contention and spreading them out will be better.
Agree. That is exactly where I'm coming from. This is all about the task
placement policy. We generally tend to spread tasks to avoid resource
contention, SMT and caches, which seems to be what you are proposing to
extend. I think that makes sense given it can produce significant
benefits.
>
> Any thoughts on what is the right clustering "tag" to use to clump
> related tasks together?
> Cgroup? Pid? Tasks with same mm?
I think this is the real question. I think the closest thing we have at
the moment is the wakee/waker flip heuristic. This seems to be related.
Perhaps the wake_affine tricks can serve as starting point?
Morten
On 11/01/2021 10:28, Morten Rasmussen wrote:
> On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>>
>>
>> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
>>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>>>> On 1/6/21 12:30 AM, Barry Song wrote:
[...]
>> I think it is going to depend on the workload. If there are dependent
>> tasks that communicate with one another, putting them together
>> in the same cluster will be the right thing to do to reduce communication
>> costs. On the other hand, if the tasks are independent, putting them together on the same cluster
>> will increase resource contention and spreading them out will be better.
>
> Agree. That is exactly where I'm coming from. This is all about the task
> placement policy. We generally tend to spread tasks to avoid resource
> contention, SMT and caches, which seems to be what you are proposing to
> extend. I think that makes sense given it can produce significant
> benefits.
>
>>
>> Any thoughts on what is the right clustering "tag" to use to clump
>> related tasks together?
>> Cgroup? Pid? Tasks with same mm?
>
> I think this is the real question. I think the closest thing we have at
> the moment is the wakee/waker flip heuristic. This seems to be related.
> Perhaps the wake_affine tricks can serve as starting point?
wake_wide() switches between packing (select_idle_sibling(), llc_size
CPUs) and spreading (find_idlest_cpu(), all CPUs).
AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
all wakeups are (llc-)packed.
select_task_rq_fair()
for_each_domain(cpu, tmp)
if (tmp->flags & sd_flag)
sd = tmp;
In case we would like to further distinguish between llc-packing and
even narrower (cluster or MC-L2) packing, we would introduce a 2nd-level
packing vs. spreading heuristic further down in sis().
IMHO, Barry's current implementation doesn't do this right now. Instead
he tries to pack on the cluster first and, if not successful, looks
further among the remaining llc CPUs for an idle CPU.
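A toy userspace model of that scan order (illustrative only; the real
logic lives in select_idle_cpu(), and the cpu/cluster numbering here is
assumed):

#include <stdio.h>

#define NCPUS		8
#define CLUSTER_SZ	4

static int idle[NCPUS] = { 0, 0, 1, 0, 1, 1, 1, 1 };

static int scan_model(int target)
{
	int cbase = target / CLUSTER_SZ * CLUSTER_SZ;

	/* 1st: scan the target's cluster for an idle cpu */
	for (int cpu = cbase; cpu < cbase + CLUSTER_SZ; cpu++)
		if (idle[cpu])
			return cpu;
	/* 2nd: scan the remaining llc cpus, skipping the cluster
	 * already scanned (the double scan v3 removed from v2) */
	for (int cpu = 0; cpu < NCPUS; cpu++)
		if ((cpu < cbase || cpu >= cbase + CLUSTER_SZ) && idle[cpu])
			return cpu;
	return target;
}

int main(void)
{
	printf("wakee placed on cpu %d\n", scan_model(0));
	return 0;
}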
On 08/01/2021 22:30, Song Bao Hua (Barry Song) wrote:
>
>> -----Original Message-----
>> From: Morten Rasmussen [mailto:[email protected]]
>> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
>> add cluster scheduler
>>
>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>>> On 1/6/21 12:30 AM, Barry Song wrote:
>>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
>>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
>>>> has local L3 tag. On the other hand, each cluster will share some
>>>> internal system bus. This means cache is much more affine inside one cluster
>>>> than across clusters.
>>>
>>> There is a similar need for clustering in x86. Some x86 cores could share
>> L2 caches that
>>> is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6 clusters
>>> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
>> L3).
>>> Having a sched domain at the L2 cluster helps spread load among
>>> L2 domains. This will reduce L2 cache contention and help with
>>> performance for low to moderate load scenarios.
>>
>> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
>> between L2 caches while Barry is after consolidating tasks within the
>> boundaries of a L3 tag cache. One helps cache utilization, the other
>> communication latency between tasks. Am I missing something?
>
> Morten, this is not true.
>
> we are both actually looking for the same behavior. My patch also
> has done the exact same behavior of spreading with Tim's patch.
That's the case for the load-balance path because of the extra Sched
Domain (SD) (CLS/MC_L2) below MC.
But in wakeup you add code which leads to a different packing strategy.
It looks like Tim's workload (SPECrate mcf) shows a performance
boost solely because of the changes the additional MC_L2 SD introduces
in load balance. The wakeup path is unchanged, i.e. llc-packing. IMHO we
have to carefully distinguish between packing vs. spreading in wakeup
and load-balance here.
> Considering the below two cases:
> Case 1. we have two tasks without any relationship running in a system with 2 clusters and 8 cpus.
>
> Without the sched_domain of cluster, these two tasks might be put as below:
> +-------------------+ +-----------------+
> | +----+ +----+ | | |
> | |task| |task| | | |
> | |1 | |2 | | | |
> | +----+ +----+ | | |
> | | | |
> | cluster1 | | cluster2 |
> +-------------------+ +-----------------+
> With the sched_domain of cluster, load balance will spread them as below:
> +-------------------+ +-----------------+
> | +----+ | | +----+ |
> | |task| | | |task| |
> | |1 | | | |2 | |
> | +----+ | | +----+ |
> | | | |
> | cluster1 | | cluster2 |
> +-------------------+ +-----------------+
>
> Then task1 and tasks2 get more cache and decrease cache contention.
> They will get better performance.
>
> That is what my original patch also can make. And tim's patch
> is also doing. Once we add a sched_domain, load balance will
> get involved.
>
>
> Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus.
> But they are working in 4 groups:
> Task1 wakes up task4
> Task2 wakes up task5
> Task3 wakes up task6
> Task7 wakes up task8
>
> With my changing in select_idle_sibling, the WAKE_AFFINE mechanism will
> try to put task1 and 4, task2 and 5, task3 and 6, task4 and 7 in same clusters rather
> than putting all of them in the random one of the 8 cpus. However, the 8 tasks
> are still spreading among the 8 cpus with my change in select_idle_sibling
> as load balance is still working.
>
> +---------------------------+ +----------------------+
> | +----+ +-----+ | | +----+ +-----+ |
> | |task| |task | | | |task| |task | |
> | |1 | | 4 | | | |2 | |5 | |
> | +----+ +-----+ | | +----+ +-----+ |
> | | | |
> | cluster1 | | cluster2 |
> | | | |
> | | | |
> | +-----+ +------+ | | +-----+ +------+ |
> | |task | | task | | | |task | |task | |
> | |3 | | 6 | | | |7 | |8 | |
> | +-----+ +------+ | | +-----+ +------+ |
> +---------------------------+ +----------------------+
Your use-case (#tasks, runtime/period) seems perfectly crafted to
show the benefit of your patch on your specific system (cluster size =
4). IMHO, this extra infrastructure, especially in the wakeup path, should
show benefits over a range of different benchmarks.
> Let's consider the 3rd case, that one would be more tricky:
>
> task1 and task2 have close relationship and they are waker-wakee pair.
> With my current patch, select_idle_sibling() wants to put them in one
> cluster, load balance wants to put them in two clusters. Load balance will win.
> Then maybe we need some same mechanism like adjusting numa imbalance:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/kernel/sched/fair.c?id=b396f52326de20
> if we permit a light imbalance between clusters, select_idle_sibling()
> will win. And task1 and task2 get better cache affinity.
It would look weird to allow this kind of imbalance on the CLS (MC_L2) and
NUMA domains but not on the MC domain, for example.
> -----Original Message-----
> From: Dietmar Eggemann [mailto:[email protected]]
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
>
> On 11/01/2021 10:28, Morten Rasmussen wrote:
> > On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
> >>
> >>
> >> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> >>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >>>> On 1/6/21 12:30 AM, Barry Song wrote:
>
> [...]
>
> >> I think it is going to depend on the workload. If there are dependent
> >> tasks that communicate with one another, putting them together
> >> in the same cluster will be the right thing to do to reduce communication
> >> costs. On the other hand, if the tasks are independent, putting them together
> on the same cluster
> >> will increase resource contention and spreading them out will be better.
> >
> > Agree. That is exactly where I'm coming from. This is all about the task
> > placement policy. We generally tend to spread tasks to avoid resource
> > contention, SMT and caches, which seems to be what you are proposing to
> > extend. I think that makes sense given it can produce significant
> > benefits.
> >
> >>
> >> Any thoughts on what is the right clustering "tag" to use to clump
> >> related tasks together?
> >> Cgroup? Pid? Tasks with same mm?
> >
> > I think this is the real question. I think the closest thing we have at
> > the moment is the wakee/waker flip heuristic. This seems to be related.
> > Perhaps the wake_affine tricks can serve as starting point?
>
> wake_wide() switches between packing (select_idle_sibling(), llc_size
> CPUs) and spreading (find_idlest_cpu(), all CPUs).
>
> AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
> all wakeups are (llc-)packed.
Sorry for the late response. I was struggling with some other topology
issues recently.
For "all wakeups are (llc-)packed",
it seems you mean that the current want_affine only affects new_cpu,
and in the wake-up path we will always go to select_idle_sibling() rather
than find_idlest_cpu(), since nobody sets SD_BALANCE_WAKE in any
sched_domain?
>
> select_task_rq_fair()
>
> for_each_domain(cpu, tmp)
>
> if (tmp->flags & sd_flag)
> sd = tmp;
>
>
> In case we would like to further distinguish between llc-packing and
> even narrower (cluster or MC-L2) packing, we would introduce a 2nd-level
> packing vs. spreading heuristic further down in sis().
I didn't get your point on "2nd-level packing". Would you like
to describe it more? It seems you mean we need separate
calculation of avg_scan_cost and sched_feat(SIS_) for the cluster
(or MC-L2), since cluster and llc are not at the same level
physically?
>
> IMHO, Barry's current implementation doesn't do this right now. Instead
> he's trying to pack on cluster first and if not successful look further
> among the remaining llc CPUs for an idle CPU.
Yes. That is exactly what the current patch is doing.
Thanks
Barry
> -----Original Message-----
> From: Dietmar Eggemann [mailto:[email protected]]
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
>
> On 08/01/2021 22:30, Song Bao Hua (Barry Song) wrote:
> >
> >> -----Original Message-----
> >> From: Morten Rasmussen [mailto:[email protected]]
> >> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters
> and
> >> add cluster scheduler
> >>
> >> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >>> On 1/6/21 12:30 AM, Barry Song wrote:
> >>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> >>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
> >>>> has local L3 tag. On the other hand, each cluster will share some
> >>>> internal system bus. This means cache is much more affine inside one cluster
> >>>> than across clusters.
> >>>
> >>> There is a similar need for clustering in x86. Some x86 cores could share
> >> L2 caches that
> >>> is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6
> clusters
> >>> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
> >> L3).
> >>> Having a sched domain at the L2 cluster helps spread load among
> >>> L2 domains. This will reduce L2 cache contention and help with
> >>> performance for low to moderate load scenarios.
> >>
> >> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> >> between L2 caches while Barry is after consolidating tasks within the
> >> boundaries of a L3 tag cache. One helps cache utilization, the other
> >> communication latency between tasks. Am I missing something?
> >
> > Morten, this is not true.
> >
> > we are both actually looking for the same behavior. My patch also
> > has done the exact same behavior of spreading with Tim's patch.
>
> That's the case for the load-balance path because of the extra Sched
> Domain (SD) (CLS/MC_L2) below MC.
>
> But in wakeup you add code which leads to a different packing strategy.
Yes, but I put a note for the 1st case:
"Case 1. we have two tasks *without* any relationship running in a system
with 2 clusters and 8 cpus"
so for tasks without a wake-up relationship, the current patch will only
result in spreading.
Anyway, I will also test Tim's benchmark on Kunpeng 920 with SCHED_CLUSTER
to see what happens. So far, benchmarking has only covered the benefit of
changing the wake-up path.
I would also be interested in figuring out what we gain from the change
to load_balance().
>
> It looks like Tim's workload (SPECrate mcf) shows a performance
> boost solely because of the changes the additional MC_L2 SD introduces
> in load balance. The wakeup path is unchanged, i.e. llc-packing. IMHO we
> have to carefully distinguish between packing vs. spreading in wakeup
> and load-balance here.
>
> > Considering the below two cases:
> > Case 1. we have two tasks without any relationship running in a system with
> 2 clusters and 8 cpus.
> >
> > Without the sched_domain of cluster, these two tasks might be put as below:
> > +-------------------+ +-----------------+
> > | +----+ +----+ | | |
> > | |task| |task| | | |
> > | |1 | |2 | | | |
> > | +----+ +----+ | | |
> > | | | |
> > | cluster1 | | cluster2 |
> > +-------------------+ +-----------------+
> > With the sched_domain of cluster, load balance will spread them as below:
> > +-------------------+ +-----------------+
> > | +----+ | | +----+ |
> > | |task| | | |task| |
> > | |1 | | | |2 | |
> > | +----+ | | +----+ |
> > | | | |
> > | cluster1 | | cluster2 |
> > +-------------------+ +-----------------+
> >
> > Then task1 and task2 get more cache and decrease cache contention.
> > They will get better performance.
> >
> > That is what my original patch achieves, and what Tim's patch
> > is also doing. Once we add a sched_domain, load balance will
> > get involved.
> >
> >
> > Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus.
> > But they are working in 4 groups:
> > Task1 wakes up task4
> > Task2 wakes up task5
> > Task3 wakes up task6
> > Task7 wakes up task8
> >
> > With my change in select_idle_sibling(), the WAKE_AFFINE mechanism will
> > try to put task1 and 4, task2 and 5, task3 and 6, and task7 and 8 in the
> > same clusters rather than putting all of them on random ones of the 8 cpus.
> > However, the 8 tasks still spread among the 8 cpus with my change in
> > select_idle_sibling() as load balance is still working.
> >
> > +---------------------------+ +----------------------+
> > | +----+ +-----+ | | +----+ +-----+ |
> > | |task| |task | | | |task| |task | |
> > | |1 | | 4 | | | |2 | |5 | |
> > | +----+ +-----+ | | +----+ +-----+ |
> > | | | |
> > | cluster1 | | cluster2 |
> > | | | |
> > | | | |
> > | +-----+ +------+ | | +-----+ +------+ |
> > | |task | | task | | | |task | |task | |
> > | |3 | | 6 | | | |7 | |8 | |
> > | +-----+ +------+ | | +-----+ +------+ |
> > +---------------------------+ +----------------------+
>
> Your use-case (#tasks, runtime/period) seems to be perfectly crafted to
> show the benefit of your patch on your specific system (cluster-size =
> 4). IMHO, this extra infrastructure especially in the wakeup path should
> show benefits over a range of different benchmarks.
>
> > Let's consider the 3rd case, that one would be more tricky:
> >
> > task1 and task2 have a close relationship and they are a waker-wakee pair.
> > With my current patch, select_idle_sibling() wants to put them in one
> > cluster, while load balance wants to put them in two clusters. Load
> > balance will win. Then maybe we need some mechanism like the one for
> > adjusting NUMA imbalance:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/kernel/sched/fair.c?id=b396f52326de20
> > If we permit a light imbalance between clusters, select_idle_sibling()
> > will win, and task1 and task2 get better cache affinity.
>
> This would look weird to allow this kind of imbalance on CLS (MC_L2) and
> NUMA domains but not on the MC domain for example.
Yes. I guess I actually meant permitting imbalance between the sched_groups
made by the child sched_cluster domains of the parent sched_mc domain.
sched_mc domain
+----------------------------------+
| +--------+ +----------+ |
| |sched_ | |sched_ | |
| |group | |group | |
| +--+-----+ +----+-----+ |
| | allow small | |
| | imbalance | |
+----------------------------------+
| |
| |
| |
| |
| |
+ +
child domain: child domain:
sched_cluster sched_cluster
For sched_groups within one sched_cluster domain, we don't allow this
kind of imbalance.

Anyway, I would be happier if this kind of imbalance were only allowed
when we know for certain that two tasks in the cluster have a wake-up
relationship. Right now, SD_NUMA seems to simply allow this imbalance
without any knowledge of the relationships between the tasks causing it.
Thanks
Barry
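
For reference, a minimal sketch of how that NUMA-style exception could be
mirrored at the cluster level is below. This is not part of the patchset;
the SD_CLUSTER flag, the helper name and the threshold are all illustrative
assumptions, following the shape of the SD_NUMA logic in
kernel/sched/fair.c (commit b396f52326de20):

/*
 * Sketch only: tolerate a small imbalance between sched_groups whose
 * child domains are clusters, analogous to the SD_NUMA special case.
 * SD_CLUSTER and the threshold value are assumptions, not kernel API.
 */
static inline void adjust_cluster_imbalance(struct lb_env *env,
					    struct sg_lb_stats *busiest)
{
	unsigned int imbalance_min = 2;	/* illustrative threshold */

	if (env->sd->child && (env->sd->child->flags & SD_CLUSTER) &&
	    busiest->sum_nr_running <= imbalance_min)
		env->imbalance = 0;	/* let wake affinity win */
}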
On 25/01/2021 11:50, Song Bao Hua (Barry Song) wrote:
>
>> -----Original Message-----
>> From: Dietmar Eggemann [mailto:[email protected]]
>> Sent: Wednesday, January 13, 2021 12:00 AM
>> To: Morten Rasmussen <[email protected]>; Tim Chen
>> <[email protected]>
>> Cc: Song Bao Hua (Barry Song) <[email protected]>; [...]
>> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
>> add cluster scheduler
>>
>> On 11/01/2021 10:28, Morten Rasmussen wrote:
>>> On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>>>>
>>>>
>>>> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
>>>>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>>>>>> On 1/6/21 12:30 AM, Barry Song wrote:
[...]
>> wake_wide() switches between packing (select_idle_sibling(), llc_size
>> CPUs) and spreading (find_idlest_cpu(), all CPUs).
>>
>> AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
>> all wakeups are (llc-)packed.
>
> Sorry for the late response. I was struggling with some other topology
> issues recently.
>
> For "all wakeups are (llc-)packed",
> it seems you mean current want_affine is only affecting the new_cpu,
> and for wake-up path, we will always go to select_idle_sibling() rather
> than find_idlest_cpu() since nobody sets SD_WAKE_BALANCE in any
> sched_domain ?
>
>>
>> select_task_rq_fair()
>>
>> for_each_domain(cpu, tmp)
>>
>> if (tmp->flags & sd_flag)
>> sd = tmp;
>>
>>
>> In case we would like to further distinguish between llc-packing and
>> even narrower (cluster or MC-L2)-packing, we would introduce a 2nd-level
>> packing vs. spreading heuristic further down in sis().
>
> I didn't get your point on "2nd-level packing". Would you like
> to describe it more? It seems you mean we need separate
> calculations of avg_scan_cost and sched_feat(SIS_) for the cluster
> (or MC-L2) since the cluster and the llc are not at the same level
> physically?
By '1st-level packing' I meant going through sis() (i.e. sd = per_cpu(sd_llc,
target)) instead of routing WF_TTWU through find_idlest_cpu(), which uses
a broader sd span (in case all sd's, or at least sd's up to one > llc,
had SD_BALANCE_WAKE set).

wake_wide() (the wakee/waker flip heuristic) is currently used to make this
decision. But since no sd sets SD_BALANCE_WAKE, we always go through sis()
for WF_TTWU.

'2nd-level packing' would be the decision between cluster- and
llc-packing. The question was which heuristic could be used here.
>> IMHO, Barry's current implementation doesn't do this right now. Instead
>> he's trying to pack on cluster first and if not successful look further
>> among the remaining llc CPUs for an idle CPU.
>
> Yes. That is exactly what the current patch is doing.
And this favors cluster- over llc-packing for each task instead.
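
To make that ordering concrete, here is a rough sketch of a cluster-first
idle search; the helper name and its fallback convention are hypothetical,
and in the real patch this logic sits inside select_idle_sibling():

/*
 * Hypothetical helper sketching the "pack on cluster first" step.
 * Returns an idle CPU inside @target's cluster, or -1 so that the
 * caller can fall back to the usual llc-wide scan.
 */
static int select_idle_cluster(struct task_struct *p, int target)
{
	int cpu;

	for_each_cpu_and(cpu, topology_cluster_cpumask(target), p->cpus_ptr) {
		if (available_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}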
> -----Original Message-----
> From: Tim Chen [mailto:[email protected]]
> Sent: Friday, January 8, 2021 12:17 PM
> To: Song Bao Hua (Barry Song) <[email protected]>; [...]
> Cc: [...]
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
>
>
>
> On 1/6/21 12:30 AM, Barry Song wrote:
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > has local L3 tag. On the other hand, each cluster will share some
> > internal system bus. This means cache is much more affine inside one cluster
> > than across clusters.
> >
> > [...]
> >
>
> There is a similar need for clustering in x86. Some x86 cores could share L2
> caches in a way that is similar to the cluster in Kunpeng 920 (e.g. on
> Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a
> separate L2, and 24 cores sharing L3). Having a sched domain at the L2
> cluster helps spread load among L2 domains. This will reduce L2 cache
> contention and help with performance for low to moderate load scenarios.
>
> The cluster detection mechanism will need to be based on L2 cache sharing
> in this case. I suggest making the cluster detection CPU architecture
> dependent so both the ARM64 and x86 use cases can be accommodated.
>
> Attached below are two RFC patches for creating x86 L2
> cache sched domain, sans the idle cpu selection on wake up code. It is
> similar enough in concept to Barry's patch that we should have a
> single patchset that accommodates both use cases.
Hi Tim, agreed on this.
Hopefully the RFC v4 I am preparing will cover your case.
>
> Thanks.
>
> Tim
Thanks
Barry
On Wed, Jan 06, 2021 at 09:30:25PM +1300, Barry Song wrote:
> From: Jonathan Cameron <[email protected]>
>
> Both ACPI and DT provide the ability to describe additional layers of
> topology between that of individual cores and higher level constructs
> such as the level at which the last level cache is shared.
> In ACPI this can be represented in PPTT as a Processor Hierarchy
> Node Structure [1] that is the parent of the CPU cores and in turn
> has a parent Processor Hierarchy Nodes Structure representing
> a higher level of topology.
>
> For example Kunpeng 920 has 6 clusters in each NUMA node, and each
> cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> has local L3 tag. On the other hand, each cluster will share some
> internal system bus.
>
> +-----------------------------------+ +---------+
> | +------+ +------+ +---------------------------+ |
> | | CPU0 | | cpu1 | | +-----------+ | |
> | +------+ +------+ | | | | |
> | +----+ L3 | | |
> | +------+ +------+ cluster | | tag | | |
> | | CPU2 | | CPU3 | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | |
> +-----------------------------------+ | |
> | +------+ +------+ +--------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
> | | | L3 | | |
> | +------+ +------+ +----+ tag | | |
> | | | | | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | L3 |
> | data |
> +-----------------------------------+ | |
> | +------+ +------+ | +-----------+ | |
> | | | | | | | | | |
> | +------+ +------+ +----+ L3 | | |
> | | | tag | | |
> | +------+ +------+ | | | | |
> | | | | | ++ +-----------+ | |
> | +------+ +------+ |---------------------------+ |
> +-----------------------------------| | |
> +-----------------------------------| | |
> | +------+ +------+ +---------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
> | +----+ L3 | | |
> | +------+ +------+ | | tag | | |
> | | | | | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | |
> +-----------------------------------+ | |
> | +------+ +------+ +--------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
> | | | L3 | | |
> | +------+ +------+ +---+ tag | | |
> | | | | | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | | |
> +-----------------------------------+ | |
> +-----------------------------------+ ++ |
> | +------+ +------+ +--------------------------+ |
> | | | | | | +-----------+ | |
> | +------+ +------+ | | | | |
> | | | L3 | | |
> | +------+ +------+ +--+ tag | | |
> | | | | | | | | | |
> | +------+ +------+ | +-----------+ | |
> | | +---------+
> +-----------------------------------+
>
> That means the cost to transfer ownership of a cacheline between CPUs
> within a cluster is lower than between CPUs in different clusters on
> the same die. Hence, it can make sense to tell the scheduler to use
> the cache affinity of the cluster to make better decisions on thread
> migration.
>
> This patch simply exposes this information to userspace libraries
> like hwloc by providing cluster_cpus and related sysfs attributes.
> PoC of HWLOC support at [2].
>
> Note this patch only handles the ACPI case.
>
> Special consideration is needed for SMT processors, where it is
> necessary to move 2 levels up the hierarchy from the leaf nodes
> (thus skipping the processor core level).
>
> Currently the ID provided is the offset of the Processor
> Hierarchy Nodes Structure within PPTT. Whilst this is unique
> it is not terribly elegant so alternative suggestions welcome.
>
> Note that arm64 / ACPI does not provide any means of identifying
> a die level in the topology, but that may be unrelated to the cluster
> level.
>
> [1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
> structure (Type 0)
> [2] https://github.com/hisilicon/hwloc/tree/linux-cluster
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Barry Song <[email protected]>
> ---
> -v3:
> * no real change (v2 comments not addressed);
> * rebased against 5.11-rc2;
>
> Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
> arch/arm64/kernel/topology.c | 2 ++
> drivers/acpi/pptt.c | 60 +++++++++++++++++++++++++++++++
> drivers/base/arch_topology.c | 14 ++++++++
> drivers/base/topology.c | 10 ++++++
> include/linux/acpi.h | 5 +++
> include/linux/arch_topology.h | 5 +++
> include/linux/topology.h | 6 ++++
> 8 files changed, 124 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
> index b90dafc..f9d3745 100644
> --- a/Documentation/admin-guide/cputopology.rst
> +++ b/Documentation/admin-guide/cputopology.rst
> @@ -24,6 +24,12 @@ core_id:
> identifier (rather than the kernel's). The actual value is
> architecture and platform dependent.
>
> +cluster_id:
> +
> + the Cluster ID of cpuX. Typically it is the hardware platform's
> + identifier (rather than the kernel's). The actual value is
> + architecture and platform dependent.
> +
> book_id:
>
> the book ID of cpuX. Typically it is the hardware platform's
> @@ -56,6 +62,14 @@ package_cpus_list:
> human-readable list of CPUs sharing the same physical_package_id.
> (deprecated name: "core_siblings_list")
>
> +cluster_cpus:
> +
> + internal kernel map of CPUs within the same cluster.
> +
> +cluster_cpus_list:
> +
> + human-readable list of CPUs within the same cluster.
> +
> die_cpus:
>
> internal kernel map of CPUs within the same die.
> @@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
>
> #define topology_physical_package_id(cpu)
> #define topology_die_id(cpu)
> + #define topology_cluster_id(cpu)
> #define topology_core_id(cpu)
> #define topology_book_id(cpu)
> #define topology_drawer_id(cpu)
> #define topology_sibling_cpumask(cpu)
> #define topology_core_cpumask(cpu)
> + #define topology_cluster_cpumask(cpu)
> #define topology_die_cpumask(cpu)
> #define topology_book_cpumask(cpu)
> #define topology_drawer_cpumask(cpu)
> @@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
>
> 1) topology_physical_package_id: -1
> 2) topology_die_id: -1
> -3) topology_core_id: 0
> -4) topology_sibling_cpumask: just the given CPU
> -5) topology_core_cpumask: just the given CPU
> -6) topology_die_cpumask: just the given CPU
> +3) topology_cluster_id: -1
> +4) topology_core_id: 0
> +5) topology_sibling_cpumask: just the given CPU
> +6) topology_core_cpumask: just the given CPU
> +7) topology_cluster_cpumask: just the given CPU
> +8) topology_die_cpumask: just the given CPU
>
> For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
> default definitions for topology_book_id() and topology_book_cpumask().
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index f6faa69..fe076b3 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
> cpu_topology[cpu].thread_id = -1;
> cpu_topology[cpu].core_id = topology_id;
> }
> + topology_id = find_acpi_cpu_topology_cluster(cpu);
> + cpu_topology[cpu].cluster_id = topology_id;
> topology_id = find_acpi_cpu_topology_package(cpu);
> cpu_topology[cpu].package_id = topology_id;
>
> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
> index 4ae9335..8646a93 100644
> --- a/drivers/acpi/pptt.c
> +++ b/drivers/acpi/pptt.c
> @@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
> }
>
> /**
> + * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
> + * @cpu: Kernel logical CPU number
> + *
> + * Determine a topology unique cluster ID for the given CPU/thread.
> + * This ID can then be used to group peers, which will have matching ids.
> + *
> + * The cluster, if present, is the level of topology above CPUs. In a
> + * multi-thread CPU, it will be the level above the CPU, not the thread.
> + * It may not exist in single CPU systems. In simple multi-CPU systems,
> + * it may be equal to the package topology level.
> + *
> + * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found,
> + * or there is no topology level above the CPU.
> + * Otherwise returns a value which represents the cluster for this CPU.
> + */
> +
> +int find_acpi_cpu_topology_cluster(unsigned int cpu)
> +{
> + struct acpi_table_header *table;
> + acpi_status status;
> + struct acpi_pptt_processor *cpu_node, *cluster_node;
> + int retval;
> + int is_thread;
> +
> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> + if (ACPI_FAILURE(status)) {
> + acpi_pptt_warn_missing();
> + return -ENOENT;
> + }
> + cpu_node = acpi_find_processor_node(table, cpu);
The second argument of acpi_find_processor_node() expects an ACPI processor
ID, not the logical CPU number.
How about the following?
acpi_cpu_id = get_acpi_id_for_cpu(cpu);
cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
Thanks,
Masa
> + if (cpu_node == NULL || !cpu_node->parent) {
> + retval = -ENOENT;
> + goto put_table;
> + }
> +
> + is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
> + cluster_node = fetch_pptt_node(table, cpu_node->parent);
> + if (cluster_node == NULL) {
> + retval = -ENOENT;
> + goto put_table;
> + }
> + if (is_thread) {
> + if (!cluster_node->parent) {
> + retval = -ENOENT;
> + goto put_table;
> + }
> + cluster_node = fetch_pptt_node(table, cluster_node->parent);
> + if (cluster_node == NULL) {
> + retval = -ENOENT;
> + goto put_table;
> + }
> + }
> + retval = ACPI_PTR_DIFF(cluster_node, table);
> +put_table:
> + acpi_put_table(table);
> +
> + return retval;
> +}
> +
> +/**
> * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
> * @cpu: Kernel logical CPU number
> *
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index de8587c..3079232 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
> return core_mask;
> }
>
> +const struct cpumask *cpu_clustergroup_mask(int cpu)
> +{
> + return &cpu_topology[cpu].cluster_sibling;
> +}
> +
> void update_siblings_masks(unsigned int cpuid)
> {
> struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
> @@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
> if (cpuid_topo->package_id != cpu_topo->package_id)
> continue;
>
> + if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
> + cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
> + cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
> + }
> +
> cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
> cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
>
> @@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
> cpumask_clear(&cpu_topo->llc_sibling);
> cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
>
> + cpumask_clear(&cpu_topo->cluster_sibling);
> + cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
> +
> cpumask_clear(&cpu_topo->core_sibling);
> cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
> cpumask_clear(&cpu_topo->thread_sibling);
> @@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu)
> cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
> for_each_cpu(sibling, topology_sibling_cpumask(cpu))
> cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
> +
> for_each_cpu(sibling, topology_llc_cpumask(cpu))
> cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
>
> diff --git a/drivers/base/topology.c b/drivers/base/topology.c
> index 4d254fc..7157ac0 100644
> --- a/drivers/base/topology.c
> +++ b/drivers/base/topology.c
> @@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id);
> define_id_show_func(die_id);
> static DEVICE_ATTR_RO(die_id);
>
> +define_id_show_func(cluster_id);
> +static DEVICE_ATTR_RO(cluster_id);
> +
> define_id_show_func(core_id);
> static DEVICE_ATTR_RO(core_id);
>
> @@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask);
> static DEVICE_ATTR_RO(core_siblings);
> static DEVICE_ATTR_RO(core_siblings_list);
>
> +define_siblings_show_func(cluster_cpus, cluster_cpumask);
> +static DEVICE_ATTR_RO(cluster_cpus);
> +static DEVICE_ATTR_RO(cluster_cpus_list);
> +
> define_siblings_show_func(die_cpus, die_cpumask);
> static DEVICE_ATTR_RO(die_cpus);
> static DEVICE_ATTR_RO(die_cpus_list);
> @@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list);
> static struct attribute *default_attrs[] = {
> &dev_attr_physical_package_id.attr,
> &dev_attr_die_id.attr,
> + &dev_attr_cluster_id.attr,
> &dev_attr_core_id.attr,
> &dev_attr_thread_siblings.attr,
> &dev_attr_thread_siblings_list.attr,
> @@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = {
> &dev_attr_core_cpus_list.attr,
> &dev_attr_core_siblings.attr,
> &dev_attr_core_siblings_list.attr,
> + &dev_attr_cluster_cpus.attr,
> + &dev_attr_cluster_cpus_list.attr,
> &dev_attr_die_cpus.attr,
> &dev_attr_die_cpus_list.attr,
> &dev_attr_package_cpus.attr,
> diff --git a/include/linux/acpi.h b/include/linux/acpi.h
> index 2630c2e..8f603cd 100644
> --- a/include/linux/acpi.h
> +++ b/include/linux/acpi.h
> @@ -1325,6 +1325,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
> #ifdef CONFIG_ACPI_PPTT
> int acpi_pptt_cpu_is_thread(unsigned int cpu);
> int find_acpi_cpu_topology(unsigned int cpu, int level);
> +int find_acpi_cpu_topology_cluster(unsigned int cpu);
> int find_acpi_cpu_topology_package(unsigned int cpu);
> int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
> int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
> @@ -1337,6 +1338,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
> {
> return -EINVAL;
> }
> +static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
> +{
> + return -EINVAL;
> +}
> static inline int find_acpi_cpu_topology_package(unsigned int cpu)
> {
> return -EINVAL;
> diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
> index 0f6cd6b..987c7ea 100644
> --- a/include/linux/arch_topology.h
> +++ b/include/linux/arch_topology.h
> @@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
> struct cpu_topology {
> int thread_id;
> int core_id;
> + int cluster_id;
> int package_id;
> int llc_id;
> cpumask_t thread_sibling;
> cpumask_t core_sibling;
> + cpumask_t cluster_sibling;
> cpumask_t llc_sibling;
> };
>
> @@ -60,13 +62,16 @@ struct cpu_topology {
> extern struct cpu_topology cpu_topology[NR_CPUS];
>
> #define topology_physical_package_id(cpu) (cpu_topology[cpu].package_id)
> +#define topology_cluster_id(cpu) (cpu_topology[cpu].cluster_id)
> #define topology_core_id(cpu) (cpu_topology[cpu].core_id)
> #define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
> #define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
> +#define topology_cluster_cpumask(cpu) (&cpu_topology[cpu].cluster_sibling)
> #define topology_llc_cpumask(cpu) (&cpu_topology[cpu].llc_sibling)
> void init_cpu_topology(void);
> void store_cpu_topology(unsigned int cpuid);
> const struct cpumask *cpu_coregroup_mask(int cpu);
> +const struct cpumask *cpu_clustergroup_mask(int cpu);
> void update_siblings_masks(unsigned int cpu);
> void remove_cpu_topology(unsigned int cpuid);
> void reset_cpu_topology(void);
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index ad03df1..bf2cc3c 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu)
> #ifndef topology_die_id
> #define topology_die_id(cpu) ((void)(cpu), -1)
> #endif
> +#ifndef topology_cluster_id
> +#define topology_cluster_id(cpu) ((void)(cpu), -1)
> +#endif
> #ifndef topology_core_id
> #define topology_core_id(cpu) ((void)(cpu), 0)
> #endif
> @@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu)
> #ifndef topology_core_cpumask
> #define topology_core_cpumask(cpu) cpumask_of(cpu)
> #endif
> +#ifndef topology_cluster_cpumask
> +#define topology_cluster_cpumask(cpu) cpumask_of(cpu)
> +#endif
> #ifndef topology_die_cpumask
> #define topology_die_cpumask(cpu) cpumask_of(cpu)
> #endif
> --
> 2.7.4
>
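For reference, with this patch applied the new attributes appear under each
CPU's topology directory in sysfs. The values below are illustrative for a
machine whose first cluster spans cpu0-3; since the cluster_id is a PPTT
offset, the exact number is platform dependent:

$ cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list
0-3
$ cat /sys/devices/system/cpu/cpu0/topology/cluster_id
56
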
On 2/3/21 3:32 AM, Song Bao Hua (Barry Song) wrote:
>>
>> Attached below are two RFC patches for creating x86 L2
>> cache sched domain, sans the idle cpu selection on wake up code. It is
>> similar enough in concept to Barry's patch that we should have a
>> single patchset that accommodates both use cases.
>
> Hi Tim, Agreed on this.
> hopefully the RFC v4 I am preparing will cover your case.
>
Barry,
I've taken a crack at it. Attached is a patch on top of your
v3 patches to implement an L2 cluster sched domain for x86.
Thanks.
Tim
---->8------
From 9189e489b019e110ee6e9d4183e243e48f44ff25 Mon Sep 17 00:00:00 2001
From: Tim Chen <[email protected]>
Date: Tue, 16 Feb 2021 08:24:39 -0800
Subject: [RFC PATCH] scheduler: Add cluster scheduler level for x86
To: <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>
Cc: <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, <[email protected]>, Jonathan Cameron <[email protected]>, Barry Song <[email protected]>
There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
is shared among a cluster of cores instead of being exclusive
to one single core.
To prevent oversubscription of L2 cache, load should be
balanced between such L2 clusters, especially for tasks with
no shared data.
Also, with a cluster scheduling policy where tasks are woken up
in the same L2 cluster, we will benefit from keeping tasks
related to each other, and likely sharing data, in the same L2
cluster.
Add CPU masks of CPUs sharing the L2 cache so we can build such
L2 cluster scheduler domain.
Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/Kconfig | 8 ++++++
arch/x86/include/asm/smp.h | 7 ++++++
arch/x86/include/asm/topology.h | 1 +
arch/x86/kernel/cpu/cacheinfo.c | 1 +
arch/x86/kernel/cpu/common.c | 3 +++
arch/x86/kernel/smpboot.c | 43 ++++++++++++++++++++++++++++++++-
6 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 21f851179ff0..10fc95005df7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1008,6 +1008,14 @@ config NR_CPUS
This is purely to save memory: each supported CPU adds about 8KB
to the kernel image.
+config SCHED_CLUSTER
+ bool "Cluster scheduler support"
+ default n
+ help
+ Cluster scheduler support improves the CPU scheduler's decision
+ making when dealing with machines that have clusters of CPUs
+ sharing L2 cache. If unsure say N here.
+
config SCHED_SMT
def_bool y if SMP
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index c0538f82c9a2..9cbc4ae3078f 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@ DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_map);
DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
/* cpus sharing the last level cache: */
DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
return per_cpu(cpu_llc_shared_map, cpu);
}
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+ return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399e5491..2a11ccc14fb1 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
#include <asm-generic/topology.h>
extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
#define topology_logical_package_id(cpu) (cpu_data(cpu).logical_proc_id)
#define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be482a9e..0d03a71e713e 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
l2 = new_l2;
#ifdef CONFIG_SMP
per_cpu(cpu_llc_id, cpu) = l2_id;
+ per_cpu(cpu_l2c_id, cpu) = l2_id;
#endif
}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..fb08c73d752c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@ EXPORT_SYMBOL(smp_num_siblings);
/* Last level cache ID of each logical CPU */
DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
/* correctly size the local cpu masks */
void __init setup_cpu_local_masks(void)
{
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 117e24fbfd8a..b9e8780b2617 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
/* Per CPU bogomips and other parameters */
DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -501,6 +503,21 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
return topology_sane(c, o, "llc");
}
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+ int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+ /* Do not match if we do not have a valid APICID for cpu: */
+ if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+ return false;
+
+ /* Do not match if L2 cache id does not match: */
+ if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+ return false;
+
+ return topology_sane(c, o, "l2c");
+}
+
/*
* Unlike the other levels, we do not enforce keeping a
* multicore group inside a NUMA node. If this happens, we will
@@ -522,7 +539,7 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
}
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
static inline int x86_sched_itmt_flags(void)
{
return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -540,12 +557,21 @@ static int x86_smt_flags(void)
return cpu_smt_flags() | x86_sched_itmt_flags();
}
#endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+ return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
#endif
static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
#ifdef CONFIG_SCHED_SMT
{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
#endif
+#ifdef CONFIG_SCHED_CLUSTER
+ { cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
#ifdef CONFIG_SCHED_MC
{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
#endif
@@ -556,6 +582,9 @@ static struct sched_domain_topology_level x86_topology[] = {
#ifdef CONFIG_SCHED_SMT
{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
#endif
+#ifdef CONFIG_SCHED_CLUSTER
+ { cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
#ifdef CONFIG_SCHED_MC
{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
#endif
@@ -583,6 +612,7 @@ void set_cpu_sibling_map(int cpu)
if (!has_mp) {
cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+ cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
c->booted_cores = 1;
@@ -598,6 +628,8 @@ void set_cpu_sibling_map(int cpu)
if ((i == cpu) || (has_mp && match_llc(c, o)))
link_mask(cpu_llc_shared_mask, cpu, i);
+ if ((i == cpu) || (has_mp && match_l2c(c, o)))
+ link_mask(cpu_l2c_shared_mask, cpu, i);
}
/*
@@ -649,6 +681,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
return cpu_llc_shared_mask(cpu);
}
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+ return cpu_l2c_shared_mask(cpu);
+}
+
static void impress_friends(void)
{
int cpu;
@@ -1332,6 +1369,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+ zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
}
/*
@@ -1556,7 +1594,10 @@ static void remove_siblinginfo(int cpu)
cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+ for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+ cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
cpumask_clear(cpu_llc_shared_mask(cpu));
+ cpumask_clear(cpu_l2c_shared_mask(cpu));
cpumask_clear(topology_sibling_cpumask(cpu));
cpumask_clear(topology_core_cpumask(cpu));
cpumask_clear(topology_die_cpumask(cpu));
--
2.20.1
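
As a quick sanity check with CONFIG_SCHED_DEBUG enabled, the resulting
domain hierarchy can be read back after boot. The output below is
illustrative for a Jacobsville-like machine without SMT; on kernels of
this vintage the names live under the sched_domain sysctl tree:

$ cat /proc/sys/kernel/sched_domain/cpu0/domain*/name
CLS
MC
NUMA
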
> -----Original Message-----
> From: Dietmar Eggemann [mailto:[email protected]]
> Sent: Wednesday, January 13, 2021 12:00 AM
> To: Morten Rasmussen <[email protected]>; Tim Chen
> <[email protected]>
> Cc: Song Bao Hua (Barry Song) <[email protected]>;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; Jonathan Cameron <[email protected]>;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; xuwei (O)
> <[email protected]>; Zengtao (B) <[email protected]>; tiantao (H)
> <[email protected]>
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
>
> On 11/01/2021 10:28, Morten Rasmussen wrote:
> > On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
> >>
> >>
> >> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> >>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >>>> On 1/6/21 12:30 AM, Barry Song wrote:
>
> [...]
>
> >> I think it is going to depend on the workload. If there are dependent
> >> tasks that communicate with one another, putting them together
> >> in the same cluster will be the right thing to do to reduce communication
> >> costs. On the other hand, if the tasks are independent, putting them together
> on the same cluster
> >> will increase resource contention and spreading them out will be better.
> >
> > Agree. That is exactly where I'm coming from. This is all about the task
> > placement policy. We generally tend to spread tasks to avoid resource
> > contention, SMT and caches, which seems to be what you are proposing to
> > extend. I think that makes sense given it can produce significant
> > benefits.
> >
> >>
> >> Any thoughts on what is the right clustering "tag" to use to clump
> >> related tasks together?
> >> Cgroup? Pid? Tasks with same mm?
> >
> > I think this is the real question. I think the closest thing we have at
> > the moment is the wakee/waker flip heuristic. This seems to be related.
> > Perhaps the wake_affine tricks can serve as starting point?
>
> wake_wide() switches between packing (select_idle_sibling(), llc_size
> CPUs) and spreading (find_idlest_cpu(), all CPUs).
>
> AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
> all wakeups are (llc-)packed.
>
> select_task_rq_fair()
>
> for_each_domain(cpu, tmp)
>
> if (tmp->flags & sd_flag)
> sd = tmp;
>
>
> In case we would like to further distinguish between llc-packing and
> even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
> packing vs. spreading heuristic further down in sis().
>
> IMHO, Barry's current implementation doesn't do this right now. Instead
> he's trying to pack on cluster first and if not successful look further
> among the remaining llc CPUs for an idle CPU.
Right now, in the main cases of using wake_affine to achieve
better performance, processes are actually bound within one
NUMA node, which is also an LLC in kunpeng920.

Probably LLC=NUMA is also true for x86 Jacobsville, Tim?

So one possible way to approximate 2-level packing might be:
if the affinity cpusets of waker and wakee are both subsets
of the same LLC, use only the cluster as the factor to
determine packing and ignore the LLC.

I haven't really done this, but the below code achieves the
same result by forcing llc_id=cluster_id:
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index d72eb8d..3d78097 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -107,7 +107,7 @@ int __init parse_acpi_topology(void)
cpu_topology[cpu].cluster_id = topology_id;
topology_id = find_acpi_cpu_topology_package(cpu);
cpu_topology[cpu].package_id = topology_id;
-
+#if 0
i = acpi_find_last_cache_level(cpu);
if (i > 0) {
@@ -119,8 +119,11 @@ int __init parse_acpi_topology(void)
if (cache_id > 0)
cpu_topology[cpu].llc_id = cache_id;
}
- }
+#else
+ cpu_topology[cpu].llc_id = cpu_topology[cpu].cluster_id;
+#endif
+ }
return 0;
}
#endif
With this, I have seen some major improvement in hackbench, especially
for the monogamous communication model (fds_num=1, one sender for one
receiver):

numactl -N 0 hackbench -p -T -l 200000 -f 1 -g $1

I tested -g (group_nums) of 6, 12, 18, 24, 28 and 32.
For each g, I ran 20 times and took the average value.
The result is as below:

g=   6      12     18     24     28     32
w/o  1.3243 1.6741 1.7560 1.9036 2.0262 2.1826
w/   1.1314 1.1864 1.4494 1.6159 1.9078 2.1249

Using top -H and hitting "f" to show the cpu of each thread,
I am seeing that the two threads in one group tend to run in
the same cluster. That's why the hackbench latency decreases
so much.
Thanks
Barry
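
This monogamous pattern is easy to reproduce outside hackbench as well. The
small program below is only a sketch, in the same spirit as the cover
letter's example: it creates one waker-wakee pair ping-ponging a byte over
two pipes. Watching it under top -H on a kernel with the change above
should likewise show both threads settling in one cluster:

#include <pthread.h>
#include <unistd.h>

static int ping[2], pong[2];

static void *waker(void *param)
{
	char c = 0;
	for (int i = 0; i < 100000; i++) {
		write(ping[1], &c, 1);	/* wake the wakee */
		read(pong[0], &c, 1);	/* wait to be woken back */
	}
	return NULL;
}

static void *wakee(void *param)
{
	char c;
	for (int i = 0; i < 100000; i++) {
		read(ping[0], &c, 1);
		write(pong[1], &c, 1);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid1, tid2;

	pipe(ping);
	pipe(pong);
	pthread_create(&tid1, NULL, waker, NULL);
	pthread_create(&tid2, NULL, wakee, NULL);
	pthread_join(tid1, NULL);
	pthread_join(tid2, NULL);
	return 0;
}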
On 4/13/21 3:45 AM, Song Bao Hua (Barry Song) wrote:
>
>
>
> Right now, in the main cases of using wake_affine to achieve
> better performance, processes are actually bound within one
> NUMA node, which is also an LLC in kunpeng920.
>
> Probably LLC=NUMA is also true for x86 Jacobsville, Tim?
In general for x86, LLC is partitioned at the sub-NUMA cluster level,
and LLC could be divided between sub-NUMA clusters within a NUMA node.
That said, Jacobsville doesn't have sub-NUMA clusters,
so LLC=NUMA is true for that platform.
Tim
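
The LLC span Tim describes can be checked directly via cacheinfo; the
output below is illustrative for a part whose L3 covers cpus 0-23:

$ cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list
0-23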