Changes v2:
- Removed patches 2 and 3 since this part will now be supported by the
kernel.
Sub-NUMA Clustering (SNC) allows splitting CPU cores, caches and memory
into multiple NUMA nodes. When enabled, NUMA-aware applications can
achieve better performance on bigger server platforms.
SNC support in the kernel is currently in review [1]. With SNC enabled
and kernel support in place all the tests function normally. A problem
arises when SNC is enabled but the system is still running an older
kernel version without SNC support. Currently the only message displayed
in that situation is a guess that SNC might be enabled and causing
issues. That message is also displayed whenever a test fails on an
Intel platform.
Add a mechanism to discover kernel support for SNC which will add more
meaning and certainty to the error message.
The series was tested on Ice Lake server platforms with SNC disabled,
SNC-2 and SNC-4. The tests were also run with and without kernel support
for SNC.
Series applies cleanly on kselftest/next.
[1] https://lore.kernel.org/all/[email protected]/
Previous versions of this series:
[v1] https://lore.kernel.org/all/[email protected]/
Maciej Wieczor-Retman (2):
selftests/resctrl: Adjust effective L3 cache size with SNC enabled
selftests/resctrl: Adjust SNC support messages
tools/testing/selftests/resctrl/cat_test.c | 2 +-
tools/testing/selftests/resctrl/cmt_test.c | 6 +-
tools/testing/selftests/resctrl/mba_test.c | 2 +
tools/testing/selftests/resctrl/mbm_test.c | 4 +-
tools/testing/selftests/resctrl/resctrl.h | 8 +-
tools/testing/selftests/resctrl/resctrlfs.c | 131 +++++++++++++++++++-
6 files changed, 144 insertions(+), 9 deletions(-)
--
2.45.0
Sub-NUMA Cluster divides CPUs sharing an L3 cache into separate NUMA
nodes. Systems may support splitting into either two or four nodes.
When SNC mode is enabled the effective amount of L3 cache available
for allocation is divided by the number of nodes per L3.
Detect which SNC mode is active by comparing the number of CPUs
that share a cache with CPU0, with the number of CPUs on node0.
Signed-off-by: Tony Luck <[email protected]>
Co-developed-by: Maciej Wieczor-Retman <[email protected]>
Signed-off-by: Maciej Wieczor-Retman <[email protected]>
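The comparison above can be sketched as pure logic (illustrative only,
not the patch code; snc_ratio() is a made-up name, MAX_SNC is the
constant from the patch):

```c
/*
 * Illustrative sketch, not part of the patch: return the smallest SNC
 * node count i (up to MAX_SNC) such that i * node_cpus covers all CPUs
 * sharing the L3 cache with CPU0. Falls back to 1 on bad input.
 */
#define MAX_SNC 4

int snc_ratio(int node_cpus, int cache_cpus)
{
	int i;

	if (node_cpus <= 0 || cache_cpus <= 0)
		return 1;
	for (i = 1; i <= MAX_SNC; i++)
		if (i * node_cpus >= cache_cpus)
			return i;
	return 1;
}
```

E.g. 32 CPUs on node0 sharing a cache with 64 CPUs gives a ratio of 2
(SNC-2).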
---
tools/testing/selftests/resctrl/resctrl.h | 3 ++
tools/testing/selftests/resctrl/resctrlfs.c | 59 +++++++++++++++++++++
2 files changed, 62 insertions(+)
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index 00d51fa7531c..3dd5d6779786 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -11,6 +11,7 @@
#include <signal.h>
#include <dirent.h>
#include <stdbool.h>
+#include <ctype.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
@@ -49,6 +50,7 @@
umount_resctrlfs(); \
exit(EXIT_FAILURE); \
} while (0)
+#define MAX_SNC 4
/*
* user_params: User supplied parameters
@@ -131,6 +133,7 @@ extern pid_t bm_pid, ppid;
extern char llc_occup_path[1024];
+int snc_ways(void);
int get_vendor(void);
bool check_resctrlfs_support(void);
int filter_dmesg(void);
diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
index 1cade75176eb..e4d3624a8817 100644
--- a/tools/testing/selftests/resctrl/resctrlfs.c
+++ b/tools/testing/selftests/resctrl/resctrlfs.c
@@ -156,6 +156,63 @@ int get_domain_id(const char *resource, int cpu_no, int *domain_id)
return 0;
}
+/*
+ * Count number of CPUs in a /sys bit map
+ */
+static unsigned int count_sys_bitmap_bits(char *name)
+{
+ FILE *fp = fopen(name, "r");
+ int count = 0, c;
+
+ if (!fp)
+ return 0;
+
+ while ((c = fgetc(fp)) != EOF) {
+ if (!isxdigit(c))
+ continue;
+ switch (c) {
+ case 'f':
+ count++;
+ case '7': case 'b': case 'd': case 'e':
+ count++;
+ case '3': case '5': case '6': case '9': case 'a': case 'c':
+ count++;
+ case '1': case '2': case '4': case '8':
+ count++;
+ }
+ }
+ fclose(fp);
+
+ return count;
+}
+
+/*
+ * Detect SNC by comparing #CPUs in node0 with #CPUs sharing LLC with CPU0.
+ * If some CPUs are offline the numbers may not be exact multiples of each
+ * other. Any offline CPUs on node0 will be also gone from shared_cpu_map of
+ * CPU0 but offline CPUs from other nodes will only make the cache_cpus value
+ * lower. Still try to get the ratio right by preventing the second possibility.
+ */
+int snc_ways(void)
+{
+ int node_cpus, cache_cpus, i;
+
+ node_cpus = count_sys_bitmap_bits("/sys/devices/system/node/node0/cpumap");
+ cache_cpus = count_sys_bitmap_bits("/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_map");
+
+ if (!node_cpus || !cache_cpus) {
+ fprintf(stderr, "Warning could not determine Sub-NUMA Cluster mode\n");
+ return 1;
+ }
+
+ for (i = 1; i <= MAX_SNC ; i++) {
+ if (i * node_cpus >= cache_cpus)
+ return i;
+ }
+
+ return 1;
+}
+
/*
* get_cache_size - Get cache size for a specified CPU
* @cpu_no: CPU number
@@ -211,6 +268,8 @@ int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size
break;
}
+ if (cache_num == 3)
+ *cache_size /= snc_ways();
return 0;
}
--
2.45.0
The resctrl selftest prints a message on test failure that Sub-NUMA
Clustering (SNC) could be enabled and points the user to check their
BIOS settings. No actual check is performed before printing that
message, so it is not very accurate in pinpointing the problem.
Figuring out if SNC is enabled is only one part of the problem, the
other being whether the kernel supports it. As there is no easy
interface that simply states SNC support in the kernel, that information
can be found by comparing L3 cache sizes from different sources. The
cache size reported by /sys/devices/system/node/node0/cpu0/cache/index3/size
always shows the full cache size even if it is split by enabled SNC.
On the other hand, /sys/fs/resctrl/size reports the L3 size which, with
kernel support, is adjusted for enabled SNC.
Add a function to find a cache size from /sys/fs/resctrl/size since
finding that information from the other source is already implemented.
Add a function that compares the two cache sizes and use it to make the
SNC support message more meaningful.
Add the SNC support message just after MBA's check_results() since MBA
shares code with MBM and can also suffer from enabled SNC if there is no
support in the kernel.
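The parsing step of that size comparison can be sketched on its own
(illustrative only; parse_resctrl_size() is a made-up helper, the line
format is the one used by /sys/fs/resctrl/size):

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch, not the patch code: extract the domain 0 size
 * from a resctrl "size" line such as "L3:0=33554432;1=33554432".
 * Returns 0 on success, -1 when the resource prefix or value is absent.
 */
int parse_resctrl_size(const char *line, const char *resource,
		       unsigned long *size)
{
	char prefix[16];
	const char *p;

	snprintf(prefix, sizeof(prefix), "%s:", resource);
	p = strstr(line, prefix);
	if (!p)
		return -1;
	return sscanf(p + strlen(prefix), "0=%lu", size) == 1 ? 0 : -1;
}
```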
Signed-off-by: Maciej Wieczor-Retman <[email protected]>
---
Changelog v2:
- Move snc_ways() checks from individual tests into
snc_kernel_support().
- Write better comment for snc_kernel_support().
tools/testing/selftests/resctrl/cat_test.c | 2 +-
tools/testing/selftests/resctrl/cmt_test.c | 6 +-
tools/testing/selftests/resctrl/mba_test.c | 2 +
tools/testing/selftests/resctrl/mbm_test.c | 4 +-
tools/testing/selftests/resctrl/resctrl.h | 5 +-
tools/testing/selftests/resctrl/resctrlfs.c | 72 ++++++++++++++++++++-
6 files changed, 82 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
index c7686fb6641a..722b4fcaf788 100644
--- a/tools/testing/selftests/resctrl/cat_test.c
+++ b/tools/testing/selftests/resctrl/cat_test.c
@@ -253,7 +253,7 @@ static int cat_run_test(const struct resctrl_test *test, const struct user_param
return ret;
/* Get L3/L2 cache size */
- ret = get_cache_size(uparams->cpu, test->resource, &cache_total_size);
+ ret = get_sys_cache_size(uparams->cpu, test->resource, &cache_total_size);
if (ret)
return ret;
ksft_print_msg("Cache size :%lu\n", cache_total_size);
diff --git a/tools/testing/selftests/resctrl/cmt_test.c b/tools/testing/selftests/resctrl/cmt_test.c
index a44e6fcd37b7..0ff232d38c26 100644
--- a/tools/testing/selftests/resctrl/cmt_test.c
+++ b/tools/testing/selftests/resctrl/cmt_test.c
@@ -112,7 +112,7 @@ static int cmt_run_test(const struct resctrl_test *test, const struct user_param
if (ret)
return ret;
- ret = get_cache_size(uparams->cpu, "L3", &cache_total_size);
+ ret = get_sys_cache_size(uparams->cpu, "L3", &cache_total_size);
if (ret)
return ret;
ksft_print_msg("Cache size :%lu\n", cache_total_size);
@@ -157,8 +157,8 @@ static int cmt_run_test(const struct resctrl_test *test, const struct user_param
goto out;
ret = check_results(¶m, span, n);
- if (ret && (get_vendor() == ARCH_INTEL))
- ksft_print_msg("Intel CMT may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
+ if (ret && (get_vendor() == ARCH_INTEL) && !snc_kernel_support())
+ ksft_print_msg("Kernel doesn't support Sub-NUMA Clustering but it is enabled. Check BIOS configuration.\n");
out:
free(span_str);
diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
index 5d6af9e8afed..74e1ebb14904 100644
--- a/tools/testing/selftests/resctrl/mba_test.c
+++ b/tools/testing/selftests/resctrl/mba_test.c
@@ -161,6 +161,8 @@ static int mba_run_test(const struct resctrl_test *test, const struct user_param
return ret;
ret = check_results();
+ if (ret && (get_vendor() == ARCH_INTEL) && !snc_kernel_support())
+ ksft_print_msg("Kernel doesn't support Sub-NUMA Clustering but it is enabled. Check BIOS configuration.\n");
return ret;
}
diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
index 3059ccc51a5a..e542938272f9 100644
--- a/tools/testing/selftests/resctrl/mbm_test.c
+++ b/tools/testing/selftests/resctrl/mbm_test.c
@@ -129,8 +129,8 @@ static int mbm_run_test(const struct resctrl_test *test, const struct user_param
return ret;
ret = check_results(DEFAULT_SPAN);
- if (ret && (get_vendor() == ARCH_INTEL))
- ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
+ if (ret && (get_vendor() == ARCH_INTEL) && !snc_kernel_support())
+ ksft_print_msg("Kernel doesn't support Sub-NUMA Clustering but it is enabled. Check BIOS configuration.\n");
return ret;
}
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index 3dd5d6779786..2bd7c3f71733 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -28,6 +28,7 @@
#define RESCTRL_PATH "/sys/fs/resctrl"
#define PHYS_ID_PATH "/sys/devices/system/cpu/cpu"
#define INFO_PATH "/sys/fs/resctrl/info"
+#define SIZE_PATH "/sys/fs/resctrl/size"
/*
* CPU vendor IDs
@@ -165,12 +166,14 @@ unsigned long create_bit_mask(unsigned int start, unsigned int len);
unsigned int count_contiguous_bits(unsigned long val, unsigned int *start);
int get_full_cbm(const char *cache_type, unsigned long *mask);
int get_mask_no_shareable(const char *cache_type, unsigned long *mask);
-int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size);
int resource_info_unsigned_get(const char *resource, const char *filename, unsigned int *val);
+int get_sys_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size);
+int get_resctrl_cache_size(const char *cache_type, unsigned long *cache_size);
void ctrlc_handler(int signum, siginfo_t *info, void *ptr);
int signal_handler_register(const struct resctrl_test *test);
void signal_handler_unregister(void);
unsigned int count_bits(unsigned long n);
+int snc_kernel_support(void);
void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config);
void perf_event_initialize_read_format(struct perf_event_read *pe_read);
diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
index e4d3624a8817..88f97db72246 100644
--- a/tools/testing/selftests/resctrl/resctrlfs.c
+++ b/tools/testing/selftests/resctrl/resctrlfs.c
@@ -214,14 +214,14 @@ int snc_ways(void)
}
/*
- * get_cache_size - Get cache size for a specified CPU
+ * get_sys_cache_size - Get cache size for a specified CPU
* @cpu_no: CPU number
* @cache_type: Cache level L2/L3
* @cache_size: pointer to cache_size
*
* Return: = 0 on success, < 0 on failure.
*/
-int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size)
+int get_sys_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size)
{
char cache_path[1024], cache_str[64];
int length, i, cache_num;
@@ -273,6 +273,44 @@ int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size
return 0;
}
+/*
+ * get_resctrl_cache_size - Get cache size as reported by resctrl
+ * @cache_type: Cache level L2/L3
+ * @cache_size: pointer to cache_size
+ *
+ * Return: = 0 on success, < 0 on failure.
+ */
+int get_resctrl_cache_size(const char *cache_type, unsigned long *cache_size)
+{
+ char line[256], cache_prefix[16], *stripped_line, *token;
+ size_t len;
+ FILE *fp;
+
+ strcpy(cache_prefix, cache_type);
+ strncat(cache_prefix, ":", 1);
+
+ fp = fopen(SIZE_PATH, "r");
+ if (!fp) {
+ ksft_print_msg("Failed to open %s : '%s'\n",
+ SIZE_PATH, strerror(errno));
+ return -1;
+ }
+
+ while (fgets(line, sizeof(line), fp)) {
+ stripped_line = strstr(line, cache_prefix);
+
+ if (stripped_line) {
+ len = strlen(cache_prefix);
+ stripped_line += len;
+ token = strtok(stripped_line, ";");
+ if (sscanf(token, "0=%lu", cache_size) <= 0)
+ return -1;
+ }
+ }
+ fclose(fp);
+ return 0;
+}
+
#define CORE_SIBLINGS_PATH "/sys/bus/cpu/devices/cpu"
/*
@@ -935,3 +973,33 @@ unsigned int count_bits(unsigned long n)
return count;
}
+
+/**
+ * snc_kernel_support - Compare system reported cache size and resctrl
+ * reported cache size to get an idea if SNC is supported on the kernel side.
+ *
+ * Return: 0 if not supported, 1 if SNC is disabled or SNC is both enabled and
+ * supported, < 0 on failure.
+ */
+int snc_kernel_support(void)
+{
+ unsigned long resctrl_cache_size, node_cache_size;
+ int ret;
+
+ /* If SNC is disabled then its kernel support isn't important. */
+ if (snc_ways() == 1)
+ return 1;
+
+ ret = get_sys_cache_size(0, "L3", &node_cache_size);
+ if (ret < 0)
+ return ret;
+
+ ret = get_resctrl_cache_size("L3", &resctrl_cache_size);
+ if (ret < 0)
+ return ret;
+
+ if (resctrl_cache_size == node_cache_size)
+ return 1;
+
+ return 0;
+}
--
2.45.0
If/when my SNC patches go upstream the SNC check could become:
snc_ways=$(ls -d /sys/fs/resctrl/mon_data/mon_L3_00/mon_sub_L3_* 2>/dev/null | wc -l)
assuming you have /sys/fs/resctrl mounted.
-Tony
On 2024-05-15 at 16:48:44 +0000, Luck, Tony wrote:
>If/when my SNC patches go upstream the SNC check could become:
>
>snc_ways=$(ls -d /sys/fs/resctrl/mon_data/mon_L3_00/mon_sub_L3_* 2>/dev/null | wc -l)
But this won't work without your kernel patches, right?
If they are already in the kernel used by the person launching the
selftests then there shouldn't be any problems to report. The idea was
that if the CMT/MBM/MBA selftests fail, the message blaming SNC is only
displayed when SNC is enabled and there is no kernel support for it.
>
>assuming you have /sys/fs/resctrl mounted.
>
>-Tony
--
Kind regards
Maciej Wieczór-Retman
Hi Maciej,
Regarding shortlog: L3 cache size should no longer be adjusted when
SNC is enabled. You mention that the tests are passing when running
with this adjustment ... I think that this may be because the test
now just runs on a smaller portion of the cache?
On 5/15/24 4:18 AM, Maciej Wieczor-Retman wrote:
> Sub-NUMA Cluster divides CPUs sharing an L3 cache into separate NUMA
> nodes. Systems may support splitting into either two or four nodes.
fyi ... from the most recent kernel submission 2, 3, or 4 nodes
are possible:
https://lore.kernel.org/lkml/[email protected]/
>
> When SNC mode is enabled the effective amount of L3 cache available
> for allocation is divided by the number of nodes per L3.
This was a mistake in original implementation and no longer done.
>
> Detect which SNC mode is active by comparing the number of CPUs
> that share a cache with CPU0, with the number of CPUs on node0.
>
> Signed-off-by: Tony Luck <[email protected]>
> Co-developed-by: Maciej Wieczor-Retman <[email protected]>
> Signed-off-by: Maciej Wieczor-Retman <[email protected]>
> ---
> tools/testing/selftests/resctrl/resctrl.h | 3 ++
> tools/testing/selftests/resctrl/resctrlfs.c | 59 +++++++++++++++++++++
> 2 files changed, 62 insertions(+)
>
> diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
> index 00d51fa7531c..3dd5d6779786 100644
> --- a/tools/testing/selftests/resctrl/resctrl.h
> +++ b/tools/testing/selftests/resctrl/resctrl.h
> @@ -11,6 +11,7 @@
> #include <signal.h>
> #include <dirent.h>
> #include <stdbool.h>
> +#include <ctype.h>
> #include <sys/stat.h>
> #include <sys/ioctl.h>
> #include <sys/mount.h>
> @@ -49,6 +50,7 @@
> umount_resctrlfs(); \
> exit(EXIT_FAILURE); \
> } while (0)
> +#define MAX_SNC 4
>
> /*
> * user_params: User supplied parameters
> @@ -131,6 +133,7 @@ extern pid_t bm_pid, ppid;
>
> extern char llc_occup_path[1024];
>
> +int snc_ways(void);
> int get_vendor(void);
> bool check_resctrlfs_support(void);
> int filter_dmesg(void);
> diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
> index 1cade75176eb..e4d3624a8817 100644
> --- a/tools/testing/selftests/resctrl/resctrlfs.c
> +++ b/tools/testing/selftests/resctrl/resctrlfs.c
> @@ -156,6 +156,63 @@ int get_domain_id(const char *resource, int cpu_no, int *domain_id)
> return 0;
> }
>
> +/*
> + * Count number of CPUs in a /sys bit map
> + */
> +static unsigned int count_sys_bitmap_bits(char *name)
> +{
> + FILE *fp = fopen(name, "r");
> + int count = 0, c;
> +
> + if (!fp)
> + return 0;
> +
> + while ((c = fgetc(fp)) != EOF) {
> + if (!isxdigit(c))
> + continue;
> + switch (c) {
> + case 'f':
> + count++;
> + case '7': case 'b': case 'd': case 'e':
> + count++;
> + case '3': case '5': case '6': case '9': case 'a': case 'c':
> + count++;
> + case '1': case '2': case '4': case '8':
> + count++;
> + }
> + }
> + fclose(fp);
> +
> + return count;
> +}
> +
> +/*
> + * Detect SNC by comparing #CPUs in node0 with #CPUs sharing LLC with CPU0.
> + * If some CPUs are offline the numbers may not be exact multiples of each
> + * other. Any offline CPUs on node0 will be also gone from shared_cpu_map of
> + * CPU0 but offline CPUs from other nodes will only make the cache_cpus value
> + * lower. Still try to get the ratio right by preventing the second possibility.
> + */
> +int snc_ways(void)
"ways" have a specific meaning in cache terminology. Perhaps rather something
like "snc_nodes_per_cache()" or even copy the kernel's (which is still WIP though)
snc_nodes_per_l3_cache()
> +{
> + int node_cpus, cache_cpus, i;
> +
> + node_cpus = count_sys_bitmap_bits("/sys/devices/system/node/node0/cpumap");
> + cache_cpus = count_sys_bitmap_bits("/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_map");
> +
> + if (!node_cpus || !cache_cpus) {
> + fprintf(stderr, "Warning could not determine Sub-NUMA Cluster mode\n");
The tests just use "ksft_print_msg()" for error messages. The "Warning could ..."
is somewhat unexpected, perhaps just "Could not determine ..." or "Warning: Could not ..."?
> + return 1;
> + }
> +
> + for (i = 1; i <= MAX_SNC ; i++) {
> + if (i * node_cpus >= cache_cpus)
> + return i;
> + }
This is not obvious to me. From the function comments this seems to address the
scenarios when CPUs from other nodes are offline. It is not clear to me how
this loop addresses this. For example, let's say there are four SNC nodes
associated with a cache and only the node0 CPUs are online. The above would
detect this as "1", not "4", if I read this right?
I wonder if it may not be easier to just follow what the kernel does
(in the new version).
User space can learn the number of online and present CPUs from
/sys/devices/system/cpu/online and /sys/devices/system/cpu/present
respectively. A simple string compare of the contents can be used to
determine if they are identical and a warning can be printed if they are not.
With a warning when accurate detection cannot be done the simple
check will do.
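A minimal sketch of that comparison (all_cpus_online() and read_line()
are my names, not from the series):

```c
#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len)
{
	FILE *fp = fopen(path, "r");

	if (!fp)
		return -1;
	if (!fgets(buf, len, fp)) {
		fclose(fp);
		return -1;
	}
	fclose(fp);
	return 0;
}

/*
 * Illustrative sketch: compare the "online" and "present" CPU lists.
 * Returns 1 when they match (counting CPUs per node/cache is reliable),
 * 0 when some CPUs are offline, -1 if the files cannot be read.
 */
int all_cpus_online(void)
{
	char online[256], present[256];

	if (read_line("/sys/devices/system/cpu/online", online, sizeof(online)) ||
	    read_line("/sys/devices/system/cpu/present", present, sizeof(present)))
		return -1;
	return strcmp(online, present) == 0;
}
```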
Could you please add an informational message indicating how many SNC nodes
were indeed detected?
> +
> + return 1;
> +}
> +
> /*
> * get_cache_size - Get cache size for a specified CPU
> * @cpu_no: CPU number
> @@ -211,6 +268,8 @@ int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size
> break;
> }
>
> + if (cache_num == 3)
> + *cache_size /= snc_ways();
> return 0;
> }
>
I think this can be dropped.
Reinette
Hi Maciej,
On 5/15/24 4:18 AM, Maciej Wieczor-Retman wrote:
> Resctrl selftest prints a message on test failure that Sub-NUMA
> Clustering (SNC) could be enabled and points the user to check their BIOS
> settings. No actual check is performed before printing that message so
> it is not very accurate in pinpointing a problem.
>
> Figuring out if SNC is enabled is only one part of the problem, the
> other being whether the kernel supports it. As there is no easy
> interface that simply states SNC support in the kernel one can find that
> information by comparing L3 cache sizes from different sources. Cache
> size reported by /sys/devices/system/node/node0/cpu0/cache/index3/size
> will always show the full cache size even if it's split by enabled SNC.
> On the other hand /sys/fs/resctrl/size has information about L3 size,
> that with kernel support is adjusted for enabled SNC.
>
> Add a function to find a cache size from /sys/fs/resctrl/size since
> finding that information from the other source is already implemented.
>
> Add a function that compares the two cache sizes and use it to make the
> SNC support message more meaningful.
Please note that the new version of SNC kernel support ([1]) that this
series is based on no longer adjusts the cache size. Detecting kernel support
for SNC (if the new solution is accepted) can be done with the test for the
existence of the files Tony mentioned in [2].
>
> Add the SNC support message just after MBA's check_results() since MBA
> shares code with MBM and also can suffer from enabled SNC if there is no
> support in the kernel.
>
> Signed-off-by: Maciej Wieczor-Retman <[email protected]>
> ---
Reinette
[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/lkml/SJ1PR11MB6083320F30DBCBB59574F0BDFCEC2@SJ1PR11MB6083.namprd11.prod.outlook.com/
Hi Tony,
On 5/30/24 4:46 PM, Luck, Tony wrote:
>>> When SNC mode is enabled the effective amount of L3 cache available
>>> for allocation is divided by the number of nodes per L3.
>>
>> This was a mistake in original implementation and no longer done.
>
> My original kernel code adjusted value reported in the "size" file in resctrl.
> That's no longer done because the effective size depends on how applications
> are allocating and using memory. Since the kernel can't know that, it
> seemed best to just report the total size of the cache.
>
> But I think the resctrl tests still need to take this into account when running
> llc_occupancy tests.
>
> E.g. on a 2-way SNC system with a 100MB L3 cache a test that allocates
> memory from its local SNC node (default behavior without using libnuma)
> will only see 50 MB llc_occupancy with a fully populated L3 mask in the
> schemata file.
This seems to contradict the "Cache and memory bandwidth allocation features
continue to operate at the scope of the L3 cache." statement from [1]?
Reinette
[1] https://lore.kernel.org/lkml/[email protected]/
>> When SNC mode is enabled the effective amount of L3 cache available
>> for allocation is divided by the number of nodes per L3.
>
> This was a mistake in original implementation and no longer done.
My original kernel code adjusted value reported in the "size" file in resctrl.
That's no longer done because the effective size depends on how applications
are allocating and using memory. Since the kernel can't know that, it
seemed best to just report the total size of the cache.
But I think the resctrl tests still need to take this into account when running
llc_occupancy tests.
E.g. on a 2-way SNC system with a 100MB L3 cache a test that allocates
memory from its local SNC node (default behavior without using libnuma)
will only see 50 MB llc_occupancy with a fully populated L3 mask in the
schemata file.
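In code, the ceiling from that example (helper name is mine):

```c
/*
 * Illustrative sketch: upper bound on llc_occupancy a test can observe
 * when all its memory comes from the local SNC node, per the example
 * above (100 MB cache, SNC-2 -> 50 MB).
 */
unsigned long local_llc_limit(unsigned long l3_bytes, unsigned int snc_nodes)
{
	return snc_nodes ? l3_bytes / snc_nodes : l3_bytes;
}
```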
-Tony
> >>> When SNC mode is enabled the effective amount of L3 cache available
> >>> for allocation is divided by the number of nodes per L3.
> >>
> >> This was a mistake in original implementation and no longer done.
> >
> > My original kernel code adjusted value reported in the "size" file in resctrl.
> > That's no longer done because the effective size depends on how applications
> > are allocating and using memory. Since the kernel can't know that, it
> > seemed best to just report the total size of the cache.
> >
> > But I think the resctrl tests still need to take this into account when running
> > llc_occupancy tests.
> >
> > E.g. on a 2-way SNC system with a 100MB L3 cache a test that allocates
> > memory from its local SNC node (default behavior without using libnuma)
> > will only see 50 MB llc_occupancy with a fully populated L3 mask in the
> > schemata file.
>
> This seems to contradict the "Cache and memory bandwidth allocation features
> continue to operate at the scope of the L3 cache." statement from [1]?
I'll clean that up. MBA isn't affected. But cache allocation is affected in that
the amount of cache represented by each bit in the masks in the schemata
file is reduced by a factor equal to SNC nodes per L3 cache.
-Tony
>
> Reinette
>
> [1] https://lore.kernel.org/lkml/[email protected]/
Hello!
On 2024-05-30 at 16:07:34 -0700, Reinette Chatre wrote:
>Hi Maciej,
>
>On 5/15/24 4:18 AM, Maciej Wieczor-Retman wrote:
>> Resctrl selftest prints a message on test failure that Sub-NUMA
>> Clustering (SNC) could be enabled and points the user to check their BIOS
>> settings. No actual check is performed before printing that message so
>> it is not very accurate in pinpointing a problem.
>>
>> Figuring out if SNC is enabled is only one part of the problem, the
>> other being whether the kernel supports it. As there is no easy
>> interface that simply states SNC support in the kernel one can find that
>> information by comparing L3 cache sizes from different sources. Cache
>> size reported by /sys/devices/system/node/node0/cpu0/cache/index3/size
>> will always show the full cache size even if it's split by enabled SNC.
>> On the other hand /sys/fs/resctrl/size has information about L3 size,
>> that with kernel support is adjusted for enabled SNC.
>>
>> Add a function to find a cache size from /sys/fs/resctrl/size since
>> finding that information from the other source is already implemented.
>>
>> Add a function that compares the two cache sizes and use it to make the
>> SNC support message more meaningful.
>
>Please note that the new version of SNC kernel support ([1]) that this
>series is based on no longer adjusts the cache size. Detecting kernel support
>for SNC (if the new solution is accepted) can be done with the test for the
>existence of the files Tony mentioned in [2].
Thank you for your comments on both patches, I don't know how I missed this
fact. I'll revise my patches accordingly.
>
>>
>> Add the SNC support message just after MBA's check_results() since MBA
>> shares code with MBM and also can suffer from enabled SNC if there is no
>> support in the kernel.
>>
>> Signed-off-by: Maciej Wieczor-Retman <[email protected]>
>> ---
>
>Reinette
>
>[1] https://lore.kernel.org/all/[email protected]/
>[2] https://lore.kernel.org/lkml/SJ1PR11MB6083320F30DBCBB59574F0BDFCEC2@SJ1PR11MB6083.namprd11.prod.outlook.com/
>
--
Kind regards
Maciej Wieczór-Retman
Hi Tony and Maciej,
On 5/30/24 5:34 PM, Luck, Tony wrote:
>>>>> When SNC mode is enabled the effective amount of L3 cache available
>>>>> for allocation is divided by the number of nodes per L3.
>>>>
>>>> This was a mistake in original implementation and no longer done.
>>>
>>> My original kernel code adjusted value reported in the "size" file in resctrl.
>>> That's no longer done because the effective size depends on how applications
>>> are allocating and using memory. Since the kernel can't know that, it
>>> seemed best to just report the total size of the cache.
>>>
>>> But I think the resctrl tests still need to take this into account when running
>>> llc_occupancy tests.
>>>
>>> E.g. on a 2-way SNC system with a 100MB L3 cache a test that allocates
>>> memory from its local SNC node (default behavior without using libnuma)
>>> will only see 50 MB llc_occupancy with a fully populated L3 mask in the
>>> schemata file.
>>
>> This seems to contradict the "Cache and memory bandwidth allocation features
>> continue to operate at the scope of the L3 cache." statement from [1]?
>
> I'll clean that up. MBA isn't affected. But cache allocation is affected in that
> the amount of cache represented by each bit in the masks in the schemata
> file is reduced by a factor equal to SNC nodes per L3 cache.
Thanks Tony. I trust that this is what Maciej intended since the change
is specifically named "Adjust _effective_ L3 cache size". I'd like to
recommend that your comments be added before the change to
get_cache_size() ...
/*
* The amount of cache represented by each bit in the masks
* in the schemata file is reduced by a factor equal to SNC
* nodes per L3 cache.
* E.g. on a SNC-2 system with a 100MB L3 cache a test that
* allocates memory from its local SNC node (default behavior
* without using libnuma) will only see 50 MB llc_occupancy
* with a fully populated L3 mask in the schemata file.
*/
Reinette