From: Roman Sudarikov <[email protected]>
The previous version can be found at:
v2: https://lkml.kernel.org/r/[email protected]
Changes in this revision are:
v2 -> v3:
- Addressed comments from Peter and Kan
The previous version can be found at:
v1: https://lkml.kernel.org/r/[email protected]
Changes in this revision are:
v1 -> v2:
- Fixed process-related issues;
- This patch set includes kernel support for IIO stack to PMON mapping;
- Stephane raised concerns regarding the output format, which may require
code changes only in the user-space part of the feature. We will continue
the output-format discussion in the context of the user-space update.
Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
significant changes to the integrated I/O (IIO) architecture. The new
solution introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port,
MCP-Link or various built-in accelerators. IIO PMON blocks allow concurrent
monitoring of I/O flows, up to 4 x4 bifurcation, within each IIO stack.
Software is supposed to program the required perf counters within each IIO
stack and gather performance data. The tricky part is that IIO PMON
reports data per IIO stack, but users have no idea what IIO stacks are;
they only know the devices connected to the platform.
Finding which IIO stack a particular I/O device is connected to, or
identifying the IIO PMON block to program for monitoring a specific IIO
stack, assumes a lot of implicit knowledge about the given Intel server
platform architecture.
This patch set introduces:
1. An infrastructure for exposing an Uncore unit to Uncore PMON mapping
through sysfs-backend;
2. A new --iiostat mode in perf stat to provide I/O performance metrics
per I/O device.
Usage examples:
1. List all devices below IIO stacks
./perf stat --iiostat=show
Sample output w/o libpci:
S0-RootPort0-uncore_iio_0<00:00.0>
S1-RootPort0-uncore_iio_0<81:00.0>
S0-RootPort1-uncore_iio_1<18:00.0>
S1-RootPort1-uncore_iio_1<86:00.0>
S1-RootPort1-uncore_iio_1<88:00.0>
S0-RootPort2-uncore_iio_2<3d:00.0>
S1-RootPort2-uncore_iio_2<af:00.0>
S1-RootPort3-uncore_iio_3<da:00.0>
Sample output with libpci:
S0-RootPort0-uncore_iio_0<00:00.0 Sky Lake-E DMI3 Registers>
S1-RootPort0-uncore_iio_0<81:00.0 Ethernet Controller X710 for 10GbE SFP+>
S0-RootPort1-uncore_iio_1<18:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
S1-RootPort1-uncore_iio_1<86:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
S1-RootPort1-uncore_iio_1<88:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
S0-RootPort2-uncore_iio_2<3d:00.0 Ethernet Connection X722 for 10GBASE-T>
S1-RootPort2-uncore_iio_2<af:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
S1-RootPort3-uncore_iio_3<da:00.0 NVMe Datacenter SSD [Optane]>
2. Collect metrics for all I/O devices below IIO stacks
./perf stat --iiostat -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
357708+0 records in
357707+0 records out
375083606016 bytes (375 GB, 349 GiB) copied, 215.381 s, 1.7 GB/s
Performance counter stats for 'system wide':
device Inbound Read(MB) Inbound Write(MB) Outbound Read(MB) Outbound Write(MB)
00:00.0 0 0 0 0
81:00.0 0 0 0 0
18:00.0 0 0 0 0
86:00.0 0 0 0 0
88:00.0 0 0 0 0
3b:00.0 3 0 0 0
3c:03.0 3 0 0 0
3d:00.0 3 0 0 0
af:00.0 0 0 0 0
da:00.0 358559 44 0 22
215.383783574 seconds time elapsed
3. Collect metrics for a comma-separated list of I/O devices
./perf stat --iiostat=da:00.0 -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
381555+0 records in
381554+0 records out
400088457216 bytes (400 GB, 373 GiB) copied, 374.044 s, 1.1 GB/s
Performance counter stats for 'system wide':
device Inbound Read(MB) Inbound Write(MB) Outbound Read(MB) Outbound Write(MB)
da:00.0 382462 47 0 23
374.045775505 seconds time elapsed
Roman Sudarikov (2):
perf x86: Infrastructure for exposing an Uncore unit to PMON mapping
perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server
platform
arch/x86/events/intel/uncore.c | 39 ++++++-
arch/x86/events/intel/uncore.h | 10 +-
arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
3 files changed, 208 insertions(+), 3 deletions(-)
base-commit: 219d54332a09e8d8741c1e1982f5eae56099de85
--
2.19.1
From: Roman Sudarikov <[email protected]>
Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
significant changes to the integrated I/O (IIO) architecture. The new
solution introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port,
MCP-Link or various built-in accelerators. IIO PMON blocks allow concurrent
monitoring of I/O flows, up to 4 x4 bifurcation, within each IIO stack.
Software is supposed to program the required perf counters within each IIO
stack and gather performance data. The tricky part is that IIO PMON
reports data per IIO stack, but users have no idea what IIO stacks are;
they only know the devices connected to the platform.
Finding which IIO stack a particular I/O device is connected to, or
identifying the IIO PMON block to program for monitoring a specific IIO
stack, assumes a lot of implicit knowledge about the given Intel server
platform architecture.
Usage example:
/sys/devices/uncore_<type>_<pmu_idx>/platform_mapping
Each Uncore unit type, by its nature, can be mapped to its own context,
for example:
1. CHA - each uncore_cha_<pmu_idx> is assigned to manage a distinct slice
of LLC capacity;
2. UPI - each uncore_upi_<pmu_idx> is assigned to manage one link of Intel
UPI Subsystem;
3. IIO - each uncore_iio_<pmu_idx> is assigned to manage one stack of the
IIO module;
4. IMC - each uncore_imc_<pmu_idx> is assigned to manage one channel of
Memory Controller.
Implementation details:
Two callbacks added to struct intel_uncore_type to discover and map Uncore
units to PMONs:
int (*get_topology)(struct intel_uncore_type *type)
int (*set_mapping)(struct intel_uncore_type *type)
Details of IIO Uncore unit mapping to IIO PMON:
Each IIO stack is either a DMI port, an x16 PCIe root port, MCP-Link or
one of various built-in accelerators. For the Uncore IIO unit type, the
platform_mapping file holds the bus numbers of the devices which can be
monitored by that IIO PMON block on each die.
Co-developed-by: Alexander Antonov <[email protected]>
Reviewed-by: Kan Liang <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
arch/x86/events/intel/uncore.c | 37 +++++++++++++++++++++++++++++++++-
arch/x86/events/intel/uncore.h | 9 ++++++++-
2 files changed, 44 insertions(+), 2 deletions(-)
diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 86467f85c383..2c53ad44b51f 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -905,6 +905,32 @@ static void uncore_types_exit(struct intel_uncore_type **types)
uncore_type_exit(*types);
}
+static struct attribute *empty_attrs[] = {
+ NULL,
+};
+
+static const struct attribute_group empty_group = {
+ .attrs = empty_attrs,
+};
+
+static ssize_t platform_mapping_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct intel_uncore_pmu *pmu = dev_get_drvdata(dev);
+
+ return snprintf(buf, PAGE_SIZE - 1, "%s\n", pmu->mapping);
+}
+static DEVICE_ATTR_RO(platform_mapping);
+
+static struct attribute *mapping_attrs[] = {
+ &dev_attr_platform_mapping.attr,
+ NULL,
+};
+
+static const struct attribute_group uncore_mapping_group = {
+ .attrs = mapping_attrs,
+};
+
static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
{
struct intel_uncore_pmu *pmus;
@@ -950,10 +976,19 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
attr_group->attrs[j] = &type->event_descs[j].attr.attr;
type->events_group = &attr_group->group;
- }
+ } else
+ type->events_group = &empty_group;
type->pmu_group = &uncore_pmu_attr_group;
+ /*
+ * Exposing mapping of Uncore units to corresponding Uncore PMUs
+ * through /sys/devices/uncore_<type>_<idx>/platform_mapping
+ */
+ if (type->get_topology && type->set_mapping)
+ if (!type->get_topology(type) && !type->set_mapping(type))
+ type->mapping_group = &uncore_mapping_group;
+
return 0;
err:
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index bbfdaa720b45..f52dd3f112a7 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -71,13 +71,19 @@ struct intel_uncore_type {
struct intel_uncore_ops *ops;
struct uncore_event_desc *event_descs;
struct freerunning_counters *freerunning;
- const struct attribute_group *attr_groups[4];
+ const struct attribute_group *attr_groups[5];
struct pmu *pmu; /* for custom pmu ops */
+ u64 *topology;
+ /* finding Uncore units */
+ int (*get_topology)(struct intel_uncore_type *type);
+ /* mapping Uncore units to PMON ranges */
+ int (*set_mapping)(struct intel_uncore_type *type);
};
#define pmu_group attr_groups[0]
#define format_group attr_groups[1]
#define events_group attr_groups[2]
+#define mapping_group attr_groups[3]
struct intel_uncore_ops {
void (*init_box)(struct intel_uncore_box *);
@@ -99,6 +105,7 @@ struct intel_uncore_pmu {
int pmu_idx;
int func_id;
bool registered;
+ char *mapping;
atomic_t activeboxes;
struct intel_uncore_type *type;
struct intel_uncore_box **boxes;
--
2.19.1
From: Roman Sudarikov <[email protected]>
The current version supports server lines starting from the Intel® Xeon®
Processor Scalable Family and introduces mapping for IIO Uncore units only.
Other units can be added on demand.
IIO stack to PMON mapping is exposed through:
/sys/devices/uncore_iio_<pmu_idx>/platform_mapping
in the following format: domain:bus
For example, on a 4-die Intel Xeon® server platform:
$ cat /sys/devices/uncore_iio_0/platform_mapping
0000:00,0000:40,0000:80,0000:c0
Which means:
IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
Co-developed-by: Alexander Antonov <[email protected]>
Reviewed-by: Kan Liang <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
arch/x86/events/intel/uncore.c | 2 +-
arch/x86/events/intel/uncore.h | 1 +
arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
3 files changed, 164 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 2c53ad44b51f..c0d86bc8e786 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
struct pci_extra_dev *uncore_extra_pci_dev;
-static int max_dies;
+int max_dies;
/* mask of cpus that collect uncore events */
static cpumask_t uncore_cpu_mask;
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index f52dd3f112a7..94eacca6f485 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -523,6 +523,7 @@ extern raw_spinlock_t pci2phy_map_lock;
extern struct list_head pci2phy_map_head;
extern struct pci_extra_dev *uncore_extra_pci_dev;
extern struct event_constraint uncore_constraint_empty;
+extern int max_dies;
/* uncore_snb.c */
int snb_uncore_pci_init(void);
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index b10a5ec79e48..2562fde2e5b8 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -273,6 +273,30 @@
#define SKX_CPUNODEID 0xc0
#define SKX_GIDNIDMAP 0xd4
+/*
+ * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
+ * that BIOS programmed. MSR has package scope.
+ * | Bit | Default | Description
+ * | [63] | 00h | VALID - When set, indicates the CPU bus
+ * numbers have been initialized. (RO)
+ * |[62:48]| --- | Reserved
+ * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
+ * CPUBUSNO(5). (RO)
+ * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
+ * CPUBUSNO(4). (RO)
+ * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
+ * CPUBUSNO(3). (RO)
+ * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
+ * CPUBUSNO(2). (RO)
+ * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
+ * CPUBUSNO(1). (RO)
+ * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
+ * CPUBUSNO(0). (RO)
+ */
+#define SKX_MSR_CPU_BUS_NUMBER 0x300
+#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
+#define BUS_NUM_STRIDE 8
+
/* SKX CHA */
#define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
#define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
@@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
.read_counter = uncore_msr_read_counter,
};
+static int skx_iio_get_topology(struct intel_uncore_type *type);
+static int skx_iio_set_mapping(struct intel_uncore_type *type);
+
static struct intel_uncore_type skx_uncore_iio = {
.name = "iio",
.num_counters = 4,
@@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
.constraints = skx_uncore_iio_constraints,
.ops = &skx_uncore_iio_ops,
.format_group = &skx_uncore_iio_format_group,
+ .get_topology = skx_iio_get_topology,
+ .set_mapping = skx_iio_set_mapping,
};
enum perf_uncore_iio_freerunning_type_id {
@@ -3780,6 +3809,139 @@ static int skx_count_chabox(void)
return hweight32(val);
}
+static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
+{
+ u64 msr_value;
+
+ if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
+ !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
+ return -1;
+
+ *topology = msr_value;
+
+ return 0;
+}
+
+static int skx_iio_get_topology(struct intel_uncore_type *type)
+{
+ int ret, cpu, die, current_die;
+ struct pci_bus *bus = NULL;
+
+ /*
+ * Verified single-segment environments only; disabled for multiple
+ * segment topologies for now.
+ */
+ while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
+ ;
+ if (bus) {
+ pr_info("I/O stack mapping is not supported for multi-seg\n");
+ return -1;
+ }
+
+ type->topology = kzalloc(max_dies * sizeof(u64), GFP_KERNEL);
+ if (!type->topology)
+ return -ENOMEM;
+
+ /*
+ * Using cpus_read_lock() to ensure cpu is not going down between
+ * looking at cpu_online_mask.
+ */
+ cpus_read_lock();
+ /* Invalid value to start loop.*/
+ current_die = -1;
+ for_each_online_cpu(cpu) {
+ die = topology_logical_die_id(cpu);
+ if (current_die == die)
+ continue;
+ ret = skx_msr_cpu_bus_read(cpu, &type->topology[die]);
+ if (ret) {
+ kfree(type->topology);
+ break;
+ }
+ current_die = die;
+ }
+ cpus_read_unlock();
+
+ return ret;
+}
+
+static inline u8 skx_iio_stack_bus(struct intel_uncore_pmu *pmu, int die)
+{
+ return pmu->type->topology[die] >> (pmu->pmu_idx * BUS_NUM_STRIDE);
+}
+
+static int skx_iio_set_box_mapping(struct intel_uncore_pmu *pmu)
+{
+ char *buf;
+ int die = 0;
+ /* Length of template "%04x:%02x," without null character. */
+ const int template_len = 8;
+
+ /*
+ * Root bus 0x00 is valid only for die 0 AND pmu_idx = 0.
+ * Set "0" platform mapping for PMUs which have zero stack bus and
+ * non-zero index.
+ */
+ if (!skx_iio_stack_bus(pmu, die) && pmu->pmu_idx) {
+ pmu->mapping = kzalloc(2, GFP_KERNEL);
+ if (!pmu->mapping)
+ return -ENOMEM;
+ sprintf(pmu->mapping, "0");
+ return 0;
+ }
+
+ pmu->mapping = kzalloc(max_dies * template_len + 1, GFP_KERNEL);
+ if (!pmu->mapping)
+ return -ENOMEM;
+
+ buf = pmu->mapping;
+ for (; die < max_dies; die++) {
+ buf += snprintf(buf, template_len + 1, "%04x:%02x,", 0,
+ skx_iio_stack_bus(pmu, die));
+ }
+
+ *(--buf) = '\0';
+
+ return 0;
+}
+
+static int skx_iio_set_mapping(struct intel_uncore_type *type)
+{
+ /*
+ * Each IIO stack (PCIe root port) has its own IIO PMON block, so each
+ * platform_mapping holds bus number(s) of PCIe root port(s), which can
+ * be monitored by that IIO PMON block.
+ *
+ * For example, on 4-die Xeon platform with up to 6 IIO stacks per die
+ * and, therefore, 6 IIO PMON blocks per die, the platform_mapping
+ * of IIO PMON block 0 holds "0000:00,0000:40,0000:80,0000:c0":
+ *
+ * $ cat /sys/devices/uncore_iio_0/platform_mapping
+ * 0000:00,0000:40,0000:80,0000:c0
+ *
+ * Which means:
+ * IIO PMON 0 on die 0 belongs to PCIe RP on bus 0x00, domain 0x0000
+ * IIO PMON 0 on die 1 belongs to PCIe RP on bus 0x40, domain 0x0000
+ * IIO PMON 0 on die 2 belongs to PCIe RP on bus 0x80, domain 0x0000
+ * IIO PMON 0 on die 3 belongs to PCIe RP on bus 0xc0, domain 0x0000
+ */
+
+ int ret;
+ struct intel_uncore_pmu *pmu = type->pmus;
+
+ for (; pmu - type->pmus < type->num_boxes; pmu++) {
+ ret = skx_iio_set_box_mapping(pmu);
+ if (ret) {
+ for (; pmu->pmu_idx > 0; --pmu)
+ kfree(pmu->mapping);
+ break;
+ }
+ }
+
+ kfree(type->topology);
+ return ret;
+}
+
void skx_uncore_cpu_init(void)
{
skx_uncore_chabox.num_boxes = skx_count_chabox();
--
2.19.1
On Mon, Jan 13, 2020 at 04:54:43PM +0300, [email protected] wrote:
> From: Roman Sudarikov <[email protected]>
>
> Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
> significant changes in the integrated I/O (IIO) architecture. The new
> solution introduces IIO stacks which are responsible for managing traffic
> between the PCIe domain and the Mesh domain. Each IIO stack has its own
> PMON block and can handle either DMI port, x16 PCIe root port, MCP-Link
> or various built-in accelerators. IIO PMON blocks allow concurrent
> monitoring of I/O flows up to 4 x4 bifurcation within each IIO stack.
>
> Software is supposed to program required perf counters within each IIO
> stack and gather performance data. The tricky thing here is that IIO PMON
> reports data per IIO stack but users have no idea what IIO stacks are -
> they only know devices which are connected to the platform.
>
> Understanding IIO stack concept to find which IIO stack that particular
> IO device is connected to, or to identify an IIO PMON block to program
> for monitoring specific IIO stack assumes a lot of implicit knowledge
> about given Intel server platform architecture.
>
> Usage example:
> /sys/devices/uncore_<type>_<pmu_idx>/platform_mapping
>
> Each Uncore unit type, by its nature, can be mapped to its own context,
> for example:
> 1. CHA - each uncore_cha_<pmu_idx> is assigned to manage a distinct slice
> of LLC capacity;
> 2. UPI - each uncore_upi_<pmu_idx> is assigned to manage one link of Intel
> UPI Subsystem;
> 3. IIO - each uncore_iio_<pmu_idx> is assigned to manage one stack of the
> IIO module;
> 4. IMC - each uncore_imc_<pmu_idx> is assigned to manage one channel of
> Memory Controller.
>
> Implementation details:
> Two callbacks added to struct intel_uncore_type to discover and map Uncore
> units to PMONs:
> int (*get_topology)(void)
> int (*set_mapping)(struct intel_uncore_pmu *pmu)
>
> Details of IIO Uncore unit mapping to IIO PMON:
> Each IIO stack is either DMI port, x16 PCIe root port, MCP-Link or various
> built-in accelerators. For Uncore IIO Unit type, the platform_mapping file
> holds bus numbers of devices, which can be monitored by that IIO PMON block
> on each die.
>
> Co-developed-by: Alexander Antonov <[email protected]>
> Reviewed-by: Kan Liang <[email protected]>
> Signed-off-by: Alexander Antonov <[email protected]>
> Signed-off-by: Roman Sudarikov <[email protected]>
> ---
> arch/x86/events/intel/uncore.c | 37 +++++++++++++++++++++++++++++++++-
> arch/x86/events/intel/uncore.h | 9 ++++++++-
> 2 files changed, 44 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
> index 86467f85c383..2c53ad44b51f 100644
> --- a/arch/x86/events/intel/uncore.c
> +++ b/arch/x86/events/intel/uncore.c
> @@ -905,6 +905,32 @@ static void uncore_types_exit(struct intel_uncore_type **types)
> uncore_type_exit(*types);
> }
>
> +static struct attribute *empty_attrs[] = {
> + NULL,
> +};
> +
> +static const struct attribute_group empty_group = {
> + .attrs = empty_attrs,
> +};
What is this for? Why is it needed? It doesn't do anything?
> +
> +static ssize_t platform_mapping_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct intel_uncore_pmu *pmu = dev_get_drvdata(dev);
> +
> + return snprintf(buf, PAGE_SIZE - 1, "%s\n", pmu->mapping);
> +}
> +static DEVICE_ATTR_RO(platform_mapping);
You are creating new sysfs attributes without any Documentation/ABI
updates, which is not ok. Please fix this up for your next round of
patches.
> +static struct attribute *mapping_attrs[] = {
> + &dev_attr_platform_mapping.attr,
> + NULL,
> +};
> +
> +static const struct attribute_group uncore_mapping_group = {
> + .attrs = mapping_attrs,
> +};
ATTRIBUTE_GROUPS()?
Messing around with single attribute_group lists is usually a sign that
something is really wrong as the driver core should handle arrays of
attribute group lists instead.
> +
> static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
> {
> struct intel_uncore_pmu *pmus;
> @@ -950,10 +976,19 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
> attr_group->attrs[j] = &type->event_descs[j].attr.attr;
>
> type->events_group = &attr_group->group;
> - }
> + } else
> + type->events_group = &empty_group;
Why???
Didn't we fix up the x86 attributes to work properly and not mess around
with trying to merge groups and the like? Please don't perpetuate that
more...
thanks,
greg k-h
On Mon, Jan 13, 2020 at 04:54:44PM +0300, [email protected] wrote:
> From: Roman Sudarikov <[email protected]>
>
> Current version supports a server line starting Intel® Xeon® Processor
> Scalable Family and introduces mapping for IIO Uncore units only.
> Other units can be added on demand.
>
> IIO stack to PMON mapping is exposed through:
> /sys/devices/uncore_iio_<pmu_idx>/platform_mapping
> in the following format: domain:bus
>
> For example, on a 4-die Intel Xeon® server platform:
> $ cat /sys/devices/uncore_iio_0/platform_mapping
> 0000:00,0000:40,0000:80,0000:c0
That's horrid to parse. Sysfs should be one value per file, why not
have individual files for all of these things?
> Which means:
> IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
> IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
> IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
> IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
Where did you get the die number from the above data?
>
> Co-developed-by: Alexander Antonov <[email protected]>
> Reviewed-by: Kan Liang <[email protected]>
> Signed-off-by: Alexander Antonov <[email protected]>
> Signed-off-by: Roman Sudarikov <[email protected]>
> ---
> arch/x86/events/intel/uncore.c | 2 +-
> arch/x86/events/intel/uncore.h | 1 +
> arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
> 3 files changed, 164 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
> index 2c53ad44b51f..c0d86bc8e786 100644
> --- a/arch/x86/events/intel/uncore.c
> +++ b/arch/x86/events/intel/uncore.c
> @@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
> DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
> struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
> struct pci_extra_dev *uncore_extra_pci_dev;
> -static int max_dies;
> +int max_dies;
Horrible global variable name :(
"unicore_max_dies" instead please.
> /* mask of cpus that collect uncore events */
> static cpumask_t uncore_cpu_mask;
> diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
> index f52dd3f112a7..94eacca6f485 100644
> --- a/arch/x86/events/intel/uncore.h
> +++ b/arch/x86/events/intel/uncore.h
> @@ -523,6 +523,7 @@ extern raw_spinlock_t pci2phy_map_lock;
> extern struct list_head pci2phy_map_head;
> extern struct pci_extra_dev *uncore_extra_pci_dev;
> extern struct event_constraint uncore_constraint_empty;
> +extern int max_dies;
>
> /* uncore_snb.c */
> int snb_uncore_pci_init(void);
> diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
> index b10a5ec79e48..2562fde2e5b8 100644
> --- a/arch/x86/events/intel/uncore_snbep.c
> +++ b/arch/x86/events/intel/uncore_snbep.c
> @@ -273,6 +273,30 @@
> #define SKX_CPUNODEID 0xc0
> #define SKX_GIDNIDMAP 0xd4
>
> +/*
> + * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
> + * that BIOS programmed. MSR has package scope.
> + * | Bit | Default | Description
> + * | [63] | 00h | VALID - When set, indicates the CPU bus
> + * numbers have been initialized. (RO)
> + * |[62:48]| --- | Reserved
> + * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
> + * CPUBUSNO(5). (RO)
> + * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
> + * CPUBUSNO(4). (RO)
> + * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
> + * CPUBUSNO(3). (RO)
> + * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
> + * CPUBUSNO(2). (RO)
> + * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
> + * CPUBUSNO(1). (RO)
> + * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
> + * CPUBUSNO(0). (RO)
> + */
> +#define SKX_MSR_CPU_BUS_NUMBER 0x300
> +#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
> +#define BUS_NUM_STRIDE 8
> +
> /* SKX CHA */
> #define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
> #define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
> @@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
> .read_counter = uncore_msr_read_counter,
> };
>
> +static int skx_iio_get_topology(struct intel_uncore_type *type);
> +static int skx_iio_set_mapping(struct intel_uncore_type *type);
> +
> static struct intel_uncore_type skx_uncore_iio = {
> .name = "iio",
> .num_counters = 4,
> @@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
> .constraints = skx_uncore_iio_constraints,
> .ops = &skx_uncore_iio_ops,
> .format_group = &skx_uncore_iio_format_group,
> + .get_topology = skx_iio_get_topology,
> + .set_mapping = skx_iio_set_mapping,
> };
>
> enum perf_uncore_iio_freerunning_type_id {
> @@ -3780,6 +3809,139 @@ static int skx_count_chabox(void)
> return hweight32(val);
> }
>
> +static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
> +{
> + u64 msr_value;
> +
> + if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
> + !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
> + return -1;
> +
> + *topology = msr_value;
> +
> + return 0;
> +}
> +
> +static int skx_iio_get_topology(struct intel_uncore_type *type)
> +{
> + int ret, cpu, die, current_die;
> + struct pci_bus *bus = NULL;
> +
> + /*
> + * Verified single-segment environments only; disabled for multiple
> + * segment topologies for now.
> + */
> + while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
> + ;
> + if (bus) {
> + pr_info("I/O stack mapping is not supported for multi-seg\n");
> + return -1;
Do not make up random negative error values, use a #defined one please.
And shouldn't this be dev_err()? What happens if a user gets this, who
do they complain to, their BIOS vendor?
thanks,
greg k-h
On 13.01.2020 17:34, Greg KH wrote:
> On Mon, Jan 13, 2020 at 04:54:43PM +0300, [email protected] wrote:
>> From: Roman Sudarikov <[email protected]>
>>
>> Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
>> significant changes in the integrated I/O (IIO) architecture. The new
>> solution introduces IIO stacks which are responsible for managing traffic
>> between the PCIe domain and the Mesh domain. Each IIO stack has its own
>> PMON block and can handle either DMI port, x16 PCIe root port, MCP-Link
>> or various built-in accelerators. IIO PMON blocks allow concurrent
>> monitoring of I/O flows up to 4 x4 bifurcation within each IIO stack.
>>
>> Software is supposed to program required perf counters within each IIO
>> stack and gather performance data. The tricky thing here is that IIO PMON
>> reports data per IIO stack but users have no idea what IIO stacks are -
>> they only know devices which are connected to the platform.
>>
>> Understanding IIO stack concept to find which IIO stack that particular
>> IO device is connected to, or to identify an IIO PMON block to program
>> for monitoring specific IIO stack assumes a lot of implicit knowledge
>> about given Intel server platform architecture.
>>
>> Usage example:
>> /sys/devices/uncore_<type>_<pmu_idx>/platform_mapping
>>
>> Each Uncore unit type, by its nature, can be mapped to its own context,
>> for example:
>> 1. CHA - each uncore_cha_<pmu_idx> is assigned to manage a distinct slice
>> of LLC capacity;
>> 2. UPI - each uncore_upi_<pmu_idx> is assigned to manage one link of Intel
>> UPI Subsystem;
>> 3. IIO - each uncore_iio_<pmu_idx> is assigned to manage one stack of the
>> IIO module;
>> 4. IMC - each uncore_imc_<pmu_idx> is assigned to manage one channel of
>> Memory Controller.
>>
>> Implementation details:
>> Two callbacks added to struct intel_uncore_type to discover and map Uncore
>> units to PMONs:
>> int (*get_topology)(void)
>> int (*set_mapping)(struct intel_uncore_pmu *pmu)
>>
>> Details of IIO Uncore unit mapping to IIO PMON:
>> Each IIO stack is either DMI port, x16 PCIe root port, MCP-Link or various
>> built-in accelerators. For Uncore IIO Unit type, the platform_mapping file
>> holds bus numbers of devices, which can be monitored by that IIO PMON block
>> on each die.
>>
>> Co-developed-by: Alexander Antonov <[email protected]>
>> Reviewed-by: Kan Liang <[email protected]>
>> Signed-off-by: Alexander Antonov <[email protected]>
>> Signed-off-by: Roman Sudarikov <[email protected]>
>> ---
>> arch/x86/events/intel/uncore.c | 37 +++++++++++++++++++++++++++++++++-
>> arch/x86/events/intel/uncore.h | 9 ++++++++-
>> 2 files changed, 44 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
>> index 86467f85c383..2c53ad44b51f 100644
>> --- a/arch/x86/events/intel/uncore.c
>> +++ b/arch/x86/events/intel/uncore.c
>> @@ -905,6 +905,32 @@ static void uncore_types_exit(struct intel_uncore_type **types)
>> uncore_type_exit(*types);
>> }
>>
>> +static struct attribute *empty_attrs[] = {
>> + NULL,
>> +};
>> +
>> +static const struct attribute_group empty_group = {
>> + .attrs = empty_attrs,
>> +};
> What is this for? Why is it needed? It doesn't do anything?
>
>> +
>> +static ssize_t platform_mapping_show(struct device *dev,
>> + struct device_attribute *attr, char *buf)
>> +{
>> + struct intel_uncore_pmu *pmu = dev_get_drvdata(dev);
>> +
>> + return snprintf(buf, PAGE_SIZE - 1, "%s\n", pmu->mapping);
>> +}
>> +static DEVICE_ATTR_RO(platform_mapping);
> You are creating new sysfs attributes without any Documentation/ABI
> updates, which is not ok. Please fix this up for your next round of
> patches.
>
>> +static struct attribute *mapping_attrs[] = {
>> + &dev_attr_platform_mapping.attr,
>> + NULL,
>> +};
>> +
>> +static const struct attribute_group uncore_mapping_group = {
>> + .attrs = mapping_attrs,
>> +};
> ATTRIBUTE_GROUPS()?
>
> Messing around with single attribute_group lists is usually a sign that
> something is really wrong as the driver core should handle arrays of
> attribute group lists instead.
>
>
>> +
>> static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
>> {
>> struct intel_uncore_pmu *pmus;
>> @@ -950,10 +976,19 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
>> attr_group->attrs[j] = &type->event_descs[j].attr.attr;
>>
>> type->events_group = &attr_group->group;
>> - }
>> + } else
>> + type->events_group = &empty_group;
> Why???
Hi Greg,
Technically, what I'm trying to do is to add an attribute which depends on
the uncore pmu type and BIOS support. The new attribute is added to the end
of the attribute groups array. It appears that the events attribute group
is optional for most of the uncore pmus for x86/intel, i.e. events_group =
NULL.
A NULL element in the middle of the attribute groups array "hides" all
other attribute groups that follow it.
To work around this, the embedded NULL elements should be either removed
from the attribute groups array [1] or replaced with an empty attribute
group; see the implementation above.
If both approaches are incorrect, then please advise what the correct
solution for this case would be.
[1]
https://lore.kernel.org/lkml/[email protected]/
Thanks,
Roman
> Didn't we fix up the x86 attributes to work properly and not mess around
> with trying to merge groups and the like? Please don't perpetuate that
> more...
>
> thanks,
>
> greg k-h
On Tue, Jan 14, 2020 at 04:24:34PM +0300, Sudarikov, Roman wrote:
SNIP
> > > {
> > > struct intel_uncore_pmu *pmus;
> > > @@ -950,10 +976,19 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
> > > attr_group->attrs[j] = &type->event_descs[j].attr.attr;
> > > type->events_group = &attr_group->group;
> > > - }
> > > + } else
> > > + type->events_group = &empty_group;
> > Why???
> Hi Greg,
>
> Technically, what I'm trying to do is add an attribute which depends on
> the uncore pmu type and BIOS support. The new attribute is added to the
> end of the attribute groups array. It turns out that the events attribute
> group is optional for most of the x86/intel uncore pmus, i.e.
> events_group = NULL.
>
> A NULL element in the middle of the attribute groups array "hides" all
> other attribute groups that follow it.
>
> To work around that, embedded NULL elements should be either removed from
> the attribute groups array [1] or replaced with an empty attribute group;
> see the implementation above.
>
> If both approaches are incorrect then please advise what the correct
> solution for that case would be.
hi,
I think Greg is referring to the recent cleanup where we used attribute
groups with is_visible callbacks; you can check the changes below:
b7c9b3927337 perf/x86/intel: Use ->is_visible callback for default group
6a9f4efe78af perf/x86: Use update attribute groups for default attributes
b657688069a2 perf/x86/intel: Use update attributes for skylake format
3ea40ac77261 perf/x86: Use update attribute groups for extra format
1f157286829c perf/x86: Use update attribute groups for caps
baa0c83363c7 perf/x86: Use the new pmu::update_attrs attribute group
jirka
>
> [1] https://lore.kernel.org/lkml/[email protected]/
>
> Thanks,
> Roman
> > Didn't we fix up the x86 attributes to work properly and not mess around
> > with trying to merge groups and the like? Please don't perpetuate that
> > more...
> >
> > thanks,
> >
> > greg k-h
>
>
On 13.01.2020 17:38, Greg KH wrote:
> On Mon, Jan 13, 2020 at 04:54:44PM +0300, [email protected] wrote:
>> From: Roman Sudarikov <[email protected]>
>>
>> The current version supports server lines starting with the Intel® Xeon®
>> Processor Scalable Family and introduces mapping for IIO Uncore units only.
>> Other units can be added on demand.
>>
>> IIO stack to PMON mapping is exposed through:
>> /sys/devices/uncore_iio_<pmu_idx>/platform_mapping
>> in the following format: domain:bus
>>
>> For example, on a 4-die Intel Xeon® server platform:
>> $ cat /sys/devices/uncore_iio_0/platform_mapping
>> 0000:00,0000:40,0000:80,0000:c0
> That's horrid to parse. Sysfs should be one value per file, why not
> have individual files for all of these things?
>
>> Which means:
>> IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
>> IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
>> IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
>> IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
> Where did you get the die number from the above data?
>
The mapping algorithm requires a domain:bus pair for each IIO stack on
each die. The current implementation provides a comma-separated list of
domain:bus pairs for each stack, where the offset in the list corresponds
to the die index.
Technically, this is similar to the approach already implemented for the
cpumask attribute.
>
>> Co-developed-by: Alexander Antonov <[email protected]>
>> Reviewed-by: Kan Liang <[email protected]>
>> Signed-off-by: Alexander Antonov <[email protected]>
>> Signed-off-by: Roman Sudarikov <[email protected]>
>> ---
>> arch/x86/events/intel/uncore.c | 2 +-
>> arch/x86/events/intel/uncore.h | 1 +
>> arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
>> 3 files changed, 164 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
>> index 2c53ad44b51f..c0d86bc8e786 100644
>> --- a/arch/x86/events/intel/uncore.c
>> +++ b/arch/x86/events/intel/uncore.c
>> @@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
>> DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
>> struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
>> struct pci_extra_dev *uncore_extra_pci_dev;
>> -static int max_dies;
>> +int max_dies;
> Horrible global variable name :(
>
> "uncore_max_dies" instead please.
Sorry, I'm not sure I get the point. The intent is to iterate over the
available dies on the platform, which are physically the same for core
and uncore.
>
>> /* mask of cpus that collect uncore events */
>> static cpumask_t uncore_cpu_mask;
>> diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
>> index f52dd3f112a7..94eacca6f485 100644
>> --- a/arch/x86/events/intel/uncore.h
>> +++ b/arch/x86/events/intel/uncore.h
>> @@ -523,6 +523,7 @@ extern raw_spinlock_t pci2phy_map_lock;
>> extern struct list_head pci2phy_map_head;
>> extern struct pci_extra_dev *uncore_extra_pci_dev;
>> extern struct event_constraint uncore_constraint_empty;
>> +extern int max_dies;
>>
>> /* uncore_snb.c */
>> int snb_uncore_pci_init(void);
>> diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
>> index b10a5ec79e48..2562fde2e5b8 100644
>> --- a/arch/x86/events/intel/uncore_snbep.c
>> +++ b/arch/x86/events/intel/uncore_snbep.c
>> @@ -273,6 +273,30 @@
>> #define SKX_CPUNODEID 0xc0
>> #define SKX_GIDNIDMAP 0xd4
>>
>> +/*
>> + * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
>> + * that BIOS programmed. MSR has package scope.
>> + * | Bit | Default | Description
>> + * | [63] | 00h | VALID - When set, indicates the CPU bus
>> + * numbers have been initialized. (RO)
>> + * |[62:48]| --- | Reserved
>> + * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
>> + * CPUBUSNO(5). (RO)
>> + * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
>> + * CPUBUSNO(4). (RO)
>> + * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
>> + * CPUBUSNO(3). (RO)
>> + * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
>> + * CPUBUSNO(2). (RO)
>> + * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
>> + * CPUBUSNO(1). (RO)
>> + * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
>> + * CPUBUSNO(0). (RO)
>> + */
>> +#define SKX_MSR_CPU_BUS_NUMBER 0x300
>> +#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
>> +#define BUS_NUM_STRIDE 8
>> +
>> /* SKX CHA */
>> #define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
>> #define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
>> @@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
>> .read_counter = uncore_msr_read_counter,
>> };
>>
>> +static int skx_iio_get_topology(struct intel_uncore_type *type);
>> +static int skx_iio_set_mapping(struct intel_uncore_type *type);
>> +
>> static struct intel_uncore_type skx_uncore_iio = {
>> .name = "iio",
>> .num_counters = 4,
>> @@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
>> .constraints = skx_uncore_iio_constraints,
>> .ops = &skx_uncore_iio_ops,
>> .format_group = &skx_uncore_iio_format_group,
>> + .get_topology = skx_iio_get_topology,
>> + .set_mapping = skx_iio_set_mapping,
>> };
>>
>> enum perf_uncore_iio_freerunning_type_id {
>> @@ -3780,6 +3809,139 @@ static int skx_count_chabox(void)
>> return hweight32(val);
>> }
>>
>> +static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
>> +{
>> + u64 msr_value;
>> +
>> + if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
>> + !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
>> + return -1;
>> +
>> + *topology = msr_value;
>> +
>> + return 0;
>> +}
>> +
>> +static int skx_iio_get_topology(struct intel_uncore_type *type)
>> +{
>> + int ret, cpu, die, current_die;
>> + struct pci_bus *bus = NULL;
>> +
>> + /*
>> + * Verified single-segment environments only; disabled for multiple
>> + * segment topologies for now.
>> + */
>> + while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
>> + ;
>> + if (bus) {
>> + pr_info("I/O stack mapping is not supported for multi-seg\n");
>> + return -1;
> Do not make up random negative error values, use a #defined one please.
will be addressed in the next version.
> And shouldn't this be dev_err()? What happens if a user gets this, who
> do they complain to, their BIOS vendor?
The mapping depends on BIOS support, so yes: if the BIOS doesn't provide
the required information then the mapping will not be available, but all
other functionality remains the same.
>
> thanks,
>
> greg k-h
On Tue, Jan 14, 2020 at 02:39:58PM +0100, Jiri Olsa wrote:
> On Tue, Jan 14, 2020 at 04:24:34PM +0300, Sudarikov, Roman wrote:
>
> SNIP
>
> > > > {
> > > > struct intel_uncore_pmu *pmus;
> > > > @@ -950,10 +976,19 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
> > > > attr_group->attrs[j] = &type->event_descs[j].attr.attr;
> > > > type->events_group = &attr_group->group;
> > > > - }
> > > > + } else
> > > > + type->events_group = &empty_group;
> > > Why???
> > Hi Greg,
> >
> > Technically, what I'm trying to do is add an attribute which depends on
> > the uncore pmu type and BIOS support. The new attribute is added to the
> > end of the attribute groups array. It turns out that the events attribute
> > group is optional for most of the x86/intel uncore pmus, i.e.
> > events_group = NULL.
> >
> > A NULL element in the middle of the attribute groups array "hides" all
> > other attribute groups that follow it.
> >
> > To work around that, embedded NULL elements should be either removed from
> > the attribute groups array [1] or replaced with an empty attribute group;
> > see the implementation above.
> >
> > If both approaches are incorrect then please advise what the correct
> > solution for that case would be.
>
> hi,
> I think Greg is referring to the recent cleanup where we used attribute
> groups with is_visible callbacks; you can check the changes below:
>
> b7c9b3927337 perf/x86/intel: Use ->is_visible callback for default group
> 6a9f4efe78af perf/x86: Use update attribute groups for default attributes
> b657688069a2 perf/x86/intel: Use update attributes for skylake format
> 3ea40ac77261 perf/x86: Use update attribute groups for extra format
> 1f157286829c perf/x86: Use update attribute groups for caps
> baa0c83363c7 perf/x86: Use the new pmu::update_attrs attribute group
Yes, that is the case.
Don't abuse things like trying to stick NULL elements in the middle of
an attribute group, that's horrid. Use the APIs that we have built
especially for this type of thing; that is what they are there for and
will keep things _MUCH_ simpler over time.
thanks,
greg k-h
On Tue, Jan 14, 2020 at 04:55:03PM +0300, Sudarikov, Roman wrote:
> On 13.01.2020 17:38, Greg KH wrote:
> > On Mon, Jan 13, 2020 at 04:54:44PM +0300, [email protected] wrote:
> > > From: Roman Sudarikov <[email protected]>
> > >
> > > The current version supports server lines starting with the Intel® Xeon®
> > > Processor Scalable Family and introduces mapping for IIO Uncore units only.
> > > Other units can be added on demand.
> > >
> > > IIO stack to PMON mapping is exposed through:
> > > /sys/devices/uncore_iio_<pmu_idx>/platform_mapping
> > > in the following format: domain:bus
> > >
> > > For example, on a 4-die Intel Xeon® server platform:
> > > $ cat /sys/devices/uncore_iio_0/platform_mapping
> > > 0000:00,0000:40,0000:80,0000:c0
> > That's horrid to parse. Sysfs should be one value per file, why not
> > have individual files for all of these things?
> >
> > > Which means:
> > > IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
> > > IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
> > > IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
> > > IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
> > Where did you get the die number from the above data?
> >
> The mapping algorithm requires a domain:bus pair for each IIO stack on each die.
mapping algorithm where? In the kernel? In userspace? Somewhere else?
> The current implementation provides a comma-separated list of domain:bus
> pairs for each stack, where the offset in the list corresponds to the die index.
What does this mean? Who uses this information? What tools have been
written for a sysfs attribute that isn't merged yet?
totally confused...
> Technically, this is similar to the approach already implemented for the
> cpumask attribute.
> >
> > > Co-developed-by: Alexander Antonov <[email protected]>
> > > Reviewed-by: Kan Liang <[email protected]>
> > > Signed-off-by: Alexander Antonov <[email protected]>
> > > Signed-off-by: Roman Sudarikov <[email protected]>
> > > ---
> > > arch/x86/events/intel/uncore.c | 2 +-
> > > arch/x86/events/intel/uncore.h | 1 +
> > > arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
> > > 3 files changed, 164 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
> > > index 2c53ad44b51f..c0d86bc8e786 100644
> > > --- a/arch/x86/events/intel/uncore.c
> > > +++ b/arch/x86/events/intel/uncore.c
> > > @@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
> > > DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
> > > struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
> > > struct pci_extra_dev *uncore_extra_pci_dev;
> > > -static int max_dies;
> > > +int max_dies;
> > Horrible global variable name :(
> >
> > "uncore_max_dies" instead please.
> Sorry, I'm not sure I get the point. The intent is to iterate over the
> available dies on the platform, which are physically the same for core
> and uncore.
My point is that "max_dies" is a horrible global symbol name. Please
name it more carefully as you are now in the "global namespace" of the
kernel. "uncore_max_dies" might be better, or if you can think of
something else, great, but you need to make it _MUCH_ more unique and
obvious as to what it is now that it is globally seen.
> > > /* mask of cpus that collect uncore events */
> > > static cpumask_t uncore_cpu_mask;
> > > diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
> > > index f52dd3f112a7..94eacca6f485 100644
> > > --- a/arch/x86/events/intel/uncore.h
> > > +++ b/arch/x86/events/intel/uncore.h
> > > @@ -523,6 +523,7 @@ extern raw_spinlock_t pci2phy_map_lock;
> > > extern struct list_head pci2phy_map_head;
> > > extern struct pci_extra_dev *uncore_extra_pci_dev;
> > > extern struct event_constraint uncore_constraint_empty;
> > > +extern int max_dies;
> > > /* uncore_snb.c */
> > > int snb_uncore_pci_init(void);
> > > diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
> > > index b10a5ec79e48..2562fde2e5b8 100644
> > > --- a/arch/x86/events/intel/uncore_snbep.c
> > > +++ b/arch/x86/events/intel/uncore_snbep.c
> > > @@ -273,6 +273,30 @@
> > > #define SKX_CPUNODEID 0xc0
> > > #define SKX_GIDNIDMAP 0xd4
> > > +/*
> > > + * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
> > > + * that BIOS programmed. MSR has package scope.
> > > + * | Bit | Default | Description
> > > + * | [63] | 00h | VALID - When set, indicates the CPU bus
> > > + * numbers have been initialized. (RO)
> > > + * |[62:48]| --- | Reserved
> > > + * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
> > > + * CPUBUSNO(5). (RO)
> > > + * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
> > > + * CPUBUSNO(4). (RO)
> > > + * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
> > > + * CPUBUSNO(3). (RO)
> > > + * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
> > > + * CPUBUSNO(2). (RO)
> > > + * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
> > > + * CPUBUSNO(1). (RO)
> > > + * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
> > > + * CPUBUSNO(0). (RO)
> > > + */
> > > +#define SKX_MSR_CPU_BUS_NUMBER 0x300
> > > +#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
> > > +#define BUS_NUM_STRIDE 8
> > > +
> > > /* SKX CHA */
> > > #define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
> > > #define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
> > > @@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
> > > .read_counter = uncore_msr_read_counter,
> > > };
> > > +static int skx_iio_get_topology(struct intel_uncore_type *type);
> > > +static int skx_iio_set_mapping(struct intel_uncore_type *type);
> > > +
> > > static struct intel_uncore_type skx_uncore_iio = {
> > > .name = "iio",
> > > .num_counters = 4,
> > > @@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
> > > .constraints = skx_uncore_iio_constraints,
> > > .ops = &skx_uncore_iio_ops,
> > > .format_group = &skx_uncore_iio_format_group,
> > > + .get_topology = skx_iio_get_topology,
> > > + .set_mapping = skx_iio_set_mapping,
> > > };
> > > enum perf_uncore_iio_freerunning_type_id {
> > > @@ -3780,6 +3809,139 @@ static int skx_count_chabox(void)
> > > return hweight32(val);
> > > }
> > > +static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
> > > +{
> > > + u64 msr_value;
> > > +
> > > + if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
> > > + !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
> > > + return -1;
> > > +
> > > + *topology = msr_value;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int skx_iio_get_topology(struct intel_uncore_type *type)
> > > +{
> > > + int ret, cpu, die, current_die;
> > > + struct pci_bus *bus = NULL;
> > > +
> > > + /*
> > > + * Verified single-segment environments only; disabled for multiple
> > > + * segment topologies for now.
> > > + */
> > > + while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
> > > + ;
> > > + if (bus) {
> > > + pr_info("I/O stack mapping is not supported for multi-seg\n");
> > > + return -1;
> > Do not make up random negative error values, use a #defined one please.
> will be addressed in the next version.
> > And shouldn't this be dev_err()? What happens if a user gets this, who
> > do they complain to, their BIOS vendor?
> The mapping depends on BIOS support, so yes: if the BIOS doesn't provide
> the required information then the mapping will not be available, but all
> other functionality remains the same.
So what can a user do with this information? If nothing, then don't
print it out and just move on and keep the system working. Only report
things that someone can do something about.
thanks,
greg k-h
On Tue, Jan 14, 2020 at 04:55:03PM +0300, Sudarikov, Roman wrote:
> On 13.01.2020 17:38, Greg KH wrote:
> > On Mon, Jan 13, 2020 at 04:54:44PM +0300, [email protected] wrote:
> > > From: Roman Sudarikov <[email protected]>
> > >
> > > The current version supports server lines starting with the Intel® Xeon®
> > > Processor Scalable Family and introduces mapping for IIO Uncore units only.
> > > Other units can be added on demand.
> > >
> > > IIO stack to PMON mapping is exposed through:
> > > /sys/devices/uncore_iio_<pmu_idx>/platform_mapping
> > > in the following format: domain:bus
> > >
> > > For example, on a 4-die Intel Xeon® server platform:
> > > $ cat /sys/devices/uncore_iio_0/platform_mapping
> > > 0000:00,0000:40,0000:80,0000:c0
> > That's horrid to parse. Sysfs should be one value per file, why not
> > have individual files for all of these things?
> >
> > > Which means:
> > > IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
> > > IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
> > > IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
> > > IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
> > Where did you get the die number from the above data?
> >
> The mapping algorithm requires a domain:bus pair for each IIO stack on each die.
> The current implementation provides a comma-separated list of domain:bus
> pairs for each stack, where the offset in the list corresponds to the die index.
>
> Technically, this is similar to the approach already implemented for the
> cpumask attribute.
> >
> > > Co-developed-by: Alexander Antonov <[email protected]>
> > > Reviewed-by: Kan Liang <[email protected]>
> > > Signed-off-by: Alexander Antonov <[email protected]>
> > > Signed-off-by: Roman Sudarikov <[email protected]>
Also, to be a bit of a pest, you all are NOT following the internal
Intel rules for submitting kernel patches. You need a lot more reviews
before this should have "escaped" to lkml.
greg k-h