2019-12-10 09:15:53

by Sudarikov, Roman

Subject: [PATCH v2 0/3] perf x86: Exposing IO stack to IO PMON mapping through sysfs

From: Roman Sudarikov <[email protected]>

The previous version can be found at:
v1: https://lkml.org/lkml/2019/11/26/447

Changes in this revision are:
v1 -> v2:
- Fixed process-related issues;
- This patch set includes kernel support for IIO stack to PMON mapping;
- Stephane raised concerns regarding the output format, which may require
code changes only in the user-space part of the feature. We will continue
the output format discussion in the context of the user-space update.

The Intel® Xeon® Scalable processor family (code name Skylake-SP) brings
significant changes to the integrated I/O (IIO) architecture. The new
solution introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port, an
MCP-Link or various built-in accelerators. IIO PMON blocks allow concurrent
monitoring of I/O flows, up to 4 x4 bifurcations within each IIO stack.

Software is expected to program the required perf counters within each IIO
stack and gather performance data. The tricky part is that an IIO PMON
reports data per IIO stack, but users have no notion of IIO stacks - they
only know the devices connected to the platform.

Understanding the IIO stack concept in order to find which IIO stack a
particular I/O device is connected to, or to identify the IIO PMON block to
program for monitoring a specific IIO stack, requires a lot of implicit
knowledge about the given Intel server platform architecture.

This patch set introduces:
1. An infrastructure for exposing an Uncore unit to Uncore PMON mapping
through a sysfs backend;
2. A new --iiostat mode in perf stat to provide I/O performance metrics
per I/O device.

Usage examples:

1. List all devices below IIO stacks
./perf stat --iiostat=show

Sample output w/o libpci:

S0-RootPort0-uncore_iio_0<00:00.0>
S1-RootPort0-uncore_iio_0<81:00.0>
S0-RootPort1-uncore_iio_1<18:00.0>
S1-RootPort1-uncore_iio_1<86:00.0>
S1-RootPort1-uncore_iio_1<88:00.0>
S0-RootPort2-uncore_iio_2<3d:00.0>
S1-RootPort2-uncore_iio_2<af:00.0>
S1-RootPort3-uncore_iio_3<da:00.0>

Sample output with libpci:

S0-RootPort0-uncore_iio_0<00:00.0 Sky Lake-E DMI3 Registers>
S1-RootPort0-uncore_iio_0<81:00.0 Ethernet Controller X710 for 10GbE SFP+>
S0-RootPort1-uncore_iio_1<18:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
S1-RootPort1-uncore_iio_1<86:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
S1-RootPort1-uncore_iio_1<88:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
S0-RootPort2-uncore_iio_2<3d:00.0 Ethernet Connection X722 for 10GBASE-T>
S1-RootPort2-uncore_iio_2<af:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
S1-RootPort3-uncore_iio_3<da:00.0 NVMe Datacenter SSD [Optane]>

2. Collect metrics for all I/O devices below IIO stacks

./perf stat --iiostat -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
357708+0 records in
357707+0 records out
375083606016 bytes (375 GB, 349 GiB) copied, 215.381 s, 1.7 GB/s

Performance counter stats for 'system wide':

device Inbound Read(MB) Inbound Write(MB) Outbound Read(MB) Outbound Write(MB)
00:00.0 0 0 0 0
81:00.0 0 0 0 0
18:00.0 0 0 0 0
86:00.0 0 0 0 0
88:00.0 0 0 0 0
3b:00.0 3 0 0 0
3c:03.0 3 0 0 0
3d:00.0 3 0 0 0
af:00.0 0 0 0 0
da:00.0 358559 44 0 22

215.383783574 seconds time elapsed


3. Collect metrics for a comma-separated list of I/O devices

./perf stat --iiostat=da:00.0 -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
381555+0 records in
381554+0 records out
400088457216 bytes (400 GB, 373 GiB) copied, 374.044 s, 1.1 GB/s

Performance counter stats for 'system wide':

device Inbound Read(MB) Inbound Write(MB) Outbound Read(MB) Outbound Write(MB)
da:00.0 382462 47 0 23

374.045775505 seconds time elapsed

Roman Sudarikov (3):
perf x86: Infrastructure for exposing an Uncore unit to PMON mapping
perf x86: Add compaction function for uncore attributes
perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server
platform

arch/x86/events/intel/uncore.c | 50 ++++++++-
arch/x86/events/intel/uncore.h | 10 +-
arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
3 files changed, 220 insertions(+), 2 deletions(-)


base-commit: 219d54332a09e8d8741c1e1982f5eae56099de85
--
2.19.1


2019-12-10 09:16:00

by Sudarikov, Roman

Subject: [PATCH v2 2/3] perf x86: Add compaction function for uncore attributes

From: Roman Sudarikov <[email protected]>

In the current design, there is an implicit assumption that the array of
pointers to uncore type attributes is NULL-terminated. However, not all
attributes are mandatory for each Uncore unit type; e.g. "events" is
required for IMC but doesn't exist for CHA. That approach correctly supports
only one optional attribute, which also must be the last in the row.
This patch removes the limitation by safely removing embedded NULL elements.

Co-developed-by: Alexander Antonov <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
arch/x86/events/intel/uncore.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 24e120289018..a05352c4fc01 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -923,6 +923,22 @@ static void uncore_types_exit(struct intel_uncore_type **types)
uncore_type_exit(*types);
}

+static void uncore_type_attrs_compaction(struct intel_uncore_type *type)
+{
+ int i, j;
+ int size = ARRAY_SIZE(type->attr_groups);
+
+ for (i = 0, j = 0; i < size; i++) {
+ if (!type->attr_groups[i])
+ continue;
+ if (i > j) {
+ type->attr_groups[j] = type->attr_groups[i];
+ type->attr_groups[i] = NULL;
+ }
+ j++;
+ }
+}
+
static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
{
struct intel_uncore_pmu *pmus;
@@ -980,6 +996,12 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
if (!type->get_topology(type) && !type->set_mapping(type))
type->mapping_group = &uncore_mapping_group;

+ /*
+ * For optional attributes, we can safely remove embedded NULL
+ * attr_groups elements.
+ */
+ uncore_type_attrs_compaction(type);
+
return 0;

err:
--
2.19.1

2019-12-10 09:16:30

by Sudarikov, Roman

Subject: [PATCH v2 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

From: Roman Sudarikov <[email protected]>

The current version supports the server line starting with the Intel® Xeon®
Processor Scalable Family and introduces mapping for IIO Uncore units only.
Other units can be added on demand.

IIO stack to PMON mapping is exposed through:
/sys/devices/uncore_iio_<pmu_idx>/platform_mapping
in the following format: domain:bus

For example, on a 4-die Intel Xeon® server platform:
$ cat /sys/devices/uncore_iio_0/platform_mapping
0000:00,0000:40,0000:80,0000:c0

Which means:
IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
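
For illustration only - this is neither part of the kernel patches nor of
the perf tool changes - a userspace consumer could resolve which IIO PMON to
program for a given device by matching the device's "domain:bus" prefix
against each platform_mapping file. The helper below is a hypothetical
sketch:

#include <stdio.h>
#include <string.h>
#include <glob.h>

/*
 * Return the uncore_iio_<idx> index whose platform_mapping contains the
 * given "domain:bus" prefix (e.g. "0000:3d"), or -1 if none matches.
 */
static int iio_pmu_idx_for_bus(const char *domain_bus)
{
	glob_t g;
	size_t i;
	int idx = -1;

	if (glob("/sys/devices/uncore_iio_*/platform_mapping", 0, NULL, &g))
		return -1;

	for (i = 0; i < g.gl_pathc && idx < 0; i++) {
		char line[256];
		FILE *f = fopen(g.gl_pathv[i], "r");

		if (!f)
			continue;
		if (fgets(line, sizeof(line), f) && strstr(line, domain_bus))
			sscanf(g.gl_pathv[i], "/sys/devices/uncore_iio_%d", &idx);
		fclose(f);
	}
	globfree(&g);
	return idx;
}

With the example mapping above, iio_pmu_idx_for_bus("0000:c0") would be
expected to return 0.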

Co-developed-by: Alexander Antonov <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
arch/x86/events/intel/uncore.c | 2 +-
arch/x86/events/intel/uncore.h | 1 +
arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
3 files changed, 164 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index a05352c4fc01..d462b77aefc9 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
struct pci_extra_dev *uncore_extra_pci_dev;
-static int max_dies;
+int max_dies;

/* mask of cpus that collect uncore events */
static cpumask_t uncore_cpu_mask;
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index 31008e5cea57..6bf8aa3fac42 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -523,6 +523,7 @@ extern raw_spinlock_t pci2phy_map_lock;
extern struct list_head pci2phy_map_head;
extern struct pci_extra_dev *uncore_extra_pci_dev;
extern struct event_constraint uncore_constraint_empty;
+extern int max_dies;

/* uncore_snb.c */
int snb_uncore_pci_init(void);
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index b10a5ec79e48..88c2b0f94ed6 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -273,6 +273,30 @@
#define SKX_CPUNODEID 0xc0
#define SKX_GIDNIDMAP 0xd4

+/*
+ * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
+ * that BIOS programmed. MSR has package scope.
+ * | Bit | Default | Description
+ * | [63] | 00h | VALID - When set, indicates the CPU bus
+ * numbers have been initialized. (RO)
+ * |[62:48]| --- | Reserved
+ * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
+ * CPUBUSNO(5). (RO)
+ * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
+ * CPUBUSNO(4). (RO)
+ * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
+ * CPUBUSNO(3). (RO)
+ * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
+ * CPUBUSNO(2). (RO)
+ * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
+ * CPUBUSNO(1). (RO)
+ * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
+ * CPUBUSNO(0). (RO)
+ */
+#define SKX_MSR_CPU_BUS_NUMBER 0x300
+#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
+#define BUS_NUM_STRIDE 8
+
/* SKX CHA */
#define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
#define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
@@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
.read_counter = uncore_msr_read_counter,
};

+static int skx_iio_get_topology(struct intel_uncore_type *type);
+static int skx_iio_set_mapping(struct intel_uncore_type *type);
+
static struct intel_uncore_type skx_uncore_iio = {
.name = "iio",
.num_counters = 4,
@@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
.constraints = skx_uncore_iio_constraints,
.ops = &skx_uncore_iio_ops,
.format_group = &skx_uncore_iio_format_group,
+ .get_topology = skx_iio_get_topology,
+ .set_mapping = skx_iio_set_mapping,
};

enum perf_uncore_iio_freerunning_type_id {
@@ -3780,6 +3809,139 @@ static int skx_count_chabox(void)
return hweight32(val);
}

+static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
+{
+ u64 msr_value;
+ int ret = rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value);
+
+ if (!ret && !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
+ ret = -1;
+ else
+ *topology = msr_value;
+
+ return ret;
+}
+
+static int skx_iio_get_topology(struct intel_uncore_type *type)
+{
+ int ret, cpu, die, current_die;
+ struct pci_bus *bus = NULL;
+
+ /*
+ * Verified single-segment environments only; disabled for multiple
+ * segment topologies for now.
+ */
+ while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
+ ;
+ if (bus) {
+ pr_info("Non-supported topology\n");
+ return -1;
+ }
+
+ type->topology = kzalloc(max_dies * sizeof(u64), GFP_KERNEL);
+ if (!type->topology)
+ return -ENOMEM;
+
+ /*
+ * Using cpus_read_lock() to ensure cpu is not going down between
+ * looking at cpu_online_mask.
+ */
+ cpus_read_lock();
+ /* Invalid value to start loop.*/
+ current_die = -1;
+ for_each_online_cpu(cpu) {
+ die = topology_logical_die_id(cpu);
+ if (current_die == die)
+ continue;
+ ret = skx_msr_cpu_bus_read(cpu, &type->topology[die]);
+ if (ret) {
+ kfree(type->topology);
+ break;
+ }
+ current_die = die;
+ }
+ cpus_read_unlock();
+
+ return ret;
+}
+
+static inline u8 skx_iio_stack_bus(struct intel_uncore_pmu *pmu, int die)
+{
+ return pmu->type->topology[die] >> (pmu->pmu_idx * BUS_NUM_STRIDE);
+}
+
+static int skx_iio_set_box_mapping(struct intel_uncore_pmu *pmu)
+{
+ char *buf;
+ int die = 0;
+ /* Length of template "%04x:%02x," without null character. */
+ const int template_len = 8;
+
+ /*
+ * Root bus 0x00 is valid only for die 0 AND pmu_idx = 0.
+ * Set "0" platform mapping for PMUs which have zero stack bus and
+ * non-zero index.
+ */
+ if (!skx_iio_stack_bus(pmu, die) && pmu->pmu_idx) {
+ pmu->mapping = kzalloc(2, GFP_KERNEL);
+ if (!pmu->mapping)
+ return -ENOMEM;
+ sprintf(pmu->mapping, "0");
+ return 0;
+ }
+
+ pmu->mapping = kzalloc(max_dies * template_len + 1, GFP_KERNEL);
+ if (!pmu->mapping)
+ return -ENOMEM;
+
+ buf = (char *)pmu->mapping;
+ for (; die < max_dies; die++) {
+ buf += snprintf(buf, template_len + 1, "%04x:%02x,", 0,
+ skx_iio_stack_bus(pmu, die));
+ }
+
+ *(--buf) = '\0';
+
+ return 0;
+}
+
+static int skx_iio_set_mapping(struct intel_uncore_type *type)
+{
+ /*
+ * Each IIO stack (PCIe root port) has its own IIO PMON block, so each
+ * platform_mapping holds bus number(s) of PCIe root port(s), which can
+ * be monitored by that IIO PMON block.
+ *
+ * For example, on 4-die Xeon platform with up to 6 IIO stacks per die
+ * and, therefore, 6 IIO PMON blocks per die, the platform_mapping
+ * of IIO PMON block 0 holds "0000:00,0000:40,0000:80,0000:c0":
+ *
+ * $ cat /sys/devices/uncore_iio_0/platform_mapping
+ * 0000:00,0000:40,0000:80,0000:c0
+ *
+ * Which means:
+ * IIO PMON 0 on die 0 belongs to PCIe RP on bus 0x00, domain 0x0000
+ * IIO PMON 0 on die 1 belongs to PCIe RP on bus 0x40, domain 0x0000
+ * IIO PMON 0 on die 2 belongs to PCIe RP on bus 0x80, domain 0x0000
+ * IIO PMON 0 on die 3 belongs to PCIe RP on bus 0xc0, domain 0x0000
+ */
+
+ int ret;
+ struct intel_uncore_pmu *pmu = type->pmus;
+
+ for (; pmu - type->pmus < type->num_boxes; pmu++) {
+ ret = skx_iio_set_box_mapping(pmu);
+ if (ret) {
+ for (; pmu->pmu_idx > 0; --pmu)
+ kfree(pmu->mapping);
+ break;
+ }
+ }
+
+ kfree(type->topology);
+ return ret;
+}
+
void skx_uncore_cpu_init(void)
{
skx_uncore_chabox.num_boxes = skx_count_chabox();
--
2.19.1

2019-12-10 09:17:17

by Sudarikov, Roman

Subject: [PATCH v2 1/3] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping

From: Roman Sudarikov <[email protected]>

The Intel® Xeon® Scalable processor family (code name Skylake-SP) brings
significant changes to the integrated I/O (IIO) architecture. The new
solution introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port, an
MCP-Link or various built-in accelerators. IIO PMON blocks allow concurrent
monitoring of I/O flows, up to 4 x4 bifurcations within each IIO stack.

Software is expected to program the required perf counters within each IIO
stack and gather performance data. The tricky part is that an IIO PMON
reports data per IIO stack, but users have no notion of IIO stacks - they
only know the devices connected to the platform.

Understanding the IIO stack concept in order to find which IIO stack a
particular I/O device is connected to, or to identify the IIO PMON block to
program for monitoring a specific IIO stack, requires a lot of implicit
knowledge about the given Intel server platform architecture.

Usage example:
/sys/devices/uncore_<type>_<pmu_idx>/platform_mapping

Each Uncore unit type, by its nature, can be mapped to its own context,
for example:
1. CHA - each uncore_cha_<pmu_idx> is assigned to manage a distinct slice
of LLC capacity;
2. UPI - each uncore_upi_<pmu_idx> is assigned to manage one link of Intel
UPI Subsystem;
3. IIO - each uncore_iio_<pmu_idx> is assigned to manage one stack of the
IIO module;
4. IMC - each uncore_imc_<pmu_idx> is assigned to manage one channel of
Memory Controller.

Implementation details:
Two callbacks are added to struct intel_uncore_type to discover and map
Uncore units to PMONs:
int (*get_topology)(struct intel_uncore_type *type)
int (*set_mapping)(struct intel_uncore_type *type)

Details of IIO Uncore unit mapping to IIO PMON:
Each IIO stack is either a DMI port, an x16 PCIe root port, an MCP-Link or
various built-in accelerators. For the Uncore IIO unit type, the
platform_mapping file holds the bus numbers of the devices which can be
monitored by that IIO PMON block on each die.
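
As a rough sketch (abbreviated from patch 3/3 of this series; the
non-mapping fields are omitted here), a platform-specific type opts in by
providing both callbacks:

static struct intel_uncore_type skx_uncore_iio = {
	.name		= "iio",
	/* ... counters, MSR offsets, ops and format_group as before ... */
	.get_topology	= skx_iio_get_topology,
	.set_mapping	= skx_iio_set_mapping,
};

If either callback is missing or fails, the generic code leaves the mapping
attribute group unset, so platform_mapping is simply not created for that
type.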

Co-developed-by: Alexander Antonov <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
arch/x86/events/intel/uncore.c | 26 ++++++++++++++++++++++++++
arch/x86/events/intel/uncore.h | 9 ++++++++-
2 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 86467f85c383..24e120289018 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -816,6 +816,15 @@ static ssize_t uncore_get_attr_cpumask(struct device *dev,

static DEVICE_ATTR(cpumask, S_IRUGO, uncore_get_attr_cpumask, NULL);

+static ssize_t platform_mapping_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct intel_uncore_pmu *pmu = dev_get_drvdata(dev);
+
+ return snprintf(buf, PAGE_SIZE - 1, "%s\n", (char *)pmu->mapping);
+}
+static DEVICE_ATTR_RO(platform_mapping);
+
static struct attribute *uncore_pmu_attrs[] = {
&dev_attr_cpumask.attr,
NULL,
@@ -825,6 +834,15 @@ static const struct attribute_group uncore_pmu_attr_group = {
.attrs = uncore_pmu_attrs,
};

+static struct attribute *mapping_attrs[] = {
+ &dev_attr_platform_mapping.attr,
+ NULL,
+};
+
+static const struct attribute_group uncore_mapping_group = {
+ .attrs = mapping_attrs,
+};
+
static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
{
int ret;
@@ -954,6 +972,14 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)

type->pmu_group = &uncore_pmu_attr_group;

+ /*
+ * Exposing mapping of Uncore units to corresponding Uncore PMUs
+ * through /sys/devices/uncore_<type>_<idx>/platform_mapping
+ */
+ if (type->get_topology && type->set_mapping)
+ if (!type->get_topology(type) && !type->set_mapping(type))
+ type->mapping_group = &uncore_mapping_group;
+
return 0;

err:
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index bbfdaa720b45..31008e5cea57 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -71,13 +71,19 @@ struct intel_uncore_type {
struct intel_uncore_ops *ops;
struct uncore_event_desc *event_descs;
struct freerunning_counters *freerunning;
- const struct attribute_group *attr_groups[4];
+ const struct attribute_group *attr_groups[5];
struct pmu *pmu; /* for custom pmu ops */
+ u64 *topology;
+ /* finding Uncore units */
+ int (*get_topology)(struct intel_uncore_type *type);
+ /* mapping Uncore units to PMON ranges */
+ int (*set_mapping)(struct intel_uncore_type *type);
};

#define pmu_group attr_groups[0]
#define format_group attr_groups[1]
#define events_group attr_groups[2]
+#define mapping_group attr_groups[3]

struct intel_uncore_ops {
void (*init_box)(struct intel_uncore_box *);
@@ -99,6 +105,7 @@ struct intel_uncore_pmu {
int pmu_idx;
int func_id;
bool registered;
+ void *mapping;
atomic_t activeboxes;
struct intel_uncore_type *type;
struct intel_uncore_box **boxes;
--
2.19.1

2019-12-10 10:39:43

by Peter Zijlstra

Subject: Re: [PATCH v2 2/3] perf x86: Add compaction function for uncore attributes

On Tue, Dec 10, 2019 at 12:14:50PM +0300, [email protected] wrote:
> From: Roman Sudarikov <[email protected]>
>
> In current design, there is an implicit assumption that array of pointers
> to uncore type attributes is NULL terminated. However, not all attributes
> are mandatory for each Uncore unit type, e.g. "events" is required for
> IMC but doesn't exist for CHA. That approach correctly supports only one
> optional attribute which also must be the last in the row.
> The patch removes limitation by safely removing embedded NULL elements.
>
> Co-developed-by: Alexander Antonov <[email protected]>
> Signed-off-by: Alexander Antonov <[email protected]>
> Signed-off-by: Roman Sudarikov <[email protected]>
> ---
> arch/x86/events/intel/uncore.c | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
> index 24e120289018..a05352c4fc01 100644
> --- a/arch/x86/events/intel/uncore.c
> +++ b/arch/x86/events/intel/uncore.c
> @@ -923,6 +923,22 @@ static void uncore_types_exit(struct intel_uncore_type **types)
> uncore_type_exit(*types);
> }
>
> +static void uncore_type_attrs_compaction(struct intel_uncore_type *type)
> +{
> + int i, j;
> + int size = ARRAY_SIZE(type->attr_groups);
> +
> + for (i = 0, j = 0; i < size; i++) {
> + if (!type->attr_groups[i])
> + continue;
> + if (i > j) {
> + type->attr_groups[j] = type->attr_groups[i];
> + type->attr_groups[i] = NULL;
> + }
> + j++;
> + }
> +}

GregKH had objections to us playing silly games like that and made us
use is_visible() for the regular PMU driver. Also see commit:

baa0c83363c7 ("perf/x86: Use the new pmu::update_attrs attribute group")
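
Roughly, instead of compacting the array, the mapping group from patch 1/3
could always stay in attr_groups[] and hide its attribute at runtime; an
untested sketch, just to illustrate the shape:

static umode_t uncore_mapping_is_visible(struct kobject *kobj,
					 struct attribute *attr, int i)
{
	struct intel_uncore_pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));

	/* Hide platform_mapping when no mapping was set up for this PMU. */
	return pmu->mapping ? attr->mode : 0;
}

static const struct attribute_group uncore_mapping_group = {
	.attrs		= mapping_attrs,
	.is_visible	= uncore_mapping_is_visible,
};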

2019-12-10 14:03:13

by Liang, Kan

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform



On 12/10/2019 4:14 AM, [email protected] wrote:
> From: Roman Sudarikov <[email protected]>
>
> Current version supports a server line starting Intel® Xeon® Processor
> Scalable Family and introduces mapping for IIO Uncore units only.
> Other units can be added on demand.
>
> IIO stack to PMON mapping is exposed through:
> /sys/devices/uncore_iio_<pmu_idx>/platform_mapping
> in the following format: domain:bus
>
> For example, on a 4-die Intel Xeon® server platform:
> $ cat /sys/devices/uncore_iio_0/platform_mapping
> 0000:00,0000:40,0000:80,0000:c0
>
> Which means:
> IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
> IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
> IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
> IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
>
> Co-developed-by: Alexander Antonov <[email protected]>
> Signed-off-by: Alexander Antonov <[email protected]>
> Signed-off-by: Roman Sudarikov <[email protected]>
> ---
> arch/x86/events/intel/uncore.c | 2 +-
> arch/x86/events/intel/uncore.h | 1 +
> arch/x86/events/intel/uncore_snbep.c | 162 +++++++++++++++++++++++++++
> 3 files changed, 164 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
> index a05352c4fc01..d462b77aefc9 100644
> --- a/arch/x86/events/intel/uncore.c
> +++ b/arch/x86/events/intel/uncore.c
> @@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
> DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
> struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
> struct pci_extra_dev *uncore_extra_pci_dev;
> -static int max_dies;
> +int max_dies;
>
> /* mask of cpus that collect uncore events */
> static cpumask_t uncore_cpu_mask;
> diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
> index 31008e5cea57..6bf8aa3fac42 100644
> --- a/arch/x86/events/intel/uncore.h
> +++ b/arch/x86/events/intel/uncore.h
> @@ -523,6 +523,7 @@ extern raw_spinlock_t pci2phy_map_lock;
> extern struct list_head pci2phy_map_head;
> extern struct pci_extra_dev *uncore_extra_pci_dev;
> extern struct event_constraint uncore_constraint_empty;
> +extern int max_dies;
>
> /* uncore_snb.c */
> int snb_uncore_pci_init(void);
> diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
> index b10a5ec79e48..88c2b0f94ed6 100644
> --- a/arch/x86/events/intel/uncore_snbep.c
> +++ b/arch/x86/events/intel/uncore_snbep.c
> @@ -273,6 +273,30 @@
> #define SKX_CPUNODEID 0xc0
> #define SKX_GIDNIDMAP 0xd4
>
> +/*
> + * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
> + * that BIOS programmed. MSR has package scope.
> + * | Bit | Default | Description
> + * | [63] | 00h | VALID - When set, indicates the CPU bus
> + * numbers have been initialized. (RO)
> + * |[62:48]| --- | Reserved
> + * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
> + * CPUBUSNO(5). (RO)
> + * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
> + * CPUBUSNO(4). (RO)
> + * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
> + * CPUBUSNO(3). (RO)
> + * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
> + * CPUBUSNO(2). (RO)
> + * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
> + * CPUBUSNO(1). (RO)
> + * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
> + * CPUBUSNO(0). (RO)
> + */
> +#define SKX_MSR_CPU_BUS_NUMBER 0x300
> +#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
> +#define BUS_NUM_STRIDE 8
> +
> /* SKX CHA */
> #define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
> #define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
> @@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
> .read_counter = uncore_msr_read_counter,
> };
>
> +static int skx_iio_get_topology(struct intel_uncore_type *type);
> +static int skx_iio_set_mapping(struct intel_uncore_type *type);
> +
> static struct intel_uncore_type skx_uncore_iio = {
> .name = "iio",
> .num_counters = 4,
> @@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
> .constraints = skx_uncore_iio_constraints,
> .ops = &skx_uncore_iio_ops,
> .format_group = &skx_uncore_iio_format_group,
> + .get_topology = skx_iio_get_topology,
> + .set_mapping = skx_iio_set_mapping,
> };
>
> enum perf_uncore_iio_freerunning_type_id {
> @@ -3780,6 +3809,139 @@ static int skx_count_chabox(void)
> return hweight32(val);
> }
>
> +static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
> +{
> + u64 msr_value;
> + int ret = rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value);
> +
> + if (!ret && !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
> + ret = -1;
> + else
> + *topology = msr_value;

If the rdmsrl fails, topology will still be set, which looks weird.
Can we do something like the below?

if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value))
return -1;

if (!(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
return -1;

*topology = msr_value;

return 0;

Thanks,
Kan
> +
> + return ret;
> +}
> +
> +static int skx_iio_get_topology(struct intel_uncore_type *type)
> +{
> + int ret, cpu, die, current_die;
> + struct pci_bus *bus = NULL;
> +
> + /*
> + * Verified single-segment environments only; disabled for multiple
> + * segment topologies for now.
> + */
> + while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
> + ;
> + if (bus) {
> + pr_info("Non-supported topology\n");
> + return -1;
> + }
> +
> + type->topology = kzalloc(max_dies * sizeof(u64), GFP_KERNEL);
> + if (!type->topology)
> + return -ENOMEM;
> +
> + /*
> + * Using cpus_read_lock() to ensure cpu is not going down between
> + * looking at cpu_online_mask.
> + */
> + cpus_read_lock();
> + /* Invalid value to start loop.*/
> + current_die = -1;
> + for_each_online_cpu(cpu) {
> + die = topology_logical_die_id(cpu);
> + if (current_die == die)
> + continue;
> + ret = skx_msr_cpu_bus_read(cpu, &type->topology[die]);
> + if (ret) {
> + kfree(type->topology);
> + break;
> + }
> + current_die = die;
> + }
> + cpus_read_unlock();
> +
> + return ret;
> +}
> +
> +static inline u8 skx_iio_stack_bus(struct intel_uncore_pmu *pmu, int die)
> +{
> + return pmu->type->topology[die] >> (pmu->pmu_idx * BUS_NUM_STRIDE);
> +}
> +
> +static int skx_iio_set_box_mapping(struct intel_uncore_pmu *pmu)
> +{
> + char *buf;
> + int die = 0;
> + /* Length of template "%04x:%02x," without null character. */
> + const int template_len = 8;
> +
> + /*
> + * Root bus 0x00 is valid only for die 0 AND pmu_idx = 0.
> + * Set "0" platform mapping for PMUs which have zero stack bus and
> + * non-zero index.
> + */
> + if (!skx_iio_stack_bus(pmu, die) && pmu->pmu_idx) {
> + pmu->mapping = kzalloc(2, GFP_KERNEL);
> + if (!pmu->mapping)
> + return -ENOMEM;
> + sprintf(pmu->mapping, "0");
> + return 0;
> + }
> +
> + pmu->mapping = kzalloc(max_dies * template_len + 1, GFP_KERNEL);
> + if (!pmu->mapping)
> + return -ENOMEM;
> +
> + buf = (char *)pmu->mapping;
> + for (; die < max_dies; die++) {
> + buf += snprintf(buf, template_len + 1, "%04x:%02x,", 0,
> + skx_iio_stack_bus(pmu, die));
> + }
> +
> + *(--buf) = '\0';
> +
> + return 0;
> +}
> +
> +static int skx_iio_set_mapping(struct intel_uncore_type *type)
> +{
> + /*
> + * Each IIO stack (PCIe root port) has its own IIO PMON block, so each
> + * platform_mapping holds bus number(s) of PCIe root port(s), which can
> + * be monitored by that IIO PMON block.
> + *
> + * For example, on 4-die Xeon platform with up to 6 IIO stacks per die
> + * and, therefore, 6 IIO PMON blocks per die, the platform_mapping
> + * of IIO PMON block 0 holds "0000:00,0000:40,0000:80,0000:c0":
> + *
> + * $ cat /sys/devices/uncore_iio_0/platform_mapping
> + * 0000:00,0000:40,0000:80,0000:c0
> + *
> + * Which means:
> + * IIO PMON 0 on die 0 belongs to PCIe RP on bus 0x00, domain 0x0000
> + * IIO PMON 0 on die 1 belongs to PCIe RP on bus 0x40, domain 0x0000
> + * IIO PMON 0 on die 2 belongs to PCIe RP on bus 0x80, domain 0x0000
> + * IIO PMON 0 on die 3 belongs to PCIe RP on bus 0xc0, domain 0x0000
> + */
> +
> + int ret;
> + struct intel_uncore_pmu *pmu = type->pmus;
> +
> + for (; pmu - type->pmus < type->num_boxes; pmu++) {
> + ret = skx_iio_set_box_mapping(pmu);
> + if (ret) {
> + for (; pmu->pmu_idx > 0; --pmu)
> + kfree(pmu->mapping);
> + break;
> + }
> + }
> +
> + kfree(type->topology);
> + return ret;
> +}
> +
> void skx_uncore_cpu_init(void)
> {
> skx_uncore_chabox.num_boxes = skx_count_chabox();
>

2019-12-10 18:33:18

by Sudarikov, Roman

Subject: Re: [PATCH v2 2/3] perf x86: Add compaction function for uncore attributes

On 10.12.2019 13:37, Peter Zijlstra wrote:
> On Tue, Dec 10, 2019 at 12:14:50PM +0300, [email protected] wrote:
>> From: Roman Sudarikov <[email protected]>
>>
>> In current design, there is an implicit assumption that array of pointers
>> to uncore type attributes is NULL terminated. However, not all attributes
>> are mandatory for each Uncore unit type, e.g. "events" is required for
>> IMC but doesn't exist for CHA. That approach correctly supports only one
>> optional attribute which also must be the last in the row.
>> The patch removes limitation by safely removing embedded NULL elements.
>>
>> Co-developed-by: Alexander Antonov <[email protected]>
>> Signed-off-by: Alexander Antonov <[email protected]>
>> Signed-off-by: Roman Sudarikov <[email protected]>
>> ---
>> arch/x86/events/intel/uncore.c | 22 ++++++++++++++++++++++
>> 1 file changed, 22 insertions(+)
>>
>> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
>> index 24e120289018..a05352c4fc01 100644
>> --- a/arch/x86/events/intel/uncore.c
>> +++ b/arch/x86/events/intel/uncore.c
>> @@ -923,6 +923,22 @@ static void uncore_types_exit(struct intel_uncore_type **types)
>> uncore_type_exit(*types);
>> }
>>
>> +static void uncore_type_attrs_compaction(struct intel_uncore_type *type)
>> +{
>> + int i, j;
>> + int size = ARRAY_SIZE(type->attr_groups);
>> +
>> + for (i = 0, j = 0; i < size; i++) {
>> + if (!type->attr_groups[i])
>> + continue;
>> + if (i > j) {
>> + type->attr_groups[j] = type->attr_groups[i];
>> + type->attr_groups[i] = NULL;
>> + }
>> + j++;
>> + }
>> +}
> GregKH had objections to us playing silly games like that and made us
> use is_visible() for the regular PMU driver. Also see commit:
>
> baa0c83363c7 ("perf/x86: Use the new pmu::update_attrs attribute group")

Hi Peter,

if I understand correctly, that commit is intended for graceful handling of
cases where we need to probe the hardware first before deciding whether to
show a particular event for a particular pmu.
What I'm doing has a slightly different context - I'm creating the mapping
per pmu type. That mapping is not hardware dependent but implementation
dependent, meaning that if the mapping is not implemented for some pmu type,
then the mapping attribute has no way to show up with the current
implementation, right? If the mapping is implemented, then it shows up for
all pmus of that type.

I can rework it following the approach implemented in the commit you
mentioned, but the benefit is not immediately obvious :-)

Thanks,
Roman

2019-12-11 14:22:42

by Sudarikov, Roman

Subject: Re: [PATCH v2 2/3] perf x86: Add compaction function for uncore attributes

On 10.12.2019 13:37, Peter Zijlstra wrote:
> On Tue, Dec 10, 2019 at 12:14:50PM +0300, [email protected] wrote:
>> From: Roman Sudarikov <[email protected]>
>>
>> In current design, there is an implicit assumption that array of pointers
>> to uncore type attributes is NULL terminated. However, not all attributes
>> are mandatory for each Uncore unit type, e.g. "events" is required for
>> IMC but doesn't exist for CHA. That approach correctly supports only one
>> optional attribute which also must be the last in the row.
>> The patch removes limitation by safely removing embedded NULL elements.
>>
>> Co-developed-by: Alexander Antonov <[email protected]>
>> Signed-off-by: Alexander Antonov <[email protected]>
>> Signed-off-by: Roman Sudarikov <[email protected]>
>> ---
>> arch/x86/events/intel/uncore.c | 22 ++++++++++++++++++++++
>> 1 file changed, 22 insertions(+)
>>
>> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
>> index 24e120289018..a05352c4fc01 100644
>> --- a/arch/x86/events/intel/uncore.c
>> +++ b/arch/x86/events/intel/uncore.c
>> @@ -923,6 +923,22 @@ static void uncore_types_exit(struct intel_uncore_type **types)
>> uncore_type_exit(*types);
>> }
>>
>> +static void uncore_type_attrs_compaction(struct intel_uncore_type *type)
>> +{
>> + int i, j;
>> + int size = ARRAY_SIZE(type->attr_groups);
>> +
>> + for (i = 0, j = 0; i < size; i++) {
>> + if (!type->attr_groups[i])
>> + continue;
>> + if (i > j) {
>> + type->attr_groups[j] = type->attr_groups[i];
>> + type->attr_groups[i] = NULL;
>> + }
>> + j++;
>> + }
>> +}
> GregKH had objections to us playing silly games like that and made us
> use is_visible() for the regular PMU driver. Also see commit:
>
> baa0c83363c7 ("perf/x86: Use the new pmu::update_attrs attribute group")


Removed the uncore_type_attrs_compaction() function and implemented Kan's
suggestion to replace the NULL events_group with an empty attribute group:


diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index e8532923bd45..110b3603f56f 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -923,6 +923,14 @@ static void uncore_types_exit(struct intel_uncore_type **types)
 		uncore_type_exit(*types);
 }
 
+static struct attribute *empty_attrs[] = {
+	NULL,
+};
+
+static const struct attribute_group empty_group = {
+	.attrs = empty_attrs,
+};
+
 static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
 {
 	struct intel_uncore_pmu *pmus;
@@ -968,7 +976,8 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
 			attr_group->attrs[j] = &type->event_descs[j].attr.attr;
 
 		type->events_group = &attr_group->group;
-	}
+	} else
+		type->events_group = &empty_group;
 
 	type->pmu_group = &uncore_pmu_attr_group;