2020-02-11 17:17:11

by Sudarikov, Roman

Subject: [PATCH v5 0/3] perf x86: Exposing IO stack to IO PMON mapping through sysfs

From: Roman Sudarikov <[email protected]>

The previous version can be found at:
v4: https://lkml.kernel.org/r/[email protected]/

Changes in this revision are:
v4 -> v5:
- Addressed comments from Greg Kroah-Hartman:
1. Using the attr_update flow for newly introduced optional attributes
2. No subfolder, optional attributes are created at the same level as 'cpumask'
3. No symlinks, optional attributes are created as files
4. Single file for each IIO PMON block to node mapping
5. Added Documentation/ABI/sysfs-devices-mapping

The previous version can be found at:
v3: https://lkml.kernel.org/r/[email protected]

Changes in this revision are:
v3 -> v4:
- Addressed comments from Greg Kroah-Hartman:
1. Reworked handling of the newly introduced attribute.
2. The required Documentation update is expected in a follow-up patchset


The previous version can be found at:
v2: https://lkml.kernel.org/r/[email protected]

Changes in this revision are:
v2 -> v3:
1. Addressed comments from Peter and Kan

The previous version can be found at:
v1: https://lkml.kernel.org/r/[email protected]

Changes in this revision are:
v1 -> v2:
1. Fixed process-related issues;
2. This patch set includes kernel support for IIO stack to PMON mapping;
3. Stephane raised concerns regarding the output format, which may require
code changes in the user-space part of the feature only. We will continue
the output format discussion in the context of the user-space update.

The Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
significant changes to the integrated I/O (IIO) architecture. The new
design introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port, an
MCP-Link or various built-in accelerators. IIO PMON blocks allow concurrent
monitoring of I/O flows up to 4 x4 bifurcation within each IIO stack.

Software is supposed to program the required perf counters within each IIO
stack and gather performance data. The tricky thing here is that IIO PMON
reports data per IIO stack, but users have no idea what IIO stacks are -
they only know the devices connected to the platform.

Finding which IIO stack a particular I/O device is connected to, or
identifying which IIO PMON block to program for monitoring a specific
IIO stack, requires a lot of implicit knowledge about the architecture
of a given Intel server platform.

This patch set introduces:
1. An infrastructure for exposing the Uncore unit to Uncore PMON mapping
through a sysfs backend;
2. A new --iiostat mode in perf stat to provide I/O performance metrics
per I/O device.

Usage examples:

1. List all devices below IIO stacks
./perf stat --iiostat=show

Sample output w/o libpci (format: S<socket>-RootPort<n>-<IIO PMU name><attached device's bus:dev.fn>):

S0-RootPort0-uncore_iio_0<00:00.0>
S1-RootPort0-uncore_iio_0<81:00.0>
S0-RootPort1-uncore_iio_1<18:00.0>
S1-RootPort1-uncore_iio_1<86:00.0>
S1-RootPort1-uncore_iio_1<88:00.0>
S0-RootPort2-uncore_iio_2<3d:00.0>
S1-RootPort2-uncore_iio_2<af:00.0>
S1-RootPort3-uncore_iio_3<da:00.0>

Sample output with libpci:

S0-RootPort0-uncore_iio_0<00:00.0 Sky Lake-E DMI3 Registers>
S1-RootPort0-uncore_iio_0<81:00.0 Ethernet Controller X710 for 10GbE SFP+>
S0-RootPort1-uncore_iio_1<18:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
S1-RootPort1-uncore_iio_1<86:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
S1-RootPort1-uncore_iio_1<88:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
S0-RootPort2-uncore_iio_2<3d:00.0 Ethernet Connection X722 for 10GBASE-T>
S1-RootPort2-uncore_iio_2<af:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
S1-RootPort3-uncore_iio_3<da:00.0 NVMe Datacenter SSD [Optane]>

2. Collect metrics for all I/O devices below IIO stacks

./perf stat --iiostat -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
357708+0 records in
357707+0 records out
375083606016 bytes (375 GB, 349 GiB) copied, 215.381 s, 1.7 GB/s

Performance counter stats for 'system wide':

device   Inbound Read(MB)  Inbound Write(MB)  Outbound Read(MB)  Outbound Write(MB)
00:00.0                 0                  0                  0                   0
81:00.0                 0                  0                  0                   0
18:00.0                 0                  0                  0                   0
86:00.0                 0                  0                  0                   0
88:00.0                 0                  0                  0                   0
3b:00.0                 3                  0                  0                   0
3c:03.0                 3                  0                  0                   0
3d:00.0                 3                  0                  0                   0
af:00.0                 0                  0                  0                   0
da:00.0            358559                 44                  0                  22

215.383783574 seconds time elapsed
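
(A sanity check on the numbers: dd moved 375083606016 bytes, which is
375083606016 / 2^20 ~= 357708 MB as the tool counts them; that lines up
with the ~358559 MB of Inbound Read reported for the NVMe device at
da:00.0, since a disk write appears as the device DMA-reading the data
from host memory.)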


3. Collect metrics for a comma-separated list of I/O devices

./perf stat --iiostat=da:00.0 -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
381555+0 records in
381554+0 records out
400088457216 bytes (400 GB, 373 GiB) copied, 374.044 s, 1.1 GB/s

Performance counter stats for 'system wide':

device   Inbound Read(MB)  Inbound Write(MB)  Outbound Read(MB)  Outbound Write(MB)
da:00.0            382462                 47                  0                  23

374.045775505 seconds time elapsed


Roman Sudarikov (3):
perf x86: Infrastructure for exposing an Uncore unit to PMON mapping
perf x86: topology max dies for whole system
perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server
platform

.../ABI/testing/sysfs-devices-mapping | 32 +++
arch/x86/events/intel/uncore.c | 18 +-
arch/x86/events/intel/uncore.h | 8 +
arch/x86/events/intel/uncore_snbep.c | 183 ++++++++++++++++++
4 files changed, 235 insertions(+), 6 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-devices-mapping


base-commit: bb6d3fb354c5ee8d6bde2d576eb7220ea09862b9
--
2.19.1


2020-02-11 17:17:55

by Sudarikov, Roman

Subject: [PATCH v5 2/3] perf x86: topology max dies for whole system

From: Roman Sudarikov <[email protected]>

Helper function to return the number of dies on the platform.

Co-developed-by: Alexander Antonov <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
arch/x86/events/intel/uncore.c | 13 +++++++------
arch/x86/events/intel/uncore.h | 3 +++
2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 98ab8539f126..e6297fe42c54 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -16,7 +16,7 @@ struct pci_driver *uncore_pci_driver;
DEFINE_RAW_SPINLOCK(pci2phy_map_lock);
struct list_head pci2phy_map_head = LIST_HEAD_INIT(pci2phy_map_head);
struct pci_extra_dev *uncore_extra_pci_dev;
-static int max_dies;
+int __uncore_max_dies;

/* mask of cpus that collect uncore events */
static cpumask_t uncore_cpu_mask;
@@ -108,7 +108,7 @@ struct intel_uncore_box *uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu
* The unsigned check also catches the '-1' return value for non
* existent mappings in the topology map.
*/
- return dieid < max_dies ? pmu->boxes[dieid] : NULL;
+ return dieid < uncore_max_dies() ? pmu->boxes[dieid] : NULL;
}

u64 uncore_msr_read_counter(struct intel_uncore_box *box, struct perf_event *event)
@@ -879,7 +879,7 @@ static void uncore_free_boxes(struct intel_uncore_pmu *pmu)
{
int die;

- for (die = 0; die < max_dies; die++)
+ for (die = 0; die < uncore_max_dies(); die++)
kfree(pmu->boxes[die]);
kfree(pmu->boxes);
}
@@ -917,7 +917,7 @@ static int __init uncore_type_init(struct intel_uncore_type *type, bool setid)
if (!pmus)
return -ENOMEM;

- size = max_dies * sizeof(struct intel_uncore_box *);
+ size = uncore_max_dies() * sizeof(struct intel_uncore_box *);

for (i = 0; i < type->num_boxes; i++) {
pmus[i].func_id = setid ? i : -1;
@@ -1117,7 +1117,7 @@ static int __init uncore_pci_init(void)
size_t size;
int ret;

- size = max_dies * sizeof(struct pci_extra_dev);
+ size = uncore_max_dies() * sizeof(struct pci_extra_dev);
uncore_extra_pci_dev = kzalloc(size, GFP_KERNEL);
if (!uncore_extra_pci_dev) {
ret = -ENOMEM;
@@ -1529,7 +1529,8 @@ static int __init intel_uncore_init(void)
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return -ENODEV;

- max_dies = topology_max_packages() * topology_max_die_per_package();
+ __uncore_max_dies =
+ topology_max_packages() * topology_max_die_per_package();

uncore_init = (struct intel_uncore_init_fun *)id->driver_data;
if (uncore_init->pci_init) {
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index 8821f35e32f0..12bfcb0a8223 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -173,6 +173,9 @@ int uncore_pcibus_to_physid(struct pci_bus *bus);
ssize_t uncore_event_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf);

+extern int __uncore_max_dies;
+#define uncore_max_dies() (__uncore_max_dies)
+
#define INTEL_UNCORE_EVENT_DESC(_name, _config) \
{ \
.attr = __ATTR(_name, 0444, uncore_event_show, NULL), \
--
2.19.1
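
To make the new helper concrete, a small sketch with illustrative
values (the figures are examples, not taken from the patch):

/*
 * Sketch: on a 2-package system with 1 die per package,
 *   __uncore_max_dies = topology_max_packages() *
 *                       topology_max_die_per_package() = 2 * 1 = 2,
 * so uncore_max_dies() returns 2 everywhere the old max_dies value
 * was read, e.g. when sizing the per-die pmu->boxes arrays above.
 */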

2020-02-11 17:17:57

by Sudarikov, Roman

Subject: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

From: Roman Sudarikov <[email protected]>

The current version supports the server line starting with the Intel® Xeon®
Processor Scalable Family and introduces mapping for IIO Uncore units only.
Other units can be added on demand.

IIO stack to PMON mapping is exposed through:
/sys/devices/uncore_iio_<pmu_idx>/nodeX
where nodeX is a file which holds the PCIe root bus.

Details are explained in Documentation/ABI/testing/sysfs-devices-mapping
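
As an illustration of consuming this interface, a minimal user-space
sketch; the helper name and error handling are hypothetical and not
part of this patch set:

#include <stdio.h>

/* Read /sys/devices/uncore_iio_<pmu_idx>/node<die> and parse the
 * "segment:root bus" pair (e.g. "0000:3a") that the file holds. */
static int read_iio_mapping(int pmu_idx, int die,
			    unsigned int *segment, unsigned int *bus)
{
	char path[64];
	FILE *f;
	int n;

	snprintf(path, sizeof(path),
		 "/sys/devices/uncore_iio_%d/node%d", pmu_idx, die);
	f = fopen(path, "r");
	if (!f)
		return -1;
	n = fscanf(f, "%x:%x", segment, bus);
	fclose(f);
	return n == 2 ? 0 : -1;
}

The parsed pair can then be matched against the root bus directories
under /sys/devices (e.g. pci0000:00) to enumerate the devices behind
that IIO stack.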

Co-developed-by: Alexander Antonov <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Roman Sudarikov <[email protected]>
---
.../ABI/testing/sysfs-devices-mapping | 32 +++
arch/x86/events/intel/uncore_snbep.c | 183 ++++++++++++++++++
2 files changed, 215 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-devices-mapping

diff --git a/Documentation/ABI/testing/sysfs-devices-mapping b/Documentation/ABI/testing/sysfs-devices-mapping
new file mode 100644
index 000000000000..c26e4e0b6ca8
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-mapping
@@ -0,0 +1,32 @@
+What: /sys/devices/uncore_iio_x/nodeX
+Date: February 2020
+Contact: Roman Sudarikov <[email protected]>
+Description:
+ Each IIO stack (PCIe root port) has its own IIO PMON block, so
+ each nodeX file (where X is the node number) holds the PCIe root
+ bus which can be monitored by that IIO PMON block.
+ For example, on a 4-node Xeon platform with up to 6 IIO stacks per
+ node and, therefore, 6 IIO PMON blocks per node, the mapping of
+ IIO PMON block 0 is exposed as follows:
+
+ $ ls /sys/devices/uncore_iio_0/node*
+ -r--r--r-- /sys/devices/uncore_iio_0/node0
+ -r--r--r-- /sys/devices/uncore_iio_0/node1
+ -r--r--r-- /sys/devices/uncore_iio_0/node2
+ -r--r--r-- /sys/devices/uncore_iio_0/node3
+
+ $ tail /sys/devices/uncore_iio_0/node*
+ ==> /sys/devices/uncore_iio_0/node0 <==
+ 0000:00
+ ==> /sys/devices/uncore_iio_0/node1 <==
+ 0000:40
+ ==> /sys/devices/uncore_iio_0/node2 <==
+ 0000:80
+ ==> /sys/devices/uncore_iio_0/node3 <==
+ 0000:c0
+
+ Which means:
+ IIO PMU 0 on node 0 belongs to PCI RP on bus 0x00, domain 0x0000
+ IIO PMU 0 on node 1 belongs to PCI RP on bus 0x40, domain 0x0000
+ IIO PMU 0 on node 2 belongs to PCI RP on bus 0x80, domain 0x0000
+ IIO PMU 0 on node 3 belongs to PCI RP on bus 0xc0, domain 0x0000
\ No newline at end of file
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index ad20220af303..96fca1ac22a4 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -273,6 +273,30 @@
#define SKX_CPUNODEID 0xc0
#define SKX_GIDNIDMAP 0xd4

+/*
+ * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
+ * that BIOS programmed. MSR has package scope.
+ * | Bit | Default | Description
+ * | [63] | 00h | VALID - When set, indicates the CPU bus
+ * numbers have been initialized. (RO)
+ * |[62:48]| --- | Reserved
+ * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
+ * CPUBUSNO(5). (RO)
+ * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
+ * CPUBUSNO(4). (RO)
+ * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
+ * CPUBUSNO(3). (RO)
+ * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
+ * CPUBUSNO(2). (RO)
+ * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
+ * CPUBUSNO(1). (RO)
+ * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
+ * CPUBUSNO(0). (RO)
+ */
+#define SKX_MSR_CPU_BUS_NUMBER 0x300
+#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
+#define BUS_NUM_STRIDE 8
+
/* SKX CHA */
#define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
#define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
@@ -3575,6 +3599,163 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
.read_counter = uncore_msr_read_counter,
};

+static inline u8 skx_iio_stack(struct intel_uncore_pmu *pmu, int die)
+{
+ return pmu->type->topology[die] >> (pmu->pmu_idx * BUS_NUM_STRIDE);
+}
+
+static umode_t
+skx_iio_mapping_visible(struct kobject *kobj, struct attribute *attr, int die)
+{
+ struct intel_uncore_pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));
+
+ //Root bus 0x00 is valid only for die 0 AND pmu_idx = 0.
+ return (!skx_iio_stack(pmu, die) && pmu->pmu_idx) ? 0 : attr->mode;
+}
+
+static ssize_t skx_iio_mapping_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pmu *pmu = dev_get_drvdata(dev);
+ struct intel_uncore_pmu *uncore_pmu =
+ container_of(pmu, struct intel_uncore_pmu, pmu);
+
+ struct dev_ext_attribute *ea =
+ container_of(attr, struct dev_ext_attribute, attr);
+ long die = (long)ea->var;
+
+ return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
+}
+
+static int skx_msr_cpu_bus_read(int cpu, u64 *topology)
+{
+ u64 msr_value;
+
+ if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
+ !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
+ return -ENXIO;
+
+ *topology = msr_value;
+
+ return 0;
+}
+
+static int die_to_cpu(int die)
+{
+ int res = 0, cpu, current_die;
+ /*
+ * Using cpus_read_lock() to ensure cpu is not going down between
+ * looking at cpu_online_mask.
+ */
+ cpus_read_lock();
+ for_each_online_cpu(cpu) {
+ current_die = topology_logical_die_id(cpu);
+ if (current_die == die) {
+ res = cpu;
+ break;
+ }
+ }
+ cpus_read_unlock();
+ return res;
+}
+
+static int skx_iio_get_topology(struct intel_uncore_type *type)
+{
+ int i, ret;
+ struct pci_bus *bus = NULL;
+
+ /*
+ * Verified single-segment environments only; disabled for multiple
+ * segment topologies for now except VMD domains.
+ * VMD domains start at 0x10000 to not clash with ACPI _SEG domains.
+ */
+ while ((bus = pci_find_next_bus(bus))
+ && (!pci_domain_nr(bus) || pci_domain_nr(bus) > 0xffff))
+ ;
+ if (bus)
+ return -EPERM;
+
+ type->topology = kcalloc(uncore_max_dies(), sizeof(u64), GFP_KERNEL);
+ if (!type->topology)
+ return -ENOMEM;
+
+ for (i = 0; i < uncore_max_dies(); i++) {
+ ret = skx_msr_cpu_bus_read(die_to_cpu(i), &type->topology[i]);
+ if (ret) {
+ kfree(type->topology);
+ type->topology = NULL;
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static struct attribute *uncore_empry_attr;
+
+static struct attribute_group skx_iio_mapping_group = {
+ .attrs = &uncore_empry_attr,
+ .is_visible = skx_iio_mapping_visible,
+};
+
+const static struct attribute_group *skx_iio_attr_update[] = {
+ &skx_iio_mapping_group,
+ NULL,
+};
+
+static int skx_iio_set_mapping(struct intel_uncore_type *type)
+{
+ char buf[64];
+ int ret = 0;
+ long die;
+ struct attribute **attrs;
+ struct dev_ext_attribute *eas;
+
+ ret = skx_iio_get_topology(type);
+ if (ret)
+ return ret;
+
+ // One more for NULL.
+ attrs = kzalloc((uncore_max_dies() + 1) * sizeof(*attrs), GFP_KERNEL);
+ if (!attrs) {
+ kfree(type->topology);
+ return -ENOMEM;
+ }
+
+ eas = kzalloc(sizeof(*eas) * uncore_max_dies(), GFP_KERNEL);
+ if (!eas) {
+ kfree(attrs);
+ kfree(type->topology);
+ return -ENOMEM;
+ }
+ for (die = 0; die < uncore_max_dies(); die++) {
+ sprintf(buf, "node%ld", die);
+ eas[die].attr.attr.name = kstrdup(buf, GFP_KERNEL);
+ if (!eas[die].attr.attr.name) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ eas[die].attr.attr.mode = 0444;
+ eas[die].attr.show = skx_iio_mapping_show;
+ eas[die].attr.store = NULL;
+ eas[die].var = (void *)die;
+ attrs[die] = &eas[die].attr.attr;
+ }
+
+ skx_iio_mapping_group.attrs = attrs;
+
+ return 0;
+
+err:
+ for (; die >= 0; die--)
+ kfree(eas[die].attr.attr.name);
+ kfree(eas);
+ kfree(attrs);
+ kfree(type->topology);
+
+ return ret;
+}
+
static struct intel_uncore_type skx_uncore_iio = {
.name = "iio",
.num_counters = 4,
@@ -3589,6 +3770,8 @@ static struct intel_uncore_type skx_uncore_iio = {
.constraints = skx_uncore_iio_constraints,
.ops = &skx_uncore_iio_ops,
.format_group = &skx_uncore_iio_format_group,
+ .attr_update = skx_iio_attr_update,
+ .set_mapping = skx_iio_set_mapping,
};

enum perf_uncore_iio_freerunning_type_id {
--
2.19.1
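
For reference, a minimal sketch of the bus-number decode that the
CPU_BUS_NUMBER MSR comment and skx_iio_stack() above imply; it assumes
only the documented layout (six 8-bit bus numbers, BUS_NUM_STRIDE == 8):

/* Byte N of a valid CPU_BUS_NUMBER value is the root bus number that
 * BIOS assigned to CPUBUSNO(N). */
static inline u8 cpubusno_root_bus(u64 msr_value, unsigned int stack)
{
	return (msr_value >> (stack * BUS_NUM_STRIDE)) & 0xff;
}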

2020-02-11 20:51:45

by Greg Kroah-Hartman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
> +static ssize_t skx_iio_mapping_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct pmu *pmu = dev_get_drvdata(dev);
> + struct intel_uncore_pmu *uncore_pmu =
> + container_of(pmu, struct intel_uncore_pmu, pmu);
> +
> + struct dev_ext_attribute *ea =
> + container_of(attr, struct dev_ext_attribute, attr);
> + long die = (long)ea->var;
> +
> + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));

If "0000:" is always the "prefix" of the output of this file, why have
it at all as you always know it is there?

What is ever going to cause that to change?

thanks,

greg k-h

2020-02-11 20:57:31

by Greg Kroah-Hartman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
> +static struct attribute *uncore_empry_attr;

What is this for?

> +static struct attribute_group skx_iio_mapping_group = {
> + .attrs = &uncore_empry_attr,

Again, what for?

You just overwrite this value so why have it at all?


> + .is_visible = skx_iio_mapping_visible,
> +};
> +
> +const static struct attribute_group *skx_iio_attr_update[] = {
> + &skx_iio_mapping_group,
> + NULL,
> +};
> +
> +static int skx_iio_set_mapping(struct intel_uncore_type *type)
> +{
> + char buf[64];
> + int ret = 0;
> + long die;
> + struct attribute **attrs;
> + struct dev_ext_attribute *eas;
> +
> + ret = skx_iio_get_topology(type);
> + if (ret)
> + return ret;
> +
> + // One more for NULL.
> + attrs = kzalloc((uncore_max_dies() + 1) * sizeof(*attrs), GFP_KERNEL);
> + if (!attrs) {
> + kfree(type->topology);
> + return -ENOMEM;
> + }
> +
> + eas = kzalloc(sizeof(*eas) * uncore_max_dies(), GFP_KERNEL);
> + if (!eas) {
> + kfree(attrs);
> + kfree(type->topology);
> + return -ENOMEM;
> + }
> + for (die = 0; die < uncore_max_dies(); die++) {
> + sprintf(buf, "node%ld", die);
> + eas[die].attr.attr.name = kstrdup(buf, GFP_KERNEL);
> + if (!eas[die].attr.attr.name) {
> + ret = -ENOMEM;
> + goto err;
> + }
> + eas[die].attr.attr.mode = 0444;
> + eas[die].attr.show = skx_iio_mapping_show;
> + eas[die].attr.store = NULL;
> + eas[die].var = (void *)die;
> + attrs[die] = &eas[die].attr.attr;

You HAVE to call sysfs_attr_init() on any dynamically created
attributes. I am guessing you never ran this code with lockdep enabled?

{sigh}

greg k-h
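
For context, a minimal sketch of the pattern sysfs expects for
dynamically allocated attributes; sysfs_attr_init() sets up the lockdep
class key for the attribute, and the follow-up diff later in the thread
adds exactly this call:

/* Init the dynamic attribute for lockdep before filling it in. */
sysfs_attr_init(&eas[die].attr.attr);
eas[die].attr.attr.name = kstrdup(buf, GFP_KERNEL);
eas[die].attr.attr.mode = 0444;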

2020-02-11 21:37:15

by Andi Kleen

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
> On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
> > +static ssize_t skx_iio_mapping_show(struct device *dev,
> > + struct device_attribute *attr, char *buf)
> > +{
> > + struct pmu *pmu = dev_get_drvdata(dev);
> > + struct intel_uncore_pmu *uncore_pmu =
> > + container_of(pmu, struct intel_uncore_pmu, pmu);
> > +
> > + struct dev_ext_attribute *ea =
> > + container_of(attr, struct dev_ext_attribute, attr);
> > + long die = (long)ea->var;
> > +
> > + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
>
> If "0000:" is always the "prefix" of the output of this file, why have
> it at all as you always know it is there?
>
> What is ever going to cause that to change?

I think it's just to make it a complete PCI address.

In theory it might be different on a complex multi node system with
custom interconnect and multiple PCI segments, but that would need code
changes too.

This version of the patchkit only supports standard SKX systems
at this point.

-Andi

2020-02-11 21:38:24

by Greg Kroah-Hartman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
> On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
> > On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
> > > +static ssize_t skx_iio_mapping_show(struct device *dev,
> > > + struct device_attribute *attr, char *buf)
> > > +{
> > > + struct pmu *pmu = dev_get_drvdata(dev);
> > > + struct intel_uncore_pmu *uncore_pmu =
> > > + container_of(pmu, struct intel_uncore_pmu, pmu);
> > > +
> > > + struct dev_ext_attribute *ea =
> > > + container_of(attr, struct dev_ext_attribute, attr);
> > > + long die = (long)ea->var;
> > > +
> > > + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
> >
> > If "0000:" is always the "prefix" of the output of this file, why have
> > it at all as you always know it is there?
> >
> > What is ever going to cause that to change?
>
> I think it's just to make it a complete PCI address.

Is that what this really is? If so, it's not a "complete" pci address,
is it? If it is, use the real pci address please.

> In theory it might be different on a complex multi node system with
> custom interconnect and multiple PCI segments, but that would need code
> changes too.
>
> This version of the patchkit only supports standard SKX systems
> at this point.

I have no idea what that means, please translate for non-Intel people :)

greg k-h

2020-02-11 21:40:40

by Liang, Kan

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform



On 2/11/2020 1:57 PM, Greg KH wrote:
> On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
>> On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
>>> On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
>>>> +static ssize_t skx_iio_mapping_show(struct device *dev,
>>>> + struct device_attribute *attr, char *buf)
>>>> +{
>>>> + struct pmu *pmu = dev_get_drvdata(dev);
>>>> + struct intel_uncore_pmu *uncore_pmu =
>>>> + container_of(pmu, struct intel_uncore_pmu, pmu);
>>>> +
>>>> + struct dev_ext_attribute *ea =
>>>> + container_of(attr, struct dev_ext_attribute, attr);
>>>> + long die = (long)ea->var;
>>>> +
>>>> + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
>>>
>>> If "0000:" is always the "prefix" of the output of this file, why have
>>> it at all as you always know it is there?


I think Roman only tested with the BIOS configured as single-segment, so
he hard-codes the segment# here.

I'm not sure if Roman can do some testing with a multiple-segment BIOS.
If not, I think we should at least print a warning here.

>>>
>>> What is ever going to cause that to change?
>>
>> I think it's just to make it a complete PCI address.
>
> Is that what this really is? If so, it's not a "complete" pci address,
> is it? If it is, use the real pci address please.

I think we don't need a complete PCI address here. The attr is to
disclose the mapping information between die and PCI BUS. Segment:BUS
should be good enough.

Thanks,
Kan

>
>> In theory it might be different on a complex multi node system with
>> custom interconnect and multiple PCI segments, but that would need code
>> changes too.
>>
>> This version of the patchkit only supports standard SKX systems
>> at this point.
>
> I have no idea what that means, please translate for non-Intel people :)
>
> greg k-h
>

2020-02-11 21:41:13

by Liang, Kan

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform



On 2/11/2020 11:15 AM, [email protected] wrote:
> From: Roman Sudarikov <[email protected]>
>
> The current version supports the server line starting with the Intel® Xeon®
> Processor Scalable Family and introduces mapping for IIO Uncore units only.
> Other units can be added on demand.
>
> IIO stack to PMON mapping is exposed through:
> /sys/devices/uncore_iio_<pmu_idx>/nodeX
> where nodeX is a file which holds the PCIe root bus.

I think the mapping information is from die to BUS#.
Can we change the name nodeX to dieX?


Thanks,
Kan
>
> Details are explained in Documentation/ABI/testing/sysfs-devices-mapping
>
> Co-developed-by: Alexander Antonov <[email protected]>
> Signed-off-by: Alexander Antonov <[email protected]>
> Signed-off-by: Roman Sudarikov <[email protected]>
> ---
> .../ABI/testing/sysfs-devices-mapping | 32 +++
> arch/x86/events/intel/uncore_snbep.c | 183 ++++++++++++++++++
> 2 files changed, 215 insertions(+)
> create mode 100644 Documentation/ABI/testing/sysfs-devices-mapping
>
> diff --git a/Documentation/ABI/testing/sysfs-devices-mapping b/Documentation/ABI/testing/sysfs-devices-mapping
> new file mode 100644
> index 000000000000..c26e4e0b6ca8
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-devices-mapping
> @@ -0,0 +1,32 @@
> +What: /sys/devices/uncore_iio_x/nodeX
> +Date: February 2020
> +Contact: Roman Sudarikov <[email protected]>
> +Description:
> + Each IIO stack (PCIe root port) has its own IIO PMON block, so
> + each nodeX file (where X is the node number) holds the PCIe root
> + bus which can be monitored by that IIO PMON block.
> + For example, on a 4-node Xeon platform with up to 6 IIO stacks per
> + node and, therefore, 6 IIO PMON blocks per node, the mapping of
> + IIO PMON block 0 is exposed as follows:
> +
> + $ ls /sys/devices/uncore_iio_0/node*
> + -r--r--r-- /sys/devices/uncore_iio_0/node0
> + -r--r--r-- /sys/devices/uncore_iio_0/node1
> + -r--r--r-- /sys/devices/uncore_iio_0/node2
> + -r--r--r-- /sys/devices/uncore_iio_0/node3
> +
> + $ tail /sys/devices/uncore_iio_0/node*
> + ==> /sys/devices/uncore_iio_0/node0 <==
> + 0000:00
> + ==> /sys/devices/uncore_iio_0/node1 <==
> + 0000:40
> + ==> /sys/devices/uncore_iio_0/node2 <==
> + 0000:80
> + ==> /sys/devices/uncore_iio_0/node3 <==
> + 0000:c0
> +
> + Which means:
> + IIO PMU 0 on node 0 belongs to PCI RP on bus 0x00, domain 0x0000
> + IIO PMU 0 on node 1 belongs to PCI RP on bus 0x40, domain 0x0000
> + IIO PMU 0 on node 2 belongs to PCI RP on bus 0x80, domain 0x0000
> + IIO PMU 0 on node 3 belongs to PCI RP on bus 0xc0, domain 0x0000
> \ No newline at end of file
> diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
> index ad20220af303..96fca1ac22a4 100644
> --- a/arch/x86/events/intel/uncore_snbep.c
> +++ b/arch/x86/events/intel/uncore_snbep.c
> @@ -273,6 +273,30 @@
> #define SKX_CPUNODEID 0xc0
> #define SKX_GIDNIDMAP 0xd4
>
> +/*
> + * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
> + * that BIOS programmed. MSR has package scope.
> + * | Bit | Default | Description
> + * | [63] | 00h | VALID - When set, indicates the CPU bus
> + * numbers have been initialized. (RO)
> + * |[62:48]| --- | Reserved
> + * |[47:40]| 00h | BUS_NUM_5 — Return the bus number BIOS assigned
> + * CPUBUSNO(5). (RO)
> + * |[39:32]| 00h | BUS_NUM_4 — Return the bus number BIOS assigned
> + * CPUBUSNO(4). (RO)
> + * |[31:24]| 00h | BUS_NUM_3 — Return the bus number BIOS assigned
> + * CPUBUSNO(3). (RO)
> + * |[23:16]| 00h | BUS_NUM_2 — Return the bus number BIOS assigned
> + * CPUBUSNO(2). (RO)
> + * |[15:8] | 00h | BUS_NUM_1 — Return the bus number BIOS assigned
> + * CPUBUSNO(1). (RO)
> + * | [7:0] | 00h | BUS_NUM_0 — Return the bus number BIOS assigned
> + * CPUBUSNO(0). (RO)
> + */
> +#define SKX_MSR_CPU_BUS_NUMBER 0x300
> +#define SKX_MSR_CPU_BUS_VALID_BIT (1ULL << 63)
> +#define BUS_NUM_STRIDE 8
> +
> /* SKX CHA */
> #define SKX_CHA_MSR_PMON_BOX_FILTER_TID (0x1ffULL << 0)
> #define SKX_CHA_MSR_PMON_BOX_FILTER_LINK (0xfULL << 9)
> @@ -3575,6 +3599,163 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
> .read_counter = uncore_msr_read_counter,
> };
>
> +static inline u8 skx_iio_stack(struct intel_uncore_pmu *pmu, int die)
> +{
> + return pmu->type->topology[die] >> (pmu->pmu_idx * BUS_NUM_STRIDE);
> +}
> +
> +static umode_t
> +skx_iio_mapping_visible(struct kobject *kobj, struct attribute *attr, int die)
> +{
> + struct intel_uncore_pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));
> +
> + //Root bus 0x00 is valid only for die 0 AND pmu_idx = 0.
> + return (!skx_iio_stack(pmu, die) && pmu->pmu_idx) ? 0 : attr->mode;
> +}
> +
> +static ssize_t skx_iio_mapping_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct pmu *pmu = dev_get_drvdata(dev);
> + struct intel_uncore_pmu *uncore_pmu =
> + container_of(pmu, struct intel_uncore_pmu, pmu);
> +
> + struct dev_ext_attribute *ea =
> + container_of(attr, struct dev_ext_attribute, attr);
> + long die = (long)ea->var;
> +
> + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
> +}
> +
> +static int skx_msr_cpu_bus_read(int cpu, u64 *topology)
> +{
> + u64 msr_value;
> +
> + if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
> + !(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
> + return -ENXIO;
> +
> + *topology = msr_value;
> +
> + return 0;
> +}
> +
> +static int die_to_cpu(int die)
> +{
> + int res = 0, cpu, current_die;
> + /*
> + * Using cpus_read_lock() to ensure cpu is not going down between
> + * looking at cpu_online_mask.
> + */
> + cpus_read_lock();
> + for_each_online_cpu(cpu) {
> + current_die = topology_logical_die_id(cpu);
> + if (current_die == die) {
> + res = cpu;
> + break;
> + }
> + }
> + cpus_read_unlock();
> + return res;
> +}
> +
> +static int skx_iio_get_topology(struct intel_uncore_type *type)
> +{
> + int i, ret;
> + struct pci_bus *bus = NULL;
> +
> + /*
> + * Verified single-segment environments only; disabled for multiple
> + * segment topologies for now except VMD domains.
> + * VMD domains start at 0x10000 to not clash with ACPI _SEG domains.
> + */
> + while ((bus = pci_find_next_bus(bus))
> + && (!pci_domain_nr(bus) || pci_domain_nr(bus) > 0xffff))
> + ;
> + if (bus)
> + return -EPERM;
> +
> + type->topology = kcalloc(uncore_max_dies(), sizeof(u64), GFP_KERNEL);
> + if (!type->topology)
> + return -ENOMEM;
> +
> + for (i = 0; i < uncore_max_dies(); i++) {
> + ret = skx_msr_cpu_bus_read(die_to_cpu(i), &type->topology[i]);
> + if (ret) {
> + kfree(type->topology);
> + type->topology = NULL;
> + return ret;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static struct attribute *uncore_empry_attr;
> +
> +static struct attribute_group skx_iio_mapping_group = {
> + .attrs = &uncore_empry_attr,
> + .is_visible = skx_iio_mapping_visible,
> +};
> +
> +const static struct attribute_group *skx_iio_attr_update[] = {
> + &skx_iio_mapping_group,
> + NULL,
> +};
> +
> +static int skx_iio_set_mapping(struct intel_uncore_type *type)
> +{
> + char buf[64];
> + int ret = 0;
> + long die;
> + struct attribute **attrs;
> + struct dev_ext_attribute *eas;
> +
> + ret = skx_iio_get_topology(type);
> + if (ret)
> + return ret;
> +
> + // One more for NULL.
> + attrs = kzalloc((uncore_max_dies() + 1) * sizeof(*attrs), GFP_KERNEL);
> + if (!attrs) {
> + kfree(type->topology);
> + return -ENOMEM;
> + }
> +
> + eas = kzalloc(sizeof(*eas) * uncore_max_dies(), GFP_KERNEL);
> + if (!eas) {
> + kfree(attrs);
> + kfree(type->topology);
> + return -ENOMEM;
> + }
> + for (die = 0; die < uncore_max_dies(); die++) {
> + sprintf(buf, "node%ld", die);
> + eas[die].attr.attr.name = kstrdup(buf, GFP_KERNEL);
> + if (!eas[die].attr.attr.name) {
> + ret = -ENOMEM;
> + goto err;
> + }
> + eas[die].attr.attr.mode = 0444;
> + eas[die].attr.show = skx_iio_mapping_show;
> + eas[die].attr.store = NULL;
> + eas[die].var = (void *)die;
> + attrs[die] = &eas[die].attr.attr;
> + }
> +
> + skx_iio_mapping_group.attrs = attrs;
> +
> + return 0;
> +
> +err:
> + for (; die >= 0; die--)
> + kfree(eas[die].attr.attr.name);
> + kfree(eas);
> + kfree(attrs);
> + kfree(type->topology);
> +
> + return ret;
> +}
> +
> static struct intel_uncore_type skx_uncore_iio = {
> .name = "iio",
> .num_counters = 4,
> @@ -3589,6 +3770,8 @@ static struct intel_uncore_type skx_uncore_iio = {
> .constraints = skx_uncore_iio_constraints,
> .ops = &skx_uncore_iio_ops,
> .format_group = &skx_uncore_iio_format_group,
> + .attr_update = skx_iio_attr_update,
> + .set_mapping = skx_iio_set_mapping,
> };
>
> enum perf_uncore_iio_freerunning_type_id {
>

2020-02-11 21:41:30

by Greg Kroah-Hartman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Tue, Feb 11, 2020 at 02:59:21PM -0500, Liang, Kan wrote:
>
>
> On 2/11/2020 1:57 PM, Greg KH wrote:
> > On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
> > > On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
> > > > On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
> > > > > +static ssize_t skx_iio_mapping_show(struct device *dev,
> > > > > + struct device_attribute *attr, char *buf)
> > > > > +{
> > > > > + struct pmu *pmu = dev_get_drvdata(dev);
> > > > > + struct intel_uncore_pmu *uncore_pmu =
> > > > > + container_of(pmu, struct intel_uncore_pmu, pmu);
> > > > > +
> > > > > + struct dev_ext_attribute *ea =
> > > > > + container_of(attr, struct dev_ext_attribute, attr);
> > > > > + long die = (long)ea->var;
> > > > > +
> > > > > + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
> > > >
> > > > If "0000:" is always the "prefix" of the output of this file, why have
> > > > it at all as you always know it is there?
>
>
> I think Roman only test with BIOS configured as single-segment. So he
> hard-code the segment# here.
>
> I'm not sure if Roman can do some test with multiple-segment BIOS. If not, I
> think we should at least print a warning here.
>
> > > >
> > > > What is ever going to cause that to change?
> > >
> > > I think it's just to make it a complete PCI address.
> >
> > Is that what this really is? If so, it's not a "complete" pci address,
> > is it? If it is, use the real pci address please.
>
> I think we don't need a complete PCI address here. The attr is to disclose
> the mapping information between die and PCI BUS. Segment:BUS should be good
> enough.

"good enough" for today, but note that you can not change the format of
the data in the file in the future, you would have to create a new file.
So I suggest at least try to future-proof it as much as possible if you
_know_ this could change.

Just use the full pci address, there's no reason not to, otherwise it's
just confusing.

thanks,

greg k-h

2020-02-12 17:33:05

by Sudarikov, Roman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On 11.02.2020 23:14, Greg KH wrote:
> On Tue, Feb 11, 2020 at 02:59:21PM -0500, Liang, Kan wrote:
>>
>> On 2/11/2020 1:57 PM, Greg KH wrote:
>>> On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
>>>> On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
>>>>> On Tue, Feb 11, 2020 at 07:15:49PM +0300, [email protected] wrote:
>>>>>> +static ssize_t skx_iio_mapping_show(struct device *dev,
>>>>>> + struct device_attribute *attr, char *buf)
>>>>>> +{
>>>>>> + struct pmu *pmu = dev_get_drvdata(dev);
>>>>>> + struct intel_uncore_pmu *uncore_pmu =
>>>>>> + container_of(pmu, struct intel_uncore_pmu, pmu);
>>>>>> +
>>>>>> + struct dev_ext_attribute *ea =
>>>>>> + container_of(attr, struct dev_ext_attribute, attr);
>>>>>> + long die = (long)ea->var;
>>>>>> +
>>>>>> + return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
>>>>> If "0000:" is always the "prefix" of the output of this file, why have
>>>>> it at all as you always know it is there?
>>
>> I think Roman only test with BIOS configured as single-segment. So he
>> hard-code the segment# here.
>>
>> I'm not sure if Roman can do some test with multiple-segment BIOS. If not, I
>> think we should at least print a warning here.
>>
>>>>> What is ever going to cause that to change?
>>>> I think it's just to make it a complete PCI address.
>>> Is that what this really is? If so, it's not a "complete" pci address,
>>> is it? If it is, use the real pci address please.
>> I think we don't need a complete PCI address here. The attr is to disclose
>> the mapping information between die and PCI BUS. Segment:BUS should be good
>> enough.
> "good enough" for today, but note that you can not change the format of
> the data in the file in the future, you would have to create a new file.
> So I suggest at least try to future-proof it as much as possible if you
> _know_ this could change.
>
> Just use the full pci address, there's no reason not to, otherwise it's
> just confusing.
>
> thanks,
>
> greg k-h
Hi Greg,

Yes, the "Segment:Bus" pair is enough to distinguish between different
Root ports.
Please see the changes below, which address all previous comments.

Thanks,
Roman

diff --git a/arch/x86/events/intel/uncore_snbep.c
b/arch/x86/events/intel/uncore_snbep.c
index 96fca1ac22a4..f805fbdbbe81 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -3616,15 +3616,22 @@ skx_iio_mapping_visible(struct kobject *kobj,
struct attribute *attr, int die)
 static ssize_t skx_iio_mapping_show(struct device *dev,
                 struct device_attribute *attr, char *buf)
 {
+    struct pci_bus *bus = NULL;
     struct pmu *pmu = dev_get_drvdata(dev);
     struct intel_uncore_pmu *uncore_pmu =
         container_of(pmu, struct intel_uncore_pmu, pmu);
+    int pmu_idx = uncore_pmu->pmu_idx;

     struct dev_ext_attribute *ea =
         container_of(attr, struct dev_ext_attribute, attr);
     long die = (long)ea->var;

-    return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
+    do {
+        bus = pci_find_next_bus(bus);
+    } while (pmu_idx--);
+
+    return sprintf(buf, "%04x:%02x\n", pci_domain_nr(bus),
+                       skx_iio_stack(uncore_pmu, die));
 }

 static int skx_msr_cpu_bus_read(int cpu, u64 *topology)
@@ -3691,10 +3698,7 @@ static int skx_iio_get_topology(struct
intel_uncore_type *type)
     return 0;
 }

-static struct attribute *uncore_empry_attr;
-
 static struct attribute_group skx_iio_mapping_group = {
-    .attrs        = &uncore_empry_attr,
     .is_visible    = skx_iio_mapping_visible,
 };

@@ -3729,7 +3733,8 @@ static int skx_iio_set_mapping(struct
intel_uncore_type *type)
         return -ENOMEM;
     }
     for (die = 0; die < uncore_max_dies(); die++) {
-        sprintf(buf, "node%ld", die);
+        sprintf(buf, "die%ld", die);
+        sysfs_attr_init(&eas[die].attr.attr);
         eas[die].attr.attr.name = kstrdup(buf, GFP_KERNEL);
         if (!eas[die].attr.attr.name) {
             ret = -ENOMEM;
@@ -3752,6 +3757,7 @@ static int skx_iio_set_mapping(struct
intel_uncore_type *type)
     kfree(eas);
     kfree(attrs);
     kfree(type->topology);
+    type->attr_update = NULL;

     return ret;
 }
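
(Relative to v5, this follow-up renames the attribute from nodeX to
dieX as Kan suggested, adds the missing sysfs_attr_init() call, drops
the placeholder uncore_empry_attr, and prints the real segment via
pci_domain_nr() instead of a hard-coded "0000:".)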

2020-02-12 20:59:25

by Liang, Kan

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform



On 2/12/2020 12:31 PM, Sudarikov, Roman wrote:
> On 11.02.2020 23:14, Greg KH wrote:
>> On Tue, Feb 11, 2020 at 02:59:21PM -0500, Liang, Kan wrote:
>>>
>>> On 2/11/2020 1:57 PM, Greg KH wrote:
>>>> On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
>>>>> On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
>>>>>> On Tue, Feb 11, 2020 at 07:15:49PM +0300,
>>>>>> [email protected] wrote:
>>>>>>> +static ssize_t skx_iio_mapping_show(struct device *dev,
>>>>>>> +                struct device_attribute *attr, char *buf)
>>>>>>> +{
>>>>>>> +    struct pmu *pmu = dev_get_drvdata(dev);
>>>>>>> +    struct intel_uncore_pmu *uncore_pmu =
>>>>>>> +        container_of(pmu, struct intel_uncore_pmu, pmu);
>>>>>>> +
>>>>>>> +    struct dev_ext_attribute *ea =
>>>>>>> +        container_of(attr, struct dev_ext_attribute, attr);
>>>>>>> +    long die = (long)ea->var;
>>>>>>> +
>>>>>>> +    return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu,
>>>>>>> die));
>>>>>> If "0000:" is always the "prefix" of the output of this file, why
>>>>>> have
>>>>>> it at all as you always know it is there?
>>>
>>> I think Roman only test with BIOS configured as single-segment. So he
>>> hard-code the segment# here.
>>>
>>> I'm not sure if Roman can do some test with multiple-segment BIOS. If
>>> not, I
>>> think we should at least print a warning here.
>>>
>>>>>> What is ever going to cause that to change?
>>>>> I think it's just to make it a complete PCI address.
>>>> Is that what this really is?  If so, it's not a "complete" pci address,
>>>> is it?  If it is, use the real pci address please.
>>> I think we don't need a complete PCI address here. The attr is to
>>> disclose
>>> the mapping information between die and PCI BUS. Segment:BUS should
>>> be good
>>> enough.
>> "good enough" for today, but note that you can not change the format of
>> the data in the file in the future, you would have to create a new file.
>> So I suggest at least try to future-proof it as much as possible if you
>> _know_ this could change.
>>
>> Just use the full pci address, there's no reason not to, otherwise it's
>> just confusing.
>>
>> thanks,
>>
>> greg k-h
> Hi Greg,
>
> Yes, the "Segment:Bus" pair is enough to distinguish between different
> Root ports.

I think Greg is suggesting that we use the full PCI address here.

Hi Greg,

There may be several devices connected to an IIO stack, so there is no
single full PCI address for an IIO stack.
I don't think we can list all of the devices in the same IIO stack with
full PCI addresses here either. It's not necessary, and would only
increase maintenance overhead.

I think we may have two options here.

Option 1: Roman's proposal. The format of the file is "Segment:Bus". For
the foreseeable future, the format doesn't need to be changed.
E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
$0000:7f

Option 2: Use a full PCI address, but use -1 to indicate an invalid address.
E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
$0000:7f:-1:-1

Should we use the format in option 2?

Thanks,
Kan


> Please see the changes below which are to address all previous comments.
>
> Thanks,
> Roman
>
> diff --git a/arch/x86/events/intel/uncore_snbep.c
> b/arch/x86/events/intel/uncore_snbep.c
> index 96fca1ac22a4..f805fbdbbe81 100644
> --- a/arch/x86/events/intel/uncore_snbep.c
> +++ b/arch/x86/events/intel/uncore_snbep.c
> @@ -3616,15 +3616,22 @@ skx_iio_mapping_visible(struct kobject *kobj,
> struct attribute *attr, int die)
>  static ssize_t skx_iio_mapping_show(struct device *dev,
>                  struct device_attribute *attr, char *buf)
>  {
> +    struct pci_bus *bus = NULL;
>      struct pmu *pmu = dev_get_drvdata(dev);
>      struct intel_uncore_pmu *uncore_pmu =
>          container_of(pmu, struct intel_uncore_pmu, pmu);
> +    int pmu_idx = uncore_pmu->pmu_idx;
>
>      struct dev_ext_attribute *ea =
>          container_of(attr, struct dev_ext_attribute, attr);
>      long die = (long)ea->var;
>
> -    return sprintf(buf, "0000:%02x\n", skx_iio_stack(uncore_pmu, die));
> +    do {
> +        bus = pci_find_next_bus(bus);
> +    } while (pmu_idx--);
> +
> +    return sprintf(buf, "%04x:%02x\n", pci_domain_nr(bus),
> +                       skx_iio_stack(uncore_pmu, die));
>  }
>
>  static int skx_msr_cpu_bus_read(int cpu, u64 *topology)
> @@ -3691,10 +3698,7 @@ static int skx_iio_get_topology(struct
> intel_uncore_type *type)
>      return 0;
>  }
>
> -static struct attribute *uncore_empry_attr;
> -
>  static struct attribute_group skx_iio_mapping_group = {
> -    .attrs        = &uncore_empry_attr,
>      .is_visible    = skx_iio_mapping_visible,
>  };
>
> @@ -3729,7 +3733,8 @@ static int skx_iio_set_mapping(struct
> intel_uncore_type *type)
>          return -ENOMEM;
>      }
>      for (die = 0; die < uncore_max_dies(); die++) {
> -        sprintf(buf, "node%ld", die);
> +        sprintf(buf, "die%ld", die);
> +        sysfs_attr_init(&eas[die].attr.attr);
>          eas[die].attr.attr.name = kstrdup(buf, GFP_KERNEL);
>          if (!eas[die].attr.attr.name) {
>              ret = -ENOMEM;
> @@ -3752,6 +3757,7 @@ static int skx_iio_set_mapping(struct
> intel_uncore_type *type)
>      kfree(eas);
>      kfree(attrs);
>      kfree(type->topology);
> +    type->attr_update = NULL;
>
>      return ret;
>  }

2020-02-12 22:56:27

by Greg Kroah-Hartman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Wed, Feb 12, 2020 at 03:58:50PM -0500, Liang, Kan wrote:
>
>
> On 2/12/2020 12:31 PM, Sudarikov, Roman wrote:
> > On 11.02.2020 23:14, Greg KH wrote:
> > > On Tue, Feb 11, 2020 at 02:59:21PM -0500, Liang, Kan wrote:
> > > >
> > > > On 2/11/2020 1:57 PM, Greg KH wrote:
> > > > > On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
> > > > > > On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
> > > > > > > On Tue, Feb 11, 2020 at 07:15:49PM +0300,
> > > > > > > [email protected] wrote:
> > > > > > > > +static ssize_t skx_iio_mapping_show(struct device *dev,
> > > > > > > > + struct device_attribute *attr, char *buf)
> > > > > > > > +{
> > > > > > > > + struct pmu *pmu = dev_get_drvdata(dev);
> > > > > > > > + struct intel_uncore_pmu *uncore_pmu =
> > > > > > > > + container_of(pmu, struct intel_uncore_pmu, pmu);
> > > > > > > > +
> > > > > > > > + struct dev_ext_attribute *ea =
> > > > > > > > + container_of(attr, struct dev_ext_attribute, attr);
> > > > > > > > + long die = (long)ea->var;
> > > > > > > > +
> > > > > > > > + return sprintf(buf, "0000:%02x\n",
> > > > > > > > skx_iio_stack(uncore_pmu, die));
> > > > > > > If "0000:" is always the "prefix" of the output of
> > > > > > > this file, why have
> > > > > > > it at all as you always know it is there?
> > > >
> > > > I think Roman only test with BIOS configured as single-segment. So he
> > > > hard-code the segment# here.
> > > >
> > > > I'm not sure if Roman can do some test with multiple-segment
> > > > BIOS. If not, I
> > > > think we should at least print a warning here.
> > > >
> > > > > > > What is ever going to cause that to change?
> > > > > > I think it's just to make it a complete PCI address.
> > > > > Is that what this really is? If so, it's not a "complete" pci address,
> > > > > is it? If it is, use the real pci address please.
> > > > I think we don't need a complete PCI address here. The attr is
> > > > to disclose
> > > > the mapping information between die and PCI BUS. Segment:BUS
> > > > should be good
> > > > enough.
> > > "good enough" for today, but note that you can not change the format of
> > > the data in the file in the future, you would have to create a new file.
> > > So I suggest at least try to future-proof it as much as possible if you
> > > _know_ this could change.
> > >
> > > Just use the full pci address, there's no reason not to, otherwise it's
> > > just confusing.
> > >
> > > thanks,
> > >
> > > greg k-h
> > Hi Greg,
> >
> > Yes, the "Segment:Bus" pair is enough to distinguish between different
> > Root ports.
>
> I think Greg suggests us to use full PCI address here.
>
> Hi Greg,
>
> There may be several devices are connected to IIO stack. There is no full
> PCI address for IIO stack.

Please define "full" for me. Please please don't tell me you are just
using a truncated version of the PCI address. I thought we got rid of
all of that nonsense 10 years ago...

> I don't think we can list all of devices in the same IIO stack with full PCI
> address here either. It's not necessary, and only increase maintenance
> overhead.

Then what exactly _IS_ this number, if not the PCI address?

Something made up to look almost like a PCI address, but not quite?
Something else?

> I think we may have two options here.
>
> Option 1: Roman's proposal.The format of the file is "Segment:Bus". For the
> future I can see, the format doesn't need to be changed.
> E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
> $0000:7f

Again, fake PCI address?

> Option 2: Use full PCI address, but use -1 to indicate invalid address.
> E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
> $0000:7f:-1:-1

"Invalid"? Why? Why not just refer to the 0:0 device, as that's the
bus "root" address (or whatever it's called, I can't remember PCI stuff
all that well...)

> Should we use the format in option 2?

What could userspace do with a -1 -1 address?

thanks,

greg k-h

2020-02-13 12:37:44

by Sudarikov, Roman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On 13.02.2020 1:56, Greg KH wrote:
> On Wed, Feb 12, 2020 at 03:58:50PM -0500, Liang, Kan wrote:
>>
>> On 2/12/2020 12:31 PM, Sudarikov, Roman wrote:
>>> On 11.02.2020 23:14, Greg KH wrote:
>>>> On Tue, Feb 11, 2020 at 02:59:21PM -0500, Liang, Kan wrote:
>>>>> On 2/11/2020 1:57 PM, Greg KH wrote:
>>>>>> On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
>>>>>>> On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
>>>>>>>> On Tue, Feb 11, 2020 at 07:15:49PM +0300,
>>>>>>>> [email protected] wrote:
>>>>>>>>> +static ssize_t skx_iio_mapping_show(struct device *dev,
>>>>>>>>> +                struct device_attribute *attr, char *buf)
>>>>>>>>> +{
>>>>>>>>> +    struct pmu *pmu = dev_get_drvdata(dev);
>>>>>>>>> +    struct intel_uncore_pmu *uncore_pmu =
>>>>>>>>> +        container_of(pmu, struct intel_uncore_pmu, pmu);
>>>>>>>>> +
>>>>>>>>> +    struct dev_ext_attribute *ea =
>>>>>>>>> +        container_of(attr, struct dev_ext_attribute, attr);
>>>>>>>>> +    long die = (long)ea->var;
>>>>>>>>> +
>>>>>>>>> +    return sprintf(buf, "0000:%02x\n",
>>>>>>>>> skx_iio_stack(uncore_pmu, die));
>>>>>>>> If "0000:" is always the "prefix" of the output of
>>>>>>>> this file, why have
>>>>>>>> it at all as you always know it is there?
>>>>> I think Roman only test with BIOS configured as single-segment. So he
>>>>> hard-code the segment# here.
>>>>>
>>>>> I'm not sure if Roman can do some test with multiple-segment
>>>>> BIOS. If not, I
>>>>> think we should at least print a warning here.
>>>>>
>>>>>>>> What is ever going to cause that to change?
>>>>>>> I think it's just to make it a complete PCI address.
>>>>>> Is that what this really is?  If so, it's not a "complete" pci address,
>>>>>> is it?  If it is, use the real pci address please.
>>>>> I think we don't need a complete PCI address here. The attr is
>>>>> to disclose
>>>>> the mapping information between die and PCI BUS. Segment:BUS
>>>>> should be good
>>>>> enough.
>>>> "good enough" for today, but note that you can not change the format of
>>>> the data in the file in the future, you would have to create a new file.
>>>> So I suggest at least try to future-proof it as much as possible if you
>>>> _know_ this could change.
>>>>
>>>> Just use the full pci address, there's no reason not to, otherwise it's
>>>> just confusing.
>>>>
>>>> thanks,
>>>>
>>>> greg k-h
>>> Hi Greg,
>>>
>>> Yes, the "Segment:Bus" pair is enough to distinguish between different
>>> Root ports.
>> I think Greg suggests us to use full PCI address here.
>>
>> Hi Greg,
>>
>> There may be several devices are connected to IIO stack. There is no full
>> PCI address for IIO stack.
> Please define "full" for me. Please please don't tell me you are just
> using a truncated version of the PCI address. I thought we got rid of
> all of that nonsense 10 years ago...
>
>> I don't think we can list all of devices in the same IIO stack with full PCI
>> address here either. It's not necessary, and only increase maintenance
>> overhead.
> Then what exactly _IS_ this number, if not the PCI address?
>
> Something made up to look almost like a PCI address, but not quite?
> Somethine else?
>
>> I think we may have two options here.
>>
>> Option 1: Roman's proposal.The format of the file is "Segment:Bus". For the
>> future I can see, the format doesn't need to be changed.
>> E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
>> $0000:7f
> Again, fake PCI address?
Hi Greg,

Actually, there are two reasons why we've chosen the "Segment:Root Bus"
notion to represent the Root port to IO PMU mapping:
1. it meets the feature requirement to uniquely identify each Root Port
on the system
2. that notion - "Segment:Root Bus" - is already used by the kernel to
represent Root ports in sysfs; see commit 37d6a0a6f4700 and the example
below, taken from an Intel Xeon V5 (Skylake Server):

# ls /sys/devices/ | grep pci
pci0000:00
pci0000:17
pci0000:3a
pci0000:5d
pci0000:80
pci0000:85
pci0000:ae
pci0000:d7

Having a full conventional PCI address in the form of
"Segment:Bus:Device.Function" is just not required to distinguish one
Root Bus from another.
But if there is any other agreement regarding how PCI Root ports are
supposed to show up in sysfs then please let us know.

Thanks,
Roman
>> Option 2: Use full PCI address, but use -1 to indicate invalid address.
>> E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
>> $0000:7f:-1:-1
> "Invalid"? Why? Why not just refer to the 0:0 device, as that's the
> bus "root" address (or whatever it's called, I can't remember PCI stuff
> all that well...)
>
>> Should we use the format in option 2?
> What could userspace do with a -1 -1 address?
>
> thanks,
>
> greg k-h


2020-02-13 13:10:07

by Greg Kroah-Hartman

Subject: Re: [PATCH v5 3/3] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform

On Thu, Feb 13, 2020 at 03:36:44PM +0300, Sudarikov, Roman wrote:
> On 13.02.2020 1:56, Greg KH wrote:
> > On Wed, Feb 12, 2020 at 03:58:50PM -0500, Liang, Kan wrote:
> > >
> > > On 2/12/2020 12:31 PM, Sudarikov, Roman wrote:
> > > > On 11.02.2020 23:14, Greg KH wrote:
> > > > > On Tue, Feb 11, 2020 at 02:59:21PM -0500, Liang, Kan wrote:
> > > > > > On 2/11/2020 1:57 PM, Greg KH wrote:
> > > > > > > On Tue, Feb 11, 2020 at 10:42:00AM -0800, Andi Kleen wrote:
> > > > > > > > On Tue, Feb 11, 2020 at 09:15:44AM -0800, Greg KH wrote:
> > > > > > > > > On Tue, Feb 11, 2020 at 07:15:49PM +0300,
> > > > > > > > > [email protected] wrote:
> > > > > > > > > > +static ssize_t skx_iio_mapping_show(struct device *dev,
> > > > > > > > > > + struct device_attribute *attr, char *buf)
> > > > > > > > > > +{
> > > > > > > > > > + struct pmu *pmu = dev_get_drvdata(dev);
> > > > > > > > > > + struct intel_uncore_pmu *uncore_pmu =
> > > > > > > > > > + container_of(pmu, struct intel_uncore_pmu, pmu);
> > > > > > > > > > +
> > > > > > > > > > + struct dev_ext_attribute *ea =
> > > > > > > > > > + container_of(attr, struct dev_ext_attribute, attr);
> > > > > > > > > > + long die = (long)ea->var;
> > > > > > > > > > +
> > > > > > > > > > + return sprintf(buf, "0000:%02x\n",
> > > > > > > > > > skx_iio_stack(uncore_pmu, die));
> > > > > > > > > If "0000:" is always the "prefix" of the output of
> > > > > > > > > this file, why have
> > > > > > > > > it at all as you always know it is there?
> > > > > > I think Roman only test with BIOS configured as single-segment. So he
> > > > > > hard-code the segment# here.
> > > > > >
> > > > > > I'm not sure if Roman can do some test with multiple-segment
> > > > > > BIOS. If not, I
> > > > > > think we should at least print a warning here.
> > > > > >
> > > > > > > > > What is ever going to cause that to change?
> > > > > > > > I think it's just to make it a complete PCI address.
> > > > > > > Is that what this really is? If so, it's not a "complete" pci address,
> > > > > > > is it? If it is, use the real pci address please.
> > > > > > I think we don't need a complete PCI address here. The attr is
> > > > > > to disclose
> > > > > > the mapping information between die and PCI BUS. Segment:BUS
> > > > > > should be good
> > > > > > enough.
> > > > > "good enough" for today, but note that you can not change the format of
> > > > > the data in the file in the future, you would have to create a new file.
> > > > > So I suggest at least try to future-proof it as much as possible if you
> > > > > _know_ this could change.
> > > > >
> > > > > Just use the full pci address, there's no reason not to, otherwise it's
> > > > > just confusing.
> > > > >
> > > > > thanks,
> > > > >
> > > > > greg k-h
> > > > Hi Greg,
> > > >
> > > > Yes, the "Segment:Bus" pair is enough to distinguish between different
> > > > Root ports.
> > > I think Greg suggests us to use full PCI address here.
> > >
> > > Hi Greg,
> > >
> > > There may be several devices are connected to IIO stack. There is no full
> > > PCI address for IIO stack.
> > Please define "full" for me. Please please don't tell me you are just
> > using a truncated version of the PCI address. I thought we got rid of
> > all of that nonsense 10 years ago...
> >
> > > I don't think we can list all of devices in the same IIO stack with full PCI
> > > address here either. It's not necessary, and only increase maintenance
> > > overhead.
> > Then what exactly _IS_ this number, if not the PCI address?
> >
> > Something made up to look almost like a PCI address, but not quite?
> > Somethine else?
> >
> > > I think we may have two options here.
> > >
> > > Option 1: Roman's proposal.The format of the file is "Segment:Bus". For the
> > > future I can see, the format doesn't need to be changed.
> > > E.g. $ls /sys/devices/uncore_<type>_<pmu_idx>/die0
> > > $0000:7f
> > Again, fake PCI address?
> Hi Greg,
>
> Actually, there are two reasons why we've chosen the "Segment:Root Bus"
> notion to
> represent Root port to IO PMU mapping:
> 1. it meets feature requirements to uniquely identify each Root Port on the
> system
> 2. that notion - "Segment:Root Bus" - is already used by the kernel to
> represent
> Root ports is sysfs; see commit 37d6a0a6f4700 and example below taken for
> Intel Xeon V5 (Skylake Server):
>
> # ls /sys/devices/ | grep pci
> pci0000:00
> pci0000:17
> pci0000:3a
> pci0000:5d
> pci0000:80
> pci0000:85
> pci0000:ae
> pci0000:d7
>
> Having full conventional PCI address in the form of
> "Segment:Bus:Device.Function"
> is just not required to distinguish one Root Bus from the other.
> But if there is any other agreement regarding the way how PCI Root ports are
> supposed to show up in the sysfs then please let us know.

Ok, if you all are set on this, and feel like it will work properly for
the next few years without needing to be changed, I can live with it.
Thanks for the explanations.

greg k-h