2021-11-06 08:48:15

by Ricardo Neri

Subject: [PATCH 0/7] Thermal: Introduce the Hardware Feedback Interface for thermal and performance management

The Intel Hardware Feedback Interface (HFI) [1] provides information about
the performance and energy efficiency of each CPU in the system. It uses a
table that is shared between hardware and the operating system. The
contents of the table may be updated as a result of changes in the
operating conditions of the system (e.g., reaching a thermal limit) or the
action of external factors (e.g., changes in the thermal design power).

The information that HFI provides is specified as numeric, unit-less
capabilities relative to other CPUs in the system. These capabilities have
a range of [0-255] where higher numbers represent higher capabilities.
Energy efficiency and performance are reported in separate capabilities.
If either the performance or energy efficiency capability of a CPU is 0,
the hardware recommends that the operating system not schedule any tasks on
that CPU for performance or energy efficiency reasons, respectively.

The kernel or user space may use the information from the HFI to modify
task placement and/or adjust power limits. This patchset focuses on user
space: the thermal notification framework is extended to relay updates of
CPU capacity. Thus, a userspace daemon can affinitize workloads to certain
CPUs and/or offline CPUs whose capabilities are zero.
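
As a rough illustration (not part of this series), a daemon that receives
such an update could offline a CPU whose capabilities dropped to zero by
writing to its sysfs "online" file. The CPU number below is a placeholder:

/*
 * Illustrative sketch only: offline a CPU whose HFI capabilities were
 * reported as zero, via the standard sysfs interface.
 */
#include <stdio.h>

static int cpu_set_online(int cpu, int online)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/online", cpu);
	f = fopen(path, "w");
	if (!f)
		return -1;

	fprintf(f, "%d", online ? 1 : 0);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Placeholder: offline CPU 3 as if its capabilities were 0. */
	return cpu_set_online(3, 0);
}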

The frequency of HFI updates is specific to each processor model. On some
systems, there is just a single HFI update at boot. On other systems, there
may be updates every tens of milliseconds. In order not to overwhelm
userspace, the updates relayed to it are limited to one every CONFIG_HZ
jiffies (i.e., one per second).

Thanks and BR,
Ricardo

[1]. https://www.intel.com/sdm

Ricardo Neri (5):
x86/Documentation: Describe the Intel Hardware Feedback Interface
x86: Add definitions for the Intel Hardware Feedback Interface
thermal: intel: hfi: Minimally initialize the Hardware Feedback
Interface
thermal: intel: hfi: Handle CPU hotplug events
thermal: intel: hfi: Enable notification interrupt

Srinivas Pandruvada (2):
thermal: netlink: Add a new event to notify CPU capabilities change
thermal: intel: hfi: Notify user space for HFI events

Documentation/x86/index.rst | 1 +
Documentation/x86/intel-hfi.rst | 68 ++++
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 6 +
drivers/thermal/intel/Kconfig | 13 +
drivers/thermal/intel/Makefile | 1 +
drivers/thermal/intel/intel_hfi.c | 508 ++++++++++++++++++++++++++++
drivers/thermal/intel/intel_hfi.h | 40 +++
drivers/thermal/intel/therm_throt.c | 21 ++
drivers/thermal/thermal_netlink.c | 52 +++
drivers/thermal/thermal_netlink.h | 13 +
include/uapi/linux/thermal.h | 6 +-
12 files changed, 729 insertions(+), 1 deletion(-)
create mode 100644 Documentation/x86/intel-hfi.rst
create mode 100644 drivers/thermal/intel/intel_hfi.c
create mode 100644 drivers/thermal/intel/intel_hfi.h

--
2.17.1


2021-11-06 08:53:17

by Ricardo Neri

Subject: [PATCH 4/7] thermal: intel: hfi: Handle CPU hotplug events

All CPUs in a package are represented in an HFI table. There exists an
HFI table per package. Thus, CPUs in a package need to coordinate to
initialize and access the table. Do such coordination during CPU hotplug.
Use the first CPU to come online in a package to initialize the HFI table
and the data structure representing it. Other CPUs in the same package need
only to register or unregister themselves in that data structure.

The HFI depends on both package-level thermal management and the thermal
vector of the local APIC. Thus, ensure that both are properly configured
before calling intel_hfi_online(). The CPU hotplug callbacks of the thermal
throttle events code already meet these conditions. Enable the HFI from
thermal_throttle_online().

Cc: Andi Kleen <[email protected]>
Cc: Aubrey Li <[email protected]>
Cc: Srinivas Pandruvada <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: "Ravi V. Shankar" <[email protected]>
Reviewed-by: Len Brown <[email protected]>
Signed-off-by: Ricardo Neri <[email protected]>
---
drivers/thermal/intel/intel_hfi.c | 211 ++++++++++++++++++++++++++++
drivers/thermal/intel/intel_hfi.h | 4 +
drivers/thermal/intel/therm_throt.c | 8 ++
3 files changed, 223 insertions(+)

diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
index edfe343507b3..6a3adfd57d72 100644
--- a/drivers/thermal/intel/intel_hfi.c
+++ b/drivers/thermal/intel/intel_hfi.c
@@ -21,6 +21,7 @@

#define pr_fmt(fmt) "intel-hfi: " fmt

+#include <linux/io.h>
#include <linux/slab.h>

#include "intel_hfi.h"
@@ -52,16 +53,26 @@ struct hfi_hdr {

/**
* struct hfi_instance - Representation of an HFI instance (i.e., a table)
+ * @table_base: Base of the local copy of the HFI table
* @ts_counter: Time stamp of the last update of the table
* @hdr: Base address of the table header
* @data: Base address of the table data
+ * @die_id: Logical die ID of this HFI table instance
+ * @cpus: CPUs represented in this HFI table instance
+ * @hw_table: Pointer to the HFI table of this instance
+ * @initialized: True if this HFI instance has been initialized
*
* A set of parameters to parse and navigate a specific HFI table.
*/
struct hfi_instance {
+ void *table_base;
u64 *ts_counter;
void *hdr;
void *data;
+ u16 die_id;
+ struct cpumask *cpus;
+ void *hw_table;
+ bool initialized;
};

/**
@@ -83,10 +94,210 @@ struct hfi_features {
bool parsed;
};

+/**
+ * struct hfi_cpu_info - Per-CPU attributes to consume HFI data
+ * @index: Row of this CPU in its HFI table
+ * @hfi_instance: Attributes of the HFI table to which this CPU belongs
+ *
+ * Parameters to link a logical processor to an HFI table and a row within it.
+ */
+struct hfi_cpu_info {
+ s16 index;
+ struct hfi_instance *hfi_instance;
+};
+
+static DEFINE_PER_CPU(struct hfi_cpu_info, hfi_cpu_info) = { .index = -1 };
+
static int max_hfi_instances;
static struct hfi_instance *hfi_instances;

static struct hfi_features hfi_features;
+static DEFINE_MUTEX(hfi_lock);
+
+static void init_hfi_cpu_index(unsigned int cpu)
+{
+ s16 hfi_idx;
+ u32 edx;
+
+ /* Do not re-read @cpu's index if it has already been initialized. */
+ if (per_cpu(hfi_cpu_info, cpu).index > -1)
+ return;
+
+ edx = cpuid_edx(CPUID_HFI_LEAF);
+ hfi_idx = (edx & CPUID_HFI_CPU_INDEX_MASK) >> CPUID_HFI_CPU_INDEX_SHIFT;
+
+ per_cpu(hfi_cpu_info, cpu).index = hfi_idx;
+}
+
+/*
+ * The format of the HFI table depends on the number of capabilities that the
+ * hardware supports. Keep a data structure to navigate the table.
+ */
+static void init_hfi_instance(struct hfi_instance *hfi_instance)
+{
+ /* The HFI time-stamp is located at the base of the table. */
+ hfi_instance->ts_counter = hfi_instance->table_base;
+
+ /* The HFI header is below the time-stamp. */
+ hfi_instance->hdr = hfi_instance->table_base +
+ sizeof(*hfi_instance->ts_counter);
+
+ /* The HFI data starts below the header. */
+ hfi_instance->data = hfi_instance->hdr + hfi_features.hdr_size;
+}
+
+/**
+ * intel_hfi_online() - Enable HFI on @cpu
+ * @cpu: CPU in which the HFI will be enabled
+ *
+ * Enable the HFI to be used in @cpu. The HFI is enabled at the die/package
+ * level. The first CPU in the die/package to come online does the full HFI
+ * initialization. Subsequent CPUs will just link themselves to the HFI
+ * instance of their die/package.
+ */
+void intel_hfi_online(unsigned int cpu)
+{
+ struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
+ u16 die_id = topology_logical_die_id(cpu);
+ struct hfi_instance *hfi_instance;
+ phys_addr_t hw_table_pa;
+ u64 msr_val;
+
+ if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
+ return;
+
+ init_hfi_cpu_index(cpu);
+
+ /*
+ * The HFI instance of this @cpu may already exist but may not have
+ * been linked to @cpu yet.
+ */
+ hfi_instance = info->hfi_instance;
+ if (!hfi_instance) {
+ if (!hfi_instances)
+ return;
+
+ if (die_id >= 0 && die_id < max_hfi_instances)
+ hfi_instance = &hfi_instances[die_id];
+
+ if (!hfi_instance)
+ return;
+ }
+
+ /*
+ * Now check if the HFI instance of the package/die of this CPU has
+ * been initialized. If so, all we have to do is link @cpu's info
+ * to the HFI instance of its die/package.
+ */
+ mutex_lock(&hfi_lock);
+ if (hfi_instance->initialized) {
+ info->hfi_instance = hfi_instance;
+
+ /*
+ * @cpu is the first one in its die/package to come back online.
+ * Use it to track the CPUs in the die/package.
+ */
+ if (!hfi_instance->cpus)
+ hfi_instance->cpus = topology_core_cpumask(cpu);
+
+ mutex_unlock(&hfi_lock);
+ return;
+ }
+
+ /*
+ * Hardware is programmed with the physical address of the first page
+ * frame of the table. Hence, the allocated memory must be page-aligned.
+ */
+ hfi_instance->hw_table = alloc_pages_exact(hfi_features.nr_table_pages,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!hfi_instance->hw_table)
+ goto err_out;
+
+ hw_table_pa = virt_to_phys(hfi_instance->hw_table);
+
+ hfi_instance->table_base = kzalloc(hfi_features.nr_table_pages << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (!hfi_instance->table_base)
+ goto free_hw_table;
+
+ /*
+ * Program the address of the feedback table of this die/package. On
+ * some processors, hardware remembers the old address of the HFI table
+ * even after having been reprogrammed and re-enabled. Thus, do not free
+ * pages allocated for the table or reprogram the hardware with a new
+ * base address. Namely, program the hardware only once.
+ */
+ msr_val = hw_table_pa | HFI_PTR_VALID_BIT;
+ wrmsrl(MSR_IA32_HW_FEEDBACK_PTR, msr_val);
+
+ init_hfi_instance(hfi_instance);
+
+ hfi_instance->die_id = die_id;
+
+ /*
+ * We can use the core cpumask of any cpu in the die/package. Any of
+ * them will reflect all the CPUs in the same package that are online.
+ */
+ hfi_instance->cpus = topology_core_cpumask(cpu);
+ info->hfi_instance = hfi_instance;
+ hfi_instance->initialized = true;
+
+ mutex_unlock(&hfi_lock);
+
+ return;
+
+free_hw_table:
+ free_pages_exact(hfi_instance->hw_table, hfi_features.nr_table_pages);
+err_out:
+ mutex_unlock(&hfi_lock);
+}
+
+/**
+ * intel_hfi_offline() - Disable HFI on @cpu
+ * @cpu: CPU in which the HFI will be disabled
+ *
+ * Remove @cpu from those covered by its HFI instance.
+ *
+ * On some processors, hardware remembers previous programming settings even
+ * after being reprogrammed. Thus, keep HFI enabled even if all CPUs in the
+ * die/package of @cpu are offline. See note in intel_hfi_online().
+ */
+void intel_hfi_offline(unsigned int cpu)
+{
+ struct cpumask *die_cpumask = topology_core_cpumask(cpu);
+ struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
+ struct hfi_instance *hfi_instance;
+
+ if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
+ return;
+
+ hfi_instance = info->hfi_instance;
+ if (!hfi_instance)
+ return;
+
+ if (!hfi_instance->initialized)
+ return;
+
+ mutex_lock(&hfi_lock);
+
+ /*
+ * We were using the core cpumask of @cpu to track CPUs in the same
+ * die/package. Now it is going offline and we need to find another
+ * CPU we can use.
+ */
+ if (die_cpumask == hfi_instance->cpus) {
+ int new_cpu;
+
+ new_cpu = cpumask_any_but(hfi_instance->cpus, cpu);
+ if (new_cpu >= nr_cpu_ids)
+ /* All other CPUs in the package are offline. */
+ hfi_instance->cpus = NULL;
+ else
+ hfi_instance->cpus = topology_core_cpumask(new_cpu);
+ }
+
+ mutex_unlock(&hfi_lock);
+}

static __init int hfi_parse_features(void)
{
diff --git a/drivers/thermal/intel/intel_hfi.h b/drivers/thermal/intel/intel_hfi.h
index 42529d3ac92d..d87c3823bb76 100644
--- a/drivers/thermal/intel/intel_hfi.h
+++ b/drivers/thermal/intel/intel_hfi.h
@@ -27,8 +27,12 @@

#if defined(CONFIG_INTEL_HFI)
void __init intel_hfi_init(void);
+void intel_hfi_online(unsigned int cpu);
+void intel_hfi_offline(unsigned int cpu);
#else
static inline void intel_hfi_init(void) { }
+static inline void intel_hfi_online(unsigned int cpu) { }
+static inline void intel_hfi_offline(unsigned int cpu) { }
#endif

#endif /* _INTEL_HFI_H */
diff --git a/drivers/thermal/intel/therm_throt.c b/drivers/thermal/intel/therm_throt.c
index ac408714d52b..2a79598a7f7a 100644
--- a/drivers/thermal/intel/therm_throt.c
+++ b/drivers/thermal/intel/therm_throt.c
@@ -480,6 +480,12 @@ static int thermal_throttle_online(unsigned int cpu)
l = apic_read(APIC_LVTTHMR);
apic_write(APIC_LVTTHMR, l & ~APIC_LVT_MASKED);

+ /*
+ * Enable the package-level HFI interrupt. By now the local APIC is
+ * ready to get thermal interrupts.
+ */
+ intel_hfi_online(cpu);
+
return thermal_throttle_add_dev(dev, cpu);
}

@@ -489,6 +495,8 @@ static int thermal_throttle_offline(unsigned int cpu)
struct device *dev = get_cpu_device(cpu);
u32 l;

+ intel_hfi_offline(cpu);
+
/* Mask the thermal vector before draining evtl. pending work */
l = apic_read(APIC_LVTTHMR);
apic_write(APIC_LVTTHMR, l | APIC_LVT_MASKED);
--
2.17.1

2021-11-06 09:18:03

by Ricardo Neri

Subject: [PATCH 1/7] x86/Documentation: Describe the Intel Hardware Feedback Interface

Start a documentation file to describe the purpose and operation of Intel's
Hardware Feedback Interface. Describe how this interface is used in Linux
to relay performance and energy efficiency updates to userspace.

Cc: Andi Kleen <[email protected]>
Cc: Aubrey Li <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: "Ravi V. Shankar" <[email protected]>
Reviewed-by: Len Brown <[email protected]>
Suggested-by: Srinivas Pandruvada <[email protected]>
Signed-off-by: Ricardo Neri <[email protected]>
---
Documentation/x86/index.rst | 1 +
Documentation/x86/intel-hfi.rst | 68 +++++++++++++++++++++++++++++++++
2 files changed, 69 insertions(+)
create mode 100644 Documentation/x86/intel-hfi.rst

diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index 383048396336..f103821ee095 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -21,6 +21,7 @@ x86-specific Documentation
tlb
mtrr
pat
+ intel-hfi
intel-iommu
intel_txt
amd-memory-encryption
diff --git a/Documentation/x86/intel-hfi.rst b/Documentation/x86/intel-hfi.rst
new file mode 100644
index 000000000000..f5cb738170a5
--- /dev/null
+++ b/Documentation/x86/intel-hfi.rst
@@ -0,0 +1,68 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================================
+Hardware-Feedback Interface for scheduling on Intel Hardware
+============================================================
+
+Overview
+--------
+
+Intel has described the Hardware Feedback Interface (HFI) in the Intel 64 and
+IA-32 Architectures Software Developer's Manual (Intel SDM) Volume 3 Section
+14.6 [1]_.
+
+The HFI gives the operating system performance and energy efficiency
+capability data for each CPU in the system. Linux can use the information from
+the HFI to influence task placement decisions.
+
+The Hardware Feedback Interface
+-------------------------------
+
+The Hardware Feedback Interface provides to the operating system information
+about the performance and energy efficiency of each CPU in the system. Each
+capability is given as a unit-less quantity in the range [0-255]. Higher values
+indicate higher capability. Energy efficiency and performance are reported in
+separate capabilities.
+
+These capabilities may change at runtime as a result of changes in the
+operating conditions of the system or the action of external factors. The rate
+at which these capabilities are updated is specific to each processor model. On
+some models, capabilities are set at boot time and never change. On others,
+capabilities may change every tens of milliseconds.
+
+The kernel or a userspace policy daemon can use these capabilities to modify
+task placement decisions. For instance, if either the performance or energy
+capabilities of a given logical processor becomes zero, it is an indication that
+the hardware recommends to the operating system to not schedule any tasks on
+that processor for performance or energy efficiency reasons, respectively.
+
+Implementation details for Linux
+--------------------------------
+
+The infrastructure to handle thermal event interrupts has two parts. In the
+Local Vector Table of a CPU's local APIC, there exists an entry for the
+thermal monitor register. This register controls how interrupts are delivered
+to a CPU when the thermal monitor generates an interrupt. Further details
+can be found in the Intel SDM Vol. 3 Section 10.5 [1]_.
+
+The thermal monitor may generate interrupts per CPU or per package. The HFI
+generates package-level interrupts. This monitor is configured and initialized
+via a set of model-specific registers. Specifically, the HFI interrupt and
+status are controlled via designated bits in the IA32_PACKAGE_THERM_INTERRUPT
+and IA32_PACKAGE_THERM_STATUS registers, respectively. There exists one HFI
+table per package. Further details can be found in the Intel SDM Vol. 3
+Section 14.9 [1]_.
+
+The hardware issues an HFI interrupt after updating the HFI table, once the
+update is ready for the operating system to consume. CPUs receive this
+interrupt via the thermal entry in the Local APIC's Local Vector Table.
+
+When servicing this interrupt, the HFI driver parses the updated table and
+relays the update to userspace using the thermal notification framework. Given
+that there may be many HFI updates every second, the updates relayed to
+userspace are throttled to one every CONFIG_HZ jiffies.
+
+References
+----------
+
+.. [1] https://www.intel.com/sdm
--
2.17.1

2021-11-06 09:19:41

by Ricardo Neri

Subject: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

From: Srinivas Pandruvada <[email protected]>

Add a new netlink event to notify change in CPU capabilities in terms of
performance and efficiency.

Firmware may change CPU capabilities as a result of thermal events in the
system or to account for changes in the TDP (thermal design power) level.

This notification type will allow user space to avoid running workloads
on certain CPUs or proactively adjust power limits to avoid future events.
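
As a sketch of how such a user space consumer might look (not part of this
patch; it assumes libnl-genl-3, the updated <linux/thermal.h> uapi header
from this series, and the attribute layout added in the diff below; error
handling is trimmed for brevity):

#include <stdio.h>
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/thermal.h>

static int handle_event(struct nl_msg *msg, void *arg)
{
	struct genlmsghdr *gnlh = nlmsg_data(nlmsg_hdr(msg));
	struct nlattr *attrs[THERMAL_GENL_ATTR_MAX + 1];
	struct nlattr *pos;
	int rem;

	if (gnlh->cmd != THERMAL_GENL_EVENT_CPU_CAPABILITY_CHANGE)
		return NL_OK;

	if (genlmsg_parse(nlmsg_hdr(msg), 0, attrs, THERMAL_GENL_ATTR_MAX, NULL))
		return NL_OK;

	if (!attrs[THERMAL_GENL_ATTR_CPU_CAPABILITY])
		return NL_OK;

	/* The nest carries a flat ID/PERF/EFF sequence, one triple per CPU. */
	nla_for_each_nested(pos, attrs[THERMAL_GENL_ATTR_CPU_CAPABILITY], rem) {
		switch (nla_type(pos)) {
		case THERMAL_GENL_ATTR_CPU_CAPABILITY_ID:
			printf("cpu %u:", nla_get_u32(pos));
			break;
		case THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF:
			printf(" perf %u", nla_get_u32(pos));
			break;
		case THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF:
			printf(" eff %u\n", nla_get_u32(pos));
			break;
		}
	}

	return NL_OK;
}

int main(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	int grp;

	genl_connect(sk);
	/* Notifications are unsolicited; do not enforce sequence numbers. */
	nl_socket_disable_seq_check(sk);
	nl_socket_modify_cb(sk, NL_CB_VALID, NL_CB_CUSTOM, handle_event, NULL);

	grp = genl_ctrl_resolve_grp(sk, THERMAL_GENL_FAMILY_NAME,
				    THERMAL_GENL_EVENT_GROUP_NAME);
	nl_socket_add_membership(sk, grp);

	for (;;)
		nl_recvmsgs_default(sk);
}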

Cc: Andi Kleen <[email protected]>
Cc: Aubrey Li <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: "Ravi V. Shankar" <[email protected]>
Reviewed-by: Len Brown <[email protected]>
Signed-off-by: Srinivas Pandruvada <[email protected]>
---
drivers/thermal/thermal_netlink.c | 52 +++++++++++++++++++++++++++++++
drivers/thermal/thermal_netlink.h | 13 ++++++++
include/uapi/linux/thermal.h | 6 +++-
3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/drivers/thermal/thermal_netlink.c b/drivers/thermal/thermal_netlink.c
index 1234dbe95895..0d293c6e6957 100644
--- a/drivers/thermal/thermal_netlink.c
+++ b/drivers/thermal/thermal_netlink.c
@@ -43,6 +43,11 @@ static const struct nla_policy thermal_genl_policy[THERMAL_GENL_ATTR_MAX + 1] =
[THERMAL_GENL_ATTR_CDEV_MAX_STATE] = { .type = NLA_U32 },
[THERMAL_GENL_ATTR_CDEV_NAME] = { .type = NLA_STRING,
.len = THERMAL_NAME_LENGTH },
+ /* CPU capabilities */
+ [THERMAL_GENL_ATTR_CPU_CAPABILITY] = { .type = NLA_NESTED },
+ [THERMAL_GENL_ATTR_CPU_CAPABILITY_ID] = { .type = NLA_U32 },
+ [THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF] = { .type = NLA_U32 },
+ [THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF] = { .type = NLA_U32 },
};

struct param {
@@ -58,6 +63,8 @@ struct param {
int temp;
int cdev_state;
int cdev_max_state;
+ struct cpu_capability *cpu_capabilities;
+ int cpu_capabilities_count;
};

typedef int (*cb_t)(struct param *);
@@ -189,6 +196,42 @@ static int thermal_genl_event_gov_change(struct param *p)
return 0;
}

+static int thermal_genl_event_cpu_capability_change(struct param *p)
+{
+ struct cpu_capability *cpu_cap = p->cpu_capabilities;
+ struct sk_buff *msg = p->msg;
+ struct nlattr *start_cap;
+ int i, ret;
+
+ start_cap = nla_nest_start(msg, THERMAL_GENL_ATTR_CPU_CAPABILITY);
+ if (!start_cap)
+ return -EMSGSIZE;
+
+ for (i = 0; i < p->cpu_capabilities_count; ++i) {
+ if (nla_put_u32(msg, THERMAL_GENL_ATTR_CPU_CAPABILITY_ID, cpu_cap->cpu)) {
+ ret = -EMSGSIZE;
+ goto out_cancel_nest;
+ }
+ if (nla_put_u32(msg, THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF, cpu_cap->perf)) {
+ ret = -EMSGSIZE;
+ goto out_cancel_nest;
+ }
+ if (nla_put_u32(msg, THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF, cpu_cap->eff)) {
+ ret = -EMSGSIZE;
+ goto out_cancel_nest;
+ }
+ ++cpu_cap;
+ }
+
+ nla_nest_end(msg, start_cap);
+
+ return 0;
+out_cancel_nest:
+ nla_nest_cancel(msg, start_cap);
+
+ return ret;
+}
+
int thermal_genl_event_tz_delete(struct param *p)
__attribute__((alias("thermal_genl_event_tz")));

@@ -218,6 +261,7 @@ static cb_t event_cb[] = {
[THERMAL_GENL_EVENT_CDEV_DELETE] = thermal_genl_event_cdev_delete,
[THERMAL_GENL_EVENT_CDEV_STATE_UPDATE] = thermal_genl_event_cdev_state_update,
[THERMAL_GENL_EVENT_TZ_GOV_CHANGE] = thermal_genl_event_gov_change,
+ [THERMAL_GENL_EVENT_CPU_CAPABILITY_CHANGE] = thermal_genl_event_cpu_capability_change,
};

/*
@@ -355,6 +399,14 @@ int thermal_notify_tz_gov_change(int tz_id, const char *name)
return thermal_genl_send_event(THERMAL_GENL_EVENT_TZ_GOV_CHANGE, &p);
}

+int thermal_genl_cpu_capability_event(int count, struct cpu_capability *caps)
+{
+ struct param p = { .cpu_capabilities_count = count, .cpu_capabilities = caps };
+
+ return thermal_genl_send_event(THERMAL_GENL_EVENT_CPU_CAPABILITY_CHANGE, &p);
+}
+EXPORT_SYMBOL_GPL(thermal_genl_cpu_capability_event);
+
/*************************** Command encoding ********************************/

static int __thermal_genl_cmd_tz_get_id(struct thermal_zone_device *tz,
diff --git a/drivers/thermal/thermal_netlink.h b/drivers/thermal/thermal_netlink.h
index 828d1dddfa98..725e4d075909 100644
--- a/drivers/thermal/thermal_netlink.h
+++ b/drivers/thermal/thermal_netlink.h
@@ -4,6 +4,12 @@
* Author: Daniel Lezcano <[email protected]>
*/

+struct cpu_capability {
+ int cpu;
+ int perf;
+ int eff;
+};
+
/* Netlink notification function */
#ifdef CONFIG_THERMAL_NETLINK
int __init thermal_netlink_init(void);
@@ -23,6 +29,7 @@ int thermal_notify_cdev_add(int cdev_id, const char *name, int max_state);
int thermal_notify_cdev_delete(int cdev_id);
int thermal_notify_tz_gov_change(int tz_id, const char *name);
int thermal_genl_sampling_temp(int id, int temp);
+int thermal_genl_cpu_capability_event(int count, struct cpu_capability *caps);
#else
static inline int thermal_netlink_init(void)
{
@@ -101,4 +108,10 @@ static inline int thermal_genl_sampling_temp(int id, int temp)
{
return 0;
}
+
+static inline int thermal_genl_cpu_capability_event(int count, struct cpu_capability *caps)
+{
+ return 0;
+}
+
#endif /* CONFIG_THERMAL_NETLINK */
diff --git a/include/uapi/linux/thermal.h b/include/uapi/linux/thermal.h
index 9aa2fedfa309..964e9791fd6d 100644
--- a/include/uapi/linux/thermal.h
+++ b/include/uapi/linux/thermal.h
@@ -44,7 +44,10 @@ enum thermal_genl_attr {
THERMAL_GENL_ATTR_CDEV_MAX_STATE,
THERMAL_GENL_ATTR_CDEV_NAME,
THERMAL_GENL_ATTR_GOV_NAME,
-
+ THERMAL_GENL_ATTR_CPU_CAPABILITY,
+ THERMAL_GENL_ATTR_CPU_CAPABILITY_ID,
+ THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF,
+ THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF,
__THERMAL_GENL_ATTR_MAX,
};
#define THERMAL_GENL_ATTR_MAX (__THERMAL_GENL_ATTR_MAX - 1)
@@ -71,6 +74,7 @@ enum thermal_genl_event {
THERMAL_GENL_EVENT_CDEV_DELETE, /* Cdev unbound */
THERMAL_GENL_EVENT_CDEV_STATE_UPDATE, /* Cdev state updated */
THERMAL_GENL_EVENT_TZ_GOV_CHANGE, /* Governor policy changed */
+ THERMAL_GENL_EVENT_CPU_CAPABILITY_CHANGE, /* CPU capability changed */
__THERMAL_GENL_EVENT_MAX,
};
#define THERMAL_GENL_EVENT_MAX (__THERMAL_GENL_EVENT_MAX - 1)
--
2.17.1

2021-11-06 10:05:28

by Ricardo Neri

Subject: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

When hardware wants to inform the operating system about updates in the HFI
table, it issues a package-level thermal event interrupt. For this,
hardware has new interrupt and status bits in the IA32_PACKAGE_THERM_INTERRUPT
and IA32_PACKAGE_THERM_STATUS registers. The existing thermal
throttle driver already handles thermal event interrupts: it initializes
the thermal vector of the local APIC as well as per-CPU and package-level
interrupt reporting. It also provides routines to service such interrupts.
Extend its functionality to also handle HFI interrupts.

The frequency of the thermal HFI interrupt is specific to each processor
model. On some processors, a single interrupt happens as soon as the HFI is
enabled and hardware will never update HFI capabilities afterwards. On
other processors, thermal and power constraints may cause thermal HFI
interrupts every tens of milliseconds.

To not overwhelm consumers of the HFI data, use delayed work to throttle
the rate at which HFI updates are processed.

Cc: Andi Kleen <[email protected]>
Cc: Aubrey Li <[email protected]>
Cc: Srinivas Pandruvada <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: "Ravi V. Shankar" <[email protected]>
Reviewed-by: Len Brown <[email protected]>
Signed-off-by: Ricardo Neri <[email protected]>
---
drivers/thermal/intel/intel_hfi.c | 89 +++++++++++++++++++++++++++++
drivers/thermal/intel/intel_hfi.h | 2 +
drivers/thermal/intel/therm_throt.c | 10 ++++
3 files changed, 101 insertions(+)

diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
index 6a3adfd57d72..1df24b39f2e6 100644
--- a/drivers/thermal/intel/intel_hfi.c
+++ b/drivers/thermal/intel/intel_hfi.c
@@ -26,6 +26,9 @@

#include "intel_hfi.h"

+#define THERM_STATUS_CLEAR_PKG_MASK (BIT(1) | BIT(3) | BIT(5) | BIT(7) | \
+ BIT(9) | BIT(11) | BIT(26))
+
/**
* struct hfi_cpu_data - HFI capabilities per CPU
* @perf_cap: Performance capability
@@ -60,6 +63,9 @@ struct hfi_hdr {
* @die_id: Logical die ID of this HFI table instance
* @cpus: CPUs represented in this HFI table instance
* @hw_table: Pointer to the HFI table of this instance
+ * @update_work: Delayed work to process HFI updates
+ * @event_lock: Lock to protect HFI updates
+ * @timestamp: Timestamp of the last HFI update
* @initialized: True if this HFI instance has been initialized
*
* A set of parameters to parse and navigate a specific HFI table.
@@ -72,6 +78,9 @@ struct hfi_instance {
u16 die_id;
struct cpumask *cpus;
void *hw_table;
+ struct delayed_work update_work;
+ raw_spinlock_t event_lock;
+ u64 timestamp;
bool initialized;
};

@@ -114,6 +123,75 @@ static struct hfi_instance *hfi_instances;
static struct hfi_features hfi_features;
static DEFINE_MUTEX(hfi_lock);

+#define HFI_UPDATE_INTERVAL HZ
+
+static void hfi_update_work_fn(struct work_struct *work)
+{
+ struct hfi_instance *hfi_instance;
+
+ hfi_instance = container_of(to_delayed_work(work), struct hfi_instance,
+ update_work);
+ if (!hfi_instance)
+ return;
+
+ /* TODO: Consume update here. */
+}
+
+void intel_hfi_process_event(__u64 pkg_therm_status_msr_val)
+{
+ struct hfi_instance *hfi_instance;
+ int cpu = smp_processor_id();
+ struct hfi_cpu_info *info;
+ unsigned long flags;
+ u64 timestamp;
+
+ if (!pkg_therm_status_msr_val)
+ return;
+
+ info = &per_cpu(hfi_cpu_info, cpu);
+ if (!info)
+ return;
+
+ /*
+ * It is possible that we get an HFI thermal interrupt on this CPU
+ * before its HFI instance is initialized. This is not a problem. The
+ * CPU that enabled the interrupt for this package will also get the
+ * interrupt and is fully initialized.
+ */
+ hfi_instance = info->hfi_instance;
+ if (!hfi_instance)
+ return;
+
+ raw_spin_lock_irqsave(&hfi_instance->event_lock, flags);
+
+ /*
+ * On most systems, all CPUs in the package receive a package-level
+ * thermal interrupt when there is an HFI update. Since they are all
+ * dealing with the same update (as indicated by the update timestamp),
+ * it is sufficient to let a single CPU acknowledge the update and
+ * schedule work to process it.
+ */
+ timestamp = *(u64 *)hfi_instance->hw_table;
+ if (hfi_instance->timestamp >= timestamp)
+ goto unlock_spinlock;
+
+ hfi_instance->timestamp = timestamp;
+
+ memcpy(hfi_instance->table_base, hfi_instance->hw_table,
+ hfi_features.nr_table_pages << PAGE_SHIFT);
+ /*
+ * Let hardware and other CPUs know that we are done reading the HFI
+ * table and it is free to update it again.
+ */
+ pkg_therm_status_msr_val &= THERM_STATUS_CLEAR_PKG_MASK &
+ ~PACKAGE_THERM_STATUS_HFI_UPDATED;
+ wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS, pkg_therm_status_msr_val);
+ schedule_delayed_work(&hfi_instance->update_work, HFI_UPDATE_INTERVAL);
+
+unlock_spinlock:
+ raw_spin_unlock_irqrestore(&hfi_instance->event_lock, flags);
+}
+
static void init_hfi_cpu_index(unsigned int cpu)
{
s16 hfi_idx;
@@ -232,6 +310,9 @@ void intel_hfi_online(unsigned int cpu)

init_hfi_instance(hfi_instance);

+ INIT_DELAYED_WORK(&hfi_instance->update_work, hfi_update_work_fn);
+ raw_spin_lock_init(&hfi_instance->event_lock);
+
hfi_instance->die_id = die_id;

/*
@@ -242,6 +323,14 @@ void intel_hfi_online(unsigned int cpu)
info->hfi_instance = hfi_instance;
hfi_instance->initialized = true;

+ /*
+ * Enable the hardware feedback interface and never disable it. See
+ * comment on programming the address of the table.
+ */
+ rdmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
+ msr_val |= HFI_CONFIG_ENABLE_BIT;
+ wrmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
+
mutex_unlock(&hfi_lock);

return;
diff --git a/drivers/thermal/intel/intel_hfi.h b/drivers/thermal/intel/intel_hfi.h
index d87c3823bb76..062e233a7b5d 100644
--- a/drivers/thermal/intel/intel_hfi.h
+++ b/drivers/thermal/intel/intel_hfi.h
@@ -29,10 +29,12 @@
void __init intel_hfi_init(void);
void intel_hfi_online(unsigned int cpu);
void intel_hfi_offline(unsigned int cpu);
+void intel_hfi_process_event(__u64 pkg_therm_status_msr_val);
#else
static inline void intel_hfi_init(void) { }
static inline void intel_hfi_online(unsigned int cpu) { }
static inline void intel_hfi_offline(unsigned int cpu) { }
+static inline void intel_hfi_process_event(__u64 pkg_therm_status_msr_val) { }
#endif

#endif /* _INTEL_HFI_H */
diff --git a/drivers/thermal/intel/therm_throt.c b/drivers/thermal/intel/therm_throt.c
index 2a79598a7f7a..9a73607a7df5 100644
--- a/drivers/thermal/intel/therm_throt.c
+++ b/drivers/thermal/intel/therm_throt.c
@@ -619,6 +619,10 @@ void intel_thermal_interrupt(void)
PACKAGE_THERM_STATUS_POWER_LIMIT,
POWER_LIMIT_EVENT,
PACKAGE_LEVEL);
+
+ if (this_cpu_has(X86_FEATURE_INTEL_HFI))
+ intel_hfi_process_event(msr_val &
+ PACKAGE_THERM_STATUS_HFI_UPDATED);
}
}

@@ -728,6 +732,12 @@ void intel_init_thermal(struct cpuinfo_x86 *c)
wrmsr(MSR_IA32_PACKAGE_THERM_INTERRUPT,
l | (PACKAGE_THERM_INT_LOW_ENABLE
| PACKAGE_THERM_INT_HIGH_ENABLE), h);
+
+ if (cpu_has(c, X86_FEATURE_INTEL_HFI)) {
+ rdmsr(MSR_IA32_PACKAGE_THERM_INTERRUPT, l, h);
+ wrmsr(MSR_IA32_PACKAGE_THERM_INTERRUPT,
+ l | PACKAGE_THERM_INT_HFI_ENABLE, h);
+ }
}

rdmsr(MSR_IA32_MISC_ENABLE, l, h);
--
2.17.1

2021-11-08 12:44:01

by Peter Zijlstra

Subject: Re: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

On Fri, Nov 05, 2021 at 06:33:10PM -0700, Ricardo Neri wrote:
> + /*
> + * On most systems, all CPUs in the package receive a package-level
> + * thermal interrupt when there is an HFI update. Since they all are
> + * dealing with the same update (as indicated by the update timestamp),
> + * it is sufficient to let a single CPU to acknowledge the update and
> + * schedule work to process it.
> + */

That's pretty crap hardware behaviour. Is there really no way to steer
these interrupts?

2021-11-08 12:45:28

by Peter Zijlstra

Subject: Re: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

On Fri, Nov 05, 2021 at 06:33:10PM -0700, Ricardo Neri wrote:

> @@ -72,6 +78,9 @@ struct hfi_instance {
> u16 die_id;
> struct cpumask *cpus;
> void *hw_table;
> + struct delayed_work update_work;
> + raw_spinlock_t event_lock;
+ raw_spinlock_t interrupt_lock;
> + u64 timestamp;
> bool initialized;
> };
>
> @@ -114,6 +123,75 @@ static struct hfi_instance *hfi_instances;
> static struct hfi_features hfi_features;
> static DEFINE_MUTEX(hfi_lock);
>
> +#define HFI_UPDATE_INTERVAL HZ
> +
> +static void hfi_update_work_fn(struct work_struct *work)
> +{
> + struct hfi_instance *hfi_instance;
> +
> + hfi_instance = container_of(to_delayed_work(work), struct hfi_instance,
> + update_work);
> + if (!hfi_instance)
> + return;
> +
> + /* TODO: Consume update here. */

// this here uses ->event_lock to serialize against the
// interrupt below changing the data...

> +}
> +
> +void intel_hfi_process_event(__u64 pkg_therm_status_msr_val)
> +{
> + struct hfi_instance *hfi_instance;
> + int cpu = smp_processor_id();
> + struct hfi_cpu_info *info;
> + unsigned long flags;
> + u64 timestamp;
> +
> + if (!pkg_therm_status_msr_val)
> + return;
> +
> + info = &per_cpu(hfi_cpu_info, cpu);
> + if (!info)
> + return;
> +
> + /*
> + * It is possible that we get an HFI thermal interrupt on this CPU
> + * before its HFI instance is initialized. This is not a problem. The
> + * CPU that enabled the interrupt for this package will also get the
> + * interrupt and is fully initialized.
> + */
> + hfi_instance = info->hfi_instance;
> + if (!hfi_instance)
> + return;
> +

/*
* If someone is already handling the interrupt, we shouldn't be
* burning time waiting for them to then do more nothing.
*/
if (!raw_spin_trylock(&hfi_instance->interrupt_lock))
return;


> + raw_spin_lock_irqsave(&hfi_instance->event_lock, flags);
> +
> + /*
> + * On most systems, all CPUs in the package receive a package-level
> + * thermal interrupt when there is an HFI update. Since they all are
> + * dealing with the same update (as indicated by the update timestamp),
> + * it is sufficient to let a single CPU to acknowledge the update and
> + * schedule work to process it.
> + */
> + timestamp = *(u64 *)hfi_instance->hw_table;
> + if (hfi_instance->timestamp >= timestamp)
> + goto unlock_spinlock;

This can go the way of the dodo.

> +
> + hfi_instance->timestamp = timestamp;
> +
> + memcpy(hfi_instance->table_base, hfi_instance->hw_table,
> + hfi_features.nr_table_pages << PAGE_SHIFT);
> + /*
> + * Let hardware and other CPUs know that we are done reading the HFI
> + * table and it is free to update it again.
> + */
> + pkg_therm_status_msr_val &= THERM_STATUS_CLEAR_PKG_MASK &
> + ~PACKAGE_THERM_STATUS_HFI_UPDATED;
> + wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS, pkg_therm_status_msr_val);
> + schedule_delayed_work(&hfi_instance->update_work, HFI_UPDATE_INTERVAL);
> +
> +unlock_spinlock:
> + raw_spin_unlock_irqrestore(&hfi_instance->event_lock, flags);

raw_spin_unlock(&hfi_instance->interrupt_lock);

> +}

2021-11-09 12:40:40

by Ricardo Neri

Subject: Re: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

On Mon, Nov 08, 2021 at 10:07:40AM +0100, Peter Zijlstra wrote:
> On Fri, Nov 05, 2021 at 06:33:10PM -0700, Ricardo Neri wrote:
>
> > @@ -72,6 +78,9 @@ struct hfi_instance {
> > u16 die_id;
> > struct cpumask *cpus;
> > void *hw_table;
> > + struct delayed_work update_work;
> > + raw_spinlock_t event_lock;
> + raw_spinlock_t interrupt_lock;

Thank you very much for your feedback Peter!

I would like to confirm that I understand your feedback correctly: you are
suggesting to use two spinlocks...


> > + u64 timestamp;
> > bool initialized;
> > };
> >
> > @@ -114,6 +123,75 @@ static struct hfi_instance *hfi_instances;
> > static struct hfi_features hfi_features;
> > static DEFINE_MUTEX(hfi_lock);
> >
> > +#define HFI_UPDATE_INTERVAL HZ
> > +
> > +static void hfi_update_work_fn(struct work_struct *work)
> > +{
> > + struct hfi_instance *hfi_instance;
> > +
> > + hfi_instance = container_of(to_delayed_work(work), struct hfi_instance,
> > + update_work);
> > + if (!hfi_instance)
> > + return;
> > +
> > + /* TODO: Consume update here. */
>
> // this here uses ->event_lock to serialize against the
> // interrupt below changing the data...

Anyone reading the HFI table would need to take ->event_lock.

>
> > +}
> > +
> > +void intel_hfi_process_event(__u64 pkg_therm_status_msr_val)
> > +{
> > + struct hfi_instance *hfi_instance;
> > + int cpu = smp_processor_id();
> > + struct hfi_cpu_info *info;
> > + unsigned long flags;
> > + u64 timestamp;
> > +
> > + if (!pkg_therm_status_msr_val)
> > + return;
> > +
> > + info = &per_cpu(hfi_cpu_info, cpu);
> > + if (!info)
> > + return;
> > +
> > + /*
> > + * It is possible that we get an HFI thermal interrupt on this CPU
> > + * before its HFI instance is initialized. This is not a problem. The
> > + * CPU that enabled the interrupt for this package will also get the
> > + * interrupt and is fully initialized.
> > + */
> > + hfi_instance = info->hfi_instance;
> > + if (!hfi_instance)
> > + return;
> > +
>
> /*
> * If someone is already handling the interrupt, we shouldn't be
> * burning time waiting for them to then do more nothing.
> */
> if (!raw_spin_trylock(&hfi_instance->interrupt_lock))
> return;
>
>
> > + raw_spin_lock_irqsave(&hfi_instance->event_lock, flags);

The CPU that successfully acquired ->interrupt_lock will now acquire
->event_lock to serialize writes and reads to the HFI table.

> > +
> > + /*
> > + * On most systems, all CPUs in the package receive a package-level
> > + * thermal interrupt when there is an HFI update. Since they all are
> > + * dealing with the same update (as indicated by the update timestamp),
> > + * it is sufficient to let a single CPU to acknowledge the update and
> > + * schedule work to process it.
> > + */
> > + timestamp = *(u64 *)hfi_instance->hw_table;
> > + if (hfi_instance->timestamp >= timestamp)
> > + goto unlock_spinlock;
>
> This can go the way of the dodo.

(I guess I can still check the timestamp in case buggy firmware triggers
updates with the same timestamp, right?)

>
> > +
> > + hfi_instance->timestamp = timestamp;
> > +
> > + memcpy(hfi_instance->table_base, hfi_instance->hw_table,
> > + hfi_features.nr_table_pages << PAGE_SHIFT);
> > + /*
> > + * Let hardware and other CPUs know that we are done reading the HFI
> > + * table and it is free to update it again.
> > + */
> > + pkg_therm_status_msr_val &= THERM_STATUS_CLEAR_PKG_MASK &
> > + ~PACKAGE_THERM_STATUS_HFI_UPDATED;
> > + wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS, pkg_therm_status_msr_val);
> > + schedule_delayed_work(&hfi_instance->update_work, HFI_UPDATE_INTERVAL);
> > +
> > +unlock_spinlock:
> > + raw_spin_unlock_irqrestore(&hfi_instance->event_lock, flags);
>
> raw_spin_unlock(&hfi_instance->interrupt_lock);

... and here we release both locks.

Thanks and BR,
Ricardo

2021-11-09 17:16:10

by Peter Zijlstra

Subject: Re: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

On Mon, Nov 08, 2021 at 06:26:13PM -0800, Ricardo Neri wrote:
> On Mon, Nov 08, 2021 at 10:07:40AM +0100, Peter Zijlstra wrote:
> > On Fri, Nov 05, 2021 at 06:33:10PM -0700, Ricardo Neri wrote:

> > > +static void hfi_update_work_fn(struct work_struct *work)
> > > +{
> > > + struct hfi_instance *hfi_instance;
> > > +
> > > + hfi_instance = container_of(to_delayed_work(work), struct hfi_instance,
> > > + update_work);
> > > + if (!hfi_instance)
> > > + return;
> > > +
> > > + /* TODO: Consume update here. */
> >
> > // this here uses ->event_lock to serialize against the
> > // interrupt below changing the data...
>
> Anyone reading the HFI table would need to take ->event_lock.

Right.. that implies ->event_lock can be taken while there is no
interrupt active, which then necessitates the additional lock.

> > > +}
> > > +
> > > +void intel_hfi_process_event(__u64 pkg_therm_status_msr_val)
> > > +{
> > > + struct hfi_instance *hfi_instance;
> > > + int cpu = smp_processor_id();
> > > + struct hfi_cpu_info *info;
> > > + unsigned long flags;
> > > + u64 timestamp;
> > > +
> > > + if (!pkg_therm_status_msr_val)
> > > + return;
> > > +
> > > + info = &per_cpu(hfi_cpu_info, cpu);
> > > + if (!info)
> > > + return;
> > > +
> > > + /*
> > > + * It is possible that we get an HFI thermal interrupt on this CPU
> > > + * before its HFI instance is initialized. This is not a problem. The
> > > + * CPU that enabled the interrupt for this package will also get the
> > > + * interrupt and is fully initialized.
> > > + */
> > > + hfi_instance = info->hfi_instance;
> > > + if (!hfi_instance)
> > > + return;
> > > +
> >
> > /*
> > * If someone is already handling the interrupt, we shouldn't be
> > * burning time waiting for them to then do more nothing.
> > */
> > if (!raw_spin_trylock(&hfi_instance->interrupt_lock))
> > return;
> >
> >
> > > + raw_spin_lock_irqsave(&hfi_instance->event_lock, flags);
>
> The CPU who acquired ->interrupt_lock successfully now will acquire
> ->event_lock to serialize writes and reads to the HFI table.

Right, so ->interrupt_lock is purely used to serialize interrupts, and
only one interrupt gets to do the update, while the others can exit and
resume with what they were doing asap, without wasting cycles spinning
on ->event_lock only to then not do anything.

> > > + /*
> > > + * On most systems, all CPUs in the package receive a package-level
> > > + * thermal interrupt when there is an HFI update. Since they all are
> > > + * dealing with the same update (as indicated by the update timestamp),
> > > + * it is sufficient to let a single CPU to acknowledge the update and
> > > + * schedule work to process it.
> > > + */
> > > + timestamp = *(u64 *)hfi_instance->hw_table;
> > > + if (hfi_instance->timestamp >= timestamp)
> > > + goto unlock_spinlock;
> >
> > This can go the way of the dodo.
>
> (I guess I can still check the timestamp in case buggy firmware triggers
> updates with the same timestamp, right?)

Sure..

> >
> > > +
> > > + hfi_instance->timestamp = timestamp;
> > > +
> > > + memcpy(hfi_instance->table_base, hfi_instance->hw_table,
> > > + hfi_features.nr_table_pages << PAGE_SHIFT);

I think we actually need to release ->interrupt_lock here, *before* the
WRMSR that ACKs the HFI update. Because I think the moment that WRMSR
goes through we can get another interrupt, and that *must* not find
->interrupt_lock taken, otherwise it will not process the update etc..
leading to lost interrupts.

> > > + /*
> > > + * Let hardware and other CPUs know that we are done reading the HFI
> > > + * table and it is free to update it again.
> > > + */
> > > + pkg_therm_status_msr_val &= THERM_STATUS_CLEAR_PKG_MASK &
> > > + ~PACKAGE_THERM_STATUS_HFI_UPDATED;
> > > + wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS, pkg_therm_status_msr_val);
> > > + schedule_delayed_work(&hfi_instance->update_work, HFI_UPDATE_INTERVAL);
> > > +
> > > +unlock_spinlock:
> > > + raw_spin_unlock_irqrestore(&hfi_instance->event_lock, flags);
> >
> > raw_spin_unlock(&hfi_instance->interrupt_lock);
>
> ... and here we release both locks.

See above.
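
Putting the suggestions in this exchange together, the handler might end up
looking roughly as below. This is a sketch only, using the names from the
posted patch plus Peter's proposed ->interrupt_lock; the timestamp check
against buggy firmware is omitted here:

/*
 * Sketch: ->interrupt_lock serializes the per-package interrupt handlers
 * (trylock, so concurrent CPUs bail out immediately), ->event_lock protects
 * the local copy of the table against readers such as hfi_update_work_fn(),
 * and ->interrupt_lock is released before the MSR write that acknowledges
 * the update, since a new interrupt may arrive as soon as that write lands.
 */
void intel_hfi_process_event(__u64 pkg_therm_status_msr_val)
{
	struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, smp_processor_id());
	struct hfi_instance *hfi_instance;
	unsigned long flags;

	if (!pkg_therm_status_msr_val)
		return;

	hfi_instance = info->hfi_instance;
	if (!hfi_instance)
		return;

	/* Another CPU is already handling this package-level interrupt. */
	if (!raw_spin_trylock(&hfi_instance->interrupt_lock))
		return;

	raw_spin_lock_irqsave(&hfi_instance->event_lock, flags);
	memcpy(hfi_instance->table_base, hfi_instance->hw_table,
	       hfi_features.nr_table_pages << PAGE_SHIFT);
	raw_spin_unlock_irqrestore(&hfi_instance->event_lock, flags);

	/*
	 * Release the interrupt lock before letting hardware know that we
	 * are done reading the table; the next interrupt must be able to
	 * take ->interrupt_lock.
	 */
	raw_spin_unlock(&hfi_instance->interrupt_lock);

	pkg_therm_status_msr_val &= THERM_STATUS_CLEAR_PKG_MASK &
				    ~PACKAGE_THERM_STATUS_HFI_UPDATED;
	wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS, pkg_therm_status_msr_val);
	schedule_delayed_work(&hfi_instance->update_work, HFI_UPDATE_INTERVAL);
}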

2021-11-09 22:25:21

by Lukasz Luba

Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

Hi Srinivas,

On 11/9/21 1:23 PM, Srinivas Pandruvada wrote:
> Hi Lukasz,
>
> On Tue, 2021-11-09 at 12:39 +0000, Lukasz Luba wrote:
>> Hi Ricardo,
>>
>>
>> On 11/6/21 1:33 AM, Ricardo Neri wrote:
>>> From: Srinivas Pandruvada <[email protected]>
>>>
>>> Add a new netlink event to notify change in CPU capabilities in
>>> terms of
>>> performance and efficiency.
>>
>> Is this going to be handled by some 'generic' tools? If yes, maybe
>> the values for 'performance' might be aligned with capacity
>> [0,1024] ? Or are they completely not related so the mapping is
>> simply impossible?
>>
>
> That would have been very useful.
>
> The problem is that we may not know the maximum performance as system
> may be booting with few CPUs (using maxcpus kernel command line) and
> then user hot adding them. So we may need to rescale when we get a new
> maximum performance CPU and send to user space.
>
> We can't just use max from HFI table at in instance as it is not
> necessary that HFI table contains data for all CPUs.
>
> If HFI max performance value of 255 is a scaled value to max
> performance CPU value in the system, then this conversion would have
> been easy. But that is not.

I see. I was asking because I'm working on a similar interface and
just wanted to understand your approach better. In my case we
would probably simply use the 'capacity' scale, or more
precisely the available capacity after subtracting the 'thermal pressure' value.
That might confuse a generic tool which listens to these socket
messages, though. So probably I would have to add a new
THERMAL_GENL_ATTR_CPU_CAPABILITY_* id
to handle this different scale, normalized across CPUs.
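
For illustration, the rescaling a generic tool would need in order to map
the [0-255] HFI values onto a [0-1024] capacity-like scale could be as
simple as the hypothetical helper below; the moving max_perf is exactly the
problem described above, since it can change whenever a faster CPU is
hot-added and everything already stored would have to be rescaled:

/*
 * Hypothetical user space helper, not part of this series: rescale a
 * [0-255] HFI performance value onto a [0-1024] capacity-like scale.
 * max_perf is the largest value the tool has observed so far.
 */
static unsigned int hfi_perf_to_capacity(unsigned int perf,
					 unsigned int max_perf)
{
	if (!max_perf)
		return 0;

	return (perf * 1024U) / max_perf;
}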

2021-11-09 23:43:12

by Lukasz Luba

Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

Hi Ricardo,


On 11/6/21 1:33 AM, Ricardo Neri wrote:
> From: Srinivas Pandruvada <[email protected]>
>
> Add a new netlink event to notify change in CPU capabilities in terms of
> performance and efficiency.

Is this going to be handled by some 'generic' tools? If yes, maybe
the values for 'performance' might be aligned with capacity
[0,1024] ? Or are they completely not related so the mapping is
simply impossible?

>
> Firmware may change CPU capabilities as a result of thermal events in the
> system or to account for changes in the TDP (thermal design power) level.
>
> This notification type will allow user space to avoid running workloads
> on certain CPUs or proactively adjust power limits to avoid future events.
>
> Cc: Andi Kleen <[email protected]>
> Cc: Aubrey Li <[email protected]>
> Cc: Tim Chen <[email protected]>
> Cc: "Ravi V. Shankar" <[email protected]>
> Reviewed-by: Len Brown <[email protected]>
> Signed-off-by: Srinivas Pandruvada <[email protected]>
> ---
> drivers/thermal/thermal_netlink.c | 52 +++++++++++++++++++++++++++++++
> drivers/thermal/thermal_netlink.h | 13 ++++++++
> include/uapi/linux/thermal.h | 6 +++-
> 3 files changed, 70 insertions(+), 1 deletion(-)

[snip]

>
> +struct cpu_capability {
> + int cpu;
> + int perf;
> + int eff;

Why not use the full names, instead of these shortcuts? We use full
naming e.g. in the cpufreq framework, such as 'frequency' not 'freq'.
The 'eff' is really not meaningful ('perf' a bit less so, but it has
two meanings in the kernel).

Regards,
Lukasz

2021-11-09 23:48:37

by Srinivas Pandruvada

Subject: Re: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

On Tue, 2021-11-09 at 09:48 +0100, Peter Zijlstra wrote:
> On Mon, Nov 08, 2021 at 06:26:13PM -0800, Ricardo Neri wrote:
> > On Mon, Nov 08, 2021 at 10:07:40AM +0100, Peter Zijlstra wrote:
> > > On Fri, Nov 05, 2021 at 06:33:10PM -0700, Ricardo Neri wrote:
>
> > > > +static void hfi_update_work_fn(struct work_struct *work)
> > > > +{
> > > > +       struct hfi_instance *hfi_instance;
> > > > +
> > > > +       hfi_instance = container_of(to_delayed_work(work),
> > > > struct hfi_instance,
> > > > +                                   update_work);
> > > > +       if (!hfi_instance)
> > > > +               return;
> > > > +
> > > > +       /* TODO: Consume update here. */
> > >
> > >         // this here uses ->event_lock to serialize against the
> > >         // interrupt below changing the data...
> >
> > Anyone reading the HFI table would need to take ->event_lock.
>
> Right.. that implies ->event_lock can be taken while there is no
> interrupt active, which then necessitates the additional lock.
>
Correct.
With the raw_spin_trylock() optimization, we will need an additional lock.
So we need another lock to protect hfi_instance->table_base.

> > > > +}
> > > > +
> > > > +void intel_hfi_process_event(__u64 pkg_therm_status_msr_val)
> > > > +{
> > > > +       struct hfi_instance *hfi_instance;
> > > > +       int cpu = smp_processor_id();
> > > > +       struct hfi_cpu_info *info;
> > > > +       unsigned long flags;
> > > > +       u64 timestamp;
> > > > +
> > > > +       if (!pkg_therm_status_msr_val)
> > > > +               return;
> > > > +
> > > > +       info = &per_cpu(hfi_cpu_info, cpu);
> > > > +       if (!info)
> > > > +               return;
> > > >

[...]

> > > > +       memcpy(hfi_instance->table_base, hfi_instance-
> > > > >hw_table,
> > > > +              hfi_features.nr_table_pages << PAGE_SHIFT);
>
> I think we actually need to release ->interrupt_lock here, *before*
> the
> WRMSR that ACKs the HFI update. Because I think the moment that WRMSR
> goes through we can get another interrupt, and that *must* not find
> ->interrupt_lock taken, otherwise it will not process the update
> etc..
> leading to lost interrupts.

Correct.
Once we use the raw_spin_trylock() change suggested above, we need to
release the lock here.

Thanks,
Srinivas

>

> > > > +       /*
> > > > +        * Let hardware and other CPUs know that we are done
> > > > reading the HFI
> > > > +        * table and it is free to update it again.
> > > > +        */
> > > > +       pkg_therm_status_msr_val &= THERM_STATUS_CLEAR_PKG_MASK
> > > > &
> > > > +                                  
> > > > ~PACKAGE_THERM_STATUS_HFI_UPDATED;
> > > > +       wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS,
> > > > pkg_therm_status_msr_val);
> > > > +       schedule_delayed_work(&hfi_instance->update_work,
> > > > HFI_UPDATE_INTERVAL);
> > > > +
> > > > +unlock_spinlock:
> > > > +       raw_spin_unlock_irqrestore(&hfi_instance->event_lock,
> > > > flags);
> > >
> > >         raw_spin_unlock(&hfi_instance->interrupt_lock);
> >
> > ... and here we release both locks.
>
> See above.


2021-11-09 23:50:48

by Srinivas Pandruvada

Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

Hi Lukasz,

On Tue, 2021-11-09 at 12:39 +0000, Lukasz Luba wrote:
> Hi Ricardo,
>
>
> On 11/6/21 1:33 AM, Ricardo Neri wrote:
> > From: Srinivas Pandruvada <[email protected]>
> >
> > Add a new netlink event to notify change in CPU capabilities in
> > terms of
> > performance and efficiency.
>
> Is this going to be handled by some 'generic' tools? If yes, maybe
> the values for 'performance' might be aligned with capacity
> [0,1024] ? Or are they completely not related so the mapping is
> simply impossible?
>

That would have been very useful.

The problem is that we may not know the maximum performance, as the system
may be booting with few CPUs (using the maxcpus kernel command line) and
then the user hot adding them. So we may need to rescale when we get a new
maximum performance CPU and send the update to user space.

We can't just use the max from the HFI table at any instant, as it is not
necessarily the case that the HFI table contains data for all CPUs.

If the HFI max performance value of 255 were scaled to the max
performance CPU value in the system, then this conversion would have
been easy. But it is not.



> >
> > Firmware may change CPU capabilities as a result of thermal events
> > in the
> > system or to account for changes in the TDP (thermal design power)
> > level.
> >
> > This notification type will allow user space to avoid running
> > workloads
> > on certain CPUs or proactively adjust power limits to avoid future
> > events.
> >
> > Cc: Andi Kleen <[email protected]>
> > Cc: Aubrey Li <[email protected]>
> > Cc: Tim Chen <[email protected]>
> > Cc: "Ravi V. Shankar" <[email protected]>
> > Reviewed-by: Len Brown <[email protected]>
> > Signed-off-by: Srinivas Pandruvada <
> > [email protected]>
> > ---
> >   drivers/thermal/thermal_netlink.c | 52
> > +++++++++++++++++++++++++++++++
> >   drivers/thermal/thermal_netlink.h | 13 ++++++++
> >   include/uapi/linux/thermal.h      |  6 +++-
> >   3 files changed, 70 insertions(+), 1 deletion(-)
>
> [snip]
>
> >  
> > +struct cpu_capability {
> > +       int cpu;
> > +       int perf;
> > +       int eff;

Good idea.

Thanks,
Srinivas

>
> Why not use the full names, instead of thse shortcuts? We use full
> naming e.g. in cpufreq framework such as 'frequency' not 'freq'.
> The 'eff' is really not meaningful ('perf' a bit less but it has
> to meanings in kernel).
>
> Regards,
> Lukasz


2021-11-09 23:54:37

by Srinivas Pandruvada

Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

On Tue, 2021-11-09 at 13:53 +0000, Lukasz Luba wrote:
> Hi Srinivas,
>
> On 11/9/21 1:23 PM, Srinivas Pandruvada wrote:
> > Hi Lukasz,
> >
> > On Tue, 2021-11-09 at 12:39 +0000, Lukasz Luba wrote:
> > > Hi Ricardo,
> > >
> > >
> > > On 11/6/21 1:33 AM, Ricardo Neri wrote:
> > > > From: Srinivas Pandruvada <[email protected]>
> > > >
> > > > Add a new netlink event to notify change in CPU capabilities in
> > > > terms of
> > > > performance and efficiency.
> > >
> > > Is this going to be handled by some 'generic' tools? If yes,
> > > maybe
> > > the values for 'performance' might be aligned with capacity
> > > [0,1024] ? Or are they completely not related so the mapping is
> > > simply impossible?
> > >
> >
> > That would have been very useful.
> >
> > The problem is that we may not know the maximum performance as
> > system
> > may be booting with few CPUs (using maxcpus kernel command line)
> > and
> > then user hot adding them. So we may need to rescale when we get a
> > new
> > maximum performance CPU and send to user space.
> >
> > We can't just use max from HFI table at in instance as it is not
> > necessary that HFI table contains data for all CPUs.
> >
> > If HFI max performance value of 255 is a scaled value to max
> > performance CPU value in the system, then this conversion would
> > have
> > been easy. But that is not.
>
> I see. I was asking because I'm working on similar interface and
> just wanted to understand your approach better. In my case we
> would probably simply use 'capacity' scale, or more
> precisely available capacity after subtracting 'thermal pressure'
> value.
> That might confuse a generic tool which listens to these socket
> messages, though. So probably I would have to add a new
> THERMAL_GENL_ATTR_CPU_CAPABILITY_* id
> to handle this different normalized across CPUs scale.
I can add a field capacity_scale. In the HFI case it will always be 255. In
your case it will be 1024.


2021-11-09 23:58:43

by Ricardo Neri

Subject: Re: [PATCH 5/7] thermal: intel: hfi: Enable notification interrupt

On Mon, Nov 08, 2021 at 10:01:04AM +0100, Peter Zijlstra wrote:
> On Fri, Nov 05, 2021 at 06:33:10PM -0700, Ricardo Neri wrote:
> > + /*
> > + * On most systems, all CPUs in the package receive a package-level
> > + * thermal interrupt when there is an HFI update. Since they all are
> > + * dealing with the same update (as indicated by the update timestamp),
> > + * it is sufficient to let a single CPU to acknowledge the update and
> > + * schedule work to process it.
> > + */
>
> That's pretty crap hardware behaviour. Is there really no way to steer
> these interrupts?

The HFI is a package-level thermal interrupt. Hence, all CPUs get it.
AFAIK there is no way to steer the interrupts.

Thanks and BR,
Ricardo

2021-11-10 00:11:28

by Lukasz Luba

Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change



On 11/9/21 2:15 PM, Srinivas Pandruvada wrote:
> On Tue, 2021-11-09 at 13:53 +0000, Lukasz Luba wrote:
>> Hi Srinivas,
>>
>> On 11/9/21 1:23 PM, Srinivas Pandruvada wrote:
>>> Hi Lukasz,
>>>
>>> On Tue, 2021-11-09 at 12:39 +0000, Lukasz Luba wrote:
>>>> Hi Ricardo,
>>>>
>>>>
>>>> On 11/6/21 1:33 AM, Ricardo Neri wrote:
>>>>> From: Srinivas Pandruvada <[email protected]>
>>>>>
>>>>> Add a new netlink event to notify change in CPU capabilities in
>>>>> terms of
>>>>> performance and efficiency.
>>>>
>>>> Is this going to be handled by some 'generic' tools? If yes,
>>>> maybe
>>>> the values for 'performance' might be aligned with capacity
>>>> [0,1024] ? Or are they completely not related so the mapping is
>>>> simply impossible?
>>>>
>>>
>>> That would have been very useful.
>>>
>>> The problem is that we may not know the maximum performance as
>>> system
>>> may be booting with few CPUs (using maxcpus kernel command line)
>>> and
>>> then user hot adding them. So we may need to rescale when we get a
>>> new
>>> maximum performance CPU and send to user space.
>>>
>>> We can't just use max from HFI table at in instance as it is not
>>> necessary that HFI table contains data for all CPUs.
>>>
>>> If HFI max performance value of 255 is a scaled value to max
>>> performance CPU value in the system, then this conversion would
>>> have
>>> been easy. But that is not.
>>
>> I see. I was asking because I'm working on similar interface and
>> just wanted to understand your approach better. In my case we
>> would probably simply use 'capacity' scale, or more
>> precisely available capacity after subtracting 'thermal pressure'
>> value.
>> That might confuse a generic tool which listens to these socket
>> messages, though. So probably I would have to add a new
>> THERMAL_GENL_ATTR_CPU_CAPABILITY_* id
>> to handle this different normalized across CPUs scale.
> I can add a field capacity_scale. In HFI case it will always be 255. In
> your cases it will 1024.
>
>

Sounds good. With that upper limit, those tools would not build in
assumptions (they would have to parse the scale value). Although, I
would prefer to call it 'performance_scale' if you don't mind.
I've done a similar renaming, s/capacity/performance/, in the Energy
Model (EM) some time ago [1]. Some reasons:
- in the scheduler we have 'Performance Domains (PDs)'
- for GPUs we talk about 'performance', because 'capacity' sounds odd
in that case

[1]
https://lore.kernel.org/linux-pm/[email protected]/
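
(For illustration, this is roughly what a generic listener could do with
such a scale field. The helper name and its parameters are made up for
this sketch; they are not part of the proposed uAPI.)

#include <stdint.h>

/*
 * Hypothetical userspace helper: normalize a reported performance value
 * to the [0, 1024] capacity range, using the scale carried in the
 * notification (255 for HFI, 1024 for a capacity-based producer).
 */
static inline uint32_t normalize_perf(uint32_t perf, uint32_t perf_scale)
{
        if (!perf_scale)
                return 0;       /* malformed message; treat as no capability */

        return (perf * 1024u) / perf_scale;
}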

2021-11-10 00:21:21

by Srinivas Pandruvada

[permalink] [raw]
Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

On Tue, 2021-11-09 at 17:51 +0000, Lukasz Luba wrote:
>
>
> On 11/9/21 2:15 PM, Srinivas Pandruvada wrote:
> > On Tue, 2021-11-09 at 13:53 +0000, Lukasz Luba wrote:
> > > Hi Srinivas,
> > >
> > > On 11/9/21 1:23 PM, Srinivas Pandruvada wrote:
> > > > Hi Lukasz,
> > > >
> > > > On Tue, 2021-11-09 at 12:39 +0000, Lukasz Luba wrote:
> > > > > Hi Ricardo,
> > > > >
> > > > >
> > > > > On 11/6/21 1:33 AM, Ricardo Neri wrote:
> > > > > > From: Srinivas Pandruvada <
> > > > > > [email protected]>
> > > > > >
> > > > > > Add a new netlink event to notify change in CPU capabilities
> > > > > > in
> > > > > > terms of
> > > > > > performance and efficiency.
> > > > >
> > > > > Is this going to be handled by some 'generic' tools? If yes,
> > > > > maybe
> > > > > the values for 'performance' might be aligned with capacity
> > > > > [0,1024] ? Or are they completely not related so the mapping is
> > > > > simply impossible?
> > > > >
> > > >
> > > > That would have been very useful.
> > > >
> > > > The problem is that we may not know the maximum performance as
> > > > system
> > > > may be booting with few CPUs (using maxcpus kernel command line)
> > > > and
> > > > then user hot adding them. So we may need to rescale when we get
> > > > a
> > > > new
> > > > maximum performance CPU and send to user space.
> > > >
> > > > We can't just use max from HFI table at in instance as it is not
> > > > necessary that HFI table contains data for all CPUs.
> > > >
> > > > If HFI max performance value of 255 is a scaled value to max
> > > > performance CPU value in the system, then this conversion would
> > > > have
> > > > been easy. But that is not.
> > >
> > > I see. I was asking because I'm working on similar interface and
> > > just wanted to understand your approach better. In my case we
> > > would probably simply use 'capacity' scale, or more
> > > precisely available capacity after subtracting 'thermal pressure'
> > > value.
> > > That might confuse a generic tool which listens to these socket
> > > messages, though. So probably I would have to add a new
> > > THERMAL_GENL_ATTR_CPU_CAPABILITY_* id
> > > to handle this different normalized across CPUs scale.
> > I can add a field capacity_scale. In HFI case it will always be 255.
> > In
> > your cases it will 1024.
> >
> >
>
> Sounds good, with that upper limit those tools would not build
> up assumptions (they would have to parse that scale value).
> Although, I would prefer to call it 'performance_scale' if you don't
> mind.
Sure.

Thanks,
Srinivas

> I've done similar renaming  s/capacity/performance/ in the Energy Model
> (EM) some time ago [1]. Some reasons:
> - in the scheduler we have 'Performance Domains (PDs)'
> - for GPUs we talk about 'performance', because 'capacity' sounds odd
>    in that case
>
> [1]
> https://lore.kernel.org/linux-pm/[email protected]/


2021-11-24 14:49:17

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 4/7] thermal: intel: hfi: Handle CPU hotplug events

On Sat, Nov 6, 2021 at 2:34 AM Ricardo Neri
<[email protected]> wrote:
>
> All CPUs in a package are represented in an HFI table. There exists an
> HFI table per package. Thus, CPUs in a package need to coordinate to
> initialize and access the table. Do such coordination during CPU hotplug.
> Use the first CPU to come online in a package to initialize the HFI table
> and the data structure representing it. Other CPUs in the same package need
> only to register or unregister themselves in that data structure.
>
> The HFI depends on both the package-level thermal management and the local
> APIC thermal local vector. Thus, ensure that both are properly configured
> before calling intel_hfi_online(). The CPU hotplug callbacks of the thermal
> throttle events code already meet these conditions. Enable the HFI from
> thermal_throttle_online().
>
> Cc: Andi Kleen <[email protected]>
> Cc: Aubrey Li <[email protected]>
> Cc: Srinivas Pandruvada <[email protected]>
> Cc: Tim Chen <[email protected]>
> Cc: "Ravi V. Shankar" <[email protected]>
> Reviewed-by: Len Brown <[email protected]>
> Signed-off-by: Ricardo Neri <[email protected]>
> ---
> drivers/thermal/intel/intel_hfi.c | 211 ++++++++++++++++++++++++++++
> drivers/thermal/intel/intel_hfi.h | 4 +
> drivers/thermal/intel/therm_throt.c | 8 ++
> 3 files changed, 223 insertions(+)
>
> diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
> index edfe343507b3..6a3adfd57d72 100644
> --- a/drivers/thermal/intel/intel_hfi.c
> +++ b/drivers/thermal/intel/intel_hfi.c
> @@ -21,6 +21,7 @@
>
> #define pr_fmt(fmt) "intel-hfi: " fmt
>
> +#include <linux/io.h>
> #include <linux/slab.h>
>
> #include "intel_hfi.h"
> @@ -52,16 +53,26 @@ struct hfi_hdr {
>
> /**
> * struct hfi_instance - Representation of an HFI instance (i.e., a table)
> + * @table_base: Base of the local copy of the HFI table
> * @ts_counter: Time stamp of the last update of the table
> * @hdr: Base address of the table header
> * @data: Base address of the table data
> + * @die_id: Logical die ID of this HFI table instance
> + * @cpus: CPUs represented in this HFI table instance
> + * @hw_table: Pointer to the HFI table of this instance
> + * @initialized: True if this HFI instance has been initialized
> *
> * A set of parameters to parse and navigate a specific HFI table.
> */
> struct hfi_instance {
> + void *table_base;
> u64 *ts_counter;
> void *hdr;
> void *data;
> + u16 die_id;
> + struct cpumask *cpus;
> + void *hw_table;
> + bool initialized;
> };
>
> /**
> @@ -83,10 +94,210 @@ struct hfi_features {
> bool parsed;
> };
>
> +/**
> + * struct hfi_cpu_info - Per-CPU attributes to consume HFI data
> + * @index: Row of this CPU in its HFI table
> + * @hfi_instance: Attributes of the HFI table to which this CPU belongs
> + *
> + * Parameters to link a logical processor to an HFI table and a row within it.
> + */
> +struct hfi_cpu_info {
> + s16 index;
> + struct hfi_instance *hfi_instance;
> +};
> +
> +static DEFINE_PER_CPU(struct hfi_cpu_info, hfi_cpu_info) = { .index = -1 };
> +
> static int max_hfi_instances;
> static struct hfi_instance *hfi_instances;
>
> static struct hfi_features hfi_features;
> +static DEFINE_MUTEX(hfi_lock);
> +
> +static void init_hfi_cpu_index(unsigned int cpu)

I would make this function take a (struct hfi_cpu_info *) argument
instead of the CPU number. It would be more concise then.

> +{
> + s16 hfi_idx;
> + u32 edx;
> +
> + /* Do not re-read @cpu's index if it has already been initialized. */
> + if (per_cpu(hfi_cpu_info, cpu).index > -1)
> + return;
> +
> + edx = cpuid_edx(CPUID_HFI_LEAF);
> + hfi_idx = (edx & CPUID_HFI_CPU_INDEX_MASK) >> CPUID_HFI_CPU_INDEX_SHIFT;
> +
> + per_cpu(hfi_cpu_info, cpu).index = hfi_idx;
> +}
> +
> +/*
> + * The format of the HFI table depends on the number of capabilities that the
> + * hardware supports. Keep a data structure to navigate the table.
> + */
> +static void init_hfi_instance(struct hfi_instance *hfi_instance)
> +{
> + /* The HFI time-stamp is located at the base of the table. */
> + hfi_instance->ts_counter = hfi_instance->table_base;
> +
> + /* The HFI header is below the time-stamp. */
> + hfi_instance->hdr = hfi_instance->table_base +
> + sizeof(*hfi_instance->ts_counter);
> +
> + /* The HFI data starts below the header. */
> + hfi_instance->data = hfi_instance->hdr + hfi_features.hdr_size;
> +}
> +
> +/**
> + * intel_hfi_online() - Enable HFI on @cpu
> + * @cpu: CPU in which the HFI will be enabled
> + *
> + * Enable the HFI to be used in @cpu. The HFI is enabled at the die/package
> + * level. The first CPU in the die/package to come online does the full HFI
> + * initialization. Subsequent CPUs will just link themselves to the HFI
> + * instance of their die/package.
> + */
> +void intel_hfi_online(unsigned int cpu)
> +{
> + struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
> + u16 die_id = topology_logical_die_id(cpu);
> + struct hfi_instance *hfi_instance;
> + phys_addr_t hw_table_pa;
> + u64 msr_val;
> +
> + if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
> + return;

IMO it is not useful to do anything below in this function if
hfi_instances is NULL, so I would check it along with the above.

> +
> + init_hfi_cpu_index(cpu);
> +
> + /*
> + * The HFI instance of this @cpu may exist already but it has not
> + * been linked to @cpu.
> + */
> + hfi_instance = info->hfi_instance;
> + if (!hfi_instance) {
> + if (!hfi_instances)
> + return;
> +
> + if (die_id >= 0 && die_id < max_hfi_instances)
> + hfi_instance = &hfi_instances[die_id];
> +
> + if (!hfi_instance)
> + return;

And here I would do

if (die_id < 0 || die_id >= max_hfi_instances)
return;

hfi_instance = &hfi_instances[die_id];

which is one branch less and fewer LOC.

> + }
> +
> + /*
> + * Now check if the HFI instance of the package/die of this CPU has
> + * been initialized. In such case, all we have to do is link @cpu's info
> + * to the HFI instance of its die/package.
> + */
> + mutex_lock(&hfi_lock);
> + if (hfi_instance->initialized) {
> + info->hfi_instance = hfi_instance;
> +
> + /*
> + * @cpu is the first one in its die/package to come back online.
> + * Use it to track the CPUs in the die/package.
> + */
> + if (!hfi_instance->cpus)
> + hfi_instance->cpus = topology_core_cpumask(cpu);
> +
> + mutex_unlock(&hfi_lock);
> + return;
> + }
> +
> + /*
> + * Hardware is programmed with the physical address of the first page
> + * frame of the table. Hence, the allocated memory must be page-aligned.
> + */
> + hfi_instance->hw_table = alloc_pages_exact(hfi_features.nr_table_pages,
> + GFP_KERNEL | __GFP_ZERO);
> + if (!hfi_instance->hw_table)
> + goto err_out;
> +
> + hw_table_pa = virt_to_phys(hfi_instance->hw_table);
> +
> + hfi_instance->table_base = kzalloc(hfi_features.nr_table_pages << PAGE_SHIFT,
> + GFP_KERNEL);
> + if (!hfi_instance->table_base)
> + goto free_hw_table;
> +
> + /*
> + * Program the address of the feedback table of this die/package. On
> + * some processors, hardware remembers the old address of the HFI table
> + * even after having been reprogrammed and re-enabled. Thus, do not free
> + * pages allocated for the table or reprogram the hardware with a new
> + * base address. Namely, program the hardware only once.
> + */
> + msr_val = hw_table_pa | HFI_PTR_VALID_BIT;
> + wrmsrl(MSR_IA32_HW_FEEDBACK_PTR, msr_val);
> +
> + init_hfi_instance(hfi_instance);
> +
> + hfi_instance->die_id = die_id;
> +
> + /*
> + * We can use the core cpumask of any cpu in the die/package. Any of
> + * them will reflect all the CPUs in the same package that are online.
> + */
> + hfi_instance->cpus = topology_core_cpumask(cpu);
> + info->hfi_instance = hfi_instance;
> + hfi_instance->initialized = true;
> +
> + mutex_unlock(&hfi_lock);
> +
> + return;
> +
> +free_hw_table:
> + free_pages_exact(hfi_instance->hw_table, hfi_features.nr_table_pages);
> +err_out:
> + mutex_unlock(&hfi_lock);
> +}
> +
> +/**
> + * intel_hfi_offline() - Disable HFI on @cpu
> + * @cpu: CPU in which the HFI will be disabled
> + *
> + * Remove @cpu from those covered by its HFI instance.
> + *
> + * On some processors, hardware remembers previous programming settings even
> + * after being reprogrammed. Thus, keep HFI enabled even if all CPUs in the
> + * die/package of @cpu are offline. See note in intel_hfi_online().
> + */
> +void intel_hfi_offline(unsigned int cpu)
> +{
> + struct cpumask *die_cpumask = topology_core_cpumask(cpu);
> + struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
> + struct hfi_instance *hfi_instance;
> +
> + if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
> + return;
> +
> + hfi_instance = info->hfi_instance;
> + if (!hfi_instance)
> + return;
> +
> + if (!hfi_instance->initialized)
> + return;
> +
> + mutex_lock(&hfi_lock);
> +
> + /*
> + * We were using the core cpumask of @cpu to track CPUs in the same
> + * die/package. Now it is going offline and we need to find another
> + * CPU we can use.
> + */
> + if (die_cpumask == hfi_instance->cpus) {
> + int new_cpu;
> +
> + new_cpu = cpumask_any_but(hfi_instance->cpus, cpu);
> + if (new_cpu >= nr_cpu_ids)
> + /* All other CPUs in the package are offline. */
> + hfi_instance->cpus = NULL;
> + else
> + hfi_instance->cpus = topology_core_cpumask(new_cpu);

Hmmm. Is topology_core_cpumask() updated when CPUs go offline and online?

> + }
> +
> + mutex_unlock(&hfi_lock);
> +}
>
> static __init int hfi_parse_features(void)
> {
> diff --git a/drivers/thermal/intel/intel_hfi.h b/drivers/thermal/intel/intel_hfi.h
> index 42529d3ac92d..d87c3823bb76 100644
> --- a/drivers/thermal/intel/intel_hfi.h
> +++ b/drivers/thermal/intel/intel_hfi.h
> @@ -27,8 +27,12 @@
>
> #if defined(CONFIG_INTEL_HFI)
> void __init intel_hfi_init(void);
> +void intel_hfi_online(unsigned int cpu);
> +void intel_hfi_offline(unsigned int cpu);
> #else
> static inline void intel_hfi_init(void) { }
> +static inline void intel_hfi_online(unsigned int cpu) { }
> +static inline void intel_hfi_offline(unsigned int cpu) { }
> #endif
>
> #endif /* _INTEL_HFI_H */
> diff --git a/drivers/thermal/intel/therm_throt.c b/drivers/thermal/intel/therm_throt.c
> index ac408714d52b..2a79598a7f7a 100644
> --- a/drivers/thermal/intel/therm_throt.c
> +++ b/drivers/thermal/intel/therm_throt.c
> @@ -480,6 +480,12 @@ static int thermal_throttle_online(unsigned int cpu)
> l = apic_read(APIC_LVTTHMR);
> apic_write(APIC_LVTTHMR, l & ~APIC_LVT_MASKED);
>
> + /*
> + * Enable the package-level HFI interrupt. By now the local APIC is
> + * ready to get thermal interrupts.
> + */
> + intel_hfi_online(cpu);
> +
> return thermal_throttle_add_dev(dev, cpu);
> }
>
> @@ -489,6 +495,8 @@ static int thermal_throttle_offline(unsigned int cpu)
> struct device *dev = get_cpu_device(cpu);
> u32 l;
>
> + intel_hfi_offline(cpu);
> +
> /* Mask the thermal vector before draining evtl. pending work */
> l = apic_read(APIC_LVTTHMR);
> apic_write(APIC_LVTTHMR, l | APIC_LVT_MASKED);
> --
> 2.17.1
>

2021-11-30 09:24:43

by Daniel Lezcano

[permalink] [raw]
Subject: Re: [PATCH 1/7] x86/Documentation: Describe the Intel Hardware Feedback Interface

Hi Ricardo,

On 06/11/2021 02:33, Ricardo Neri wrote:
> Start a documentation file to describe the purpose and operation of Intel's
> Hardware Feedback Interface. Describe how this interface is used in Linux
> to relay performance and energy efficiency updates to userspace.
>
> Cc: Andi Kleen <[email protected]>
> Cc: Aubrey Li <[email protected]>
> Cc: Tim Chen <[email protected]>
> Cc: "Ravi V. Shankar" <[email protected]>
> Reviewed-by: Len Brown <[email protected]>
> Suggested-by: Srinivas Pandruvada <[email protected]>
> Signed-off-by: Ricardo Neri <[email protected]>
> ---
> Documentation/x86/index.rst | 1 +
> Documentation/x86/intel-hfi.rst | 68 +++++++++++++++++++++++++++++++++
> 2 files changed, 69 insertions(+)
> create mode 100644 Documentation/x86/intel-hfi.rst
>
> diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
> index 383048396336..f103821ee095 100644
> --- a/Documentation/x86/index.rst
> +++ b/Documentation/x86/index.rst
> @@ -21,6 +21,7 @@ x86-specific Documentation
> tlb
> mtrr
> pat
> + intel-hfi
> intel-iommu
> intel_txt
> amd-memory-encryption
> diff --git a/Documentation/x86/intel-hfi.rst b/Documentation/x86/intel-hfi.rst
> new file mode 100644
> index 000000000000..f5cb738170a5
> --- /dev/null
> +++ b/Documentation/x86/intel-hfi.rst
> @@ -0,0 +1,68 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +============================================================
> +Hardware-Feedback Interface for scheduling on Intel Hardware
> +============================================================
> +
> +Overview
> +--------
> +
> +Intel has described the Hardware Feedback Interface (HFI) in the Intel 64 and
> +IA-32 Architectures Software Developer's Manual (Intel SDM) Volume 3 Section
> +14.6 [1]_.
> +
> +The HFI gives the operating system performance and energy efficiency
> +capability data for each CPU in the system. Linux can use the information from
> +the HFI to influence task placement decisions.
> +
> +The Hardware Feedback Interface
> +-------------------------------
> +
> +The Hardware Feedback Interface provides to the operating system information
> +about the performance and energy efficiency of each CPU in the system. Each
> +capability is given as a unit-less quantity in the range [0-255]. Higher values
> +indicate higher capability. Energy efficiency and performance are reported in
> +separate capabilities.

Are they linked together (eg. higher energy efficiency => lower
performance)?

> +These capabilities may change at runtime as a result of changes in the
> +operating conditions of the system or the action of external factors.

Is it possible to give examples?

> The rate
> +at which these capabilities are updated is specific to each processor model. On
> +some models, capabilities are set at boot time and never change. On others,
> +capabilities may change every tens of milliseconds.
> +
> +The kernel or a userspace policy daemon can use these capabilities to modify
> +task placement decisions. For instance, if either the performance or energy
> +capabilities of a given logical processor becomes zero, it is an indication that
> +the hardware recommends to the operating system to not schedule any tasks on
> +that processor for performance or energy efficiency reasons, respectively.

How can userspace be involved in these decisions? If the performance
is impacted, then that should be reflected in the CPU capacity. The
scheduler will avoid putting tasks on a CPU with a low capacity, no?

I'm also worried about the overhead of the userspace notifications.

That sounds similar to the thermal pressure? Wouldn't it make sense to
create a generic component where HFI, cpufreq cooling, LMh, etc. are
the backends?



> +Implementation details for Linux
> +--------------------------------
> +
> +The infrastructure to handle thermal event interrupts has two parts. In the
> +Local Vector Table of a CPU's local APIC, there exists a register for the
> +Thermal Monitor Register. This register controls how interrupts are delivered
> +to a CPU when the thermal monitor generates an interrupt. Further details
> +can be found in the Intel SDM Vol. 3 Section 10.5 [1]_.
> +
> +The thermal monitor may generate interrupts per CPU or per package. The HFI
> +generates package-level interrupts. This monitor is configured and initialized
> +via a set of machine-specific registers. Specifically, the HFI interrupt and
> +status are controlled via designated bits in the IA32_PACKAGE_THERM_INTERRUPT
> +and IA32_PACKAGE_THERM_STATUS registers, respectively. There exists one HFI
> +table per package. Further details can be found in the Intel SDM Vol. 3
> +Section 14.9 [1]_.
> +
> +The hardware issues an HFI interrupt after updating the HFI table and is ready
> +for the operating system to consume it. CPUs receive such interrupt via the
> +thermal entry in the Local APIC's Local Vector Table.
> +
> +When servicing such interrupt, the HFI driver parses the updated table and
> +relays the update to userspace using the thermal notification framework. Given
> +that there may be many HFI updates every second, the updates relayed to
> +userspace are throttled at a rate of CONFIG_HZ jiffies.
> +
> +References
> +----------
> +
> +.. [1] https://www.intel.com/sdm
>


--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

2021-11-30 09:29:53

by Daniel Lezcano

[permalink] [raw]
Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

On 06/11/2021 02:33, Ricardo Neri wrote:
> From: Srinivas Pandruvada <[email protected]>
>
> Add a new netlink event to notify change in CPU capabilities in terms of
> performance and efficiency.
>
> Firmware may change CPU capabilities as a result of thermal events in the
> system or to account for changes in the TDP (thermal design power) level.
>
> This notification type will allow user space to avoid running workloads
> on certain CPUs or proactively adjust power limits to avoid future events.
>

[ ... ]

> + [THERMAL_GENL_ATTR_CPU_CAPABILITY_ID] = { .type = NLA_U32 },
> + [THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF] = { .type = NLA_U32 },
> + [THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF] = { .type = NLA_U32 },
> };

AFAIU, 0 <= perf < 256 and 0 <= eff < 256, right?

Is the following true?

0 <= perf + eff < 256



--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

2021-11-30 10:20:27

by Srinivas Pandruvada

[permalink] [raw]
Subject: Re: [PATCH 1/7] x86/Documentation: Describe the Intel Hardware Feedback Interface

Hi Daniel,

On Tue, 2021-11-30 at 10:24 +0100, Daniel Lezcano wrote:
> Hi Ricardo,
> >

[...]

> > +The Hardware Feedback Interface
> > +-------------------------------
> > +
> > +The Hardware Feedback Interface provides to the operating system
> > information
> > +about the performance and energy efficiency of each CPU in the
> > system. Each
> > +capability is given as a unit-less quantity in the range [0-255].
> > Higher values
> > +indicate higher capability. Energy efficiency and performance are
> > reported in
> > +separate capabilities.
>
> Are they linked together (eg. higher energy efficiency => lower
> performance)?

Generally true.
But for some workloads and conditions, a higher energy efficiency point
doesn't mean lower performance.

>
> > +These capabilities may change at runtime as a result of changes in
> > the
> > +operating conditions of the system or the action of external
> > factors.
>
> Is it possible to give examples?
For example, a server farm may decide to save power and reduce cooling cost
by lowering the TDP. This can be done remotely. This will result in a
notification of a lower performance value, or even perf=eff=0, on some
CPUs via the HFI. Intel CPUs can change the TDP level at runtime.

Or, if the system is overheating, the firmware can indicate lower
performance so the OSPM can take action.

>
> > The rate
> > +at which these capabilities are updated is specific to each
> > processor model. On
> > +some models, capabilities are set at boot time and never change.
> > On others,
> > +capabilities may change every tens of milliseconds.
> > +
> > +The kernel or a userspace policy daemon can use these capabilities
> > to modify
> > +task placement decisions. For instance, if either the performance
> > or energy
> > +capabilities of a given logical processor becomes zero, it is an
> > indication that
> > +the hardware recommends to the operating system to not schedule
> > any tasks on
> > +that processor for performance or energy efficiency reasons,
> > respectively.
>
> How the userspace can be involved in these decisions? If the
> performance
> is impacted then that should be reflected in the CPU capacity. The
> scheduler will prevent to put task on CPU with a low capacity, no?
>
> I'm also worried about the overhead of the userspace notifications.
>
> That sounds like similar to the thermal pressure? Wouldn't make sense
> to
> create a generic component where HFI, cpufreq cooling, LMh, etc ...
> are
> the backend?

The problem is the treatment of perf/eff == 0 for a CPU, which we can
indicate as capacity = 0 to the scheduler. But this doesn't prevent the
scheduler from using that CPU on an overloaded system. We can offline
that CPU in the kernel, which would be intrusive without notifying user
space and may fail for CPU0. We tried CPU idle injection and removing
the CPU from cpusets, but those don't work when interrupts are affined
to that CPU, or softirqs or timers are scheduled there.

Here the notifications are on the order of several milliseconds (in
reality, they are seconds apart for current use cases). These are not
emergency events. As with other thermal notifications, if something is
urgent, the firmware can already force the lowest performance without
even notifying user space.


Thanks,
Srinivas

>
>
>
> > +Implementation details for Linux
> > +--------------------------------
> > +
> > +The infrastructure to handle thermal event interrupts has two
> > parts. In the
> > +Local Vector Table of a CPU's local APIC, there exists a register
> > for the
> > +Thermal Monitor Register. This register controls how interrupts
> > are delivered
> > +to a CPU when the thermal monitor generates an interrupt. Further
> > details
> > +can be found in the Intel SDM Vol. 3 Section 10.5 [1]_.
> > +
> > +The thermal monitor may generate interrupts per CPU or per
> > package. The HFI
> > +generates package-level interrupts. This monitor is configured and
> > initialized
> > +via a set of model-specific registers. Specifically, the HFI
> > interrupt and
> > +status are controlled via designated bits in the
> > IA32_PACKAGE_THERM_INTERRUPT
> > +and IA32_PACKAGE_THERM_STATUS registers, respectively. There
> > exists one HFI
> > +table per package. Further details can be found in the Intel SDM
> > Vol. 3
> > +Section 14.9 [1]_.
> > +
> > +The hardware issues an HFI interrupt after updating the HFI table
> > and is ready
> > +for the operating system to consume it. CPUs receive such
> > interrupt via the
> > +thermal entry in the Local APIC's Local Vector Table.
> > +
> > +When servicing such interrupt, the HFI driver parses the updated
> > table and
> > +relays the update to userspace using the thermal notification
> > framework. Given
> > +that there may be many HFI updates every second, the updates
> > relayed to
> > +userspace are throttled at a rate of CONFIG_HZ jiffies.
> > +
> > +References
> > +----------
> > +
> > +.. [1] https://www.intel.com/sdm
> >
>
>



2021-11-30 13:22:48

by Ricardo Neri

[permalink] [raw]
Subject: Re: [PATCH 4/7] thermal: intel: hfi: Handle CPU hotplug events

On Wed, Nov 24, 2021 at 03:48:49PM +0100, Rafael J. Wysocki wrote:
> On Sat, Nov 6, 2021 at 2:34 AM Ricardo Neri
> <[email protected]> wrote:
> >
> > All CPUs in a package are represented in an HFI table. There exists an
> > HFI table per package. Thus, CPUs in a package need to coordinate to
> > initialize and access the table. Do such coordination during CPU hotplug.
> > Use the first CPU to come online in a package to initialize the HFI table
> > and the data structure representing it. Other CPUs in the same package need
> > only to register or unregister themselves in that data structure.
> >
> > The HFI depends on both the package-level thermal management and the local
> > APIC thermal local vector. Thus, ensure that both are properly configured
> > before calling intel_hfi_online(). The CPU hotplug callbacks of the thermal
> > throttle events code already meet these conditions. Enable the HFI from
> > thermal_throttle_online().
> >
> > Cc: Andi Kleen <[email protected]>
> > Cc: Aubrey Li <[email protected]>
> > Cc: Srinivas Pandruvada <[email protected]>
> > Cc: Tim Chen <[email protected]>
> > Cc: "Ravi V. Shankar" <[email protected]>
> > Reviewed-by: Len Brown <[email protected]>
> > Signed-off-by: Ricardo Neri <[email protected]>
> > ---
> > drivers/thermal/intel/intel_hfi.c | 211 ++++++++++++++++++++++++++++
> > drivers/thermal/intel/intel_hfi.h | 4 +
> > drivers/thermal/intel/therm_throt.c | 8 ++
> > 3 files changed, 223 insertions(+)
> >
> > diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
> > index edfe343507b3..6a3adfd57d72 100644
> > --- a/drivers/thermal/intel/intel_hfi.c
> > +++ b/drivers/thermal/intel/intel_hfi.c
> > @@ -21,6 +21,7 @@
> >
> > #define pr_fmt(fmt) "intel-hfi: " fmt
> >
> > +#include <linux/io.h>
> > #include <linux/slab.h>
> >
> > #include "intel_hfi.h"
> > @@ -52,16 +53,26 @@ struct hfi_hdr {
> >
> > /**
> > * struct hfi_instance - Representation of an HFI instance (i.e., a table)
> > + * @table_base: Base of the local copy of the HFI table
> > * @ts_counter: Time stamp of the last update of the table
> > * @hdr: Base address of the table header
> > * @data: Base address of the table data
> > + * @die_id: Logical die ID of this HFI table instance
> > + * @cpus: CPUs represented in this HFI table instance
> > + * @hw_table: Pointer to the HFI table of this instance
> > + * @initialized: True if this HFI instance has been initialized
> > *
> > * A set of parameters to parse and navigate a specific HFI table.
> > */
> > struct hfi_instance {
> > + void *table_base;
> > u64 *ts_counter;
> > void *hdr;
> > void *data;
> > + u16 die_id;
> > + struct cpumask *cpus;
> > + void *hw_table;
> > + bool initialized;
> > };
> >
> > /**
> > @@ -83,10 +94,210 @@ struct hfi_features {
> > bool parsed;
> > };
> >
> > +/**
> > + * struct hfi_cpu_info - Per-CPU attributes to consume HFI data
> > + * @index: Row of this CPU in its HFI table
> > + * @hfi_instance: Attributes of the HFI table to which this CPU belongs
> > + *
> > + * Parameters to link a logical processor to an HFI table and a row within it.
> > + */
> > +struct hfi_cpu_info {
> > + s16 index;
> > + struct hfi_instance *hfi_instance;
> > +};
> > +
> > +static DEFINE_PER_CPU(struct hfi_cpu_info, hfi_cpu_info) = { .index = -1 };
> > +
> > static int max_hfi_instances;
> > static struct hfi_instance *hfi_instances;
> >
> > static struct hfi_features hfi_features;
> > +static DEFINE_MUTEX(hfi_lock);
> > +
> > +static void init_hfi_cpu_index(unsigned int cpu)
>
> I would make this function take a (struct hfi_cpu_info *) argument
> instead of the CPU number. It would be more concise then.

Sure Rafael, it would be more concise. I will make the change.
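
(For illustration, the refactored helper might look roughly like this; a
sketch of the suggestion, not the final code. intel_hfi_online() would
then simply call init_hfi_cpu_index(info).)

static void init_hfi_cpu_index(struct hfi_cpu_info *info)
{
        u32 edx;

        /* Do not re-read the index if it has already been initialized. */
        if (info->index > -1)
                return;

        edx = cpuid_edx(CPUID_HFI_LEAF);
        info->index = (edx & CPUID_HFI_CPU_INDEX_MASK) >>
                      CPUID_HFI_CPU_INDEX_SHIFT;
}
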
>
> > +{
> > + s16 hfi_idx;
> > + u32 edx;
> > +
> > + /* Do not re-read @cpu's index if it has already been initialized. */
> > + if (per_cpu(hfi_cpu_info, cpu).index > -1)
> > + return;
> > +
> > + edx = cpuid_edx(CPUID_HFI_LEAF);
> > + hfi_idx = (edx & CPUID_HFI_CPU_INDEX_MASK) >> CPUID_HFI_CPU_INDEX_SHIFT;
> > +
> > + per_cpu(hfi_cpu_info, cpu).index = hfi_idx;
> > +}
> > +
> > +/*
> > + * The format of the HFI table depends on the number of capabilities that the
> > + * hardware supports. Keep a data structure to navigate the table.
> > + */
> > +static void init_hfi_instance(struct hfi_instance *hfi_instance)
> > +{
> > + /* The HFI time-stamp is located at the base of the table. */
> > + hfi_instance->ts_counter = hfi_instance->table_base;
> > +
> > + /* The HFI header is below the time-stamp. */
> > + hfi_instance->hdr = hfi_instance->table_base +
> > + sizeof(*hfi_instance->ts_counter);
> > +
> > + /* The HFI data starts below the header. */
> > + hfi_instance->data = hfi_instance->hdr + hfi_features.hdr_size;
> > +}
> > +
> > +/**
> > + * intel_hfi_online() - Enable HFI on @cpu
> > + * @cpu: CPU in which the HFI will be enabled
> > + *
> > + * Enable the HFI to be used in @cpu. The HFI is enabled at the die/package
> > + * level. The first CPU in the die/package to come online does the full HFI
> > + * initialization. Subsequent CPUs will just link themselves to the HFI
> > + * instance of their die/package.
> > + */
> > +void intel_hfi_online(unsigned int cpu)
> > +{
> > + struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
> > + u16 die_id = topology_logical_die_id(cpu);
> > + struct hfi_instance *hfi_instance;
> > + phys_addr_t hw_table_pa;
> > + u64 msr_val;
> > +
> > + if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
> > + return;
>
> IMO it is not useful to do anything below in this function if
> hfi_instances is NULL, so I would check it along with the above.

Indeed, there is no point in continuing if we didn't allocate memory for
hfi_instances. I will make the change.
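
(For illustration, the top of intel_hfi_online() would then read roughly
as below, folding in both suggestions; a sketch only, and it assumes
die_id is declared with a signed type so the lower-bound check is
meaningful.)

        if (!boot_cpu_has(X86_FEATURE_INTEL_HFI) || !hfi_instances)
                return;

        init_hfi_cpu_index(info);

        /* @cpu's HFI instance may exist already but not be linked yet. */
        hfi_instance = info->hfi_instance;
        if (!hfi_instance) {
                if (die_id < 0 || die_id >= max_hfi_instances)
                        return;

                hfi_instance = &hfi_instances[die_id];
        }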

> > +
> > + init_hfi_cpu_index(cpu);
> > +
> > + /*
> > + * The HFI instance of this @cpu may exist already but it has not
> > + * been linked to @cpu.
> > + */
> > + hfi_instance = info->hfi_instance;
> > + if (!hfi_instance) {
> > + if (!hfi_instances)
> > + return;
> > +
> > + if (die_id >= 0 && die_id < max_hfi_instances)
> > + hfi_instance = &hfi_instances[die_id];
> > +
> > + if (!hfi_instance)
> > + return;
>
> And here I would do
>
> if (die_id < 0 || die_id >= max_hfi_instances)
> return;
>
> hfi_instance = &hfi_instances[die_id];
>
> which is one branch less and fewer LOC.
>

Thanks Rafael, this looks better.

> > + }
> > +
> > + /*
> > + * Now check if the HFI instance of the package/die of this CPU has
> > + * been initialized. In such case, all we have to do is link @cpu's info
> > + * to the HFI instance of its die/package.
> > + */
> > + mutex_lock(&hfi_lock);
> > + if (hfi_instance->initialized) {
> > + info->hfi_instance = hfi_instance;
> > +
> > + /*
> > + * @cpu is the first one in its die/package to come back online.
> > + * Use it to track the CPUs in the die/package.
> > + */
> > + if (!hfi_instance->cpus)
> > + hfi_instance->cpus = topology_core_cpumask(cpu);
> > +
> > + mutex_unlock(&hfi_lock);
> > + return;
> > + }
> > +
> > + /*
> > + * Hardware is programmed with the physical address of the first page
> > + * frame of the table. Hence, the allocated memory must be page-aligned.
> > + */
> > + hfi_instance->hw_table = alloc_pages_exact(hfi_features.nr_table_pages,
> > + GFP_KERNEL | __GFP_ZERO);
> > + if (!hfi_instance->hw_table)
> > + goto err_out;
> > +
> > + hw_table_pa = virt_to_phys(hfi_instance->hw_table);
> > +
> > + hfi_instance->table_base = kzalloc(hfi_features.nr_table_pages << PAGE_SHIFT,
> > + GFP_KERNEL);
> > + if (!hfi_instance->table_base)
> > + goto free_hw_table;
> > +
> > + /*
> > + * Program the address of the feedback table of this die/package. On
> > + * some processors, hardware remembers the old address of the HFI table
> > + * even after having been reprogrammed and re-enabled. Thus, do not free
> > + * pages allocated for the table or reprogram the hardware with a new
> > + * base address. Namely, program the hardware only once.
> > + */
> > + msr_val = hw_table_pa | HFI_PTR_VALID_BIT;
> > + wrmsrl(MSR_IA32_HW_FEEDBACK_PTR, msr_val);
> > +
> > + init_hfi_instance(hfi_instance);
> > +
> > + hfi_instance->die_id = die_id;
> > +
> > + /*
> > + * We can use the core cpumask of any cpu in the die/package. Any of
> > + * them will reflect all the CPUs in the same package that are online.
> > + */
> > + hfi_instance->cpus = topology_core_cpumask(cpu);
> > + info->hfi_instance = hfi_instance;
> > + hfi_instance->initialized = true;
> > +
> > + mutex_unlock(&hfi_lock);
> > +
> > + return;
> > +
> > +free_hw_table:
> > + free_pages_exact(hfi_instance->hw_table, hfi_features.nr_table_pages);
> > +err_out:
> > + mutex_unlock(&hfi_lock);
> > +}
> > +
> > +/**
> > + * intel_hfi_offline() - Disable HFI on @cpu
> > + * @cpu: CPU in which the HFI will be disabled
> > + *
> > + * Remove @cpu from those covered by its HFI instance.
> > + *
> > + * On some processors, hardware remembers previous programming settings even
> > + * after being reprogrammed. Thus, keep HFI enabled even if all CPUs in the
> > + * die/package of @cpu are offline. See note in intel_hfi_online().
> > + */
> > +void intel_hfi_offline(unsigned int cpu)
> > +{
> > + struct cpumask *die_cpumask = topology_core_cpumask(cpu);
> > + struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
> > + struct hfi_instance *hfi_instance;
> > +
> > + if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
> > + return;
> > +
> > + hfi_instance = info->hfi_instance;
> > + if (!hfi_instance)
> > + return;
> > +
> > + if (!hfi_instance->initialized)
> > + return;
> > +
> > + mutex_lock(&hfi_lock);
> > +
> > + /*
> > + * We were using the core cpumask of @cpu to track CPUs in the same
> > + * die/package. Now it is going offline and we need to find another
> > + * CPU we can use.
> > + */
> > + if (die_cpumask == hfi_instance->cpus) {
> > + int new_cpu;
> > +
> > + new_cpu = cpumask_any_but(hfi_instance->cpus, cpu);
> > + if (new_cpu >= nr_cpu_ids)
> > + /* All other CPUs in the package are offline. */
> > + hfi_instance->cpus = NULL;
> > + else
> > + hfi_instance->cpus = topology_core_cpumask(new_cpu);
>
> Hmmm. Is topology_core_cpumask() updated when CPUs go offline and online?

Yes. A CPU going offline is cleared from its siblings' cpumask [1] and its own [2]
in remove_siblinginfo() via cpu_disable_common(). A CPU going online is set
in its siblings' cpumask and its own in set_cpu_sibling_map() [3].


Thanks and BR,
Ricardo

[1]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/smpboot.c?h=v5.16-rc3#n1592
[2]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/smpboot.c?h=v5.16-rc3#n1617
[3]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/smpboot.c?h=v5.16-rc3#n657
>

2021-11-30 13:33:01

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 4/7] thermal: intel: hfi: Handle CPU hotplug events

On Tue, Nov 30, 2021 at 2:22 PM Ricardo Neri
<[email protected]> wrote:
>
> On Wed, Nov 24, 2021 at 03:48:49PM +0100, Rafael J. Wysocki wrote:
> > On Sat, Nov 6, 2021 at 2:34 AM Ricardo Neri
> > <[email protected]> wrote:

[cut]

> > > +/**
> > > + * intel_hfi_offline() - Disable HFI on @cpu
> > > + * @cpu: CPU in which the HFI will be disabled
> > > + *
> > > + * Remove @cpu from those covered by its HFI instance.
> > > + *
> > > + * On some processors, hardware remembers previous programming settings even
> > > + * after being reprogrammed. Thus, keep HFI enabled even if all CPUs in the
> > > + * die/package of @cpu are offline. See note in intel_hfi_online().
> > > + */
> > > +void intel_hfi_offline(unsigned int cpu)
> > > +{
> > > + struct cpumask *die_cpumask = topology_core_cpumask(cpu);
> > > + struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
> > > + struct hfi_instance *hfi_instance;
> > > +
> > > + if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
> > > + return;
> > > +
> > > + hfi_instance = info->hfi_instance;
> > > + if (!hfi_instance)
> > > + return;
> > > +
> > > + if (!hfi_instance->initialized)
> > > + return;
> > > +
> > > + mutex_lock(&hfi_lock);
> > > +
> > > + /*
> > > + * We were using the core cpumask of @cpu to track CPUs in the same
> > > + * die/package. Now it is going offline and we need to find another
> > > + * CPU we can use.
> > > + */
> > > + if (die_cpumask == hfi_instance->cpus) {
> > > + int new_cpu;
> > > +
> > > + new_cpu = cpumask_any_but(hfi_instance->cpus, cpu);
> > > + if (new_cpu >= nr_cpu_ids)
> > > + /* All other CPUs in the package are offline. */
> > > + hfi_instance->cpus = NULL;
> > > + else
> > > + hfi_instance->cpus = topology_core_cpumask(new_cpu);
> >
> > Hmmm. Is topology_core_cpumask() updated when CPUs go offline and online?
>
> Yes. A CPU going offline is cleared from its siblings' cpumask [1] and its own [2]
> in remove_siblinginfo() via cpu_disable_common(). A CPU going online is set
> in its siblings' cpumask and its own in set_cpu_sibling_map() [3].

OK, so it is necessary to ensure that intel_hfi_offline() will always
run after remove_siblinginfo() so it sees the updated mask. How do we
ensure that?

> [1]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/smpboot.c?h=v5.16-rc3#n1592
> [2]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/smpboot.c?h=v5.16-rc3#n1617
> [3]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/smpboot.c?h=v5.16-rc3#n657

2021-12-02 23:44:23

by Ricardo Neri

[permalink] [raw]
Subject: Re: [PATCH 4/7] thermal: intel: hfi: Handle CPU hotplug events

On Tue, Nov 30, 2021 at 02:32:42PM +0100, Rafael J. Wysocki wrote:
> On Tue, Nov 30, 2021 at 2:22 PM Ricardo Neri
> <[email protected]> wrote:
> >
> > On Wed, Nov 24, 2021 at 03:48:49PM +0100, Rafael J. Wysocki wrote:
> > > On Sat, Nov 6, 2021 at 2:34 AM Ricardo Neri
> > > <[email protected]> wrote:
>
> [cut]
>
> > > > +/**
> > > > + * intel_hfi_offline() - Disable HFI on @cpu
> > > > + * @cpu: CPU in which the HFI will be disabled
> > > > + *
> > > > + * Remove @cpu from those covered by its HFI instance.
> > > > + *
> > > > + * On some processors, hardware remembers previous programming settings even
> > > > + * after being reprogrammed. Thus, keep HFI enabled even if all CPUs in the
> > > > + * die/package of @cpu are offline. See note in intel_hfi_online().
> > > > + */
> > > > +void intel_hfi_offline(unsigned int cpu)
> > > > +{
> > > > + struct cpumask *die_cpumask = topology_core_cpumask(cpu);
> > > > + struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, cpu);
> > > > + struct hfi_instance *hfi_instance;
> > > > +
> > > > + if (!boot_cpu_has(X86_FEATURE_INTEL_HFI))
> > > > + return;
> > > > +
> > > > + hfi_instance = info->hfi_instance;
> > > > + if (!hfi_instance)
> > > > + return;
> > > > +
> > > > + if (!hfi_instance->initialized)
> > > > + return;
> > > > +
> > > > + mutex_lock(&hfi_lock);
> > > > +
> > > > + /*
> > > > + * We were using the core cpumask of @cpu to track CPUs in the same
> > > > + * die/package. Now it is going offline and we need to find another
> > > > + * CPU we can use.
> > > > + */
> > > > + if (die_cpumask == hfi_instance->cpus) {
> > > > + int new_cpu;
> > > > +
> > > > + new_cpu = cpumask_any_but(hfi_instance->cpus, cpu);
> > > > + if (new_cpu >= nr_cpu_ids)
> > > > + /* All other CPUs in the package are offline. */
> > > > + hfi_instance->cpus = NULL;
> > > > + else
> > > > + hfi_instance->cpus = topology_core_cpumask(new_cpu);
> > >
> > > Hmmm. Is topology_core_cpumask() updated when CPUs go offline and online?
> >
> > Yes. A CPU going offline is cleared from its siblings' cpumask [1] and its own [2]
> > in remove_siblinginfo() via cpu_disable_common(). A CPU going online is set
> > in its siblings' cpumask and its own in set_cpu_sibling_map() [3].
>
> OK, so it is necessary to ensure that intel_hfi_offline() will always
> run after remove_siblinginfo() so it sees the updated mask. How do we
> ensure that?

I don't think that is possible. remove_siblinginfo() is called from
CPUHP_TEARDOWN_CPU, which always happens after CPUHP_AP_OFFLINE, if I
understand correctly.

I guess that I will need to use a local cpumask as other drivers do.

Thanks and BR,
Ricardo
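
(For illustration, the "local cpumask" approach might look roughly like
the fragments below; the field placement and the call sites are
assumptions, not the eventual code.)

        /*
         * In struct hfi_instance: own the mask instead of borrowing the
         * topology one, so hotplug callback ordering no longer matters.
         */
        struct cpumask cpus;

        /* In intel_hfi_online(), while holding hfi_lock: */
        cpumask_set_cpu(cpu, &hfi_instance->cpus);

        /* In intel_hfi_offline(), while holding hfi_lock: */
        cpumask_clear_cpu(cpu, &hfi_instance->cpus);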

2021-12-09 16:11:46

by Ricardo Neri

[permalink] [raw]
Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

On Tue, Nov 30, 2021 at 10:29:46AM +0100, Daniel Lezcano wrote:
> On 06/11/2021 02:33, Ricardo Neri wrote:
> > From: Srinivas Pandruvada <[email protected]>
> >
> > Add a new netlink event to notify change in CPU capabilities in terms of
> > performance and efficiency.
> >
> > Firmware may change CPU capabilities as a result of thermal events in the
> > system or to account for changes in the TDP (thermal design power) level.
> >
> > This notification type will allow user space to avoid running workloads
> > on certain CPUs or proactively adjust power limits to avoid future events.
> >
>
> [ ... ]
>
> > + [THERMAL_GENL_ATTR_CPU_CAPABILITY_ID] = { .type = NLA_U32 },
> > + [THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF] = { .type = NLA_U32 },
> > + [THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF] = { .type = NLA_U32 },
> > };
>
> AFAIU, 0 <= perf < 256 and 0 <= eff < 256, right?
>
> Is the following true?
>
> 0 <= perf + eff < 256

No, they are not. They are set independently.

Thanks and BR,
Ricardo


2021-12-09 16:57:41

by Daniel Lezcano

[permalink] [raw]
Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

On 09/12/2021 17:03, Ricardo Neri wrote:
> On Tue, Nov 30, 2021 at 10:29:46AM +0100, Daniel Lezcano wrote:
>> On 06/11/2021 02:33, Ricardo Neri wrote:
>>> From: Srinivas Pandruvada <[email protected]>
>>>
>>> Add a new netlink event to notify change in CPU capabilities in terms of
>>> performance and efficiency.
>>>
>>> Firmware may change CPU capabilities as a result of thermal events in the
>>> system or to account for changes in the TDP (thermal design power) level.
>>>
>>> This notification type will allow user space to avoid running workloads
>>> on certain CPUs or proactively adjust power limits to avoid future events.
>>>
>>
>> [ ... ]
>>
>>> + [THERMAL_GENL_ATTR_CPU_CAPABILITY_ID] = { .type = NLA_U32 },
>>> + [THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF] = { .type = NLA_U32 },
>>> + [THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF] = { .type = NLA_U32 },
>>> };
>>
>> AFAIU, 0 <= perf < 256 and 0 <= eff < 256, right?
>>
>> Is the following true?
>>
>> 0 <= perf + eff < 256
>
> No, they are not. They are set independently.

I understand they can be set independently, but is the constraint above
correct? For example, can the system send perf=255 and eff=255, or perf=0
and eff=0?

Maybe I misunderstood, but I was expecting at least some kind of
connection between perf and eff (when eff is high, perf is low, and
vice versa).


--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

2021-12-09 17:40:06

by Srinivas Pandruvada

[permalink] [raw]
Subject: Re: [PATCH 6/7] thermal: netlink: Add a new event to notify CPU capabilities change

On Thu, 2021-12-09 at 17:57 +0100, Daniel Lezcano wrote:
> On 09/12/2021 17:03, Ricardo Neri wrote:
> > On Tue, Nov 30, 2021 at 10:29:46AM +0100, Daniel Lezcano wrote:
> > > On 06/11/2021 02:33, Ricardo Neri wrote:
> > > > From: Srinivas Pandruvada <[email protected]>
> > > >
> > > > Add a new netlink event to notify change in CPU capabilities in
> > > > terms of
> > > > performance and efficiency.
> > > >
> > > > Firmware may change CPU capabilities as a result of thermal
> > > > events in the
> > > > system or to account for changes in the TDP (thermal design
> > > > power) level.
> > > >
> > > > This notification type will allow user space to avoid running
> > > > workloads
> > > > on certain CPUs or proactively adjust power limits to avoid
> > > > future events.
> > > >
> > >
> > > [ ... ]
> > >
> > > > +       [THERMAL_GENL_ATTR_CPU_CAPABILITY_ID]   = { .type =
> > > > NLA_U32 },
> > > > +       [THERMAL_GENL_ATTR_CPU_CAPABILITY_PERF] = { .type =
> > > > NLA_U32 },
> > > > +       [THERMAL_GENL_ATTR_CPU_CAPABILITY_EFF]  = { .type =
> > > > NLA_U32 },
> > > >  };
> > >
> > > AFAIU, 0 <= perf < 256 and 0 <= eff < 256, right?
> > >
> > > Is the following true?
> > >
> > >         0 <= perf + eff < 256
> >
> > No, they are not. They are set independently.
>
> I understand they can be set independently but is the constraint
> above
> correct? For example, can the system send perf=255 and eff=255 or
> perf=0
> and eff=0 ?
perf = 0 and eff = 0 is already the case on current processors.
Both being 0xFF is not the case, as the current generation uses real
performance values, which can't be 0xFF. Also, it is unlikely that at
max performance you have max efficiency.

Thanks,
Srinivas
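
(For illustration, a policy daemon reacting to a perf = 0 / eff = 0
notification might simply drop that CPU from a managed task's affinity.
This is a hypothetical userspace sketch, not part of the patch set.)

#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

/*
 * Hypothetical reaction to perf == 0 and eff == 0 for @cpu: remove the
 * CPU from the affinity mask of a managed task. Error handling trimmed.
 */
static int avoid_cpu(pid_t pid, int cpu)
{
        cpu_set_t set;

        if (sched_getaffinity(pid, sizeof(set), &set))
                return -1;

        CPU_CLR(cpu, &set);

        return sched_setaffinity(pid, sizeof(set), &set);
}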

>
> May be I misunderstood but I was expecting at least some kind of
> connection between perf and eff (when eff is high, perf is low and
> the
> opposite).
>
>