2020-04-16 07:57:01

by Pratik R. Sampat

Subject: [PATCH v7 0/3] powerpc/powernv: Introduce interface for self-restore support

v6: https://lkml.org/lkml/2020/3/26/99
Changelog
v6-->v7
Based on comments from Gautham Shenoy
1. Using static keys instead of booleans to cache support
2. extract_save_restore_state_dt device tree parser function documented

Background
==========

The power management framework on POWER systems includes core idle
states that lose context. Deep idle states, namely "winkle" on POWER8
and "stop4" and "stop5" on POWER9, can be entered by a CPU to save
different levels of power, as a consequence of which hypervisor
resources such as SPRs and SCOMs are lost.

For most SPRs and SCOMs, saving and restoration of their content is
handled by the hypervisor kernel prior to entering and after exiting an
idle state, respectively. However, there is a small set of critical
SPRs and XSCOMs that are expected to contain sane values even before
control is transferred to the hypervisor kernel at the system reset
vector.

For this purpose, the microcode firmware provides a mechanism to
restore values in certain SPRs. The communication mechanism between the
hypervisor kernel and the microcode is a standard interface, called the
sleep-winkle-engine (SLW) on POWER8 and the Stop-API on POWER9, which
is abstracted by OPAL calls from the hypervisor kernel. The Stop-API
provides an interface known as the self-restore API, to which the SPR
number and a predefined value to be restored on wake-up from a deep
stop state are supplied.
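
As a rough illustration (not part of this series), this is how the
kernel programs a self-restored SPR today through the existing
opal_slw_set_reg() wrapper; the helper name below is made up and the
SPR/value are just examples:

  #include <asm/opal.h>   /* opal_slw_set_reg() */
  #include <asm/smp.h>    /* get_hard_smp_processor_id() */
  #include <asm/paca.h>   /* paca_ptrs */
  #include <asm/reg.h>    /* SPRN_HSPRG0 */

  /* Sketch: register a fixed restore value with the SLW/Stop-API */
  static int64_t register_hsprg0_restore(int cpu)
  {
          u64 pir = get_hard_smp_processor_id(cpu);
          u64 val = (u64)paca_ptrs[cpu];  /* value to restore on wakeup */

          /* firmware records <SPR, value>, restores it on exit from deep stop */
          return opal_slw_set_reg(pir, SPRN_HSPRG0, val);
  }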


Motivation to introduce a new Stop-API
======================================

The self-restore API expects not just the SPR number but also the
value with which the SPR is restored. This is good for those SPRs such
as HSPRG0 whose values do not change at runtime, since for them, the
kernel can invoke the self-restore API at boot time once the values of
these SPRs are determined.

However, there are use-cases wherein the value to be saved cannot be
known, or cannot be updated at the layer where the call is currently
made.
These shortcomings, and the new use-cases which cannot be served by the
existing self-restore API, serve as motivation for a new API:

Shortcoming 1:
--------------
In a special wakeup scenario, SPRs such as PSSCR, whose values can
change at runtime, force the self-restore API call to be made every
time before entering a deep-idle state, rendering it prohibitively
expensive.

Shortcoming 2:
--------------
The value of LPCR is dynamic, depending on whether the CPU enters a
stop state during CPU idle versus CPU hotplug.
Today, an additional self-restore call is made before entering CPU
hotplug to clear the PECE1 bit via the stop-API, so that if we are
woken up by a special wakeup on an offlined CPU, we go back to stop
with the bit cleared.
This incurs the overhead of an extra call.

New Use-case:
-------------
When the hypervisor is running in an ultravisor environment, boot time
is too late in the cycle to make the self-restore API calls, as these
can no longer be invoked from a non-secure context.

To address these shortcomings, the firmware provides another API known
as the self-save API. The self-save API only takes the SPR number as a
parameter and will ensure that on wakeup from a deep-stop state the
SPR is restored with the value that it contained prior to entering the
deep-stop.
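
To make the contrast concrete, here is a minimal sketch of the two
calls as seen from the kernel (opal_slw_self_save_reg() is the OPAL
wrapper added by this series):

  /* self-restore: the value must be known up front */
  rc = opal_slw_set_reg(pir, SPRN_LPCR, mfspr(SPRN_LPCR));

  /* self-save: only the SPR number is handed to firmware */
  rc = opal_slw_self_save_reg(pir, SPRN_LPCR);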

Contrast between self-save and self-restore APIs
================================================

Before entering
deep idle                    |---------------|
               ------------> |    HCODE A    |
              |              |---------------|
 |---------|  |
 |   CPU   |--|
 |---------|  |
              |              |---------------|
               ------------> |    HCODE B    |
On waking up                 |---------------|
from deep idle

When the self-restore API is invoked, the HCODE inserts instructions
into the "HCODE B" region of the above figure to restore the content of
the SPR to the said value. The "HCODE B" region gets executed soon
after the CPU wakes up from a deep idle state, thus executing the
inserted instructions and thereby restoring the contents of the SPRs to
the required values.

When a self-save API is invoked, the HCODE inserts instructions into
the "HCODE A" region of the above figure to save the content of the
SPR into some location in memory. It also inserts instructions into
the "HCODE B" region to restore the content of the SPR to the
corresponding value saved in the memory by the instructions in "HCODE
A" region.

Thus, in contrast with self-restore, the self-save API *does not* need
a value to be passed to it, since it ensures that the value of SPR
before entering deep stop is saved, and subsequently the same value is
restored.

Self-save and self-restore are complementary features, since
self-restore can restore a different value in the SPR on wakeup from a
deep-idle state than what it held before entering the deep idle state.
This was used on POWER8 for HSPRG0 to distinguish a wakeup from winkle
vs fastsleep.

Limitations of self-save
========================
Ideally all SPRs should be available for self-save, but HID0 is very
tricky to implement in microcode due to various endianness quirks. A
couple of implementation schemes were buggy and hence HID0 was left to
be self-restore only.

The fallout of this limitation is as follows:

* In a non-PEF environment, there is no issue. Linux will use
self-restore for HID0 as it does today, with no functional impact.

* In a PEF environment, the HID0 restore value is decided by OPAL
during boot and is set up for an LE hypervisor with the radix MMU. This
is the default and currently working configuration of a PEF
environment. However, if there is a change, then HV Linux will try to
change the HID0 value to something different from what OPAL decided, at
which point deep-stop states will be disabled under this new PEF
environment.

A simple and workable design is achieved by scoping the power
management deep-stop state support only to a known default PEF
environment. Any deviation will affect *only* deep stop-state support
(stop4,5) in that environment and not have any functional impediment
to the environment itself.

In the future, if there is a need to support changing HID0 to various
values under a PEF environment while keeping deep-stop states, it can
be worked out via an ultravisor call or by improving the microcode
design to include HID0 in self-save. Such a future scheme would be an
extension and would not break, or make redundant, the current
implementation scheme.

Design Choices
==============

Presenting the design choices in front of us:

Design-Choice 1:
----------------
A simple implementation is to just replace self-restore calls with
self-save, as the latter is a direct superset.

Pros:
A simple design, quick to implement


Cons:
* Breaks backward compatibility. Self-restore has historically been
supported in the firmware, and an old firmware running with a new
kernel would be incompatible, so deep stop states would be cut.
* Furthermore, critical SPRs which need to be restored before the 0x100
vector, like HID0, are not supported by self-save.

Design-Choice 2:
----------------
Advertise both self-restore and self-save from OPAL, including the set
of registers that each supports. The kernel can then choose which API
to go with.
For the sake of simplicity, if both modes are supported for an SPR,
self-save is called for it by default.

Pros:
* Backwards compatible

Cons:
Overhead in parsing device tree with the SPR list

Possible optimization with Design-Choice 2:
-------------------------------------------
There are SPRs whose values don't tend to change over time, and
invoking self-save on them, where the value is fetched each time, may
turn out to be inefficient. In that case, calling self-restore and
passing the value makes more sense: if the value is the same, the
memory location is not updated.
SPRs that don't change are as follows:
SPRN_HSPRG0,
SPRN_LPCR,
SPRN_PTCR,
SPRN_HMEER,
SPRN_HID0,

The values of PSSCR and MSR change at runtime and hence, the kernel
cannot determine during boot time what their values will be before
entering a particular deep-stop state.

Therefore, a preference-based interface is introduced for choosing
between self-save and self-restore for each SPR.
The per-SPR preference is only a refinement of Design-Choice 2, purely
for performance reasons. It can be dropped if the complexity is not
deemed worth the returns.
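
For reference, Patch 3 packs the preferences four bits apart, with the
first preference in the least significant nibble:

  #define FIRMWARE_RESTORE    0x1
  #define FIRMWARE_SELF_SAVE  0x2
  #define PREFERENCE_SHIFT    4
  #define PREFERENCE_MASK     0xf

  /* Prefer self-save when available, otherwise fall back to self-restore */
  #define PREFER_SAVE_RESTORE ((FIRMWARE_RESTORE << PREFERENCE_SHIFT) | \
                               FIRMWARE_SELF_SAVE)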

Patches Organization
====================
Design-Choice 2 has been chosen as the implementation demonstrated in
this patch series.

Patch1:
Devises an interface which lists all the SPRs of interest, along with
the mode of support for each.
It is an isomorphic patch that replicates the functionality of the
older self-restore handling through the new interface.

Patch2:
Introduces the self-save API and leverages the struct interface to add
another supported mode into the mix of saving and restoring. It also
enforces that, when both modes are supported, self-save is chosen over
self-restore.

The commit also parses the device-tree and populates support for
self-save and self-restore in the supported mask.

Patch3:
Introduces an optimization that allows a preference to be expressed for
one mode over the other when both modes are supported. This can give
better performance for SPRs whose values don't change, for which
self-restore is the better alternative, while for SPRs whose values are
known to change, self-save is more convenient.

Pratik Rajesh Sampat (3):
powerpc/powernv: Introduce interface for self-restore support
powerpc/powernv: Introduce support and parsing for self-save API
powerpc/powernv: Preference optimization for SPRs with constant values

.../bindings/powerpc/opal/power-mgt.txt | 18 +
arch/powerpc/include/asm/opal-api.h | 3 +-
arch/powerpc/include/asm/opal.h | 1 +
arch/powerpc/platforms/powernv/idle.c | 389 +++++++++++++++---
arch/powerpc/platforms/powernv/opal-call.c | 1 +
5 files changed, 355 insertions(+), 57 deletions(-)

--
2.17.1


2020-04-16 07:57:21

by Pratik R. Sampat

Subject: [PATCH v7 2/3] powerpc/powernv: Introduce support and parsing for self-save API

This commit introduces and leverages the self-save API. The difference
between self-save and self-restore is that the value to be saved for
the SPR does not need to be passed to the call.
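
For reference, the two OPAL wrappers differ only in the value argument
(prototypes as declared in asm/opal.h by this series):

  int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val); /* self-restore */
  int64_t opal_slw_self_save_reg(uint64_t cpu_pir, uint64_t sprn);         /* self-save */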

Add the new self-save OPAL API call to the list of OPAL calls.
Implement self-saving of the SPRs based on the populated support.
This commit prefers self-save over self-restore in case both are
supported for a particular SPR.

Along with support for self-save, kernel-supported save/restore is also
populated in the list. This property is only populated for those SPRs
which are handled by the kernel and may additionally gain support from
a firmware mode.

In addition, the commit also parses the device tree for the self-save
and self-restore nodes and populates support for the preferred SPRs
based on what was advertised by the device tree.
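
Roughly, a bit in the advertised "sprn-bitmask" property maps to an SPR
as sketched below (e.g. PSSCR, SPR 855: word 855 / 64 = 13, bit
855 % 64 = 23); the helper name is only illustrative:

  #include <linux/bits.h> /* BIT_ULL_WORD(), BIT_ULL_MASK() */

  static bool spr_advertised(const u64 *bitmask, u64 sprn)
  {
          return bitmask[BIT_ULL_WORD(sprn)] & BIT_ULL_MASK(sprn);
  }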

In case an SPR is supported by firmware self-save, self-restore and
kernel save/restore, the preference of execution also follows the same
order as above.

Signed-off-by: Pratik Rajesh Sampat <[email protected]>
---
.../bindings/powerpc/opal/power-mgt.txt | 18 +++
arch/powerpc/include/asm/opal-api.h | 3 +-
arch/powerpc/include/asm/opal.h | 1 +
arch/powerpc/platforms/powernv/idle.c | 135 +++++++++++++++++-
arch/powerpc/platforms/powernv/opal-call.c | 1 +
5 files changed, 150 insertions(+), 8 deletions(-)

diff --git a/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt b/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
index 9d619e955576..5fb03c6d7de9 100644
--- a/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
+++ b/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
@@ -116,3 +116,21 @@ otherwise. The length of all the property arrays must be the same.
which of the fields of the PMICR are set in the corresponding
entries in ibm,cpu-idle-state-pmicr. This is an optional
property on POWER8 and is absent on POWER9.
+
+- self-restore:
+ Array of unsigned 64-bit values containing a property for sprn-mask
+ with each bit indicating the index of the supported SPR for the
+ functionality. This is an optional property for both Power8 and Power9
+
+- self-save:
+ Array of unsigned 64-bit values containing a property for sprn-mask
+ with each bit indicating the index of the supported SPR for the
+ functionality. This is an optional property for both Power8 and Power9
+
+Example of arrangement of self-restore and self-save arrays:
+For instance if PSSCR is supported, the value is 0x357 = 855.
+Since the array is of 64 bit values, the index of the array is determined by
+855 / 64 = 13th element. Within that index, the bit number is determined by
+855 % 64 = 23rd bit.
+This means that if the 23rd bit in array[13] is set, then that SPR is supported
+by the corresponding self-save or self-restore API.
diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
index 1dffa3cb16ba..7ba698369083 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -214,7 +214,8 @@
#define OPAL_SECVAR_GET 176
#define OPAL_SECVAR_GET_NEXT 177
#define OPAL_SECVAR_ENQUEUE_UPDATE 178
-#define OPAL_LAST 178
+#define OPAL_SLW_SELF_SAVE_REG 181
+#define OPAL_LAST 181

#define QUIESCE_HOLD 1 /* Spin all calls at entry */
#define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 9986ac34b8e2..a370b0e8d899 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -204,6 +204,7 @@ int64_t opal_handle_hmi2(__be64 *out_flags);
int64_t opal_register_dump_region(uint32_t id, uint64_t start, uint64_t end);
int64_t opal_unregister_dump_region(uint32_t id);
int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val);
+int64_t opal_slw_self_save_reg(uint64_t cpu_pir, uint64_t sprn);
int64_t opal_config_cpu_idle_state(uint64_t state, uint64_t flag);
int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number);
int64_t opal_pci_get_pbcq_tunnel_bar(uint64_t phb_id, uint64_t *addr);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 858ceb86394d..fdcb18a8a05b 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -35,13 +35,20 @@
/*
* Type of support for each SPR
* FIRMWARE_RESTORE: firmware restoration supported: calls self-restore OPAL API
+ * FIRMWARE_SELF_SAVE: firmware save and restore: calls self-save OPAL API
+ * KERNEL_SAVE_RESTORE: kernel handles the saving and restoring of SPR
*/
#define UNSUPPORTED 0x0
#define FIRMWARE_RESTORE 0x1
+#define FIRMWARE_SELF_SAVE 0x2
+#define KERNEL_SAVE_RESTORE 0x4

static u32 supported_cpuidle_states;
struct pnv_idle_states_t *pnv_idle_states;
int nr_pnv_idle_states;
+/* Caching the lpcr & ptcr support to use later */
+DEFINE_STATIC_KEY_FALSE(is_lpcr_self_save);
+DEFINE_STATIC_KEY_FALSE(is_ptcr_self_save);

struct preferred_sprs {
u64 spr;
@@ -51,6 +58,10 @@ struct preferred_sprs {
/*
* Supported mode: Default support. Can be overwritten during system
* initialization
+ * Note: SPRs with support for KERNEL_SAVE_RESTORE in this list are only those
+ * which have a possibility of support from another firmware mode (i.e self-save
+ * or self-restore)
+ * SPRs with exclusive kernel save support are implicit.
*/
struct preferred_sprs preferred_sprs[] = {
{
@@ -61,6 +72,10 @@ struct preferred_sprs preferred_sprs[] = {
.spr = SPRN_LPCR,
.supported_mode = FIRMWARE_RESTORE,
},
+ {
+ .spr = SPRN_PTCR,
+ .supported_mode = KERNEL_SAVE_RESTORE,
+ },
{
.spr = SPRN_HMEER,
.supported_mode = FIRMWARE_RESTORE,
@@ -219,11 +234,33 @@ static int pnv_self_save_restore_sprs(void)
curr_spr.spr == SPRN_HID4 ||
curr_spr.spr == SPRN_HID5))
continue;
- if (curr_spr.supported_mode & FIRMWARE_RESTORE) {
+
+ if (curr_spr.supported_mode & FIRMWARE_SELF_SAVE) {
+ rc = opal_slw_self_save_reg(pir,
+ curr_spr.spr);
+ if (rc != 0)
+ return rc;
+ switch (curr_spr.spr) {
+ case SPRN_LPCR:
+ static_branch_enable(&is_lpcr_self_save);
+ break;
+ case SPRN_PTCR:
+ static_branch_enable(&is_ptcr_self_save);
+ break;
+ }
+ } else if (curr_spr.supported_mode & FIRMWARE_RESTORE) {
rc = pnv_self_restore_sprs(pir, cpu,
curr_spr.spr);
if (rc != 0)
return rc;
+ } else {
+ if (curr_spr.supported_mode & KERNEL_SAVE_RESTORE ||
+ (cpu_has_feature(CPU_FTR_ARCH_300) &&
+ (curr_spr.spr == SPRN_HID1 ||
+ curr_spr.spr == SPRN_HID4 ||
+ curr_spr.spr == SPRN_HID5)))
+ continue;
+ return OPAL_UNSUPPORTED;
}
}
}
@@ -762,7 +799,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
mmcr0 = mfspr(SPRN_MMCR0);
}
if ((psscr & PSSCR_RL_MASK) >= pnv_first_spr_loss_level) {
- sprs.lpcr = mfspr(SPRN_LPCR);
+ if (!static_branch_unlikely(&is_lpcr_self_save))
+ sprs.lpcr = mfspr(SPRN_LPCR);
sprs.hfscr = mfspr(SPRN_HFSCR);
sprs.fscr = mfspr(SPRN_FSCR);
sprs.pid = mfspr(SPRN_PID);
@@ -776,7 +814,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
sprs.mmcr1 = mfspr(SPRN_MMCR1);
sprs.mmcr2 = mfspr(SPRN_MMCR2);

- sprs.ptcr = mfspr(SPRN_PTCR);
+ if (!static_branch_unlikely(&is_ptcr_self_save))
+ sprs.ptcr = mfspr(SPRN_PTCR);
sprs.rpr = mfspr(SPRN_RPR);
sprs.tscr = mfspr(SPRN_TSCR);
if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
@@ -860,7 +899,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
goto core_woken;

/* Per-core SPRs */
- mtspr(SPRN_PTCR, sprs.ptcr);
+ if (!static_branch_unlikely(&is_ptcr_self_save))
+ mtspr(SPRN_PTCR, sprs.ptcr);
mtspr(SPRN_RPR, sprs.rpr);
mtspr(SPRN_TSCR, sprs.tscr);

@@ -881,7 +921,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
atomic_unlock_and_stop_thread_idle();

/* Per-thread SPRs */
- mtspr(SPRN_LPCR, sprs.lpcr);
+ if (!static_branch_unlikely(&is_lpcr_self_save))
+ mtspr(SPRN_LPCR, sprs.lpcr);
mtspr(SPRN_HFSCR, sprs.hfscr);
mtspr(SPRN_FSCR, sprs.fscr);
mtspr(SPRN_PID, sprs.pid);
@@ -1060,8 +1101,10 @@ void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
* Program the LPCR via stop-api only if the deepest stop state
* can lose hypervisor context.
*/
- if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT)
- opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+ if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT) {
+ if (!static_branch_unlikely(&is_lpcr_self_save))
+ opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+ }
}

/*
@@ -1316,6 +1359,81 @@ static void __init pnv_probe_idle_states(void)
supported_cpuidle_states |= pnv_idle_states[i].flags;
}

+/*
+ * Extracts and populates the self save or restore capabilities
+ * passed from the device tree node
+ * @np: /ibm,opal/power-mgt/self-save or
+ * /ibm,opal/power-mgt/self-restore device node
+ * @support: Activation bit for each SPR to define support for the save-restore
+ * mode
+ */
+static int extract_save_restore_state_dt(struct device_node *np, u32 support)
+{
+ int nr_sprns = 0, i, bitmask_index;
+ u64 *temp_u64;
+ u64 bit_pos;
+
+ nr_sprns = of_property_count_u64_elems(np, "sprn-bitmask");
+ if (nr_sprns <= 0)
+ return -EINVAL;
+ temp_u64 = kcalloc(nr_sprns, sizeof(u64), GFP_KERNEL);
+ if (of_property_read_u64_array(np, "sprn-bitmask",
+ temp_u64, nr_sprns)) {
+ pr_warn("cpuidle-powernv: failed to find registers in DT\n");
+ kfree(temp_u64);
+ return -EINVAL;
+ }
+ /*
+ * Populate acknowledgment of support for the sprs in the global vector
+ * gotten by the registers supplied by the firmware.
+ * The registers are in a bitmask, bit index within
+ * that specifies the SPR
+ */
+ for (i = 0; i < nr_preferred_sprs; i++) {
+ bitmask_index = BIT_ULL_WORD(preferred_sprs[i].spr);
+ bit_pos = BIT_ULL_MASK(preferred_sprs[i].spr);
+ if ((temp_u64[bitmask_index] & bit_pos) == 0) {
+ preferred_sprs[i].supported_mode &= ~support;
+ continue;
+ }
+ preferred_sprs[i].supported_mode |= support;
+ }
+
+ kfree(temp_u64);
+ return 0;
+}
+
+static int pnv_parse_deepstate_dt(void)
+{
+ struct device_node *np;
+ int rc = 0, i;
+
+ /*
+ * Self restore register population
+ * In the case the node is not found, the support for self-restore for
+ * already populated SPRs is *not* cut. This is because self-restore
+ * assumes legacy support. In an event, self-restore is actually not
+ * supported then the call to the firmware fails and deep stop states
+ * will be cut.
+ */
+ np = of_find_compatible_node(NULL, NULL, "ibm,opal-self-restore");
+ if (np) {
+ rc = extract_save_restore_state_dt(np, FIRMWARE_RESTORE);
+ if (rc != 0)
+ return rc;
+ }
+ /* Self save register population */
+ np = of_find_compatible_node(NULL, NULL, "ibm,opal-self-save");
+ if (!np) {
+ for (i = 0; i < nr_preferred_sprs; i++)
+ preferred_sprs[i].supported_mode &= ~FIRMWARE_SELF_SAVE;
+ } else {
+ rc = extract_save_restore_state_dt(np, FIRMWARE_SELF_SAVE);
+ }
+ of_node_put(np);
+ return rc;
+}
+
/*
* This function parses device-tree and populates all the information
* into pnv_idle_states structure. It also sets up nr_pnv_idle_states
@@ -1464,6 +1582,9 @@ static int __init pnv_init_idle_states(void)
return rc;
pnv_probe_idle_states();

+ rc = pnv_parse_deepstate_dt();
+ if (rc)
+ return rc;
if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) {
power7_fastsleep_workaround_entry = false;
diff --git a/arch/powerpc/platforms/powernv/opal-call.c b/arch/powerpc/platforms/powernv/opal-call.c
index 5cd0f52d258f..11e0ceb90de0 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -223,6 +223,7 @@ OPAL_CALL(opal_handle_hmi, OPAL_HANDLE_HMI);
OPAL_CALL(opal_handle_hmi2, OPAL_HANDLE_HMI2);
OPAL_CALL(opal_config_cpu_idle_state, OPAL_CONFIG_CPU_IDLE_STATE);
OPAL_CALL(opal_slw_set_reg, OPAL_SLW_SET_REG);
+OPAL_CALL(opal_slw_self_save_reg, OPAL_SLW_SELF_SAVE_REG);
OPAL_CALL(opal_register_dump_region, OPAL_REGISTER_DUMP_REGION);
OPAL_CALL(opal_unregister_dump_region, OPAL_UNREGISTER_DUMP_REGION);
OPAL_CALL(opal_pci_set_phb_cxl_mode, OPAL_PCI_SET_PHB_CAPI_MODE);
--
2.17.1

2020-04-16 07:59:07

by Pratik R. Sampat

Subject: [PATCH v7 3/3] powerpc/powernv: Preference optimization for SPRs with constant values

There are SPRs whose values don't tend to change over time, and
invoking self-save on them, where the value is fetched each time, may
turn out to be inefficient. In that case, calling self-restore and
passing the value makes more sense: if the value is the same, the
memory location is not updated.
SPRs that don't change are as follows:
SPRN_HSPRG0,
SPRN_LPCR,
SPRN_PTCR,
SPRN_HMEER,
SPRN_HID0,

There are also SPRs whose values change and/or whose value may not be
correctly determinable in the kernel, e.g. MSR and PSSCR.

The value of LPCR is dynamic, depending on whether the CPU enters a
stop state during CPU idle versus CPU hotplug.

Therefore, this optimization patch introduces the concept of a per-SPR
preference to choose from when both self-save and self-restore are
supported.

The preference bitmask is shown as below:
----------------------------
|... | 2nd pref | 1st pref |
----------------------------
MSB                      LSB

The preference from higher to lower is from LSB to MSB, with a shift of
4 bits.
Example:
Prefer self-save first; if not available, then prefer self-restore.
The preference mask for this scenario will be seen as below:
((FIRMWARE_RESTORE << PREFERENCE_SHIFT) | FIRMWARE_SELF_SAVE)
---------------------------------
|... | Self restore | Self save |
---------------------------------
MSB                           LSB
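
For illustration only, this is the way the mask is meant to be consumed
(names as defined by this patch; the loop body is simplified):

  u32 mask = curr_spr.preferred_mode;
  int k;

  for (k = 0; k < NR_PREFERENCES; k++) {
          u32 preferred = mask & PREFERENCE_MASK; /* current candidate */

          if (preferred & curr_spr.supported_mode)
                  break;                     /* use this mode */
          mask >>= PREFERENCE_SHIFT;         /* else try the next preference */
  }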

Signed-off-by: Pratik Rajesh Sampat <[email protected]>
---
arch/powerpc/platforms/powernv/idle.c | 88 +++++++++++++++++++++------
1 file changed, 70 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index fdcb18a8a05b..daa2f920bd05 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -43,6 +43,31 @@
#define FIRMWARE_SELF_SAVE 0x2
#define KERNEL_SAVE_RESTORE 0x4

+#define NR_PREFERENCES 2
+#define PREFERENCE_SHIFT 4
+#define PREFERENCE_MASK 0xf
+/*
+ * Bitmask defining the kind of preferences available.
+ * Note : The higher to lower preference is from LSB to MSB, with a shift of
+ * 4 bits.
+ * ----------------------------
+ * | | 2nd pref | 1st pref |
+ * ----------------------------
+ * MSB LSB
+ */
+/* Prefer Restore if available, otherwise unsupported */
+#define PREFER_SELF_RESTORE_ONLY FIRMWARE_RESTORE
+/* Prefer Save if available, otherwise unsupported */
+#define PREFER_SELF_SAVE_ONLY FIRMWARE_SELF_SAVE
+/* Prefer Restore when available, otherwise prefer Save */
+#define PREFER_RESTORE_SAVE ((FIRMWARE_SELF_SAVE << \
+ PREFERENCE_SHIFT)\
+ | FIRMWARE_RESTORE)
+/* Prefer Save when available, otherwise prefer Restore*/
+#define PREFER_SAVE_RESTORE ((FIRMWARE_RESTORE <<\
+ PREFERENCE_SHIFT)\
+ | FIRMWARE_SELF_SAVE)
+
static u32 supported_cpuidle_states;
struct pnv_idle_states_t *pnv_idle_states;
int nr_pnv_idle_states;
@@ -52,6 +77,7 @@ DEFINE_STATIC_KEY_FALSE(is_ptcr_self_save);

struct preferred_sprs {
u64 spr;
+ u32 preferred_mode;
u32 supported_mode;
};

@@ -66,42 +92,52 @@ struct preferred_sprs {
struct preferred_sprs preferred_sprs[] = {
{
.spr = SPRN_HSPRG0,
+ .preferred_mode = PREFER_RESTORE_SAVE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = SPRN_LPCR,
+ .preferred_mode = PREFER_SAVE_RESTORE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = SPRN_PTCR,
+ .preferred_mode = PREFER_RESTORE_SAVE,
.supported_mode = KERNEL_SAVE_RESTORE,
},
{
.spr = SPRN_HMEER,
+ .preferred_mode = PREFER_RESTORE_SAVE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = SPRN_HID0,
+ .preferred_mode = PREFER_RESTORE_SAVE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = P9_STOP_SPR_MSR,
+ .preferred_mode = PREFER_SAVE_RESTORE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = P9_STOP_SPR_PSSCR,
+ .preferred_mode = PREFER_SAVE_RESTORE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = SPRN_HID1,
+ .preferred_mode = PREFER_RESTORE_SAVE,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = SPRN_HID4,
+ .preferred_mode = PREFER_SELF_RESTORE_ONLY,
.supported_mode = FIRMWARE_RESTORE,
},
{
.spr = SPRN_HID5,
+ .preferred_mode = PREFER_SELF_RESTORE_ONLY,
.supported_mode = FIRMWARE_RESTORE,
}
};
@@ -218,7 +254,9 @@ static int pnv_self_restore_sprs(u64 pir, int cpu, u64 spr)

static int pnv_self_save_restore_sprs(void)
{
- int rc, index, cpu;
+ int rc, index, cpu, k;
+ bool is_initialized;
+ u32 preferred;
u64 pir;
struct preferred_sprs curr_spr;

@@ -234,26 +272,40 @@ static int pnv_self_save_restore_sprs(void)
curr_spr.spr == SPRN_HID4 ||
curr_spr.spr == SPRN_HID5))
continue;
-
- if (curr_spr.supported_mode & FIRMWARE_SELF_SAVE) {
- rc = opal_slw_self_save_reg(pir,
- curr_spr.spr);
- if (rc != 0)
- return rc;
- switch (curr_spr.spr) {
- case SPRN_LPCR:
- static_branch_enable(&is_lpcr_self_save);
+ for (k = 0; k < NR_PREFERENCES; k++) {
+ preferred = curr_spr.preferred_mode
+ & PREFERENCE_MASK;
+ if (preferred & curr_spr.supported_mode &
+ FIRMWARE_SELF_SAVE) {
+ is_initialized = true;
+ rc = opal_slw_self_save_reg(pir,
+ curr_spr.spr);
+ if (rc != 0)
+ return rc;
+ switch (curr_spr.spr) {
+ case SPRN_LPCR:
+ static_branch_enable(&is_lpcr_self_save);
+ break;
+ case SPRN_PTCR:
+ static_branch_enable(&is_ptcr_self_save);
+ break;
+ }
break;
- case SPRN_PTCR:
- static_branch_enable(&is_ptcr_self_save);
+ } else if (preferred & curr_spr.supported_mode &
+ FIRMWARE_RESTORE) {
+ is_initialized = true;
+ rc = pnv_self_restore_sprs(pir, cpu,
+ curr_spr.spr);
+ if (rc != 0)
+ return rc;
break;
}
- } else if (curr_spr.supported_mode & FIRMWARE_RESTORE) {
- rc = pnv_self_restore_sprs(pir, cpu,
- curr_spr.spr);
- if (rc != 0)
- return rc;
- } else {
+ preferred_sprs[index].preferred_mode =
+ preferred_sprs[index].preferred_mode >>
+ PREFERENCE_SHIFT;
+ curr_spr = preferred_sprs[index];
+ }
+ if (!is_initialized) {
if (curr_spr.supported_mode & KERNEL_SAVE_RESTORE ||
(cpu_has_feature(CPU_FTR_ARCH_300) &&
(curr_spr.spr == SPRN_HID1 ||
--
2.17.1

2020-04-16 07:59:38

by Pratik R. Sampat

Subject: [PATCH v7 1/3] powerpc/powernv: Introduce interface for self-restore support

Introduces an interface that helps determine support for the
self-restore API. The commit is isomorphic to the original code for
declaring SPRs to self-restore.
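
At its core, the interface is a static table of per-SPR descriptors (as
introduced by this patch):

  struct preferred_sprs {
          u64 spr;            /* SPR (or pseudo-SPR) number */
          u32 supported_mode; /* UNSUPPORTED or FIRMWARE_RESTORE for now */
  };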

Signed-off-by: Pratik Rajesh Sampat <[email protected]>
Reviewed-by: Gautham R. Shenoy <[email protected]>
---
arch/powerpc/platforms/powernv/idle.c | 200 +++++++++++++++++++-------
1 file changed, 152 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 78599bca66c2..858ceb86394d 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -32,10 +32,67 @@
#define P9_STOP_SPR_MSR 2000
#define P9_STOP_SPR_PSSCR 855

+/*
+ * Type of support for each SPR
+ * FIRMWARE_RESTORE: firmware restoration supported: calls self-restore OPAL API
+ */
+#define UNSUPPORTED 0x0
+#define FIRMWARE_RESTORE 0x1
+
static u32 supported_cpuidle_states;
struct pnv_idle_states_t *pnv_idle_states;
int nr_pnv_idle_states;

+struct preferred_sprs {
+ u64 spr;
+ u32 supported_mode;
+};
+
+/*
+ * Supported mode: Default support. Can be overwritten during system
+ * initialization
+ */
+struct preferred_sprs preferred_sprs[] = {
+ {
+ .spr = SPRN_HSPRG0,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = SPRN_LPCR,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = SPRN_HMEER,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = SPRN_HID0,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = P9_STOP_SPR_MSR,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = P9_STOP_SPR_PSSCR,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = SPRN_HID1,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = SPRN_HID4,
+ .supported_mode = FIRMWARE_RESTORE,
+ },
+ {
+ .spr = SPRN_HID5,
+ .supported_mode = FIRMWARE_RESTORE,
+ }
+};
+
+const int nr_preferred_sprs = ARRAY_SIZE(preferred_sprs);
+
/*
* The default stop state that will be used by ppc_md.power_save
* function on platforms that support stop instruction.
@@ -61,78 +118,125 @@ static bool deepest_stop_found;

static unsigned long power7_offline_type;

-static int pnv_save_sprs_for_deep_states(void)
+static int pnv_self_restore_sprs(u64 pir, int cpu, u64 spr)
{
- int cpu;
+ u64 reg_val;
int rc;

- /*
- * hid0, hid1, hid4, hid5, hmeer and lpcr values are symmetric across
- * all cpus at boot. Get these reg values of current cpu and use the
- * same across all cpus.
- */
- uint64_t lpcr_val = mfspr(SPRN_LPCR);
- uint64_t hid0_val = mfspr(SPRN_HID0);
- uint64_t hid1_val = mfspr(SPRN_HID1);
- uint64_t hid4_val = mfspr(SPRN_HID4);
- uint64_t hid5_val = mfspr(SPRN_HID5);
- uint64_t hmeer_val = mfspr(SPRN_HMEER);
- uint64_t msr_val = MSR_IDLE;
- uint64_t psscr_val = pnv_deepest_stop_psscr_val;
-
- for_each_present_cpu(cpu) {
- uint64_t pir = get_hard_smp_processor_id(cpu);
- uint64_t hsprg0_val = (uint64_t)paca_ptrs[cpu];
-
- rc = opal_slw_set_reg(pir, SPRN_HSPRG0, hsprg0_val);
+ switch (spr) {
+ case SPRN_HSPRG0:
+ reg_val = (uint64_t)paca_ptrs[cpu];
+ rc = opal_slw_set_reg(pir, SPRN_HSPRG0, reg_val);
if (rc != 0)
return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+ break;
+ case SPRN_LPCR:
+ reg_val = mfspr(SPRN_LPCR);
+ rc = opal_slw_set_reg(pir, SPRN_LPCR, reg_val);
if (rc != 0)
return rc;
-
+ break;
+ case P9_STOP_SPR_MSR:
+ reg_val = MSR_IDLE;
if (cpu_has_feature(CPU_FTR_ARCH_300)) {
- rc = opal_slw_set_reg(pir, P9_STOP_SPR_MSR, msr_val);
+ rc = opal_slw_set_reg(pir, P9_STOP_SPR_MSR, reg_val);
if (rc)
return rc;
-
- rc = opal_slw_set_reg(pir,
- P9_STOP_SPR_PSSCR, psscr_val);
-
+ }
+ break;
+ case P9_STOP_SPR_PSSCR:
+ reg_val = pnv_deepest_stop_psscr_val;
+ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ rc = opal_slw_set_reg(pir, P9_STOP_SPR_PSSCR, reg_val);
if (rc)
return rc;
}
-
- /* HIDs are per core registers */
+ break;
+ case SPRN_HMEER:
+ reg_val = mfspr(SPRN_HMEER);
if (cpu_thread_in_core(cpu) == 0) {
-
- rc = opal_slw_set_reg(pir, SPRN_HMEER, hmeer_val);
- if (rc != 0)
+ rc = opal_slw_set_reg(pir, SPRN_HMEER, reg_val);
+ if (rc)
return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_HID0, hid0_val);
- if (rc != 0)
+ }
+ break;
+ case SPRN_HID0:
+ reg_val = mfspr(SPRN_HID0);
+ if (cpu_thread_in_core(cpu) == 0) {
+ rc = opal_slw_set_reg(pir, SPRN_HID0, reg_val);
+ if (rc)
return rc;
+ }
+ break;
+ case SPRN_HID1:
+ reg_val = mfspr(SPRN_HID1);
+ if (!cpu_has_feature(CPU_FTR_ARCH_300) &&
+ cpu_thread_in_core(cpu) == 0) {
+ rc = opal_slw_set_reg(pir, SPRN_HID1, reg_val);
+ if (rc)
+ return rc;
+ }
+ break;
+ case SPRN_HID4:
+ reg_val = mfspr(SPRN_HID4);
+ if (!cpu_has_feature(CPU_FTR_ARCH_300) &&
+ cpu_thread_in_core(cpu) == 0) {
+ rc = opal_slw_set_reg(pir, SPRN_HID4, reg_val);
+ if (rc)
+ return rc;
+ }
+ break;
+ case SPRN_HID5:
+ reg_val = mfspr(SPRN_HID5);
+ if (!cpu_has_feature(CPU_FTR_ARCH_300) &&
+ cpu_thread_in_core(cpu) == 0) {
+ rc = opal_slw_set_reg(pir, SPRN_HID5, reg_val);
+ if (rc)
+ return rc;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}

- /* Only p8 needs to set extra HID regiters */
- if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
-
- rc = opal_slw_set_reg(pir, SPRN_HID1, hid1_val);
- if (rc != 0)
- return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_HID4, hid4_val);
- if (rc != 0)
- return rc;
+static int pnv_self_save_restore_sprs(void)
+{
+ int rc, index, cpu;
+ u64 pir;
+ struct preferred_sprs curr_spr;

- rc = opal_slw_set_reg(pir, SPRN_HID5, hid5_val);
+ for_each_present_cpu(cpu) {
+ pir = get_hard_smp_processor_id(cpu);
+ for (index = 0; index < nr_preferred_sprs; index++) {
+ curr_spr = preferred_sprs[index];
+ /* HIDs are per core register */
+ if (cpu_thread_in_core(cpu) != 0 &&
+ (curr_spr.spr == SPRN_HMEER ||
+ curr_spr.spr == SPRN_HID0 ||
+ curr_spr.spr == SPRN_HID1 ||
+ curr_spr.spr == SPRN_HID4 ||
+ curr_spr.spr == SPRN_HID5))
+ continue;
+ if (curr_spr.supported_mode & FIRMWARE_RESTORE) {
+ rc = pnv_self_restore_sprs(pir, cpu,
+ curr_spr.spr);
if (rc != 0)
return rc;
}
}
}
+ return 0;
+}

+static int pnv_save_sprs_for_deep_states(void)
+{
+ int rc;
+
+ rc = pnv_self_save_restore_sprs();
+ if (rc != 0)
+ return rc;
return 0;
}

--
2.17.1