Skiboot patches v4: https://lists.ozlabs.org/pipermail/skiboot/2020-February/016404.html
Linux patches v4: https://lkml.org/lkml/2020/2/12/446
Changelog
v4 --> v5
Based on Michael Ellerman's comments on patch 3:
1. Added documentation in power-mgt.txt for self-save and self-restore
2. Used bitmap abstractions while parsing device tree
3. Fixed scenario where zero or less SPRs were treated as a success
4. Removed redundant device_node pointer
5. Removed unnecessary pr_warn messages
6. Changed the old of_find_node_by_path calls to of_find_compatible_node
7. Fixed leaking reference of the parsed device_node pointer
Currently, the stop-API supports a mechanism called self-restore
which allows us to restore the values of certain SPRs on wakeup from a
deep-stop state to a desired value. To use this, the kernel makes an
OPAL call passing the PIR of the CPU, the SPR number and the value to
which the SPR should be restored when that CPU wakes up from a deep
stop state.
Recently, a new feature named self-save has been enabled in the
stop-api. It is an alternative mechanism that achieves the same
effect, except that self-save saves the current content of the SPR
before entering a deep stop state and restores it on waking up from
the deep stop state.
This patch series aims at introducing and leveraging the self-save feature in
the kernel.
Now, as the kernel has a choice to prefer one mode over the other, and
there can be registers in both the save and restore SPR lists sent
from the device tree, a new interface has been defined for the seamless
handling of the modes for each SPR.
A list of preferred SPRs are maintained in the kernel which contains two
properties:
1. supported_mode: Identifies whether the SPR strictly supports
                   self save, self restore, or both.
                   Initialized using the information from the device tree.
2. preferred_mode: States which mode is preferred for each SPR. It
                   can be strictly self save or self restore, or, when
                   both are present, it encodes the order of preference
                   as a bitmask from LSB to MSB.
                   Initialized statically.
Below is a table showing Scenario::Consequence when the self save and
self restore modes are available or disabled in different combinations
as perceived from the device tree, thus giving complete backwards
compatibility regardless of whether older firmware runs a newer kernel
or vice-versa.
Support for self save or self restore is advertised in the device tree,
along with the set of registers each supports.
SR = Self restore; SS = Self save
.-----------------------------------.----------------------------------------.
| Scenario | Consequence |
:-----------------------------------+----------------------------------------:
| Legacy Firmware. No SS or SR node | Self restore is called for all |
| | supported SPRs |
:-----------------------------------+----------------------------------------:
| SR: !active SS: !active | Deep stop states disabled |
:-----------------------------------+----------------------------------------:
| SR: active SS: !active | Self restore is called for all |
| | supported SPRs |
:-----------------------------------+----------------------------------------:
| SR: active SS: active             | Goes through the preferences for each  |
|                                   | SPR and executes one of the modes      |
|                                   | accordingly. Currently, self restore is|
|                                   | called for all the SPRs except PSSCR,  |
|                                   | which is self saved                    |
:-----------------------------------+----------------------------------------:
| SR: active(only HID0) SS: active  | Self save called for all supported     |
|                                   | registers except HID0 (as HID0 cannot  |
|                                   | be self saved currently)               |
:-----------------------------------+----------------------------------------:
| SR: !active SS: active            | Deep stop states are disabled, as HID0 |
|                                   | must be self restored and cannot be    |
|                                   | self saved                             |
'-----------------------------------'----------------------------------------'
Pratik Rajesh Sampat (3):
powerpc/powernv: Interface to define support and preference for a SPR
powerpc/powernv: Introduce Self save support
powerpc/powernv: Parse device tree, population of SPR support
.../bindings/powerpc/opal/power-mgt.txt | 10 +
arch/powerpc/include/asm/opal-api.h | 3 +-
arch/powerpc/include/asm/opal.h | 1 +
arch/powerpc/platforms/powernv/idle.c | 416 +++++++++++++++---
arch/powerpc/platforms/powernv/opal-call.c | 1 +
5 files changed, 373 insertions(+), 58 deletions(-)
--
2.17.1
Parse the device tree for the self-save and self-restore nodes, and
populate support for the preferred SPRs based on what the device tree
advertises.
Signed-off-by: Pratik Rajesh Sampat <[email protected]>
Reviewed-by: Ram Pai <[email protected]>
---
.../bindings/powerpc/opal/power-mgt.txt | 10 +++
arch/powerpc/platforms/powernv/idle.c | 78 +++++++++++++++++++
2 files changed, 88 insertions(+)
diff --git a/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt b/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
index 9d619e955576..093cb5fe3d2d 100644
--- a/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
+++ b/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
@@ -116,3 +116,13 @@ otherwise. The length of all the property arrays must be the same.
which of the fields of the PMICR are set in the corresponding
entries in ibm,cpu-idle-state-pmicr. This is an optional
property on POWER8 and is absent on POWER9.
+
+- self-restore:
+  Array of unsigned 64-bit values comprising the sprn-bitmask property,
+  with each set bit indicating the number of an SPR supported for this
+  functionality. This is an optional property on both POWER8 and POWER9
+
+- self-save:
+  Array of unsigned 64-bit values comprising the sprn-bitmask property,
+  with each set bit indicating the number of an SPR supported for this
+  functionality. This is an optional property on both POWER8 and POWER9
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 97aeb45e897b..c39111b338ff 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -1436,6 +1436,81 @@ static void __init pnv_probe_idle_states(void)
supported_cpuidle_states |= pnv_idle_states[i].flags;
}
+/*
+ * Extracts and populates the self save or restore capabilities
+ * passed from the device tree node
+ */
+static int extract_save_restore_state_dt(struct device_node *np, int type)
+{
+ int nr_sprns = 0, i, bitmask_index;
+ u64 *temp_u64;
+ u64 bit_pos;
+
+ nr_sprns = of_property_count_u64_elems(np, "sprn-bitmask");
+ if (nr_sprns <= 0)
+ return -EINVAL;
+ temp_u64 = kcalloc(nr_sprns, sizeof(u64), GFP_KERNEL);
+ if (of_property_read_u64_array(np, "sprn-bitmask",
+ temp_u64, nr_sprns)) {
+ pr_warn("cpuidle-powernv: failed to find registers in DT\n");
+ kfree(temp_u64);
+ return -EINVAL;
+ }
+	/*
+	 * Record support for each SPR in the global vector based on
+	 * the registers supplied by the firmware.
+	 * The registers are encoded as a bitmask; the bit index
+	 * within the bitmask identifies the SPR.
+	 */
+ for (i = 0; i < nr_preferred_sprs; i++) {
+ bitmask_index = BIT_WORD(preferred_sprs[i].spr);
+ bit_pos = BIT_MASK(preferred_sprs[i].spr);
+ if ((temp_u64[bitmask_index] & bit_pos) == 0) {
+ if (type == SELF_RESTORE_TYPE)
+ preferred_sprs[i].supported_mode &=
+ ~SELF_RESTORE_STRICT;
+ else
+ preferred_sprs[i].supported_mode &=
+ ~SELF_SAVE_STRICT;
+ continue;
+ }
+ if (type == SELF_RESTORE_TYPE) {
+ preferred_sprs[i].supported_mode |=
+ SELF_RESTORE_STRICT;
+ } else {
+ preferred_sprs[i].supported_mode |=
+ SELF_SAVE_STRICT;
+ }
+ }
+
+ kfree(temp_u64);
+ return 0;
+}
+
+static int pnv_parse_deepstate_dt(void)
+{
+ struct device_node *np;
+ int rc = 0, i;
+
+ /* Self restore register population */
+ np = of_find_compatible_node(NULL, NULL, "ibm,opal-self-restore");
+ if (np) {
+ rc = extract_save_restore_state_dt(np, SELF_RESTORE_TYPE);
+ if (rc != 0)
+ return rc;
+ }
+ /* Self save register population */
+ np = of_find_compatible_node(NULL, NULL, "ibm,opal-self-save");
+ if (!np) {
+ for (i = 0; i < nr_preferred_sprs; i++)
+ preferred_sprs[i].supported_mode &= ~SELF_SAVE_STRICT;
+ } else {
+ rc = extract_save_restore_state_dt(np, SELF_SAVE_TYPE);
+ }
+ of_node_put(np);
+ return rc;
+}
+
/*
* This function parses device-tree and populates all the information
* into pnv_idle_states structure. It also sets up nr_pnv_idle_states
@@ -1584,6 +1659,9 @@ static int __init pnv_init_idle_states(void)
return rc;
pnv_probe_idle_states();
+ rc = pnv_parse_deepstate_dt();
+ if (rc)
+ return rc;
if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) {
power7_fastsleep_workaround_entry = false;
--
2.17.1
Define a bitmask interface to determine support for the Self Restore,
Self Save or both.
Also define an interface to determine whether each SPR should be
strictly saved or strictly restored, or encode an order of preference
between the two.
The preference bitmask is shown as below:
----------------------------
|... | 2nd pref | 1st pref |
----------------------------
MSB LSB
The preference from higher to lower runs from LSB to MSB, with a shift
of 4 bits per entry.
Example:
Prefer self save first, if not available then prefer self
restore
The preference mask for this scenario will be seen as below.
((SELF_RESTORE_STRICT << PREFERENCE_SHIFT) | SELF_SAVE_STRICT)
---------------------------------
|... | Self restore | Self save |
---------------------------------
MSB LSB
Finally, declare a list of preferred SPRs which encapsulates the
bitmasks for the preferred and supported modes, with defaults of both
set to support legacy firmware.
This commit also implements using the above interface and retains the
legacy functionality of self restore.
Signed-off-by: Pratik Rajesh Sampat <[email protected]>
Reviewed-by: Ram Pai <[email protected]>
---
arch/powerpc/platforms/powernv/idle.c | 316 +++++++++++++++++++++-----
1 file changed, 259 insertions(+), 57 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 78599bca66c2..03fe835aadd1 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -32,9 +32,112 @@
#define P9_STOP_SPR_MSR 2000
#define P9_STOP_SPR_PSSCR 855
+/* Interface for the stop state supported and preference */
+#define SELF_RESTORE_TYPE 0
+#define SELF_SAVE_TYPE 1
+
+#define NR_PREFERENCES 2
+#define PREFERENCE_SHIFT 4
+#define PREFERENCE_MASK 0xf
+
+#define UNSUPPORTED 0x0
+#define SELF_RESTORE_STRICT 0x1
+#define SELF_SAVE_STRICT 0x2
+
+/*
+ * Bitmask defining the kind of preferences available.
+ * Note : The higher to lower preference is from LSB to MSB, with a shift of
+ * 4 bits.
+ * ----------------------------
+ * | | 2nd pref | 1st pref |
+ * ----------------------------
+ * MSB LSB
+ */
+/* Prefer Restore if available, otherwise unsupported */
+#define PREFER_SELF_RESTORE_ONLY SELF_RESTORE_STRICT
+/* Prefer Save if available, otherwise unsupported */
+#define PREFER_SELF_SAVE_ONLY SELF_SAVE_STRICT
+/* Prefer Restore when available, otherwise prefer Save */
+#define PREFER_RESTORE_SAVE ((SELF_SAVE_STRICT << \
+ PREFERENCE_SHIFT)\
+ | SELF_RESTORE_STRICT)
+/* Prefer Save when available, otherwise prefer Restore*/
+#define PREFER_SAVE_RESTORE ((SELF_RESTORE_STRICT <<\
+ PREFERENCE_SHIFT)\
+ | SELF_SAVE_STRICT)
static u32 supported_cpuidle_states;
struct pnv_idle_states_t *pnv_idle_states;
int nr_pnv_idle_states;
+/* Caching the lpcr & ptcr support to use later */
+static bool is_lpcr_self_save;
+static bool is_ptcr_self_save;
+
+struct preferred_sprs {
+ u64 spr;
+ u32 preferred_mode;
+ u32 supported_mode;
+};
+
+/*
+ * Preferred mode: Order of precedence when both self-save and self-restore
+ * supported
+ * Supported mode: Default support. Can be overwritten during system
+ * initialization
+ */
+struct preferred_sprs preferred_sprs[] = {
+ {
+ .spr = SPRN_HSPRG0,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_LPCR,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_PTCR,
+ .preferred_mode = PREFER_SAVE_RESTORE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_HMEER,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_HID0,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = P9_STOP_SPR_MSR,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = P9_STOP_SPR_PSSCR,
+ .preferred_mode = PREFER_SAVE_RESTORE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_HID1,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_HID4,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ },
+ {
+ .spr = SPRN_HID5,
+ .preferred_mode = PREFER_RESTORE_SAVE,
+ .supported_mode = SELF_RESTORE_STRICT,
+ }
+};
+
+const int nr_preferred_sprs = ARRAY_SIZE(preferred_sprs);
/*
* The default stop state that will be used by ppc_md.power_save
@@ -61,78 +164,170 @@ static bool deepest_stop_found;
static unsigned long power7_offline_type;
-static int pnv_save_sprs_for_deep_states(void)
+static int pnv_self_restore_sprs(u64 pir, int cpu, u64 spr)
{
- int cpu;
+ u64 reg_val;
int rc;
- /*
- * hid0, hid1, hid4, hid5, hmeer and lpcr values are symmetric across
- * all cpus at boot. Get these reg values of current cpu and use the
- * same across all cpus.
- */
- uint64_t lpcr_val = mfspr(SPRN_LPCR);
- uint64_t hid0_val = mfspr(SPRN_HID0);
- uint64_t hid1_val = mfspr(SPRN_HID1);
- uint64_t hid4_val = mfspr(SPRN_HID4);
- uint64_t hid5_val = mfspr(SPRN_HID5);
- uint64_t hmeer_val = mfspr(SPRN_HMEER);
- uint64_t msr_val = MSR_IDLE;
- uint64_t psscr_val = pnv_deepest_stop_psscr_val;
-
- for_each_present_cpu(cpu) {
- uint64_t pir = get_hard_smp_processor_id(cpu);
- uint64_t hsprg0_val = (uint64_t)paca_ptrs[cpu];
-
- rc = opal_slw_set_reg(pir, SPRN_HSPRG0, hsprg0_val);
+ switch (spr) {
+ case SPRN_HSPRG0:
+ reg_val = (uint64_t)paca_ptrs[cpu];
+ rc = opal_slw_set_reg(pir, SPRN_HSPRG0, reg_val);
if (rc != 0)
return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+ break;
+ case SPRN_LPCR:
+ reg_val = mfspr(SPRN_LPCR);
+ rc = opal_slw_set_reg(pir, SPRN_LPCR, reg_val);
if (rc != 0)
return rc;
-
+ break;
+ case P9_STOP_SPR_MSR:
+ reg_val = MSR_IDLE;
if (cpu_has_feature(CPU_FTR_ARCH_300)) {
- rc = opal_slw_set_reg(pir, P9_STOP_SPR_MSR, msr_val);
+ rc = opal_slw_set_reg(pir, P9_STOP_SPR_MSR, reg_val);
if (rc)
return rc;
-
- rc = opal_slw_set_reg(pir,
- P9_STOP_SPR_PSSCR, psscr_val);
-
+ }
+ break;
+ case P9_STOP_SPR_PSSCR:
+ reg_val = pnv_deepest_stop_psscr_val;
+ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ rc = opal_slw_set_reg(pir, P9_STOP_SPR_PSSCR, reg_val);
if (rc)
return rc;
}
-
- /* HIDs are per core registers */
+ break;
+ case SPRN_HMEER:
+ reg_val = mfspr(SPRN_HMEER);
if (cpu_thread_in_core(cpu) == 0) {
-
- rc = opal_slw_set_reg(pir, SPRN_HMEER, hmeer_val);
- if (rc != 0)
+ rc = opal_slw_set_reg(pir, SPRN_HMEER, reg_val);
+ if (rc)
return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_HID0, hid0_val);
- if (rc != 0)
+ }
+ break;
+ case SPRN_HID0:
+ reg_val = mfspr(SPRN_HID0);
+ if (cpu_thread_in_core(cpu) == 0) {
+ rc = opal_slw_set_reg(pir, SPRN_HID0, reg_val);
+ if (rc)
return rc;
+ }
+ break;
+ case SPRN_HID1:
+ reg_val = mfspr(SPRN_HID1);
+ if (cpu_thread_in_core(cpu) == 0 &&
+ !cpu_has_feature(CPU_FTR_ARCH_300)) {
+ rc = opal_slw_set_reg(pir, SPRN_HID1, reg_val);
+ if (rc)
+ return rc;
+ }
+ break;
+ case SPRN_HID4:
+ reg_val = mfspr(SPRN_HID4);
+ if (cpu_thread_in_core(cpu) == 0 &&
+ !cpu_has_feature(CPU_FTR_ARCH_300)) {
+ rc = opal_slw_set_reg(pir, SPRN_HID4, reg_val);
+ if (rc)
+ return rc;
+ }
+ break;
+ case SPRN_HID5:
+ reg_val = mfspr(SPRN_HID5);
+ if (cpu_thread_in_core(cpu) == 0 &&
+ !cpu_has_feature(CPU_FTR_ARCH_300)) {
+ rc = opal_slw_set_reg(pir, SPRN_HID5, reg_val);
+ if (rc)
+ return rc;
+ }
+ break;
+ case SPRN_PTCR:
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
- /* Only p8 needs to set extra HID regiters */
- if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
-
- rc = opal_slw_set_reg(pir, SPRN_HID1, hid1_val);
- if (rc != 0)
- return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_HID4, hid4_val);
- if (rc != 0)
- return rc;
-
- rc = opal_slw_set_reg(pir, SPRN_HID5, hid5_val);
- if (rc != 0)
- return rc;
+static int pnv_self_save_restore_sprs(void)
+{
+ int rc, index, cpu, k;
+ u64 pir;
+ struct preferred_sprs curr_spr;
+ bool is_initialized;
+ u32 preferred;
+
+ is_lpcr_self_save = false;
+ is_ptcr_self_save = false;
+ for_each_present_cpu(cpu) {
+ pir = get_hard_smp_processor_id(cpu);
+ for (index = 0; index < nr_preferred_sprs; index++) {
+ curr_spr = preferred_sprs[index];
+ is_initialized = false;
+ /*
+ * Go through each of the preferences
+ * Check if it is preferred as well as supported
+ */
+ for (k = 0; k < NR_PREFERENCES; k++) {
+ preferred = curr_spr.preferred_mode
+ & PREFERENCE_MASK;
+ if (preferred & curr_spr.supported_mode
+ & SELF_RESTORE_STRICT) {
+ is_initialized = true;
+ rc = pnv_self_restore_sprs(pir, cpu,
+ curr_spr.spr);
+ if (rc != 0)
+ return rc;
+ break;
+ }
+ preferred_sprs[index].preferred_mode =
+ preferred_sprs[index].preferred_mode >>
+ PREFERENCE_SHIFT;
+ curr_spr = preferred_sprs[index];
+ }
+ if (!is_initialized) {
+ if (preferred_sprs[index].spr == SPRN_PTCR ||
+ (cpu_has_feature(CPU_FTR_ARCH_300) &&
+ (preferred_sprs[index].spr == SPRN_HID1 ||
+ preferred_sprs[index].spr == SPRN_HID4 ||
+ preferred_sprs[index].spr == SPRN_HID5)))
+ continue;
+ return OPAL_UNSUPPORTED;
}
}
}
+ return 0;
+}
+static int pnv_save_sprs_for_deep_states(void)
+{
+ int rc;
+ int index;
+
+ /*
+	 * Iterate over the preferred SPRs; if even one of them is
+	 * still unsupported, we cut support for deep stop states
+ */
+ for (index = 0; index < nr_preferred_sprs; index++) {
+ if (preferred_sprs[index].supported_mode == UNSUPPORTED) {
+ if (preferred_sprs[index].spr == SPRN_PTCR ||
+ (cpu_has_feature(CPU_FTR_ARCH_300) &&
+ (preferred_sprs[index].spr == SPRN_HID1 ||
+ preferred_sprs[index].spr == SPRN_HID4 ||
+ preferred_sprs[index].spr == SPRN_HID5)))
+ continue;
+ return OPAL_UNSUPPORTED;
+ }
+ }
+	/*
+	 * Self restore the registers that support it if self restore is
+	 * active; likewise self save the registers that support it.
+	 * Note: when both modes are supported, the per-SPR
+	 * preferred_mode decides which one is used
+	 */
+ rc = pnv_self_save_restore_sprs();
+ if (rc != 0)
+ return rc;
return 0;
}
@@ -658,7 +853,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
mmcr0 = mfspr(SPRN_MMCR0);
}
if ((psscr & PSSCR_RL_MASK) >= pnv_first_spr_loss_level) {
- sprs.lpcr = mfspr(SPRN_LPCR);
+ if (!is_lpcr_self_save)
+ sprs.lpcr = mfspr(SPRN_LPCR);
sprs.hfscr = mfspr(SPRN_HFSCR);
sprs.fscr = mfspr(SPRN_FSCR);
sprs.pid = mfspr(SPRN_PID);
@@ -672,7 +868,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
sprs.mmcr1 = mfspr(SPRN_MMCR1);
sprs.mmcr2 = mfspr(SPRN_MMCR2);
- sprs.ptcr = mfspr(SPRN_PTCR);
+ if (!is_ptcr_self_save)
+ sprs.ptcr = mfspr(SPRN_PTCR);
sprs.rpr = mfspr(SPRN_RPR);
sprs.tscr = mfspr(SPRN_TSCR);
if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
@@ -756,7 +953,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
goto core_woken;
/* Per-core SPRs */
- mtspr(SPRN_PTCR, sprs.ptcr);
+ if (!is_ptcr_self_save)
+ mtspr(SPRN_PTCR, sprs.ptcr);
mtspr(SPRN_RPR, sprs.rpr);
mtspr(SPRN_TSCR, sprs.tscr);
@@ -777,7 +975,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
atomic_unlock_and_stop_thread_idle();
/* Per-thread SPRs */
- mtspr(SPRN_LPCR, sprs.lpcr);
+ if (!is_lpcr_self_save)
+ mtspr(SPRN_LPCR, sprs.lpcr);
mtspr(SPRN_HFSCR, sprs.hfscr);
mtspr(SPRN_FSCR, sprs.fscr);
mtspr(SPRN_PID, sprs.pid);
@@ -956,8 +1155,11 @@ void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
* Program the LPCR via stop-api only if the deepest stop state
* can lose hypervisor context.
*/
- if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT)
- opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+ if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT) {
+ if (!is_lpcr_self_save)
+ opal_slw_set_reg(pir, SPRN_LPCR,
+ lpcr_val);
+ }
}
/*
--
2.17.1
This commit introduces and leverages the Self save API which OPAL now
supports.
Add the new Self Save OPAL API call in the list of OPAL calls.
Implement the self saving of the SPRs based on the populated support,
while respecting each SPR's preferences.
This implementation allows mixing of modes across the SPRs, which
means that one SPR can be self restored while another is self saved,
if they support and prefer it to be so.
Signed-off-by: Pratik Rajesh Sampat <[email protected]>
Reviewed-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/opal-api.h | 3 ++-
arch/powerpc/include/asm/opal.h | 1 +
arch/powerpc/platforms/powernv/idle.c | 22 ++++++++++++++++++++++
arch/powerpc/platforms/powernv/opal-call.c | 1 +
4 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
index c1f25a760eb1..1b6e1a68d431 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -214,7 +214,8 @@
#define OPAL_SECVAR_GET 176
#define OPAL_SECVAR_GET_NEXT 177
#define OPAL_SECVAR_ENQUEUE_UPDATE 178
-#define OPAL_LAST 178
+#define OPAL_SLW_SELF_SAVE_REG 181
+#define OPAL_LAST 181
#define QUIESCE_HOLD 1 /* Spin all calls at entry */
#define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 9986ac34b8e2..389a85b63805 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -203,6 +203,7 @@ int64_t opal_handle_hmi(void);
int64_t opal_handle_hmi2(__be64 *out_flags);
int64_t opal_register_dump_region(uint32_t id, uint64_t start, uint64_t end);
int64_t opal_unregister_dump_region(uint32_t id);
+int64_t opal_slw_self_save_reg(uint64_t cpu_pir, uint64_t sprn);
int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val);
int64_t opal_config_cpu_idle_state(uint64_t state, uint64_t flag);
int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 03fe835aadd1..97aeb45e897b 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -279,6 +279,26 @@ static int pnv_self_save_restore_sprs(void)
if (rc != 0)
return rc;
break;
+ } else if (preferred & curr_spr.supported_mode
+ & SELF_SAVE_STRICT) {
+ is_initialized = true;
+ if (curr_spr.spr == SPRN_HMEER &&
+ cpu_thread_in_core(cpu) != 0) {
+ continue;
+ }
+ rc = opal_slw_self_save_reg(pir,
+ curr_spr.spr);
+ if (rc != 0)
+ return rc;
+ switch (curr_spr.spr) {
+ case SPRN_LPCR:
+ is_lpcr_self_save = true;
+ break;
+ case SPRN_PTCR:
+ is_ptcr_self_save = true;
+ break;
+ }
+ break;
}
preferred_sprs[index].preferred_mode =
preferred_sprs[index].preferred_mode >>
@@ -1159,6 +1179,8 @@ void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
if (!is_lpcr_self_save)
opal_slw_set_reg(pir, SPRN_LPCR,
lpcr_val);
+ else
+ opal_slw_self_save_reg(pir, SPRN_LPCR);
}
}
diff --git a/arch/powerpc/platforms/powernv/opal-call.c b/arch/powerpc/platforms/powernv/opal-call.c
index 5cd0f52d258f..11e0ceb90de0 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -223,6 +223,7 @@ OPAL_CALL(opal_handle_hmi, OPAL_HANDLE_HMI);
OPAL_CALL(opal_handle_hmi2, OPAL_HANDLE_HMI2);
OPAL_CALL(opal_config_cpu_idle_state, OPAL_CONFIG_CPU_IDLE_STATE);
OPAL_CALL(opal_slw_set_reg, OPAL_SLW_SET_REG);
+OPAL_CALL(opal_slw_self_save_reg, OPAL_SLW_SELF_SAVE_REG);
OPAL_CALL(opal_register_dump_region, OPAL_REGISTER_DUMP_REGION);
OPAL_CALL(opal_unregister_dump_region, OPAL_UNREGISTER_DUMP_REGION);
OPAL_CALL(opal_pci_set_phb_cxl_mode, OPAL_PCI_SET_PHB_CAPI_MODE);
--
2.17.1
* Pratik Rajesh Sampat <[email protected]> [2020-03-17 19:40:16]:
> Define a bitmask interface to determine support for the Self Restore,
> Self Save or both.
>
> Also define an interface to determine the preference of that SPR to
> be strictly saved or restored or encapsulated with an order of preference.
>
> The preference bitmask is shown as below:
> ----------------------------
> |... | 2nd pref | 1st pref |
> ----------------------------
> MSB LSB
>
> The preference from higher to lower runs from LSB to MSB, with a shift
> of 4 bits per entry.
> Example:
> Prefer self save first, if not available then prefer self
> restore
> The preference mask for this scenario will be seen as below.
> ((SELF_RESTORE_STRICT << PREFERENCE_SHIFT) | SELF_SAVE_STRICT)
> ---------------------------------
> |... | Self restore | Self save |
> ---------------------------------
> MSB LSB
>
> Finally, declare a list of preferred SPRs which encapsulates the
> bitmasks for the preferred and supported modes, with defaults of both
> set to support legacy firmware.
>
> This commit also implements using the above interface and retains the
> legacy functionality of self restore.
>
> Signed-off-by: Pratik Rajesh Sampat <[email protected]>
> Reviewed-by: Ram Pai <[email protected]>
Reviewed-by: Vaidyanathan Srinivasan <[email protected]>
> [snip]
> + int index;
> +
> + /*
> +	 * Iterate over the preferred SPRs; if even one of them is
> +	 * still unsupported, we cut support for deep stop states.
> + */
> + for (index = 0; index < nr_preferred_sprs; index++) {
> + if (preferred_sprs[index].supported_mode == UNSUPPORTED) {
> + if (preferred_sprs[index].spr == SPRN_PTCR ||
> + (cpu_has_feature(CPU_FTR_ARCH_300) &&
> + (preferred_sprs[index].spr == SPRN_HID1 ||
> + preferred_sprs[index].spr == SPRN_HID4 ||
> + preferred_sprs[index].spr == SPRN_HID5)))
> + continue;
> + return OPAL_UNSUPPORTED;
> + }
> + }
> + /*
> +	 * Self-restore the registers that support self-restore, and
> +	 * self-save the ones that support self-save.
> +	 * Note: if both modes are supported, self-restore is given
> +	 * priority.
> + */
> + rc = pnv_self_save_restore_sprs();
> + if (rc != 0)
> + return rc;
> return 0;
> }
>
> @@ -658,7 +853,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
> mmcr0 = mfspr(SPRN_MMCR0);
> }
> if ((psscr & PSSCR_RL_MASK) >= pnv_first_spr_loss_level) {
> - sprs.lpcr = mfspr(SPRN_LPCR);
> + if (!is_lpcr_self_save)
> + sprs.lpcr = mfspr(SPRN_LPCR);
> sprs.hfscr = mfspr(SPRN_HFSCR);
> sprs.fscr = mfspr(SPRN_FSCR);
> sprs.pid = mfspr(SPRN_PID);
> @@ -672,7 +868,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
> sprs.mmcr1 = mfspr(SPRN_MMCR1);
> sprs.mmcr2 = mfspr(SPRN_MMCR2);
>
> - sprs.ptcr = mfspr(SPRN_PTCR);
> + if (!is_ptcr_self_save)
> + sprs.ptcr = mfspr(SPRN_PTCR);
> sprs.rpr = mfspr(SPRN_RPR);
> sprs.tscr = mfspr(SPRN_TSCR);
> if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
> @@ -756,7 +953,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
> goto core_woken;
>
> /* Per-core SPRs */
> - mtspr(SPRN_PTCR, sprs.ptcr);
> + if (!is_ptcr_self_save)
> + mtspr(SPRN_PTCR, sprs.ptcr);
> mtspr(SPRN_RPR, sprs.rpr);
> mtspr(SPRN_TSCR, sprs.tscr);
>
> @@ -777,7 +975,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
> atomic_unlock_and_stop_thread_idle();
>
> /* Per-thread SPRs */
> - mtspr(SPRN_LPCR, sprs.lpcr);
> + if (!is_lpcr_self_save)
> + mtspr(SPRN_LPCR, sprs.lpcr);
> mtspr(SPRN_HFSCR, sprs.hfscr);
> mtspr(SPRN_FSCR, sprs.fscr);
> mtspr(SPRN_PID, sprs.pid);
> @@ -956,8 +1155,11 @@ void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
> * Program the LPCR via stop-api only if the deepest stop state
> * can lose hypervisor context.
> */
> - if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT)
> - opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
> + if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT) {
> + if (!is_lpcr_self_save)
> + opal_slw_set_reg(pir, SPRN_LPCR,
> + lpcr_val);
> + }
> }
>
> /*
This framework provides a flexible interface for exploiting the
microcode's capability to save and restore an SPR. The complexity in the
implementation and the various options mainly exist to provide backward
compatibility for OPAL and Linux across different microcode
capabilities and platforms.
--Vaidy
* Pratik Rajesh Sampat <[email protected]> [2020-03-17 19:40:17]:
> This commit introduces and leverages the Self save API which OPAL now
> supports.
>
> Add the new Self Save OPAL API call in the list of OPAL calls.
> Implement self saving of the SPRs based on the populated support,
> while respecting each SPR's preferences.
>
> This implementation allows mixing modes across SPRs: one SPR can be
> self restored while another is self saved, as long as each supports
> and prefers that mode.
>
> Signed-off-by: Pratik Rajesh Sampat <[email protected]>
> Reviewed-by: Ram Pai <[email protected]>
Reviewed-by: Vaidyanathan Srinivasan <[email protected]>
> ---
> arch/powerpc/include/asm/opal-api.h | 3 ++-
> arch/powerpc/include/asm/opal.h | 1 +
> arch/powerpc/platforms/powernv/idle.c | 22 ++++++++++++++++++++++
> arch/powerpc/platforms/powernv/opal-call.c | 1 +
> 4 files changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
> index c1f25a760eb1..1b6e1a68d431 100644
> --- a/arch/powerpc/include/asm/opal-api.h
> +++ b/arch/powerpc/include/asm/opal-api.h
> @@ -214,7 +214,8 @@
> #define OPAL_SECVAR_GET 176
> #define OPAL_SECVAR_GET_NEXT 177
> #define OPAL_SECVAR_ENQUEUE_UPDATE 178
> -#define OPAL_LAST 178
> +#define OPAL_SLW_SELF_SAVE_REG 181
> +#define OPAL_LAST 181
>
> #define QUIESCE_HOLD 1 /* Spin all calls at entry */
> #define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
> diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
> index 9986ac34b8e2..389a85b63805 100644
> --- a/arch/powerpc/include/asm/opal.h
> +++ b/arch/powerpc/include/asm/opal.h
> @@ -203,6 +203,7 @@ int64_t opal_handle_hmi(void);
> int64_t opal_handle_hmi2(__be64 *out_flags);
> int64_t opal_register_dump_region(uint32_t id, uint64_t start, uint64_t end);
> int64_t opal_unregister_dump_region(uint32_t id);
> +int64_t opal_slw_self_save_reg(uint64_t cpu_pir, uint64_t sprn);
> int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val);
> int64_t opal_config_cpu_idle_state(uint64_t state, uint64_t flag);
> int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number);
> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
> index 03fe835aadd1..97aeb45e897b 100644
> --- a/arch/powerpc/platforms/powernv/idle.c
> +++ b/arch/powerpc/platforms/powernv/idle.c
> @@ -279,6 +279,26 @@ static int pnv_self_save_restore_sprs(void)
> if (rc != 0)
> return rc;
> break;
> + } else if (preferred & curr_spr.supported_mode
> + & SELF_SAVE_STRICT) {
> + is_initialized = true;
> + if (curr_spr.spr == SPRN_HMEER &&
> + cpu_thread_in_core(cpu) != 0) {
> + continue;
> + }
> + rc = opal_slw_self_save_reg(pir,
> + curr_spr.spr);
> + if (rc != 0)
> + return rc;
> + switch (curr_spr.spr) {
> + case SPRN_LPCR:
> + is_lpcr_self_save = true;
> + break;
> + case SPRN_PTCR:
> + is_ptcr_self_save = true;
> + break;
> + }
> + break;
> }
> preferred_sprs[index].preferred_mode =
> preferred_sprs[index].preferred_mode >>
> @@ -1159,6 +1179,8 @@ void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
> if (!is_lpcr_self_save)
> opal_slw_set_reg(pir, SPRN_LPCR,
> lpcr_val);
> + else
> + opal_slw_self_save_reg(pir, SPRN_LPCR);
> }
> }
>
> diff --git a/arch/powerpc/platforms/powernv/opal-call.c b/arch/powerpc/platforms/powernv/opal-call.c
> index 5cd0f52d258f..11e0ceb90de0 100644
> --- a/arch/powerpc/platforms/powernv/opal-call.c
> +++ b/arch/powerpc/platforms/powernv/opal-call.c
> @@ -223,6 +223,7 @@ OPAL_CALL(opal_handle_hmi, OPAL_HANDLE_HMI);
> OPAL_CALL(opal_handle_hmi2, OPAL_HANDLE_HMI2);
> OPAL_CALL(opal_config_cpu_idle_state, OPAL_CONFIG_CPU_IDLE_STATE);
> OPAL_CALL(opal_slw_set_reg, OPAL_SLW_SET_REG);
> +OPAL_CALL(opal_slw_self_save_reg, OPAL_SLW_SELF_SAVE_REG);
> OPAL_CALL(opal_register_dump_region, OPAL_REGISTER_DUMP_REGION);
> OPAL_CALL(opal_unregister_dump_region, OPAL_UNREGISTER_DUMP_REGION);
> OPAL_CALL(opal_pci_set_phb_cxl_mode, OPAL_PCI_SET_PHB_CAPI_MODE);
> --
The new opal_slw_self_save_reg() call and the related interface are
better suited to providing backward compatibility, and they simplify
the implementation for future platforms.
--Vaidy
* Pratik Rajesh Sampat <[email protected]> [2020-03-17 19:40:18]:
> Parse the device tree for the self-save and self-restore nodes and
> populate support for the preferred SPRs based on what was advertised
> by the device tree.
>
> Signed-off-by: Pratik Rajesh Sampat <[email protected]>
> Reviewed-by: Ram Pai <[email protected]>
Reviewed-by: Vaidyanathan Srinivasan <[email protected]>
>
> ---
> .../bindings/powerpc/opal/power-mgt.txt | 10 +++
> arch/powerpc/platforms/powernv/idle.c | 78 +++++++++++++++++++
> 2 files changed, 88 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt b/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
> index 9d619e955576..093cb5fe3d2d 100644
> --- a/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
> +++ b/Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
> @@ -116,3 +116,13 @@ otherwise. The length of all the property arrays must be the same.
> which of the fields of the PMICR are set in the corresponding
> entries in ibm,cpu-idle-state-pmicr. This is an optional
> property on POWER8 and is absent on POWER9.
> +
> +- self-restore:
> +  Array of unsigned 64-bit values forming the sprn-bitmask property,
> +  where each set bit index names an SPR supported for self-restore.
> +  This is an optional property on both POWER8 and POWER9.
> +
> +- self-save:
> +  Array of unsigned 64-bit values forming the sprn-bitmask property,
> +  where each set bit index names an SPR supported for self-save.
> +  This is an optional property on both POWER8 and POWER9.
> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
> index 97aeb45e897b..c39111b338ff 100644
> --- a/arch/powerpc/platforms/powernv/idle.c
> +++ b/arch/powerpc/platforms/powernv/idle.c
> @@ -1436,6 +1436,81 @@ static void __init pnv_probe_idle_states(void)
> supported_cpuidle_states |= pnv_idle_states[i].flags;
> }
>
> +/*
> + * Extracts and populates the self save or restore capabilities
> + * passed from the device tree node
> + */
> +static int extract_save_restore_state_dt(struct device_node *np, int type)
> +{
> + int nr_sprns = 0, i, bitmask_index;
> + u64 *temp_u64;
> + u64 bit_pos;
> +
> + nr_sprns = of_property_count_u64_elems(np, "sprn-bitmask");
> + if (nr_sprns <= 0)
> + return -EINVAL;
> + temp_u64 = kcalloc(nr_sprns, sizeof(u64), GFP_KERNEL);
> + if (of_property_read_u64_array(np, "sprn-bitmask",
> + temp_u64, nr_sprns)) {
> + pr_warn("cpuidle-powernv: failed to find registers in DT\n");
> + kfree(temp_u64);
> + return -EINVAL;
> + }
> + /*
> +	 * Record support for each SPR in the global preferred list,
> +	 * based on the bitmask supplied by the firmware: the bit index
> +	 * within the mask identifies the SPR.
> + */
> + for (i = 0; i < nr_preferred_sprs; i++) {
> + bitmask_index = BIT_WORD(preferred_sprs[i].spr);
> + bit_pos = BIT_MASK(preferred_sprs[i].spr);
> + if ((temp_u64[bitmask_index] & bit_pos) == 0) {
> + if (type == SELF_RESTORE_TYPE)
> + preferred_sprs[i].supported_mode &=
> + ~SELF_RESTORE_STRICT;
> + else
> + preferred_sprs[i].supported_mode &=
> + ~SELF_SAVE_STRICT;
> + continue;
> + }
> + if (type == SELF_RESTORE_TYPE) {
> + preferred_sprs[i].supported_mode |=
> + SELF_RESTORE_STRICT;
> + } else {
> + preferred_sprs[i].supported_mode |=
> + SELF_SAVE_STRICT;
> + }
> + }
> +
> + kfree(temp_u64);
> + return 0;
> +}
> +
> +static int pnv_parse_deepstate_dt(void)
> +{
> + struct device_node *np;
> + int rc = 0, i;
> +
> + /* Self restore register population */
> + np = of_find_compatible_node(NULL, NULL, "ibm,opal-self-restore");
> + if (np) {
> + rc = extract_save_restore_state_dt(np, SELF_RESTORE_TYPE);
> + if (rc != 0)
> + return rc;
> + }
> + /* Self save register population */
> + np = of_find_compatible_node(NULL, NULL, "ibm,opal-self-save");
> + if (!np) {
> + for (i = 0; i < nr_preferred_sprs; i++)
> + preferred_sprs[i].supported_mode &= ~SELF_SAVE_STRICT;
> + } else {
> + rc = extract_save_restore_state_dt(np, SELF_SAVE_TYPE);
> + }
> + of_node_put(np);
> + return rc;
> +}
> +
> /*
> * This function parses device-tree and populates all the information
> * into pnv_idle_states structure. It also sets up nr_pnv_idle_states
> @@ -1584,6 +1659,9 @@ static int __init pnv_init_idle_states(void)
> return rc;
> pnv_probe_idle_states();
>
> + rc = pnv_parse_deepstate_dt();
> + if (rc)
> + return rc;
> if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
> if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) {
> power7_fastsleep_workaround_entry = false;
> --
Thanks Michael for the detailed review and feedback. Your review
comments have been addressed by Pratik.
--Vaidy