2021-05-19 08:34:15

by Saripalli, RK

Subject: [v6 0/1] Introduce support for PSF control.

From: Ramakrishna Saripalli <[email protected]>

Predictive Store Forwarding:
AMD Zen3 processors feature a new technology called
Predictive Store Forwarding (PSF).

https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf

PSF is a hardware-based micro-architectural optimization designed
to improve the performance of code execution by predicting address
dependencies between loads and stores.

How PSF works:

It is very common for a CPU to execute a load instruction to an address
that was recently written by a store. Modern CPUs implement a technique
known as Store-To-Load-Forwarding (STLF) to improve performance in such
cases. With STLF, data from the store is forwarded directly to the load
without having to wait for it to be written to memory. In a typical CPU,
STLF occurs after the addresses of both the load and the store have been
calculated and determined to match.

PSF expands on this by speculating on the relationship between loads and
stores without waiting for the address calculation to complete. With PSF,
the CPU learns over time the relationship between loads and stores.
If STLF typically occurs between a particular store and load, the CPU will
remember this.

In typical code, PSF provides a performance benefit by speculating on
the load result and allowing later instructions to begin execution
sooner than they otherwise would be able to.
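
As a rough illustration (not part of the patch), this is the kind of
store-then-load pattern that STLF handles and that PSF learns to predict:

        /*
         * Hypothetical example: a store quickly followed by a dependent load.
         * STLF forwards the stored value from the store queue; PSF goes one
         * step further and may predict the store/load dependency before the
         * addresses have even been computed.
         */
        int compute(int *slot, int v)
        {
                *slot = v * 2;          /* store */
                return *slot + 1;       /* load from the same address */
        }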

Causes of Incorrect PSF:

Incorrect PSF predictions can occur for two reasons.

First, it is possible that the store/load pair has a dependency for a
while but later stops having one. This can occur if the address of
either the store or the load changes during the execution of the program.
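
For illustration only (not from the patch), a hypothetical pair whose
dependency comes and goes with its index arguments:

        /*
         * The store and the load usually target the same element, so PSF
         * learns a dependency; once idx_a != idx_b the learned dependency
         * no longer holds and the prediction is wrong.
         */
        int update(int *buf, int idx_a, int idx_b, int v)
        {
                buf[idx_a] = v;         /* store */
                return buf[idx_b];      /* load: matches the store only while idx_a == idx_b */
        }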

The second source of incorrect PSF predictions is an alias in the
PSF predictor structure. The PSF predictor tracks
store-load pairs based on portions of their RIP. It is possible that a
store-load pair which does have a dependency may alias in the predictor
with another store-load pair which does not.

This can result in incorrect speculation when the second store/load pair
is executed.
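
A sketch of the aliasing case (illustrative only, since the predictor
indexes on RIP bits that software does not control):

        /*
         * Pair A has a real store->load dependency, pair B does not.  If the
         * predictor tracks only some bits of the RIP and the two pairs alias,
         * the dependency learned for pair A may be applied to pair B, so the
         * load in B can speculatively consume the wrong data.
         */
        int pair_a(int *p, int v)
        {
                *p = v;         /* store A */
                return *p;      /* load A: true dependency */
        }

        int pair_b(int *p, int *q, int v)
        {
                *p = v;         /* store B */
                return *q;      /* load B: no dependency, but may be mispredicted */
        }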

Security Analysis:

Previous research has shown that when CPUs speculate along non-architectural
paths, it can create the potential for side-channel attacks.
In particular, programs that implement isolation, also known as
‘sandboxing’, entirely in software may need to be concerned with incorrect
CPU speculation, which can occur due to bad PSF predictions.

Because PSF speculation is limited to the current program context,
the impact of bad PSF speculation is very similar to that of
Speculative Store Bypass (Spectre v4).

Predictive Store Forwarding controls:
There are two hardware control bits which influence the PSF feature:
- MSR 48h bit 2 – Speculative Store Bypass Disable (SSBD)
- MSR 48h bit 7 – Predictive Store Forwarding Disable (PSFD)

The PSF feature is disabled if either of these bits is set. These bits
are controllable on a per-thread basis in an SMT system. By default, both
SSBD and PSFD are 0, meaning that these speculation features are enabled.

While the SSBD bit disables PSF and speculative store bypass, PSFD only
disables PSF.

PSFD may be desirable for software which is concerned with the
speculative behavior of PSF but desires a smaller performance impact than
setting SSBD.

Support for PSFD is indicated in CPUID Fn8000_0008 EBX[28].
All processors that support PSF will also support PSFD.
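
A minimal userspace sketch (not part of the patch) that checks this CPUID
bit; the leaf and bit position come from the paragraph above:

        #include <cpuid.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned int eax, ebx, ecx, edx;

                /* CPUID Fn8000_0008: EBX[28] advertises PSFD support. */
                if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
                        puts("CPUID leaf 0x80000008 not supported");
                        return 1;
                }
                printf("PSFD %ssupported\n", (ebx & (1u << 28)) ? "" : "not ");
                return 0;
        }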

ChangeLogs:
V5->V6:
- Moved PSF control code to arch/x86/kernel/cpu/bugs.c.
- PSF mitigation is similar to the spec_store_bypass mitigation.
- PSF mitigation has only ON and OFF controls.
- Kernel parameter changed to predictive_store_fwd_disable.
V4->V5:
- Replaced rdmsrl and wrmsrl for setting SPEC_CTRL_PSFD with
  a single call to msr_set_bit.
- Removed temporary variable to read and write the MSR.
V3->V4:
- Write to MSR_IA32_SPEC_CTRL properly:
  read the MSR, modify the PSFD bit based on the kernel parameter and
  write it back to the MSR.
- Changes made in psf_cmdline() and check_bugs().
V2->V3:
- Set the X86_FEATURE_SPEC_CTRL_MSR cap in boot cpu caps.
- Fix kernel documentation for the kernel parameter.
- Rename PSF to a control instead of a mitigation.
V1->V2:
- Squashed multiple commits into one commit.
- Rename PSF to a control instead of a mitigation.
V1:
- Initial patchset.
- Kernel parameter controls enable and disable of PSF.




Ramakrishna Saripalli (1):
x86/bugs: Implement mitigation for Predictive Store Forwarding

.../admin-guide/kernel-parameters.txt | 5 +
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 2 +
arch/x86/include/asm/nospec-branch.h | 6 ++
arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
5 files changed, 108 insertions(+)


base-commit: 0e16f466004d7f04296b9676a712a32a12367d1f
--
2.25.1



2021-05-19 08:34:30

by Saripalli, RK

Subject: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

From: Ramakrishna Saripalli <[email protected]>

Certain AMD processors feature a new technology called Predictive Store
Forwarding (PSF).

PSF is a micro-architectural optimization designed to improve the
performance of code execution by predicting dependencies between
loads and stores.

Incorrect PSF predictions can occur for two reasons.

- It is possible that the load/store pair may have had a dependency for
  a while but the dependency has stopped because the address in the
  load/store pair has changed.

- A second source of incorrect PSF predictions is an alias in the PSF
  predictor structure stored in the microarchitectural state.
  The PSF predictor tracks load/store pairs based on portions of the
  instruction pointer. It is possible that a load/store pair which does
  have a dependency may be aliased by another load/store pair which does
  not have the same dependency. This can result in incorrect speculation.

Software may be able to detect this aliasing and perform side-channel
attacks.

All CPUs that implement PSF provide one bit to disable this feature.
If the bit to disable this feature is available, it means that the CPU
implements the PSF feature and is therefore vulnerable to PSF risks.

The bits that are introduced:

X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
        If this bit is 1, the CPU implements PSF and PSF control
        via the SPEC_CTRL MSR is supported in the CPU.

All AMD processors that support PSF implement a bit in
SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
Forwarding.
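
For illustration only (not part of the patch), the current SPEC_CTRL bits
can be inspected from userspace through the msr driver; the bit positions
(2 for SSBD, 7 for PSFD) are the ones described in the cover letter:

        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                uint64_t val;
                /* Needs root and the msr driver (modprobe msr). */
                int fd = open("/dev/cpu/0/msr", O_RDONLY);

                /* Reading /dev/cpu/N/msr at offset 0x48 returns SPEC_CTRL. */
                if (fd < 0 || pread(fd, &val, sizeof(val), 0x48) != sizeof(val)) {
                        perror("read MSR 0x48");
                        return 1;
                }
                printf("SSBD=%llu PSFD=%llu\n",
                       (unsigned long long)((val >> 2) & 1),
                       (unsigned long long)((val >> 7) & 1));
                close(fd);
                return 0;
        }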

PSF control introduces a new kernel parameter called
predictive_store_fwd_disable.

Kernel parameter predictive_store_fwd_disable has the following values:

- on.  Disable PSF on all CPUs.

- off. Enable PSF on all CPUs.
       This is also the default setting.

Signed-off-by: Ramakrishna Saripalli <[email protected]>
---
.../admin-guide/kernel-parameters.txt | 5 +
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 2 +
arch/x86/include/asm/nospec-branch.h | 6 ++
arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
5 files changed, 108 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 04545725f187..a5f694dccb24 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3940,6 +3940,11 @@
			Format: {"off"}
			Disable Hardware Transactional Memory

+	predictive_store_fwd_disable=	[X86] This option controls PSF.
+			off - Turns on PSF.
+			on  - Turns off PSF.
+			default : off.
+
	preempt=	[KNL]
			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
			none - Limited to cond_resched() calls
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index cc96e26d69f7..078f46022293 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -309,6 +309,7 @@
#define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
#define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
#define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
+#define X86_FEATURE_PSFD (13*32+28) /* Predictive Store Forward Disable */

/* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
#define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 546d6ecf0a35..f569918c8754 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -51,6 +51,8 @@
#define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
#define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */
#define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */
+#define SPEC_CTRL_PSFD_SHIFT 7
+#define SPEC_CTRL_PSFD BIT(SPEC_CTRL_PSFD_SHIFT) /* Predictive Store Forwarding Disable */

#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
#define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index cb9ad6b73973..1231099e5c07 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -198,6 +198,12 @@ enum ssb_mitigation {
SPEC_STORE_BYPASS_SECCOMP,
};

+/* The Predictive Store forward control variant */
+enum psf_mitigation {
+ PREDICTIVE_STORE_FORWARD_NONE,
+ PREDICTIVE_STORE_FORWARD_DISABLE,
+};
+
extern char __indirect_thunk_start[];
extern char __indirect_thunk_end[];

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d41b70fe4918..c80ef4d86b72 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -38,6 +38,7 @@
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init ssb_select_mitigation(void);
+static void __init psf_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_print_mitigation(void);
@@ -107,6 +108,7 @@ void __init check_bugs(void)
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
ssb_select_mitigation();
+ psf_select_mitigation();
l1tf_select_mitigation();
mds_select_mitigation();
taa_select_mitigation();
@@ -1195,6 +1197,98 @@ static void ssb_select_mitigation(void)
pr_info("%s\n", ssb_strings[ssb_mode]);
}

+#undef pr_fmt
+#define pr_fmt(fmt) "Predictive Store Forward: " fmt
+
+static enum psf_mitigation psf_mode __ro_after_init = PREDICTIVE_STORE_FORWARD_NONE;
+
+/* The kernel command line selection */
+enum psf_mitigation_cmd {
+        PREDICTIVE_STORE_FORWARD_CMD_NONE,
+        PREDICTIVE_STORE_FORWARD_CMD_ON,
+};
+
+static const char * const psf_strings[] = {
+        [PREDICTIVE_STORE_FORWARD_NONE]         = "Vulnerable",
+        [PREDICTIVE_STORE_FORWARD_DISABLE]      = "Mitigation: Predictive Store Forward disabled",
+};
+
+static const struct {
+        const char *option;
+        enum psf_mitigation_cmd cmd;
+} psf_mitigation_options[] __initconst = {
+        { "on",         PREDICTIVE_STORE_FORWARD_CMD_ON },     /* Disable Speculative Store Bypass */
+        { "off",        PREDICTIVE_STORE_FORWARD_CMD_NONE },   /* Don't touch Speculative Store Bypass */
+};
+
+static enum psf_mitigation_cmd __init psf_parse_cmdline(void)
+{
+        enum psf_mitigation_cmd cmd = PREDICTIVE_STORE_FORWARD_CMD_NONE;
+        char arg[20];
+        int ret, i;
+
+        ret = cmdline_find_option(boot_command_line, "predictive_store_fwd_disable",
+                                  arg, sizeof(arg));
+        if (ret < 0)
+                return PREDICTIVE_STORE_FORWARD_CMD_NONE;
+
+        for (i = 0; i < ARRAY_SIZE(psf_mitigation_options); i++) {
+                if (!match_option(arg, ret, psf_mitigation_options[i].option))
+                        continue;
+
+                cmd = psf_mitigation_options[i].cmd;
+                break;
+        }
+
+        if (i >= ARRAY_SIZE(psf_mitigation_options)) {
+                pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+                return PREDICTIVE_STORE_FORWARD_CMD_NONE;
+        }
+
+        return cmd;
+}
+
+static enum psf_mitigation __init __psf_select_mitigation(void)
+{
+        enum psf_mitigation mode = PREDICTIVE_STORE_FORWARD_NONE;
+        enum psf_mitigation_cmd cmd;
+
+        if (!boot_cpu_has(X86_FEATURE_PSFD))
+                return mode;
+
+        cmd = psf_parse_cmdline();
+
+        switch (cmd) {
+        case PREDICTIVE_STORE_FORWARD_CMD_ON:
+                mode = PREDICTIVE_STORE_FORWARD_DISABLE;
+                break;
+        default:
+                mode = PREDICTIVE_STORE_FORWARD_NONE;
+                break;
+        }
+
+        x86_spec_ctrl_mask |= SPEC_CTRL_PSFD;
+
+        if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
+                mode = PREDICTIVE_STORE_FORWARD_DISABLE;
+
+        if (mode == PREDICTIVE_STORE_FORWARD_DISABLE) {
+                setup_force_cpu_cap(X86_FEATURE_PSFD);
+                x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
+                wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+        }
+
+        return mode;
+}
+
+static void psf_select_mitigation(void)
+{
+        psf_mode = __psf_select_mitigation();
+
+        if (boot_cpu_has(X86_FEATURE_PSFD))
+                pr_info("%s\n", psf_strings[psf_mode]);
+}
+
#undef pr_fmt
#define pr_fmt(fmt) "Speculation prctl: " fmt

--
2.25.1


2021-05-19 11:34:09

by Randy Dunlap

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

Hi again,

On 5/17/21 3:00 PM, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <[email protected]>
>
> Certain AMD processors feature a new technology called Predictive Store
> Forwarding (PSF).
>
> PSF is a micro-architectural optimization designed to improve the
> performance of code execution by predicting dependencies between
> loads and stores.
>
> Incorrect PSF predictions can occur due to two reasons.
>
...

>
> Kernel parameter predictive_store_fwd_disable has the following values
>
> - on. Disable PSF on all CPUs.
>
> - off. Enable PSF on all CPUs.
> This is also the default setting.
>
> Signed-off-by: Ramakrishna Saripalli<[email protected]>
> ---
> .../admin-guide/kernel-parameters.txt | 5 +
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 2 +
> arch/x86/include/asm/nospec-branch.h | 6 ++
> arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
> 5 files changed, 108 insertions(+)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 04545725f187..a5f694dccb24 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3940,6 +3940,11 @@
> Format: {"off"}
> Disable Hardware Transactional Memory
>
> + predictive_store_fwd_disable= [X86] This option controls PSF.
> + off - Turns on PSF.
> + on - Turns off PSF.
> + default : off.


and as I did earlier, I still object to "off" meaning PSF is on
and "on" meaning that PSF is off.

It's not at all user friendly.

If it's done this way because that's how the h/w bit is defined/used,
that's not a good excuse IMHO.

Hm, it sorta seems to be a common "theme" when dealing with mitigations.
And too late to fix that.

I look forward to h/w that doesn't need mitigations. ;)

--
~Randy


2021-05-19 18:35:55

by Pawan Gupta

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

On 18.05.2021 07:27, Saripalli, RK wrote:
>
>
>On 5/17/2021 9:55 PM, Randy Dunlap wrote:
>> Hi again,
>>
>> On 5/17/21 3:00 PM, Ramakrishna Saripalli wrote:
>>> From: Ramakrishna Saripalli <[email protected]>
>>>
>>> Certain AMD processors feature a new technology called Predictive Store
>>> Forwarding (PSF).
>>>
>>> PSF is a micro-architectural optimization designed to improve the
>>> performance of code execution by predicting dependencies between
>>> loads and stores.
>>>
>>> Incorrect PSF predictions can occur due to two reasons.
>>>
>> ...
>>
>>>
>>> Kernel parameter predictive_store_fwd_disable has the following values
>>>
>>> - on. Disable PSF on all CPUs.
>>>
>>> - off. Enable PSF on all CPUs.
>>> This is also the default setting.
>>>
>>> Signed-off-by: Ramakrishna Saripalli<[email protected]>
>>> ---
>>> .../admin-guide/kernel-parameters.txt | 5 +
>>> arch/x86/include/asm/cpufeatures.h | 1 +
>>> arch/x86/include/asm/msr-index.h | 2 +
>>> arch/x86/include/asm/nospec-branch.h | 6 ++
>>> arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
>>> 5 files changed, 108 insertions(+)
>>>
>>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>>> index 04545725f187..a5f694dccb24 100644
>>> --- a/Documentation/admin-guide/kernel-parameters.txt
>>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>>> @@ -3940,6 +3940,11 @@
>>> Format: {"off"}
>>> Disable Hardware Transactional Memory
>>>
>>> + predictive_store_fwd_disable= [X86] This option controls PSF.
>>> + off - Turns on PSF.
>>> + on - Turns off PSF.
>>> + default : off.
>>
>>
>> and as I did earlier, I still object to "off" meaning PSF is on
>> and "on" meaning that PSF is off.
>>
>> It's not at all user friendly.
>>
>> If it's done this way because that's how the h/w bit is defined/used,
>> that's not a good excuse IMHO.
>>
>> Hm, it sorta seems to be a common "theme" when dealing with mitigations.
>> And too late to fix that.
>
>Based on previous feedback from Thomas Gleixner, I reworded this as a mitigation instead of as a "feature".
>In that vein, all the mitigation code moved into bugs.c like other mitigations, similar to
>spec_store_bypass_disable, with an ON/OFF control but no prctl/seccomp/auto.

Maybe change the help text to something like:

on - Turns on PSF mitigation.
off - Turns off PSF mitigation.

Thanks,
Pawan

2021-05-19 19:15:28

by Pawan Gupta

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

On 17.05.2021 17:00, Ramakrishna Saripalli wrote:
>From: Ramakrishna Saripalli <[email protected]>
[...]
>--- a/arch/x86/include/asm/nospec-branch.h
>+++ b/arch/x86/include/asm/nospec-branch.h
>@@ -198,6 +198,12 @@ enum ssb_mitigation {
> SPEC_STORE_BYPASS_SECCOMP,
> };
>
>+/* The Predictive Store forward control variant */
>+enum psf_mitigation {
>+ PREDICTIVE_STORE_FORWARD_NONE,
>+ PREDICTIVE_STORE_FORWARD_DISABLE,
>+};
>+

This can be moved to bugs.c which is the only user of psf_mitigation.

Thanks,
Pawan

2021-05-19 19:16:52

by Pawan Gupta

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

On 17.05.2021 17:00, Ramakrishna Saripalli wrote:
>From: Ramakrishna Saripalli <[email protected]>
>
>Certain AMD processors feature a new technology called Predictive Store
>Forwarding (PSF).
>
>PSF is a micro-architectural optimization designed to improve the
>performance of code execution by predicting dependencies between
>loads and stores.
>
>Incorrect PSF predictions can occur due to two reasons.
>
>- It is possible that the load/store pair may have had dependency for
> a while but the dependency has stopped because the address in the
> load/store pair has changed.
>
>- Second source of incorrect PSF prediction can occur because of an alias
> in the PSF predictor structure stored in the microarchitectural state.
> PSF predictor tracks load/store pair based on portions of instruction
> pointer. It is possible that a load/store pair which does have a
> dependency may be aliased by another load/store pair which does not have
> the same dependency. This can result in incorrect speculation.
>
> Software may be able to detect this aliasing and perform side-channel
> attacks.
>
>All CPUs that implement PSF provide one bit to disable this feature.
>If the bit to disable this feature is available, it means that the CPU
>implements PSF feature and is therefore vulnerable to PSF risks.
>
>The bits that are introduced
>
>X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
> If this bit is 1, CPU implements PSF and PSF control
> via SPEC_CTRL_MSR is supported in the CPU.
>
>All AMD processors that support PSF implement a bit in
>SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
>Forwarding.
>
>PSF control introduces a new kernel parameter called
> predictive_store_fwd_disable.
>
>Kernel parameter predictive_store_fwd_disable has the following values
>
>- on. Disable PSF on all CPUs.
>
>- off. Enable PSF on all CPUs.
> This is also the default setting.
>
>Signed-off-by: Ramakrishna Saripalli<[email protected]>
>---
> .../admin-guide/kernel-parameters.txt | 5 +
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 2 +
> arch/x86/include/asm/nospec-branch.h | 6 ++
> arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
> 5 files changed, 108 insertions(+)
>
>diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>index 04545725f187..a5f694dccb24 100644
>--- a/Documentation/admin-guide/kernel-parameters.txt
>+++ b/Documentation/admin-guide/kernel-parameters.txt
>@@ -3940,6 +3940,11 @@
> Format: {"off"}
> Disable Hardware Transactional Memory
>
>+ predictive_store_fwd_disable= [X86] This option controls PSF.
>+ off - Turns on PSF.
>+ on - Turns off PSF.
>+ default : off.
>+
> preempt= [KNL]
> Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
> none - Limited to cond_resched() calls
>diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>index cc96e26d69f7..078f46022293 100644
>--- a/arch/x86/include/asm/cpufeatures.h
>+++ b/arch/x86/include/asm/cpufeatures.h
>@@ -309,6 +309,7 @@
> #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
> #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
> #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
>+#define X86_FEATURE_PSFD (13*32+28) /* Predictive Store Forward Disable */
>
> /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
> #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
>diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>index 546d6ecf0a35..f569918c8754 100644
>--- a/arch/x86/include/asm/msr-index.h
>+++ b/arch/x86/include/asm/msr-index.h
>@@ -51,6 +51,8 @@
> #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
> #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */
> #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */
>+#define SPEC_CTRL_PSFD_SHIFT 7
>+#define SPEC_CTRL_PSFD BIT(SPEC_CTRL_PSFD_SHIFT) /* Predictive Store Forwarding Disable */
>
> #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
> #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
>diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>index cb9ad6b73973..1231099e5c07 100644
>--- a/arch/x86/include/asm/nospec-branch.h
>+++ b/arch/x86/include/asm/nospec-branch.h
>@@ -198,6 +198,12 @@ enum ssb_mitigation {
> SPEC_STORE_BYPASS_SECCOMP,
> };
>
>+/* The Predictive Store forward control variant */
>+enum psf_mitigation {
>+ PREDICTIVE_STORE_FORWARD_NONE,
>+ PREDICTIVE_STORE_FORWARD_DISABLE,
>+};
>+
> extern char __indirect_thunk_start[];
> extern char __indirect_thunk_end[];
>
>diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>index d41b70fe4918..c80ef4d86b72 100644
>--- a/arch/x86/kernel/cpu/bugs.c
>+++ b/arch/x86/kernel/cpu/bugs.c
>@@ -38,6 +38,7 @@
> static void __init spectre_v1_select_mitigation(void);
> static void __init spectre_v2_select_mitigation(void);
> static void __init ssb_select_mitigation(void);
>+static void __init psf_select_mitigation(void);
> static void __init l1tf_select_mitigation(void);
> static void __init mds_select_mitigation(void);
> static void __init mds_print_mitigation(void);
>@@ -107,6 +108,7 @@ void __init check_bugs(void)
> spectre_v1_select_mitigation();
> spectre_v2_select_mitigation();
> ssb_select_mitigation();
>+ psf_select_mitigation();
> l1tf_select_mitigation();
> mds_select_mitigation();
> taa_select_mitigation();
>@@ -1195,6 +1197,98 @@ static void ssb_select_mitigation(void)
> pr_info("%s\n", ssb_strings[ssb_mode]);
> }
>
>+#undef pr_fmt
>+#define pr_fmt(fmt) "Predictive Store Forward: " fmt
>+
>+static enum psf_mitigation psf_mode __ro_after_init = PREDICTIVE_STORE_FORWARD_NONE;
>+
>+/* The kernel command line selection */
>+enum psf_mitigation_cmd {
>+ PREDICTIVE_STORE_FORWARD_CMD_NONE,
>+ PREDICTIVE_STORE_FORWARD_CMD_ON,
>+};
>+
>+static const char * const psf_strings[] = {
>+ [PREDICTIVE_STORE_FORWARD_NONE] = "Vulnerable",
>+ [PREDICTIVE_STORE_FORWARD_DISABLE] = "Mitigation: Predictive Store Forward disabled",
>+};
>+
>+static const struct {
>+ const char *option;
>+ enum psf_mitigation_cmd cmd;
>+} psf_mitigation_options[] __initconst = {
>+ { "on", PREDICTIVE_STORE_FORWARD_CMD_ON }, /* Disable Speculative Store Bypass */
>+ { "off", PREDICTIVE_STORE_FORWARD_CMD_NONE }, /* Don't touch Speculative Store Bypass */
>+};
>+
>+static enum psf_mitigation_cmd __init psf_parse_cmdline(void)
>+{
>+ enum psf_mitigation_cmd cmd = PREDICTIVE_STORE_FORWARD_CMD_NONE;
>+ char arg[20];
>+ int ret, i;
>+
>+ ret = cmdline_find_option(boot_command_line, "predictive_store_fwd_disable",
>+ arg, sizeof(arg));
>+ if (ret < 0)
>+ return PREDICTIVE_STORE_FORWARD_CMD_NONE;
>+
>+ for (i = 0; i < ARRAY_SIZE(psf_mitigation_options); i++) {
>+ if (!match_option(arg, ret, psf_mitigation_options[i].option))
>+ continue;
>+
>+ cmd = psf_mitigation_options[i].cmd;
>+ break;
>+ }
>+
>+ if (i >= ARRAY_SIZE(psf_mitigation_options)) {
>+ pr_err("unknown option (%s). Switching to AUTO select\n", arg);
>+ return PREDICTIVE_STORE_FORWARD_CMD_NONE;
>+ }
>+
>+ return cmd;
>+}
>+
>+static enum psf_mitigation __init __psf_select_mitigation(void)
>+{
>+ enum psf_mitigation mode = PREDICTIVE_STORE_FORWARD_NONE;
>+ enum psf_mitigation_cmd cmd;
>+
>+ if (!boot_cpu_has(X86_FEATURE_PSFD))
>+ return mode;
>+
>+ cmd = psf_parse_cmdline();
>+
>+ switch (cmd) {
>+ case PREDICTIVE_STORE_FORWARD_CMD_ON:
>+ mode = PREDICTIVE_STORE_FORWARD_DISABLE;
>+ break;
>+ default:
>+ mode = PREDICTIVE_STORE_FORWARD_NONE;
>+ break;
>+ }
>+
>+ x86_spec_ctrl_mask |= SPEC_CTRL_PSFD;
>+
>+ if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
>+ mode = PREDICTIVE_STORE_FORWARD_DISABLE;
>+
>+ if (mode == PREDICTIVE_STORE_FORWARD_DISABLE) {
>+ setup_force_cpu_cap(X86_FEATURE_PSFD);

Why do we need to force set X86_FEATURE_PSFD here? Control will not
reach here if this is not already set (because of boot_cpu_has() check
at function entry).

>+ x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>+ wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>+ }
>+
>+ return mode;
>+}
>+
>+static void psf_select_mitigation(void)
>+{
>+ psf_mode = __psf_select_mitigation();
>+
>+ if (boot_cpu_has(X86_FEATURE_PSFD))
>+ pr_info("%s\n", psf_strings[psf_mode]);
>+}

For an on/off control this patch looks like an overkill. I think we can
simplify it as below:

static void psf_select_mitigation(void)
{
        if (!boot_cpu_has(X86_FEATURE_PSFD))
                return;

        x86_spec_ctrl_mask |= SPEC_CTRL_PSFD;

        if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
                psf_mode = PREDICTIVE_STORE_FORWARD_DISABLE;

        if (psf_mode == PREDICTIVE_STORE_FORWARD_DISABLE) {
                x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
                wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
        }

        pr_info("%s\n", psf_strings[psf_mode]);
}

static int __init psfd_cmdline(char *str)
{
        if (!boot_cpu_has(X86_FEATURE_PSFD))
                return 0;

        if (!str)
                return -EINVAL;

        if (!strcmp(str, "on"))
                psf_mode = PREDICTIVE_STORE_FORWARD_DISABLE;

        return 0;
}
early_param("predictive_store_fwd_disable", psfd_cmdline);

---
Thanks,
Pawan

2021-05-19 19:34:11

by Saripalli, RK

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding



On 5/19/2021 12:38 AM, Pawan Gupta wrote:
> On 17.05.2021 17:00, Ramakrishna Saripalli wrote:
>> From: Ramakrishna Saripalli <[email protected]>
>>
>> Certain AMD processors feature a new technology called Predictive Store
>> Forwarding (PSF).
>>
>> PSF is a micro-architectural optimization designed to improve the
>> performance of code execution by predicting dependencies between
>> loads and stores.
>>
>> Incorrect PSF predictions can occur due to two reasons.
>>
>> - It is possible that the load/store pair may have had dependency for
>>  a while but the dependency has stopped because the address in the
>>  load/store pair has changed.
>>
>> - Second source of incorrect PSF prediction can occur because of an alias
>>  in the PSF predictor structure stored in the microarchitectural state.
>>  PSF predictor tracks load/store pair based on portions of instruction
>>  pointer. It is possible that a load/store pair which does have a
>>  dependency may be aliased by another load/store pair which does not have
>>  the same dependency. This can result in incorrect speculation.
>>
>>  Software may be able to detect this aliasing and perform side-channel
>>  attacks.
>>
>> All CPUs that implement PSF provide one bit to disable this feature.
>> If the bit to disable this feature is available, it means that the CPU
>> implements PSF feature and is therefore vulnerable to PSF risks.
>>
>> The bits that are introduced
>>
>> X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
>>     If this bit is 1, CPU implements PSF and PSF control
>>     via SPEC_CTRL_MSR is supported in the CPU.
>>
>> All AMD processors that support PSF implement a bit in
>> SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
>> Forwarding.
>>
>> PSF control introduces a new kernel parameter called
>>     predictive_store_fwd_disable.
>>
>> Kernel parameter predictive_store_fwd_disable has the following values
>>
>> - on. Disable PSF on all CPUs.
>>
>> - off. Enable PSF on all CPUs.
>>       This is also the default setting.
>>
>> Signed-off-by: Ramakrishna Saripalli<[email protected]>
>> ---
>> .../admin-guide/kernel-parameters.txt         |  5 +
>> arch/x86/include/asm/cpufeatures.h            |  1 +
>> arch/x86/include/asm/msr-index.h              |  2 +
>> arch/x86/include/asm/nospec-branch.h          |  6 ++
>> arch/x86/kernel/cpu/bugs.c                    | 94 +++++++++++++++++++
>> 5 files changed, 108 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index 04545725f187..a5f694dccb24 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -3940,6 +3940,11 @@
>>             Format: {"off"}
>>             Disable Hardware Transactional Memory
>>
>> +    predictive_store_fwd_disable=    [X86] This option controls PSF.
>> +            off - Turns on PSF.
>> +            on  - Turns off PSF.
>> +            default : off.
>> +
>>     preempt=    [KNL]
>>             Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
>>             none - Limited to cond_resched() calls
>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>> index cc96e26d69f7..078f46022293 100644
>> --- a/arch/x86/include/asm/cpufeatures.h
>> +++ b/arch/x86/include/asm/cpufeatures.h
>> @@ -309,6 +309,7 @@
>> #define X86_FEATURE_AMD_SSBD        (13*32+24) /* "" Speculative Store Bypass Disable */
>> #define X86_FEATURE_VIRT_SSBD        (13*32+25) /* Virtualized Speculative Store Bypass Disable */
>> #define X86_FEATURE_AMD_SSB_NO        (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
>> +#define X86_FEATURE_PSFD        (13*32+28) /* Predictive Store Forward Disable */
>>
>> /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
>> #define X86_FEATURE_DTHERM        (14*32+ 0) /* Digital Thermal Sensor */
>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>> index 546d6ecf0a35..f569918c8754 100644
>> --- a/arch/x86/include/asm/msr-index.h
>> +++ b/arch/x86/include/asm/msr-index.h
>> @@ -51,6 +51,8 @@
>> #define SPEC_CTRL_STIBP            BIT(SPEC_CTRL_STIBP_SHIFT)    /* STIBP mask */
>> #define SPEC_CTRL_SSBD_SHIFT        2       /* Speculative Store Bypass Disable bit */
>> #define SPEC_CTRL_SSBD            BIT(SPEC_CTRL_SSBD_SHIFT)    /* Speculative Store Bypass Disable */
>> +#define SPEC_CTRL_PSFD_SHIFT        7
>> +#define SPEC_CTRL_PSFD            BIT(SPEC_CTRL_PSFD_SHIFT)    /* Predictive Store Forwarding Disable */
>>
>> #define MSR_IA32_PRED_CMD        0x00000049 /* Prediction Command */
>> #define PRED_CMD_IBPB            BIT(0)       /* Indirect Branch Prediction Barrier */
>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>> index cb9ad6b73973..1231099e5c07 100644
>> --- a/arch/x86/include/asm/nospec-branch.h
>> +++ b/arch/x86/include/asm/nospec-branch.h
>> @@ -198,6 +198,12 @@ enum ssb_mitigation {
>>     SPEC_STORE_BYPASS_SECCOMP,
>> };
>>
>> +/* The Predictive Store forward control variant */
>> +enum psf_mitigation {
>> +    PREDICTIVE_STORE_FORWARD_NONE,
>> +    PREDICTIVE_STORE_FORWARD_DISABLE,
>> +};
>> +
>> extern char __indirect_thunk_start[];
>> extern char __indirect_thunk_end[];
>>
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index d41b70fe4918..c80ef4d86b72 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -38,6 +38,7 @@
>> static void __init spectre_v1_select_mitigation(void);
>> static void __init spectre_v2_select_mitigation(void);
>> static void __init ssb_select_mitigation(void);
>> +static void __init psf_select_mitigation(void);
>> static void __init l1tf_select_mitigation(void);
>> static void __init mds_select_mitigation(void);
>> static void __init mds_print_mitigation(void);
>> @@ -107,6 +108,7 @@ void __init check_bugs(void)
>>     spectre_v1_select_mitigation();
>>     spectre_v2_select_mitigation();
>>     ssb_select_mitigation();
>> +    psf_select_mitigation();
>>     l1tf_select_mitigation();
>>     mds_select_mitigation();
>>     taa_select_mitigation();
>> @@ -1195,6 +1197,98 @@ static void ssb_select_mitigation(void)
>>         pr_info("%s\n", ssb_strings[ssb_mode]);
>> }
>>
>> +#undef pr_fmt
>> +#define pr_fmt(fmt)    "Predictive Store Forward: " fmt
>> +
>> +static enum psf_mitigation psf_mode __ro_after_init = PREDICTIVE_STORE_FORWARD_NONE;
>> +
>> +/* The kernel command line selection */
>> +enum psf_mitigation_cmd {
>> +    PREDICTIVE_STORE_FORWARD_CMD_NONE,
>> +    PREDICTIVE_STORE_FORWARD_CMD_ON,
>> +};
>> +
>> +static const char * const psf_strings[] = {
>> +    [PREDICTIVE_STORE_FORWARD_NONE]        = "Vulnerable",
>> +    [PREDICTIVE_STORE_FORWARD_DISABLE]    = "Mitigation: Predictive Store Forward disabled",
>> +};
>> +
>> +static const struct {
>> +    const char *option;
>> +    enum psf_mitigation_cmd cmd;
>> +} psf_mitigation_options[]  __initconst = {
>> +    { "on",        PREDICTIVE_STORE_FORWARD_CMD_ON },      /* Disable Speculative Store Bypass */
>> +    { "off",    PREDICTIVE_STORE_FORWARD_CMD_NONE },    /* Don't touch Speculative Store Bypass */
>> +};
>> +
>> +static enum psf_mitigation_cmd __init psf_parse_cmdline(void)
>> +{
>> +    enum psf_mitigation_cmd cmd = PREDICTIVE_STORE_FORWARD_CMD_NONE;
>> +    char arg[20];
>> +    int ret, i;
>> +
>> +    ret = cmdline_find_option(boot_command_line, "predictive_store_fwd_disable",
>> +                  arg, sizeof(arg));
>> +    if (ret < 0)
>> +        return PREDICTIVE_STORE_FORWARD_CMD_NONE;
>> +
>> +    for (i = 0; i < ARRAY_SIZE(psf_mitigation_options); i++) {
>> +        if (!match_option(arg, ret, psf_mitigation_options[i].option))
>> +            continue;
>> +
>> +        cmd = psf_mitigation_options[i].cmd;
>> +        break;
>> +    }
>> +
>> +    if (i >= ARRAY_SIZE(psf_mitigation_options)) {
>> +        pr_err("unknown option (%s). Switching to AUTO select\n", arg);
>> +        return PREDICTIVE_STORE_FORWARD_CMD_NONE;
>> +    }
>> +
>> +    return cmd;
>> +}
>> +
>> +static enum psf_mitigation __init __psf_select_mitigation(void)
>> +{
>> +    enum psf_mitigation mode = PREDICTIVE_STORE_FORWARD_NONE;
>> +    enum psf_mitigation_cmd cmd;
>> +
>> +    if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +        return mode;
>> +
>> +    cmd = psf_parse_cmdline();
>> +
>> +    switch (cmd) {
>> +    case PREDICTIVE_STORE_FORWARD_CMD_ON:
>> +        mode = PREDICTIVE_STORE_FORWARD_DISABLE;
>> +        break;
>> +    default:
>> +        mode = PREDICTIVE_STORE_FORWARD_NONE;
>> +        break;
>> +    }
>> +
>> +    x86_spec_ctrl_mask |= SPEC_CTRL_PSFD;
>> +
>> +    if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
>> +        mode = PREDICTIVE_STORE_FORWARD_DISABLE;
>> +
>> +    if (mode == PREDICTIVE_STORE_FORWARD_DISABLE) {
>> +        setup_force_cpu_cap(X86_FEATURE_PSFD);
>
> Why do we need to force set X86_FEATURE_PSFD here? Control will not
> reach here if this is not already set (because of boot_cpu_has() check
> at function entry).
>
>> +        x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>> +        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +    }
>> +
>> +    return mode;
>> +}
>> +
>> +static void psf_select_mitigation(void)
>> +{
>> +    psf_mode = __psf_select_mitigation();
>> +
>> +    if (boot_cpu_has(X86_FEATURE_PSFD))
>> +        pr_info("%s\n", psf_strings[psf_mode]);
>> +}
>
> For an on/off control this patch looks like an overkill. I think we can
> simplify it as below:
>
> static void psf_select_mitigation(void)
> {
>     if (!boot_cpu_has(X86_FEATURE_PSFD))
>         return;
>
>     x86_spec_ctrl_mask |= SPEC_CTRL_PSFD;
>
>     if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
>         psf_mode = PREDICTIVE_STORE_FORWARD_DISABLE;
>
>     if (psf_mode == PREDICTIVE_STORE_FORWARD_DISABLE) {
>         x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>         wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>     }
>
>     pr_info("%s\n", psf_strings[psf_mode]);
> }
>
> static int __init psfd_cmdline(char *str)
> {
>     if (!boot_cpu_has(X86_FEATURE_PSFD))
>         return;
>
>     if (!str)
>         return -EINVAL;
>
>     if (!strcmp(str, "on"))
>         psf_mode = PREDICTIVE_STORE_FORWARD_DISABLE;
>
>     return 0;
> }
> early_param("predictive_store_fwd_disable", psfd_cmdline);

I agree that on/off can be simple like what you suggested.
My patches versions 2 to 5 were in fact structured this way,
but implemented in kernel/cpu/amd.c because that was deemed the
logical place for these changes.

Later reviews suggested that since this mitigation is similar to spec_store_bypass_disable
(although it is a subset of the above), it is better to have this in kernel/cpu/bugs.c and
structured similar to spec_store_bypass_disable, hence this patchset.


>
> ---
> Thanks,
> Pawan

2021-05-19 21:02:52

by Saripalli, RK

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding



On 5/17/2021 9:55 PM, Randy Dunlap wrote:
> Hi again,
>
> On 5/17/21 3:00 PM, Ramakrishna Saripalli wrote:
>> From: Ramakrishna Saripalli <[email protected]>
>>
>> Certain AMD processors feature a new technology called Predictive Store
>> Forwarding (PSF).
>>
>> PSF is a micro-architectural optimization designed to improve the
>> performance of code execution by predicting dependencies between
>> loads and stores.
>>
>> Incorrect PSF predictions can occur due to two reasons.
>>
> ...
>
>>
>> Kernel parameter predictive_store_fwd_disable has the following values
>>
>> - on. Disable PSF on all CPUs.
>>
>> - off. Enable PSF on all CPUs.
>> This is also the default setting.
>>
>> Signed-off-by: Ramakrishna Saripalli<[email protected]>
>> ---
>> .../admin-guide/kernel-parameters.txt | 5 +
>> arch/x86/include/asm/cpufeatures.h | 1 +
>> arch/x86/include/asm/msr-index.h | 2 +
>> arch/x86/include/asm/nospec-branch.h | 6 ++
>> arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
>> 5 files changed, 108 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index 04545725f187..a5f694dccb24 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -3940,6 +3940,11 @@
>> Format: {"off"}
>> Disable Hardware Transactional Memory
>>
>> + predictive_store_fwd_disable= [X86] This option controls PSF.
>> + off - Turns on PSF.
>> + on - Turns off PSF.
>> + default : off.
>
>
> and as I did earlier, I still object to "off" meaning PSF is on
> and "on" meaning that PSF is off.
>
> It's not at all user friendly.
>
> If it's done this way because that's how the h/w bit is defined/used,
> that's not a good excuse IMHO.
>
> Hm, it sorta seems to be a common "theme" when dealing with mitigations.
> And too late to fix that.

Based on previous feedback from Thomas Gleixner, I reworded this as a mitigation instead of as a "feature".
In that vein, all the mitigation code moved into bugs.c like other mitigations, similar to
spec_store_bypass_disable, with an ON/OFF control but no prctl/seccomp/auto.


>
> I look forward to h/w that doesn't need mitigations. ;)
>

2021-06-17 21:34:34

by Saripalli, RK

Subject: Re: [v6 0/1] Introduce support for PSF control.



On 5/17/2021 5:00 PM, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <[email protected]>
>
> Predictive Store Forwarding:
> AMD Zen3 processors feature a new technology called
> Predictive Store Forwarding (PSF).
>
> https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf
>
> PSF is a hardware-based micro-architectural optimization designed
> to improve the performance of code execution by predicting address
> dependencies between loads and stores.
>
> How PSF works:
>
> It is very common for a CPU to execute a load instruction to an address
> that was recently written by a store. Modern CPUs implement a technique
> known as Store-To-Load-Forwarding (STLF) to improve performance in such
> cases. With STLF, data from the store is forwarded directly to the load
> without having to wait for it to be written to memory. In a typical CPU,
> STLF occurs after the address of both the load and store are calculated
> and determined to match.
>
> PSF expands on this by speculating on the relationship between loads and
> stores without waiting for the address calculation to complete. With PSF,
> the CPU learns over time the relationship between loads and stores.
> If STLF typically occurs between a particular store and load, the CPU will
> remember this.
>
> In typical code, PSF provides a performance benefit by speculating on
> the load result and allowing later instructions to begin execution
> sooner than they otherwise would be able to.
>
> Causes of Incorrect PSF:
>
> Incorrect PSF predictions can occur due to two reasons.
>
> First, it is possible that the store/load pair had a dependency for a
> while but later stops having a dependency. This can occur if the address
> of either the store or load changes during the execution of the program.
>
> The second source of incorrect PSF predictions can occur if there is an
> alias in the PSF predictor structure. The PSF predictor tracks
> store-load pairs based on portions of their RIP. It is possible that a
> store-load pair which does have a dependency may alias in the predictor
> with another store-load pair which does not.
>
> This can result in incorrect speculation when the second store/load pair
> is executed.
>
> Security Analysis:
>
> Previous research has shown that when CPUs speculate on non-architectural
> paths it can lead to the potential of side channel attacks.
> In particular, programs that implement isolation, also known as
> ‘sandboxing’, entirely in software may need to be concerned with incorrect
> CPU speculation as they can occur due to bad PSF predictions.
>
> Because PSF speculation is limited to the current program context,
> the impact of bad PSF speculation is very similar to that of
> Speculative Store Bypass (Spectre v4)
>
> Predictive Store Forwarding controls:
> There are two hardware control bits which influence the PSF feature:
> - MSR 48h bit 2 – Speculative Store Bypass (SSBD)
> - MSR 48h bit 7 – Predictive Store Forwarding Disable (PSFD)
>
> The PSF feature is disabled if either of these bits are set. These bits
> are controllable on a per-thread basis in an SMT system. By default, both
> SSBD and PSFD are 0 meaning that the speculation features are enabled.
>
> While the SSBD bit disables PSF and speculative store bypass, PSFD only
> disables PSF.
>
> PSFD may be desirable for software which is concerned with the
> speculative behavior of PSF but desires a smaller performance impact than
> setting SSBD.
>
> Support for PSFD is indicated in CPUID Fn8000_0008 EBX[28].
> All processors that support PSF will also support PSFD.
>
> ChangeLogs:
> V6->V5:
> Moved PSF control code to arch/x86/kernel/cpu/bugs.c
> PSF mitigation is similar to spec_control_bypass mitigation.
> PSF mitigation has only ON and OFF controls.
> Kernel parameter changed to predictive_store_fwd_disable.
> V5->V4:
> Replaced rdmsrl and wrmsrl for setting SPEC_CTRL_PSFD with
> a single call to msr_set_bit.
> Removed temporary variable to read and write the MSR
> V4->V3:
> Write to MSR_IA32_SPEC_CTRL properly
> Read MSR, modify PSFD bit based on kernel parameter and
> write back to MSR.
>
> Changes made in psf_cmdline() and check_bugs().
> V3->V2:
> Set the X86_FEATURE_SPEC_CTRL_MSR cap in boot cpu caps.
> Fix kernel documentation for the kernel parameter.
> Rename PSF to a control instead of mitigation.
>
> V1->V2:
> - Smashed multiple commits into one commit.
> - Rename PSF to a control instead of mitigation.
>
> V1:
> - Initial patchset.
> - Kernel parameter controls enable and disable of PSF.
>
>
>

Gentle ping. Any more concerns or feedback on this patch series?
Thanks,
RK
>
> Ramakrishna Saripalli (1):
> x86/bugs: Implement mitigation for Predictive Store Forwarding
>
> .../admin-guide/kernel-parameters.txt | 5 +
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 2 +
> arch/x86/include/asm/nospec-branch.h | 6 ++
> arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++
> 5 files changed, 108 insertions(+)
>
>
> base-commit: 0e16f466004d7f04296b9676a712a32a12367d1f
>

2021-08-12 23:54:58

by Josh Poimboeuf

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

On Mon, May 17, 2021 at 05:00:58PM -0500, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <[email protected]>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 04545725f187..a5f694dccb24 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3940,6 +3940,11 @@
> Format: {"off"}
> Disable Hardware Transactional Memory
>
> + predictive_store_fwd_disable= [X86] This option controls PSF.
> + off - Turns on PSF.
> + on - Turns off PSF.
> + default : off.
> +

This needs a lot more text.

> +static const char * const psf_strings[] = {
> + [PREDICTIVE_STORE_FORWARD_NONE] = "Vulnerable",
> + [PREDICTIVE_STORE_FORWARD_DISABLE] = "Mitigation: Predictive Store Forward disabled",

This defaults to "Vulnerable", which is problematic for at least a few
reasons.

1) I'm fairly sure this would be the first mitigation designed to
default to "Vulnerable". Aside from whether that's a good idea, many
users will be alarmed to see "Vulnerable" in sysfs.

2) If the system has the default per-process SSB mitigation
(prctl/seccomp) then PSF will be automatically mitigated in the same
way. In that case "Vulnerable" isn't an accurate description. (More
on that below.)

> +static const struct {
> + const char *option;
> + enum psf_mitigation_cmd cmd;
> +} psf_mitigation_options[] __initconst = {
> + { "on", PREDICTIVE_STORE_FORWARD_CMD_ON }, /* Disable Speculative Store Bypass */
> + { "off", PREDICTIVE_STORE_FORWARD_CMD_NONE }, /* Don't touch Speculative Store Bypass */

Copy/paste error in the comments: "Speculative Store Bypass" -> "Predictive Store Forwarding"

I'd also recommend an "auto" option:

{ "auto", PREDICTIVE_STORE_FORWARD_CMD_AUTO }, /* Platform decides */

which would be the default. For now it would function the same as
"off", but would give room for tweaking defaults later.

> +static enum psf_mitigation __init __psf_select_mitigation(void)
> +{
> + enum psf_mitigation mode = PREDICTIVE_STORE_FORWARD_NONE;
> + enum psf_mitigation_cmd cmd;
> +
> + if (!boot_cpu_has(X86_FEATURE_PSFD))
> + return mode;
> +
> + cmd = psf_parse_cmdline();
> +
> + switch (cmd) {
> + case PREDICTIVE_STORE_FORWARD_CMD_ON:
> + mode = PREDICTIVE_STORE_FORWARD_DISABLE;
> + break;
> + default:
> + mode = PREDICTIVE_STORE_FORWARD_NONE;
> + break;
> + }
> +
> + x86_spec_ctrl_mask |= SPEC_CTRL_PSFD;

A comment would help for this last line. I assume it's virt-related.

> +
> + if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
> + mode = PREDICTIVE_STORE_FORWARD_DISABLE;
> +
> + if (mode == PREDICTIVE_STORE_FORWARD_DISABLE) {
> + setup_force_cpu_cap(X86_FEATURE_PSFD);
> + x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
> + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> + }

The PSF mitigation is (to some extent) dependent on the SSB mitigation,
since turning off SSB implicitly turns off PSF. That should be
reflected properly in sysfs for the prctl/seccomp cases. Here I'd
propose adding something like:

        } else if (ssb_mode == SPEC_STORE_BYPASS_PRCTL) {
                mode = PREDICTIVE_STORE_FORWARD_SSB_PRCTL;
        } else if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP) {
                mode = PREDICTIVE_STORE_FORWARD_SSB_SECCOMP;
        }

And of course you'd need additional strings for those:

[PREDICTIVE_STORE_FORWARD_SSB_PRCTL] = "Mitigation: Predictive Store Forward disabled via SSB prctl",
[PREDICTIVE_STORE_FORWARD_SSB_SECCOMP] = "Mitigation: Predictive Store Forward disabled via SSB seccomp",

--
Josh

2021-09-01 22:46:11

by Moger, Babu

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

Pawan, that is kind of odd. The ssb_mitigation enums are defined in nospec-branch.h. To be consistent, it is better to keep psf_mitigation in nospec-branch.h as well, since they are related.
Thanks
Babu

2021-09-02 18:18:31

by Moger, Babu

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

I don't have this thread in my mailbox. Replying via git send-email.

Josh,
I took care of all your comments except this one below.

>I'd also recommend an "auto" option:
>
> { "auto", PREDICTIVE_STORE_FORWARD_CMD_AUTO }, /* Platform decides */

> which would be the default. For now it would function the same as
>"off", but would give room for tweaking defaults later.

There is no plan for tweaking this option in the near future. I feel adding an
'auto' option now is probably overkill and confusing.

Thanks
Babu

2021-09-03 03:25:19

by Josh Poimboeuf

Subject: Re: [v6 1/1] x86/bugs: Implement mitigation for Predictive Store Forwarding

On Thu, Sep 02, 2021 at 01:16:37PM -0500, Babu Moger wrote:
> I dont have this thread in my mailbox. Replying via git send-email.
>
> Josh,
> I took care of all your comments except this one below.
>
> >I'd also recommend an "auto" option:
> >
> > { "auto", PREDICTIVE_STORE_FORWARD_CMD_AUTO }, /* Platform decides */
>
> > which would be the default. For now it would function the same as
> >"off", but would give room for tweaking defaults later.
>
> There is no plan for tweaking this option in the near future. I feel
> adding 'auto' option now is probably overkill and confusing.

But if the PSF disable interface is modeled after SSB disable (which I
believe it needs to be) then it's only logical to mirror SSB's default
"auto" option.

And, I actually think that calling it 'psf_disable=off' is *more*
confusing than 'psf_disable=auto'. Because in this case, 'off' doesn't
actually mean "off". It means

"off, unless !ssb_disable=off, in which case implicitly mirror the SSB policy".

So maybe there shouldn't even be a 'psf_disable=off' option, because
it's not possible to ensure that PSF doesn't get disabled by SSB
disable.

BTW, is the list of PSF-affected CPUs the same as the list of
SSB-affected CPUs? If there might be PSF CPUs which don't have SSB,
then more logic will need to be added to ensure a sensible default.

On a related note, is there a realistic, non-hypothetical need to have
separate policies and cmdline options for both SSB and PSF? i.e. is
there a real-world scenario where a user needs to disable PSF while
leaving SSB enabled?

Because trying to give them separate interfaces, when PSF disable is
intertwined with SSB disable in hardware, is awkward and confusing. And
the idea of adding another double-negative interface (disable=off!),
just because a vulnerability is considered to be a CPU "feature", isn't
very appetizing.

So instead of adding a new double-negative interface, which only *half*
works due to the ssb_disable dependency, and which is guaranteed to
further confuse users, and which may not even be used in the real world
except possibly by confused users...

I'm wondering if we can just start out with the simplest possible
approach: don't change any code and instead just document the fact that
"spec_store_bypass_disable=" also affects PSF.

Then, later on, if a real-world need is demonstrated, actual code could
be added to support disabling PSF independently (but of course it would
never be fully independent since PSF disable is forced by SSB disable).

--
Josh