2022-11-15 19:22:42

by Pawan Gupta

Subject: [PATCH v3 0/2] Check enumeration before MSR save/restore

v3:
- Rebased to latest upstream.
- Made MSR_AMD64_DE_CFG restore depend on X86_FEATURE_LFENCE_RDTSC.

v2: https://lore.kernel.org/lkml/[email protected]/
- Dropped patch for X86_FEATURE_AMD64_LS_CFG, using X86_FEATURE_AMD64_LS_CFG_SSBD instead.
- Commit message updated.

v1: https://lore.kernel.org/lkml/[email protected]/

Hi,

This patchset fixes the "unchecked MSR access error" [1] seen during S3
resume. Patch 1/2 adds a feature bit for MSR_IA32_TSX_CTRL.

Patch 2/2 adds a check for the feature bit before adding any speculation
control MSR to the list of MSRs to save/restore.
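
Roughly, the resulting pattern is (a sketch; the actual code is in patch
2/2):

	/* Probe an MSR only when its feature bit is enumerated */
	if (boot_cpu_has(msr_enum[i].feature))
		msr_build_context(&msr_enum[i].msr_no, 1);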

[1] https://lore.kernel.org/lkml/[email protected]/

Thanks,
Pawan

Pawan Gupta (2):
x86/tsx: Add feature bit for TSX control MSR support
x86/pm: Add enumeration check before spec MSRs save/restore setup

arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/tsx.c | 30 +++++++++++++++---------------
arch/x86/power/cpu.c | 25 +++++++++++++++++--------
3 files changed, 33 insertions(+), 23 deletions(-)

--
2.37.3



2022-11-15 19:27:01

by Pawan Gupta

Subject: [PATCH v3 1/2] x86/tsx: Add feature bit for TSX control MSR support

Support for the TSX control MSR is enumerated in MSR_IA32_ARCH_CAPABILITIES.
This is different from how other CPU features are enumerated, i.e. via
CPUID. Currently a call to tsx_ctrl_is_supported() is required to check
for the feature. In the absence of a feature bit for TSX control, any
code that relies on checking feature bits directly will not work.

In preparation for adding a feature bit check in MSR save/restore during
suspend/resume, set a new feature bit X86_FEATURE_MSR_TSX_CTRL when
MSR_IA32_TSX_CTRL is present. Also make tsx_ctrl_is_supported() use the
new feature bit to avoid the overhead of reading the MSR.
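
In effect, the enumeration check changes as follows (a before/after
sketch; the full change is in the diff below):

	/* Before: every check reads MSR_IA32_ARCH_CAPABILITIES */
	u64 ia32_cap = x86_read_arch_cap_msr();
	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);

	/* After: a plain feature-bit test, no MSR read */
	return cpu_feature_enabled(X86_FEATURE_MSR_TSX_CTRL);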

Suggested-by: Andrew Cooper <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/tsx.c | 30 +++++++++++++++---------------
2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b71f4f2ecdd5..3cda06ebe046 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -304,6 +304,7 @@
#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+#define X86_FEATURE_MSR_TSX_CTRL (11*32+18) /* "" MSR IA32_TSX_CTRL (Intel) implemented */

/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index ec7bbac3a9f2..9fe488dbed15 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -60,20 +60,7 @@ static void tsx_enable(void)

static bool tsx_ctrl_is_supported(void)
{
- u64 ia32_cap = x86_read_arch_cap_msr();
-
- /*
- * TSX is controlled via MSR_IA32_TSX_CTRL. However, support for this
- * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
- *
- * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
- * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
- * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
- * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
- * tsx= cmdline requests will do nothing on CPUs without
- * MSR_IA32_TSX_CTRL support.
- */
- return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+ return cpu_feature_enabled(X86_FEATURE_MSR_TSX_CTRL);
}

static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
@@ -191,7 +178,20 @@ void __init tsx_init(void)
return;
}

- if (!tsx_ctrl_is_supported()) {
+ /*
+ * TSX is controlled via MSR_IA32_TSX_CTRL. However, support for this
+ * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
+ *
+ * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
+ * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
+ * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
+ * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
+ * tsx= cmdline requests will do nothing on CPUs without
+ * MSR_IA32_TSX_CTRL support.
+ */
+ if (x86_read_arch_cap_msr() & ARCH_CAP_TSX_CTRL_MSR) {
+ setup_force_cpu_cap(X86_FEATURE_MSR_TSX_CTRL);
+ } else {
tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
return;
}
--
2.37.3


2022-11-15 19:35:22

by Pawan Gupta

Subject: [PATCH v3 2/2] x86/pm: Add enumeration check before spec MSRs save/restore setup

pm_save_spec_msr() keeps a list of all the MSRs which _might_ need to be
saved and restored at hibernate and resume. However, it has zero
awareness of CPU support for these MSRs. It mostly works by
unconditionally attempting to manipulate these MSRs and relying on
rdmsrl_safe() being able to handle a #GP on CPUs where the support is
unavailable.
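
For context, the probe in msr_build_context() is essentially (a
simplified sketch; the struct fields here are illustrative, not the
exact kernel definitions):

	u64 val;

	/* rdmsrl_safe() returns non-zero if the RDMSR faulted with #GP */
	msr->valid = !rdmsrl_safe(msr_id, &val);
	msr->data = val;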

However, it's possible for reads (RDMSR) to be supported for a given MSR
while writes (WRMSR) are not. In this case, msr_build_context() sees a
successful read (RDMSR) and marks the MSR as 'valid'. Later,
restore_processor_state() tries to restore the MSR, but the write (WRMSR)
is not allowed on the Intel Atom N2600, producing a nasty (but harmless)
error message:

unchecked MSR access error: WRMSR to 0x122 (tried to write 0x0000000000000002) \
at rIP: 0xffffffff8b07a574 (native_write_msr+0x4/0x20)
Call Trace:
<TASK>
restore_processor_state
x86_acpi_suspend_lowlevel
acpi_suspend_enter
suspend_devices_and_enter
pm_suspend.cold
state_store
kernfs_fop_write_iter
vfs_write
ksys_write
do_syscall_64
? do_syscall_64
? up_read
? lock_is_held_type
? asm_exc_page_fault
? lockdep_hardirqs_on
entry_SYSCALL_64_after_hwframe

To fix this, add the corresponding X86_FEATURE bit for each MSR. Avoid
trying to manipulate the MSR when the feature bit is clear. This
required adding an X86_FEATURE bit for MSRs that do not have one already,
but it's a small price to pay.

Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
Reported-by: Hans de Goede <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
Cc: [email protected]
---
arch/x86/power/cpu.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 4cd39f304e20..11a7e28f8985 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -511,18 +511,27 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
return ret;
}

+struct msr_enumeration {
+ u32 msr_no;
+ u32 feature;
+};
+
static void pm_save_spec_msr(void)
{
- u32 spec_msr_id[] = {
- MSR_IA32_SPEC_CTRL,
- MSR_IA32_TSX_CTRL,
- MSR_TSX_FORCE_ABORT,
- MSR_IA32_MCU_OPT_CTRL,
- MSR_AMD64_LS_CFG,
- MSR_AMD64_DE_CFG,
+ struct msr_enumeration msr_enum[] = {
+ {MSR_IA32_SPEC_CTRL, X86_FEATURE_MSR_SPEC_CTRL},
+ {MSR_IA32_TSX_CTRL, X86_FEATURE_MSR_TSX_CTRL},
+ {MSR_TSX_FORCE_ABORT, X86_FEATURE_TSX_FORCE_ABORT},
+ {MSR_IA32_MCU_OPT_CTRL, X86_FEATURE_SRBDS_CTRL},
+ {MSR_AMD64_LS_CFG, X86_FEATURE_LS_CFG_SSBD},
+ {MSR_AMD64_DE_CFG, X86_FEATURE_LFENCE_RDTSC},
};
+ int i;

- msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
+ for (i = 0; i < ARRAY_SIZE(msr_enum); i++) {
+ if (boot_cpu_has(msr_enum[i].feature))
+ msr_build_context(&msr_enum[i].msr_no, 1);
+ }
}

static int pm_check_save_msr(void)
--
2.37.3


2022-11-15 19:35:34

by Rafael J. Wysocki

Subject: Re: [PATCH v3 2/2] x86/pm: Add enumeration check before spec MSRs save/restore setup

On Tue, Nov 15, 2022 at 8:17 PM Pawan Gupta
<[email protected]> wrote:
>
> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
> Reported-by: Hans de Goede <[email protected]>
> Signed-off-by: Pawan Gupta <[email protected]>
> Cc: [email protected]

Fine with me:

Acked-by: Rafael J. Wysocki <[email protected]>


2022-11-15 21:20:22

by Pawan Gupta

Subject: Re: [PATCH v3 2/2] x86/pm: Add enumeration check before spec MSRs save/restore setup

On Tue, Nov 15, 2022 at 08:23:35PM +0100, Rafael J. Wysocki wrote:
>On Tue, Nov 15, 2022 at 8:17 PM Pawan Gupta
><[email protected]> wrote:
>
>Fine with me:
>
>Acked-by: Rafael J. Wysocki <[email protected]>

Thanks.

2022-11-18 18:19:41

by Dave Hansen

Subject: Re: [PATCH v3 1/2] x86/tsx: Add feature bit for TSX control MSR support

On 11/15/22 11:17, Pawan Gupta wrote:
> Support for the TSX control MSR is enumerated in MSR_IA32_ARCH_CAPABILITIES.
> This is different from how other CPU features are enumerated, i.e. via
> CPUID. Currently a call to tsx_ctrl_is_supported() is required to check
> for the feature. In the absence of a feature bit for TSX control, any
> code that relies on checking feature bits directly will not work.
>
> In preparation for adding a feature bit check in MSR save/restore during
> suspend/resume, set a new feature bit X86_FEATURE_MSR_TSX_CTRL when
> MSR_IA32_TSX_CTRL is present. Also make tsx_ctrl_is_supported() use the
> new feature bit to avoid the overhead of reading the MSR.


Reviewed-by: Dave Hansen <[email protected]>

2022-11-18 18:19:50

by Dave Hansen

Subject: Re: [PATCH v3 2/2] x86/pm: Add enumeration check before spec MSRs save/restore setup

On 11/15/22 11:17, Pawan Gupta wrote:
> To fix this, add the corresponding X86_FEATURE bit for each MSR. Avoid
> trying to manipulate the MSR when the feature bit is clear. This
> required adding an X86_FEATURE bit for MSRs that do not have one already,
> but it's a small price to pay.
>
> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
> Reported-by: Hans de Goede <[email protected]>
> Signed-off-by: Pawan Gupta <[email protected]>
> Cc: [email protected]

Thanks for fixing this up. The X86_FEATURE approach is a good one. The
"poking at random MSRs" always seemed a bit wonky.

Reviewed-by: Dave Hansen <[email protected]>

2022-11-18 19:23:56

by Pawan Gupta

Subject: Re: [PATCH v3 2/2] x86/pm: Add enumeration check before spec MSRs save/restore setup

On Fri, Nov 18, 2022 at 10:12:32AM -0800, Dave Hansen wrote:
>On 11/15/22 11:17, Pawan Gupta wrote:
>> To fix this, add the corresponding X86_FEATURE bit for each MSR. Avoid
>> trying to manipulate the MSR when the feature bit is clear. This
>> required adding an X86_FEATURE bit for MSRs that do not have one already,
>> but it's a small price to pay.
>>
>> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
>> Reported-by: Hans de Goede <[email protected]>
>> Signed-off-by: Pawan Gupta <[email protected]>
>> Cc: [email protected]
>
>Thanks for fixing this up. The X86_FEATURE approach is a good one. The
>"poking at random MSRs" always seemed a bit wonky.
>
>Reviewed-by: Dave Hansen <[email protected]>

Thanks.