Respect DISABLED_MASK when clearing XSAVE features, such that features
that are disabled do not appear in the xfeatures mask.
This is important for kvm_load_{guest|host}_xsave_state, which look
at host_xcr0 and will do an expensive xsetbv when the guest and host
do not match.
A prime example: if CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS is disabled,
the guest OS will not see PKU masked; however, the guest will incur
xsetbv since the host mask will never match the guest's, even though
DISABLED_MASK16 has DISABLE_PKU set.
Signed-off-by: Jon Kohler <[email protected]>
CC: [email protected]
CC: Sean Christopherson <[email protected]>
CC: Paolo Bonzini <[email protected]>
---
arch/x86/kernel/fpu/xstate.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 0bab497c9436..211ef82b53e3 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -798,7 +798,8 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
unsigned short cid = xsave_cpuid_features[i];
/* Careful: X86_FEATURE_FPU is 0! */
- if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid))
+ if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
+ DISABLED_MASK_BIT_SET(cid))
fpu_kernel_cfg.max_features &= ~BIT_ULL(i);
}
--
2.30.1 (Apple Git-130)
On 5/30/23 13:01, Jon Kohler wrote:
> Respect DISABLED_MASK when clearing XSAVE features, such that features
> that are disabled do not appear in the xfeatures mask.
One sanity check that I'd suggest adopting is "How many other users in
the code do this?" How many DISABLED_MASK_BIT_SET() users are there?
> This is important for kvm_load_{guest|host}_xsave_state, which look
> at host_xcr0 and will do an expensive xsetbv when the guest and host
> do not match.
Is that the only problem? kvm_load_guest_xsave_state() seems to have
some #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS code and I can't
imagine that KVM guests can even use PKRU if this code is compiled out.
Also, this will set XFEATURE_PKRU in xcr0 and go to the trouble of
XSAVE/XRSTOR'ing it all over the place even for regular tasks.
> A prime example: if CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS is disabled,
> the guest OS will not see PKU masked; however, the guest will incur
> xsetbv since the host mask will never match the guest's, even though
> DISABLED_MASK16 has DISABLE_PKU set.
OK, so you care because you're seeing KVM go slow. You tracked it down
to lots of XSETBV's? You said, "what the heck, why is it doing XSETBV
so much?" and tracked it down to 'max_features' which we use to populate
XCR0?
> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
> index 0bab497c9436..211ef82b53e3 100644
> --- a/arch/x86/kernel/fpu/xstate.c
> +++ b/arch/x86/kernel/fpu/xstate.c
> @@ -798,7 +798,8 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
> unsigned short cid = xsave_cpuid_features[i];
>
> /* Careful: X86_FEATURE_FPU is 0! */
> - if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid))
> + if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
> + DISABLED_MASK_BIT_SET(cid))
> fpu_kernel_cfg.max_features &= ~BIT_ULL(i);
> }
I _think_ I'd rather this just be cpu_feature_enabled(cid) rather than
using DISABLED_MASK_BIT_SET() directly.
But, I guess this probably also isn't a big deal for _most_ people. Any
sane distro kernel will just set CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
since it's pretty widespread on modern CPUs and works across Intel and
AMD now.
> On May 30, 2023, at 6:22 PM, Dave Hansen <[email protected]> wrote:
>
> On 5/30/23 13:01, Jon Kohler wrote:
>> Respect DISABLED_MASK when clearing XSAVE features, such that features
>> that are disabled do not appear in the xfeatures mask.
>
> One sanity check that I'd suggest adopting is "How many other users in
> the code do this?" How many DISABLED_MASK_BIT_SET() users are there?
Good tip, thank you. Just cpu_feature_enabled(), though I felt that using
DISABLED_MASK_BIT_SET() really does capture *exactly* what I’m trying to
do here.
Happy to take suggestions, perhaps !cpu_feature_enabled(cid) instead?
Or, I did noodle with the idea of making a cpu_feature_disabled() as an
alias for DISABLED_MASK_BIT_SET(), but that felt like bloating the change
for little gain.
>
>> This is important for kvm_load_{guest|host}_xsave_state, which look
>> at host_xcr0 and will do an expensive xsetbv when the guest and host
>> do not match.
>
> Is that the only problem? kvm_load_guest_xsave_state() seems to have
> some #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS code and I can't
> imagine that KVM guests can even use PKRU if this code is compiled out.
>
> Also, this will set XFEATURE_PKRU in xcr0 and go to the trouble of
> XSAVE/XRSTOR'ing it all over the place even for regular tasks.
Correct, KVM isn’t the only beneficiary here, as you rightly pointed out. I’m
happy to clarify that in the commit msg if you’d like.
Also, ack on the #ifdefs; I added those myself way back when. This change is
an addendum to that one to nuke the xsetbv overhead.
>> A prime example: if CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS is disabled,
>> the guest OS will not see PKU masked; however, the guest will incur
>> xsetbv since the host mask will never match the guest's, even though
>> DISABLED_MASK16 has DISABLE_PKU set.
>
> OK, so you care because you're seeing KVM go slow. You tracked it down
> to lots of XSETBV's? You said, "what the heck, why is it doing XSETBV
> so much?" and tracked it down to 'max_features' which we use to populate
> XCR0?
Yes and yes, that is exactly how I arrived here. kvm_load_{guest|host}_xsave_state
is on the critical path for vmexit/vmentry. This overhead sticks out like a sore
thumb when looking at perf top or visualizing threads with a flamegraph if the guest,
for whatever reason, has a different xcr0 than the host. That is easy to do if guests
have PKU masked out.
>
>> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
>> index 0bab497c9436..211ef82b53e3 100644
>> --- a/arch/x86/kernel/fpu/xstate.c
>> +++ b/arch/x86/kernel/fpu/xstate.c
>> @@ -798,7 +798,8 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
>> unsigned short cid = xsave_cpuid_features[i];
>>
>> /* Careful: X86_FEATURE_FPU is 0! */
>> - if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid))
>> + if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
>> + DISABLED_MASK_BIT_SET(cid))
>> fpu_kernel_cfg.max_features &= ~BIT_ULL(i);
>> }
>
> I _think_ I'd rather this just be cpu_feature_enabled(cid) rather than
> using DISABLED_MASK_BIT_SET() directly.
>
> But, I guess this probably also isn't a big deal for _most_ people. Any
> sane distro kernel will just set CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
> since it's pretty widespread on modern CPUs and works across Intel and
> AMD now.
Ack, I’m using PKU as the key example here, but looking forward this is more of a
correctness thing than anything else. If, for any reason, any xsave feature is disabled
in the way that PKU is disabled, it will slip through the cracks.
If it would make it cleaner, I’m happy to drop the example from the commit msg to
prevent any confusion that this is PKU specific in any way.
Thoughts?
On Wed, May 31, 2023, Jon Kohler wrote:
>
> > On May 30, 2023, at 6:22 PM, Dave Hansen <[email protected]> wrote:
> >
> > On 5/30/23 13:01, Jon Kohler wrote:
> > Is that the only problem? kvm_load_guest_xsave_state() seems to have
> > some #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS code and I can't
> > imagine that KVM guests can even use PKRU if this code is compiled out.
...
> >> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
> >> index 0bab497c9436..211ef82b53e3 100644
> >> --- a/arch/x86/kernel/fpu/xstate.c
> >> +++ b/arch/x86/kernel/fpu/xstate.c
> >> @@ -798,7 +798,8 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
> >> unsigned short cid = xsave_cpuid_features[i];
> >>
> >> /* Careful: X86_FEATURE_FPU is 0! */
> >> - if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid))
> >> + if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
> >> + DISABLED_MASK_BIT_SET(cid))
> >> fpu_kernel_cfg.max_features &= ~BIT_ULL(i);
> >> }
> >
> > I _think_ I'd rather this just be cpu_feature_enabled(cid) rather than
> > using DISABLED_MASK_BIT_SET() directly.
+1, xstate.c uses cpu_feature_enabled() all over the place, and IMO effectively
open coding cpu_feature_enabled() yields less intuitive code.
And on the KVM side, we can and should replace the #ifdef with cpu_feature_enabled()
(I'll post a patch), as modern compilers are clever enough to completely optimize
out the code when CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=n. At that point, using
cpu_feature_enabled() in both KVM and xstate.c will provide a nice bit of symmetry.
Caveat #1: cpu_feature_enabled() has a flaw that's relevant to this code: in the
unlikely scenario that the compiler doesn't resolve "cid" to a compile-time
constant value, cpu_feature_enabled() won't query DISABLED_MASK_BIT_SET(). I don't
see any other use of cpu_feature_enabled() without a hardcoded X86_FEATURE_*, and
the below compiles with my config, so I think/hope we can just require a compile-time
constant when using cpu_feature_enabled().
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index ce0c8f7d3218..886200fbf8d9 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -141,8 +141,11 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
* supporting a possible guest feature where host support for it
* is not relevant.
*/
-#define cpu_feature_enabled(bit) \
- (__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit))
+#define cpu_feature_enabled(bit) \
+({ \
+ BUILD_BUG_ON(!__builtin_constant_p(bit)); \
+ DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit); \
+})
#define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
Caveat #2: Using cpu_feature_enabled() could subtly break KVM, as KVM advertises
support for features based on boot_cpu_data. E.g. if a feature were disabled by
Kconfig but present in hardware, KVM would allow the guest to use the feature
without properly context switching the data. PKU isn't problematic because KVM
explicitly gates PKU on boot_cpu_has(X86_FEATURE_OSPKE), but KVM learned that
lesson the hard way (see commit c469268cd523, "KVM: x86: block guest protection
keys unless the host has them enabled"). Exposing a feature that's disabled in
the host isn't completely absurd, e.g. KVM already effectively does this for MPX.
The only reason using cpu_feature_enabled() wouldn't be problematic for MPX is
because there's no longer a Kconfig for MPX.
I'm totally ok gating xfeature bits on cpu_feature_enabled(), but there should be
a prep patch for KVM to clear feature bits in kvm_cpu_caps if the corresponding
XCR0/XSS bit is not set in the host. If KVM ever wants to expose an xstate feature
(other than MPX) that's disabled in the host, then we can revisit
fpu__init_system_xstate(). But we need to ensure the "failure" mode is that
KVM doesn't advertise the feature, as opposed to advertising a feature without
context switching its data.
> > But, I guess this probably also isn't a big deal for _most_ people. Any
> > sane distro kernel will just set CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
> > since it's pretty widespread on modern CPUs and works across Intel and
> > AMD now.
>
> Ack, I’m using PKU as the key example here, but looking forward this is more of a
> correctness thing than anything else. If, for any reason, any xsave feature is disabled
> in the way that PKU is disabled, it will slip through the cracks.
I'd be careful about billing this as a correctness thing. See above.
> On May 31, 2023, at 12:30 PM, Sean Christopherson <[email protected]> wrote:
>
> On Wed, May 31, 2023, Jon Kohler wrote:
>>
>>> On May 30, 2023, at 6:22 PM, Dave Hansen <[email protected]> wrote:
>>>
>>> On 5/30/23 13:01, Jon Kohler wrote:
>>> Is that the only problem? kvm_load_guest_xsave_state() seems to have
>>> some #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS code and I can't
>>> imagine that KVM guests can even use PKRU if this code is compiled out.
>
> ...
>
>>>> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
>>>> index 0bab497c9436..211ef82b53e3 100644
>>>> --- a/arch/x86/kernel/fpu/xstate.c
>>>> +++ b/arch/x86/kernel/fpu/xstate.c
>>>> @@ -798,7 +798,8 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
>>>> unsigned short cid = xsave_cpuid_features[i];
>>>>
>>>> /* Careful: X86_FEATURE_FPU is 0! */
>>>> - if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid))
>>>> + if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
>>>> + DISABLED_MASK_BIT_SET(cid))
>>>> fpu_kernel_cfg.max_features &= ~BIT_ULL(i);
>>>> }
>>>
>>> I _think_ I'd rather this just be cpu_feature_enabled(cid) rather than
>>> using DISABLED_MASK_BIT_SET() directly.
>
> +1, xstate.c uses cpu_feature_enabled() all over the place, and IMO effectively
> open coding cpu_feature_enabled() yields less intuitive code.
Ack, thank you for the feedback
>
> And on the KVM side, we can and should replace the #ifdef with cpu_feature_enabled()
> (I'll post a patch), as modern compilers are clever enough to completely optimize
> out the code when CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=n. At that point, using
> cpu_feature_enabled() in both KVM and xstate.c will provide a nice bit of symmetry.
Ok, thanks for helping to clean that up, I appreciate it.
>
> Caveat #1: cpu_feature_enabled() has a flaw that's relevant to this code: in the
> unlikely scenario that the compiler doesn't resolve "cid" to a compile-time
> constant value, cpu_feature_enabled() won't query DISABLED_MASK_BIT_SET(). I don't
> see any other use of cpu_feature_enabled() without a hardcoded X86_FEATURE_*, and
> the below compiles with my config, so I think/hope we can just require a compile-time
> constant when using cpu_feature_enabled().
>
Yea I think that should work. I’ll club that into v2 of this patch.
> diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
> index ce0c8f7d3218..886200fbf8d9 100644
> --- a/arch/x86/include/asm/cpufeature.h
> +++ b/arch/x86/include/asm/cpufeature.h
> @@ -141,8 +141,11 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
> * supporting a possible guest feature where host support for it
> * is not relevant.
> */
> -#define cpu_feature_enabled(bit) \
> - (__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit))
> +#define cpu_feature_enabled(bit) \
> +({ \
> + BUILD_BUG_ON(!__builtin_constant_p(bit)); \
> + DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit); \
> +})
>
> #define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
>
> Caveat #2: Using cpu_feature_enabled() could subtly break KVM, as KVM advertises
> support for features based on boot_cpu_data. E.g. if a feature were disabled by
> Kconfig but present in hardware, KVM would allow the guest to use the feature
> without properly context switching the data. PKU isn't problematic because KVM
> explicitly gates PKU on boot_cpu_has(X86_FEATURE_OSPKE), but KVM learned that
> lesson the hard way (see commit c469268cd523, "KVM: x86: block guest protection
> keys unless the host has them enabled"). Exposing a feature that's disabled in
> the host isn't completely absurd, e.g. KVM already effectively does this for MPX.
> The only reason using cpu_feature_enabled() wouldn't be problematic for MPX is
> because there's no longer a Kconfig for MPX.
>
> I'm totally ok gating xfeature bits on cpu_feature_enabled(), but there should be
> a prep patch for KVM to clear feature bits in kvm_cpu_caps if the corresponding
> XCR0/XSS bit is not set in the host. If KVM ever wants to expose an xstate feature
> (other than MPX) that's disabled in the host, then we can revisit
> fpu__init_system_xstate(). But we need to ensure the "failure" mode is that
> KVM doesn't advertise the feature, as opposed to advertising a feature without
> context switching its data.
Looking into this, I’m trying to understand the comment about a feature being used
without context switching its data.
In __kvm_x86_vendor_init() we’re already populating host_xcr0 using the
XCR_XFEATURE_ENABLED_MASK, which should be populated on boot
by fpu__init_cpu_xstate(), which happens almost immediately after the code that I
modified in this commit.
That then flows into guest_supported_xcr0 (as well as user_xfeatures).
guest_supported_xcr0 is then plumbed into __kvm_set_xcr, which specifically says
that we’re using that to prevent the guest from setting bits that we won’t save in the
first place.
/*
* Do not allow the guest to set bits that we do not support
* saving. However, xcr0 bit 0 is always set, even if the
* emulated CPU does not support XSAVE (see kvm_vcpu_reset()).
*/
valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
Wouldn’t this mean that the *guest* xstate initialization would not see a given
feature too and take care of the problem naturally?
Or are you saying you’d want an even more detailed clearing?
>
>>> But, I guess this probably also isn't a big deal for _most_ people. Any
>>> sane distro kernel will just set CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
>>> since it's pretty widespread on modern CPUs and works across Intel and
>>> AMD now.
>>
>> Ack, I’m using PKU as the key example here, but looking forward this is more of a
>> correctness thing than anything else. If, for any reason, any xsave feature is disabled
>> in the way that PKU is disabled, it will slip through the cracks.
>
> I'd be careful about billing this as a correctness thing. See above.
Fair enough. I’ll see about simplifying the commit msg when we get thru the comments
on this thread.
On Wed, May 31, 2023, Jon Kohler wrote:
>
> > On May 31, 2023, at 12:30 PM, Sean Christopherson <[email protected]> wrote:
> > Caveat #1: cpu_feature_enabled() has a flaw that's relevant to this code: in the
> > unlikely scenario that the compiler doesn't resolve "cid" to a compile-time
> > constant value, cpu_feature_enabled() won't query DISABLED_MASK_BIT_SET(). I don't
> > see any other use of cpu_feature_enabled() without a hardcoded X86_FEATURE_*, and
> > the below compiles with my config, so I think/hope we can just require a compile-time
> > constant when using cpu_feature_enabled().
> >
>
> Yea I think that should work. I’ll club that into v2 of this patch.
I recommend doing it as a separate patch, hardening cpu_feature_enabled() shouldn't
have a dependency on tweaking the xfeatures mask. I tested this with an allyesconfig
if you want to throw it in as a prep patch.
---
From: Sean Christopherson <[email protected]>
Date: Wed, 31 May 2023 09:41:12 -0700
Subject: [PATCH] x86/cpufeature: Require compile-time constant in
cpu_feature_enabled()
Assert that the to-be-checked bit passed to cpu_feature_enabled() is a
compile-time constant instead of applying the DISABLED_MASK_BIT_SET()
logic if and only if the bit is a constant. Conditioning the check on
the bit being constant instead of requiring the bit to be constant could
result in compiler specific kernel behavior, e.g. running on hardware that
supports a disabled feature would return %false if the compiler resolved
the bit to a constant, but %true if not.
All current usage of cpu_feature_enabled() specifies a hardcoded
X86_FEATURE_* flag, so this *should* be a glorified nop.
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/include/asm/cpufeature.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index ce0c8f7d3218..886200fbf8d9 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -141,8 +141,11 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
* supporting a possible guest feature where host support for it
* is not relevant.
*/
-#define cpu_feature_enabled(bit) \
- (__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit))
+#define cpu_feature_enabled(bit) \
+({ \
+ BUILD_BUG_ON(!__builtin_constant_p(bit)); \
+ DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit); \
+})
#define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
base-commit: 39428f6ea9eace95011681628717062ff7f5eb5f
--
> > I'm totally ok gating xfeature bits on cpu_feature_enabled(), but there should be
> > a prep patch for KVM to clear feature bits in kvm_cpu_caps if the corresponding
> > XCR0/XSS bit is not set in the host. If KVM ever wants to expose an xstate feature
> > (other than MPX) that's disabled in the host, then we can revisit
> > fpu__init_system_xstate(). But we need to ensure the "failure" mode is that
> > KVM doesn't advertise the feature, as opposed to advertising a feature without
> > context switching its data.
>
>
> Looking into this, trying to understand the comment about a feature being used
> without context switching its data.
>
> In __kvm_x86_vendor_init() we’re already populating host_xcr0 using the
> XCR_XFEATURE_ENABLED_MASK, which should be populated on boot
> by fpu__init_cpu_xstate(), which happens almost immediately after the code that I
> modified in this commit.
>
> That then flows into guest_supported_xcr0 (as well as user_xfeatures).
> guest_supported_xcr0 is then plumbed into __kvm_set_xcr, which specifically says
> that we’re using that to prevent the guest from setting bits that we won’t save in the
> first place.
>
> /*
> * Do not allow the guest to set bits that we do not support
> * saving. However, xcr0 bit 0 is always set, even if the
> * emulated CPU does not support XSAVE (see kvm_vcpu_reset()).
> */
> valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
>
> Wouldn’t this mean that the *guest* xstate initialization would not see a given
> feature too and take care of the problem naturally?
>
> Or are you saying you’d want an even more detailed clearing?
The CPUID bits that enumerate support for a feature are independent from the CPUID
bits that enumerate what XCR0 bits are supported, i.e. what features can be saved
and restored via XSAVE/XRSTOR.
KVM does mostly account for host XCR0, but in a very ad hoc way. E.g. MPX is
handled by manually checking host XCR0.
if (kvm_mpx_supported())
kvm_cpu_cap_check_and_set(X86_FEATURE_MPX);
PKU manually checks too, but indirectly by looking at whether or not the kernel
has enabled CR4.OSPKE.
if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
kvm_cpu_cap_clear(X86_FEATURE_PKU);
But unless I'm missing something, the various AVX and AMX bits rely solely on
boot_cpu_data, i.e. would break if someone added CONFIG_X86_AVX or CONFIG_X86_AMX.
> On May 31, 2023, at 4:18 PM, Sean Christopherson <[email protected]> wrote:
>
> On Wed, May 31, 2023, Jon Kohler wrote:
>>
>>> On May 31, 2023, at 12:30 PM, Sean Christopherson <[email protected]> wrote:
>>> Caveat #1: cpu_feature_enabled() has a flaw that's relevant to this code: in the
>>> unlikely scenario that the compiler doesn't resolve "cid" to a compile-time
>>> constant value, cpu_feature_enabled() won't query DISABLED_MASK_BIT_SET(). I don't
>>> see any other use of cpu_feature_enabled() without a hardcoded X86_FEATURE_*, and
>>> the below compiles with my config, so I think/hope we can just require a compile-time
>>> constant when using cpu_feature_enabled().
>>>
>>
>> Yea I think that should work. I’ll club that into v2 of this patch.
>
> I recommend doing it as a separate patch, hardening cpu_feature_enabled() shouldn't
> have a dependency on tweaking the xfeatures mask. I tested this with an allyesconfig
> if you want to throw it in as a prep patch.
Ack, I’ll do that and make this into a small series, thanks for the help!
>
> ---
> From: Sean Christopherson <[email protected]>
> Date: Wed, 31 May 2023 09:41:12 -0700
> Subject: [PATCH] x86/cpufeature: Require compile-time constant in
> cpu_feature_enabled()
>
> Assert that the to-be-checked bit passed to cpu_feature_enabled() is a
> compile-time constant instead of applying the DISABLED_MASK_BIT_SET()
> logic if and only if the bit is a constant. Conditioning the check on
> the bit being constant instead of requiring the bit to be constant could
> result in compiler specific kernel behavior, e.g. running on hardware that
> supports a disabled feature would return %false if the compiler resolved
> the bit to a constant, but %true if not.
>
> All current usage of cpu_feature_enabled() specifies a hardcoded
> X86_FEATURE_* flag, so this *should* be a glorified nop.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> arch/x86/include/asm/cpufeature.h | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
> index ce0c8f7d3218..886200fbf8d9 100644
> --- a/arch/x86/include/asm/cpufeature.h
> +++ b/arch/x86/include/asm/cpufeature.h
> @@ -141,8 +141,11 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
> * supporting a possible guest feature where host support for it
> * is not relevant.
> */
> -#define cpu_feature_enabled(bit) \
> - (__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit))
> +#define cpu_feature_enabled(bit) \
> +({ \
> + BUILD_BUG_ON(!__builtin_constant_p(bit)); \
> + DISABLED_MASK_BIT_SET(bit) ? 0 : static_cpu_has(bit); \
> +})
>
> #define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
>
>
> base-commit: 39428f6ea9eace95011681628717062ff7f5eb5f
> --
>
>>> I'm totally ok gating xfeature bits on cpu_feature_enabled(), but there should be
>>> a prep patch for KVM to clear feature bits in kvm_cpu_caps if the corresponding
>>> XCR0/XSS bit is not set in the host. If KVM ever wants to expose an xstate feature
>>> (other than MPX) that's disabled in the host, then we can revisit
>>> fpu__init_system_xstate(). But we need to ensure the "failure" mode is that
>>> KVM doesn't advertise the feature, as opposed to advertising a feature without
>>> context switching its data.
>>
>>
>> Looking into this, trying to understand the comment about a feature being used
>> without context switching its data.
>>
>> In __kvm_x86_vendor_init() we’re already populating host_xcr0 using the
>> XCR_XFEATURE_ENABLED_MASK, which should be populated on boot
>> by fpu__init_cpu_xstate(), which happens almost immediately after the code that I
>> modified in this commit.
>>
>> That then flows into guest_supported_xcr0 (as well as user_xfeatures).
>> guest_supported_xcr0 is then plumbed into __kvm_set_xcr, which specifically says
>> that we’re using that to prevent the guest from setting bits that we won’t save in the
>> first place.
>>
>> /*
>> * Do not allow the guest to set bits that we do not support
>> * saving. However, xcr0 bit 0 is always set, even if the
>> * emulated CPU does not support XSAVE (see kvm_vcpu_reset()).
>> */
>> valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
>>
>> Wouldn’t this mean that the *guest* xstate initialization would not see a given
>> feature too and take care of the problem naturally?
>>
>> Or are you saying you’d want an even more detailed clearing?
>
> The CPUID bits that enumerate support for a feature are independent from the CPUID
> bits that enumerate what XCR0 bits are supported, i.e. what features can be saved
> and restored via XSAVE/XRSTOR.
>
> KVM does mostly account for host XCR0, but in a very ad hoc way. E.g. MPX is
> handled by manually checking host XCR0.
>
> if (kvm_mpx_supported())
> kvm_cpu_cap_check_and_set(X86_FEATURE_MPX);
>
> PKU manually checks too, but indirectly by looking at whether or not the kernel
> has enabled CR4.OSPKE.
>
> if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
> kvm_cpu_cap_clear(X86_FEATURE_PKU);
>
> But unless I'm missing something, the various AVX and AMX bits rely solely on
> boot_cpu_data, i.e. would break if someone added CONFIG_X86_AVX or CONFIG_X86_AMX.
What if we simply moved static unsigned short xsave_cpuid_features[] … into xstate.h, which
is already included in arch/x86/kvm/cpuid.c, and did something similar to what I’m proposing in this
patch already?
This would future-proof against such breakages, I’d imagine?
void kvm_set_cpu_caps(void)
{
...
/*
* Clear CPUID for XSAVE features that are disabled.
*/
for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
unsigned short cid = xsave_cpuid_features[i];
/* Careful: X86_FEATURE_FPU is 0! */
if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
!cpu_feature_enabled(cid))
kvm_cpu_cap_clear(cid);
}
...
}
On Wed, May 31, 2023 at 08:18:34PM +0000, Sean Christopherson wrote:
> Assert that the to-be-checked bit passed to cpu_feature_enabled() is a
> compile-time constant instead of applying the DISABLED_MASK_BIT_SET()
> logic if and only if the bit is a constant. Conditioning the check on
> the bit being constant instead of requiring the bit to be constant could
> result in compiler specific kernel behavior, e.g. running on hardware that
> supports a disabled feature would return %false if the compiler resolved
> the bit to a constant, but %true if not.
Uff, more mirroring CPUID inconsistencies.
So *actually*, we should clear all those build-time disabled bits from
x86_capability so that this doesn't happen.
> All current usage of cpu_feature_enabled() specifies a hardcoded
> X86_FEATURE_* flag, so this *should* be a glorified nop.
Also, pls add here the exact example which prompted this as I'm sure we
will forget why this was done.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
On Wed, May 31, 2023, Borislav Petkov wrote:
> On Wed, May 31, 2023 at 08:18:34PM +0000, Sean Christopherson wrote:
> > Assert that the to-be-checked bit passed to cpu_feature_enabled() is a
> > compile-time constant instead of applying the DISABLED_MASK_BIT_SET()
> > logic if and only if the bit is a constant. Conditioning the check on
> > the bit being constant instead of requiring the bit to be constant could
> > result in compiler specific kernel behavior, e.g. running on hardware that
> > supports a disabled feature would return %false if the compiler resolved
> > the bit to a constant, but %true if not.
>
> Uff, more mirroring CPUID inconsistencies.
>
> So *actually*, we should clear all those build-time disabled bits from
> x86_capability so that this doesn't happen.
Heh, I almost suggested that, but there is a non-zero amount of code that wants
to ignore the disabled bits and query the "raw" CPUID information. In quotes
because the kernel still massages x86_capability. Changing that behavior will
require auditing a lot of code, because in most cases any breakage will be mostly
silent, e.g. loss of features/performance and not explosions.
E.g. KVM emulates UMIP when it's not supported in hardware, and so advertises UMIP
support irrespective of hardware/host support. But emulating UMIP is imperfect
and suboptimal (requires intercepting L*DT instructions), so KVM intercepts L*DT
instructions iff UMIP is not supported in hardware, as detected by
boot_cpu_has(X86_FEATURE_UMIP).
The comment for cpu_feature_enabled() even calls out this type of use case:
Use the cpu_has() family if you want true runtime testing of CPU features, like
in hypervisor code where you are supporting a possible guest feature where host
support for it is not relevant.
That said, the behavior of cpu_has() is wildly inconsistent, e.g. LA57 is
indirectly cleared in x86_capability if it's a disabled bit because of this code
in early_identify_cpu().
if (!pgtable_l5_enabled())
setup_clear_cpu_cap(X86_FEATURE_LA57);
KVM works around that by manually doing CPUID to query hardware directly:
/* Set LA57 based on hardware capability. */
if (cpuid_ecx(7) & F(LA57))
kvm_cpu_cap_set(X86_FEATURE_LA57);
So yeah, I 100% agree the current state is messy and would love to have
cpu_feature_enabled() be a pure optimization with respect to boot_cpu_has(), but
it's not as trivial as it looks.
> On May 31, 2023, at 5:09 PM, Jon Kohler <[email protected]> wrote:
>
>
>>
>> The CPUID bits that enumerate support for a feature are independent from the CPUID
>> bits that enumerate what XCR0 bits are supported, i.e. what features can be saved
>> and restored via XSAVE/XRSTOR.
>>
>> KVM does mostly account for host XCR0, but in a very ad hoc way. E.g. MPX is
>> handled by manually checking host XCR0.
>>
>> 	if (kvm_mpx_supported())
>> 		kvm_cpu_cap_check_and_set(X86_FEATURE_MPX);
>>
>> PKU manually checks too, but indirectly by looking at whether or not the kernel
>> has enabled CR4.OSPKE.
>>
>> 	if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
>> 		kvm_cpu_cap_clear(X86_FEATURE_PKU);
>>
>> But unless I'm missing something, the various AVX and AMX bits rely solely on
>> boot_cpu_data, i.e. would break if someone added CONFIG_X86_AVX or CONFIG_X86_AMX.
>
> What if we simply moved static unsigned short xsave_cpuid_features[] … into xstate.h, which
> is already included in arch/x86/kvm/cpuid.c, and did something similar to what I’m proposing in this
> patch already?
>
> This would future proof such breakages I’d imagine?
>
> void kvm_set_cpu_caps(void)
> {
> 	...
> 	/*
> 	 * Clear CPUID for XSAVE features that are disabled.
> 	 */
> 	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
> 		unsigned short cid = xsave_cpuid_features[i];
>
> 		/* Careful: X86_FEATURE_FPU is 0! */
> 		if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
> 		    !cpu_feature_enabled(cid))
> 			kvm_cpu_cap_clear(cid);
> 	}
> 	…
> }
>
Sean - following up on this rough idea above, wanted to validate that this is the
direction you were thinking of: having kvm_set_cpu_caps() clear caps when a particular
xsave feature is disabled?
On Mon, Jun 05, 2023, Jon Kohler wrote:
> > On May 31, 2023, at 5:09 PM, Jon Kohler <[email protected]> wrote:
> >> The CPUID bits that enumerate support for a feature are independent from the CPUID
> >> bits that enumerate what XCR0 bits are supported, i.e. what features can be saved
> >> and restored via XSAVE/XRSTOR.
> >>
> >> KVM does mostly account for host XCR0, but in a very ad hoc way. E.g. MPX is
> >> handled by manually checking host XCR0.
> >>
> >> 	if (kvm_mpx_supported())
> >> 		kvm_cpu_cap_check_and_set(X86_FEATURE_MPX);
> >>
> >> PKU manually checks too, but indirectly by looking at whether or not the kernel
> >> has enabled CR4.OSPKE.
> >>
> >> 	if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
> >> 		kvm_cpu_cap_clear(X86_FEATURE_PKU);
> >>
> >> But unless I'm missing something, the various AVX and AMX bits rely solely on
> >> boot_cpu_data, i.e. would break if someone added CONFIG_X86_AVX or CONFIG_X86_AMX.
> >
> > What if we simply moved static unsigned short xsave_cpuid_features[] … into
> > xstate.h, which is already included in arch/x86/kvm/cpuid.c, and did
> > something similar to what I’m proposing in this patch already?
> >
> > This would future proof such breakages I’d imagine?
> >
> > void kvm_set_cpu_caps(void)
> > {
> > 	...
> > 	/*
> > 	 * Clear CPUID for XSAVE features that are disabled.
> > 	 */
> > 	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
> > 		unsigned short cid = xsave_cpuid_features[i];
> >
> > 		/* Careful: X86_FEATURE_FPU is 0! */
> > 		if ((i != XFEATURE_FP && !cid) || !boot_cpu_has(cid) ||
> > 		    !cpu_feature_enabled(cid))
> > 			kvm_cpu_cap_clear(cid);
> > 	}
> > 	…
> > }
> >
>
> Sean - following up on this rough idea above, wanted to validate that this is
> the direction you were thinking of: having kvm_set_cpu_caps() clear caps when a
> particular xsave feature is disabled?
Ya, more or less. But for KVM, that should be kvm_cpu_cap_has(), not boot_cpu_has().
And then I think KVM could actually WARN on a feature being disabled, i.e. put up
a tripwire to detect if things change in the future and the kernel lets the user
disable a feature that KVM wants to expose to a guest.
Side topic, I find the "cid" nomenclature super confusing, and the established
name in KVM is x86_feature.
Something like this?
	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
		unsigned int x86_feature = xsave_cpuid_features[i];

		if (i != XFEATURE_FP && !x86_feature)
			continue;

		if (!kvm_cpu_cap_has(x86_feature))
			continue;

		if (WARN_ON_ONCE(!cpu_feature_enabled(x86_feature)))
			kvm_cpu_cap_clear(x86_feature);
	}