In guest protected mode, if the current privilege level
is not 0 and the PCE flag in the CR4 register is cleared,
we will inject a #GP for RDPMC usage.
Signed-off-by: Like Xu <[email protected]>
---
arch/x86/kvm/pmu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b86346903f2e..d080d475c808 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -372,6 +372,11 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (!pmc)
 		return 1;
 
+	if ((kvm_x86_ops.get_cpl(vcpu) != 0) &&
+	    !(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
+	    (kvm_read_cr4(vcpu) & X86_CR0_PE))
+		return 1;
+
 	*data = pmc_read_counter(pmc) & mask;
 	return 0;
 }
--
2.21.3
On Wed, Jul 08, 2020 at 03:44:09PM +0800, Like Xu wrote:
> In guest protected mode, if the current privilege level
> is not 0 and the PCE flag in the CR4 register is cleared,
> we will inject a #GP for RDPMC usage.
Wrapping at ~58 characters is a bit aggressive. checkpatch enforces 75
chars, something near that would be preferable.
> Signed-off-by: Like Xu <[email protected]>
> ---
> arch/x86/kvm/pmu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index b86346903f2e..d080d475c808 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -372,6 +372,11 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
> if (!pmc)
> return 1;
>
> + if ((kvm_x86_ops.get_cpl(vcpu) != 0) &&
> + !(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
> + (kvm_read_cr4(vcpu) & X86_CR0_PE))
This reads CR4 but checks CR0.PE.
And maybe put the X86_CR4_PCE check first so that it's the focus of the
statement?
> + return 1;
> +
> *data = pmc_read_counter(pmc) & mask;
> return 0;
> }
> --
> 2.21.3
>
On 08/07/20 17:18, Sean Christopherson wrote:
> On Wed, Jul 08, 2020 at 03:44:09PM +0800, Like Xu wrote:
>> In guest protected mode, if the current privilege level
>> is not 0 and the PCE flag in the CR4 register is cleared,
>> we will inject a #GP for RDPMC usage.
>
> Wrapping at ~58 characters is a bit aggressive. checkpatch enforces 75
> chars, something near that would be preferable.
>
>> Signed-off-by: Like Xu <[email protected]>
>> ---
>> arch/x86/kvm/pmu.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>> index b86346903f2e..d080d475c808 100644
>> --- a/arch/x86/kvm/pmu.c
>> +++ b/arch/x86/kvm/pmu.c
>> @@ -372,6 +372,11 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
>> if (!pmc)
>> return 1;
>>
>> + if ((kvm_x86_ops.get_cpl(vcpu) != 0) &&
>> + !(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
>> + (kvm_read_cr4(vcpu) & X86_CR0_PE))
>
> This reads CR4 but checks CR0.PE.
>
> And maybe put the X86_CR4_PCE check first so that it's the focus of the
> statement?
I'll squash this to fix it (I'm OOO next week and would like to get kvm/queue
sorted out these few days that I've left).
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d080d475c808..67741d2a0308 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -372,9 +372,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (!pmc)
 		return 1;
 
-	if ((kvm_x86_ops.get_cpl(vcpu) != 0) &&
-	    !(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
-	    (kvm_read_cr4(vcpu) & X86_CR0_PE))
+	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
+	    (kvm_x86_ops.get_cpl(vcpu) != 0) &&
+	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
 		return 1;
 
 	*data = pmc_read_counter(pmc) & mask;
The order follows the SDM. I'm tempted to remove the CR0 check
altogether, since non-protected-mode always runs at CPL0 AFAIK, but let's
keep it close to what the manual says.
Paolo
On Wed, Jul 08, 2020 at 05:31:14PM +0200, Paolo Bonzini wrote:
> The order follows the SDM. I'm tempted to remove the CR0 check
> altogether, since non-protected-mode always runs at CPL0 AFAIK, but let's
> keep it close to what the manual says.
Heh, it wouldn't surprise me in the least if there's a way to get the SS
arbyte to hold a non-zero DPL in real mode :-).
On 08/07/20 17:45, Sean Christopherson wrote:
> On Wed, Jul 08, 2020 at 05:31:14PM +0200, Paolo Bonzini wrote:
>> The order follows the SDM. I'm tempted to remove the CR0 check
>> altogether, since non-protected-mode always runs at CPL0 AFAIK, but let's
>> keep it close to what the manual says.
>
> Heh, it wouldn't surprise me in the least if there's a way to get the SS
> arbyte to hold a non-zero DPL in real mode :-).
I'm not sure if SMM lets you set non-zero SS.DPL in real mode. It's one
of the few things that are checked with unrestricted guest mode so
there's hope; on the other hand I know for sure that in the past RSM
could get you to VM86 mode with CPL=0, while in VMX it causes vmentry to
fail.
It would be an interesting testcase to write for KVM, to see if you get
a vmentry failure after you set the hidden AR bytes that way and RSM...
Paolo