2016-03-08 11:44:38

by Paolo Bonzini

Subject: [PATCH 0/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0

I found this while testing the permission_fault patch with ept=0.

Paolo Bonzini (2):
KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0
combo
KVM: MMU: fix reserved bit check for
pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0

Documentation/virtual/kvm/mmu.txt | 3 ++-
arch/x86/kvm/mmu.c | 4 +++-
arch/x86/kvm/vmx.c | 25 +++++++++++++++----------
3 files changed, 20 insertions(+), 12 deletions(-)

--
1.8.3.1


2016-03-08 11:45:01

by Paolo Bonzini

Subject: [PATCH 2/2] KVM: MMU: fix reserved bit check for pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0

KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
setting U=0 and W=1 in the shadow PTE. This will cause a user write
to fault and a supervisor write to succeed (which is correct because
CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.
This enables user reads; it also disables supervisor writes, the next
of which will then flip the bits again.

When SMEP is in effect, however, pte.u=0 will enable kernel execution
of this page. To avoid this, KVM also sets pte.nx=1. The reserved bit
check, however, flags such SPTEs as invalid, because it only looks at the
guest's EFER.NX bit. Teach it that smep_andnot_wp will also use the NX
bit of SPTEs.

Cc: [email protected]
Cc: Xiao Guangrong <[email protected]>
Fixes: c258b62b264fdc469b6d3610a907708068145e3b
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955de5964..0cd4ee01de94 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3721,13 +3721,15 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
void
reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
{
+ int uses_nx = context->nx || context->base_role.smep_andnot_wp;
+
/*
* Passing "true" to the last argument is okay; it adds a check
* on bit 8 of the SPTEs which KVM doesn't use anyway.
*/
__reset_rsvds_bits_mask(vcpu, &context->shadow_zero_check,
boot_cpu_data.x86_phys_bits,
- context->shadow_root_level, context->nx,
+ context->shadow_root_level, uses_nx,
guest_cpuid_has_gbpages(vcpu), is_pse(vcpu),
true);
}
--
1.8.3.1

2016-03-08 11:46:53

by Paolo Bonzini

Subject: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo

Yes, all of these are needed. :) This is admittedly a bit odd, but
kvm-unit-tests access.flat tests this if you run it with "-cpu host"
and of course ept=0.

KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
setting U=0 and W=1 in the shadow PTE. This will cause a user write
to fault and a supervisor write to succeed (which is correct because
CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.
This enables user reads; it also disables supervisor writes, the next
of which will then flip the bits again.

When SMEP is in effect, however, U=0 will enable kernel execution of
this page. To avoid this, KVM also sets NX=1 in the shadow PTE together
with U=0. If the guest has not enabled NX, the result is a continuous
stream of page faults due to the NX bit being reserved.

The fix is to force EFER.NX=1 even if the CPU is taking care of the EFER
switch.

There is another bug in the reserved bit check, which I've split to a
separate patch for easier application to stable kernels.

Cc: [email protected]
Cc: Xiao Guangrong <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Fixes: f6577a5fa15d82217ca73c74cd2dcbc0f6c781dd
Signed-off-by: Paolo Bonzini <[email protected]>
---
Documentation/virtual/kvm/mmu.txt | 3 ++-
arch/x86/kvm/vmx.c | 25 +++++++++++++++----------
2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
index daf9c0f742d2..c81731096a43 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -358,7 +358,8 @@ In the first case there are two additional complications:
- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
the kernel may now execute it. We handle this by also setting spte.nx.
If we get a user fetch or read fault, we'll change spte.u=1 and
- spte.nx=gpte.nx back.
+ spte.nx=gpte.nx back. For this to work, KVM forces EFER.NX to 1 when
+ shadow paging is in use.
- if CR4.SMAP is disabled: since the page has been changed to a kernel
page, it can not be reused when CR4.SMAP is enabled. We set
CR4.SMAP && !CR0.WP into shadow page's role to avoid this case. Note,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6e51493ff4f9..91830809d837 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1863,20 +1863,20 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
guest_efer = vmx->vcpu.arch.efer;

/*
- * NX is emulated; LMA and LME handled by hardware; SCE meaningless
- * outside long mode
+ * LMA and LME handled by hardware; SCE meaningless outside long mode.
*/
- ignore_bits = EFER_NX | EFER_SCE;
+ ignore_bits = EFER_SCE;
#ifdef CONFIG_X86_64
ignore_bits |= EFER_LMA | EFER_LME;
/* SCE is meaningful only in long mode on Intel */
if (guest_efer & EFER_LMA)
ignore_bits &= ~(u64)EFER_SCE;
#endif
- guest_efer &= ~ignore_bits;
- guest_efer |= host_efer & ignore_bits;
- vmx->guest_msrs[efer_offset].data = guest_efer;
- vmx->guest_msrs[efer_offset].mask = ~ignore_bits;
+ /* NX is needed to handle CR0.WP=1, CR4.SMEP=1. */
+ if (!enable_ept) {
+ guest_efer |= EFER_NX;
+ ignore_bits |= EFER_NX;
+ }

clear_atomic_switch_msr(vmx, MSR_EFER);

@@ -1887,16 +1887,21 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
*/
if (cpu_has_load_ia32_efer ||
(enable_ept && ((vmx->vcpu.arch.efer ^ host_efer) & EFER_NX))) {
- guest_efer = vmx->vcpu.arch.efer;
if (!(guest_efer & EFER_LMA))
guest_efer &= ~EFER_LME;
if (guest_efer != host_efer)
add_atomic_switch_msr(vmx, MSR_EFER,
guest_efer, host_efer);
return false;
- }
+ } else {
+ guest_efer &= ~ignore_bits;
+ guest_efer |= host_efer & ignore_bits;

- return true;
+ vmx->guest_msrs[efer_offset].data = guest_efer;
+ vmx->guest_msrs[efer_offset].mask = ~ignore_bits;
+
+ return true;
+ }
}

static unsigned long segment_base(u16 selector)
--
1.8.3.1


2016-03-10 08:28:05

by Xiao Guangrong

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 03/08/2016 07:44 PM, Paolo Bonzini wrote:
> Yes, all of these are needed. :) This is admittedly a bit odd, but
> kvm-unit-tests access.flat tests this if you run it with "-cpu host"
> and of course ept=0.
>
> KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
> setting U=0 and W=1 in the shadow PTE. This will cause a user write
> to fault and a supervisor write to succeed (which is correct because
> CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.
> This enables user reads; it also disables supervisor writes, the next
> of which will then flip the bits again.
>
> When SMEP is in effect, however, U=0 will enable kernel execution of
> this page. To avoid this, KVM also sets NX=1 in the shadow PTE together
> with U=0. If the guest has not enabled NX, the result is a continuous
> stream of page faults due to the NX bit being reserved.
>
> The fix is to force EFER.NX=1 even if the CPU is taking care of the EFER
> switch.

Good catch!

So it only hurts boxes that have cpu_has_load_ia32_efer support; otherwise
NX is inherited from the kernel (the kernel always sets NX if the CPU
supports it), right?

>
> There is another bug in the reserved bit check, which I've split to a
> separate patch for easier application to stable kernels.
>

> Cc: [email protected]
> Cc: Xiao Guangrong <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Fixes: f6577a5fa15d82217ca73c74cd2dcbc0f6c781dd
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> Documentation/virtual/kvm/mmu.txt | 3 ++-
> arch/x86/kvm/vmx.c | 25 +++++++++++++++----------
> 2 files changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
> index daf9c0f742d2..c81731096a43 100644
> --- a/Documentation/virtual/kvm/mmu.txt
> +++ b/Documentation/virtual/kvm/mmu.txt
> @@ -358,7 +358,8 @@ In the first case there are two additional complications:
> - if CR4.SMEP is enabled: since we've turned the page into a kernel page,
> the kernel may now execute it. We handle this by also setting spte.nx.
> If we get a user fetch or read fault, we'll change spte.u=1 and
> - spte.nx=gpte.nx back.
> + spte.nx=gpte.nx back. For this to work, KVM forces EFER.NX to 1 when
> + shadow paging is in use.
> - if CR4.SMAP is disabled: since the page has been changed to a kernel
> page, it can not be reused when CR4.SMAP is enabled. We set
> CR4.SMAP && !CR0.WP into shadow page's role to avoid this case. Note,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 6e51493ff4f9..91830809d837 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -1863,20 +1863,20 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
> guest_efer = vmx->vcpu.arch.efer;
>
> /*
> - * NX is emulated; LMA and LME handled by hardware; SCE meaningless
> - * outside long mode
> + * LMA and LME handled by hardware; SCE meaningless outside long mode.
> */
> - ignore_bits = EFER_NX | EFER_SCE;
> + ignore_bits = EFER_SCE;
> #ifdef CONFIG_X86_64
> ignore_bits |= EFER_LMA | EFER_LME;
> /* SCE is meaningful only in long mode on Intel */
> if (guest_efer & EFER_LMA)
> ignore_bits &= ~(u64)EFER_SCE;
> #endif
> - guest_efer &= ~ignore_bits;
> - guest_efer |= host_efer & ignore_bits;
> - vmx->guest_msrs[efer_offset].data = guest_efer;
> - vmx->guest_msrs[efer_offset].mask = ~ignore_bits;
> + /* NX is needed to handle CR0.WP=1, CR4.SMEP=1. */

> + if (!enable_ept) {
> + guest_efer |= EFER_NX;
> + ignore_bits |= EFER_NX;

Updating ignore_bits is not necessary, I think.

> + }
>
> clear_atomic_switch_msr(vmx, MSR_EFER);
>
> @@ -1887,16 +1887,21 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
> */
> if (cpu_has_load_ia32_efer ||
> (enable_ept && ((vmx->vcpu.arch.efer ^ host_efer) & EFER_NX))) {
> - guest_efer = vmx->vcpu.arch.efer;
> if (!(guest_efer & EFER_LMA))
> guest_efer &= ~EFER_LME;
> if (guest_efer != host_efer)
> add_atomic_switch_msr(vmx, MSR_EFER,
> guest_efer, host_efer);

So, why not set EFER_NX (if !ept) just in this branch, to make the fix simpler?

2016-03-10 08:37:32

by Xiao Guangrong

Subject: Re: [PATCH 2/2] KVM: MMU: fix reserved bit check for pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0



On 03/08/2016 07:44 PM, Paolo Bonzini wrote:
> KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
> setting U=0 and W=1 in the shadow PTE. This will cause a user write
> to fault and a supervisor write to succeed (which is correct because
> CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.
> This enables user reads; it also disables supervisor writes, the next
> of which will then flip the bits again.
>
> When SMEP is in effect, however, pte.u=0 will enable kernel execution
> of this page. To avoid this, KVM also sets pte.nx=1. The reserved bit
> catches this because it only looks at the guest's EFER.NX bit. Teach it
> that smep_andnot_wp will also use the NX bit of SPTEs.
>
> Cc: [email protected]
> Cc: Xiao Guangrong <[email protected]>

As a Red Hat guy I am so proud. :)

> Fixes: c258b62b264fdc469b6d3610a907708068145e3b

Thanks for fixing it, Paolo!

Reviewed-by: Xiao Guangrong <[email protected]>

> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> arch/x86/kvm/mmu.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 95a955de5964..0cd4ee01de94 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3721,13 +3721,15 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
> void
> reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
> {
> + int uses_nx = context->nx || context->base_role.smep_andnot_wp;

It would be better if it were 'bool'.

2016-03-10 08:46:46

by Xiao Guangrong

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 03/08/2016 07:44 PM, Paolo Bonzini wrote:
> Yes, all of these are needed. :) This is admittedly a bit odd, but
> kvm-unit-tests access.flat tests this if you run it with "-cpu host"
> and of course ept=0.
>
> KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
> setting U=0 and W=1 in the shadow PTE. This will cause a user write
> to fault and a supervisor write to succeed (which is correct because
> CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.

BTW, it should be pte.u=1 where you mentioned it above.

2016-03-10 10:02:58

by Paolo Bonzini

Subject: Re: [PATCH 2/2] KVM: MMU: fix reserved bit check for pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0



On 10/03/2016 09:36, Xiao Guangrong wrote:
>
>
> On 03/08/2016 07:44 PM, Paolo Bonzini wrote:
>> KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
>> setting U=0 and W=1 in the shadow PTE. This will cause a user write
>> to fault and a supervisor write to succeed (which is correct because
>> CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.
>> This enables user reads; it also disables supervisor writes, the next
>> of which will then flip the bits again.
>>
>> When SMEP is in effect, however, pte.u=0 will enable kernel execution
>> of this page. To avoid this, KVM also sets pte.nx=1. The reserved bit
>> catches this because it only looks at the guest's EFER.NX bit. Teach it
>> that smep_andnot_wp will also use the NX bit of SPTEs.
>>
>> Cc: [email protected]
>> Cc: Xiao Guangrong <[email protected]>
>
> As a Red Hat guy I am so proud. :)
>
>> Fixes: c258b62b264fdc469b6d3610a907708068145e3b
>
> Thanks for fixing it, Paolo!
>
> Reviewed-by: Xiao Guangrong <[email protected]>
>
>> Signed-off-by: Paolo Bonzini <[email protected]>
>> ---
>> arch/x86/kvm/mmu.c | 4 +++-
>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 95a955de5964..0cd4ee01de94 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -3721,13 +3721,15 @@ static void reset_rsvds_bits_mask_ept(struct
>> kvm_vcpu *vcpu,
>> void
>> reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu
>> *context)
>> {
>> + int uses_nx = context->nx || context->base_role.smep_andnot_wp;
>
> It would be better if it were 'bool'.

Ok, will do.

Paolo

2016-03-10 10:03:18

by Paolo Bonzini

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 10/03/2016 09:46, Xiao Guangrong wrote:
>
>> Yes, all of these are needed. :) This is admittedly a bit odd, but
>> kvm-unit-tests access.flat tests this if you run it with "-cpu host"
>> and of course ept=0.
>>
>> KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
>> setting U=0 and W=1 in the shadow PTE. This will cause a user write
>> to fault and a supervisor write to succeed (which is correct because
>> CR0.WP=0). A user read instead will flip U=0 to 1 and W=1 back to 0.
>
> BTW, it should be pte.u=1 where you mentioned it above.

Ok, will fix.

Paolo

2016-03-10 10:09:37

by Paolo Bonzini

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 10/03/2016 09:27, Xiao Guangrong wrote:
>>
>
>> + if (!enable_ept) {
>> + guest_efer |= EFER_NX;
>> + ignore_bits |= EFER_NX;
>
> Updating ignore_bits is not necessary, I think.

More precisely, ignore_bits is only needed if guest EFER.NX=0 and we're
not in this CR0.WP=0/CR4.SMEP=1 situation. In theory you could have
guest EFER.NX=1 and host EFER.NX=0.

This is what I came up with (plus some comments :)):

u64 guest_efer = vmx->vcpu.arch.efer;
u64 ignore_bits = 0;

if (!enable_ept) {
if (boot_cpu_has(X86_FEATURE_SMEP))
guest_efer |= EFER_NX;
else if (!(guest_efer & EFER_NX))
ignore_bits |= EFER_NX;
}

>> - guest_efer = vmx->vcpu.arch.efer;
>> if (!(guest_efer & EFER_LMA))
>> guest_efer &= ~EFER_LME;
>> if (guest_efer != host_efer)
>> add_atomic_switch_msr(vmx, MSR_EFER,
>> guest_efer, host_efer);
>
> So, why not set EFER_NX (if !ept) just in this branch, to make the fix
> simpler?

I didn't like having

guest_efer = vmx->vcpu.arch.efer;
...
if (!enable_ept)
guest_efer |= EFER_NX;
guest_efer &= ~ignore_bits;
guest_efer |= host_efer & ignore_bits;
...
if (...) {
guest_efer = vmx->vcpu.arch.efer;
if (!enable_ept)
guest_efer |= EFER_NX;
...
}

My patch is bigger but the resulting code is smaller and easier to follow:

guest_efer = vmx->vcpu.arch.efer;
if (!enable_ept)
guest_efer |= EFER_NX;
...
if (...) {
...
} else {
guest_efer &= ~ignore_bits;
guest_efer |= host_efer & ignore_bits;
}

Paolo

2016-03-10 10:47:45

by Paolo Bonzini

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 10/03/2016 09:27, Xiao Guangrong wrote:
> So it only hurts the box which has cpu_has_load_ia32_efer support otherwise
> NX is inherited from kernel (kernel always sets NX if CPU supports it),
> right?

Yes, but I think the !cpu_has_load_ia32_efer && SMEP combination does not
exist in practice. On the other hand it really only matters when EPT is
disabled, so it's a weird corner case that only happens during testing.

Paolo

2016-03-10 12:15:05

by Xiao Guangrong

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 03/10/2016 06:09 PM, Paolo Bonzini wrote:
>
>
> On 10/03/2016 09:27, Xiao Guangrong wrote:
>>>
>>
>>> + if (!enable_ept) {
>>> + guest_efer |= EFER_NX;
>>> + ignore_bits |= EFER_NX;
>>
>> Updating ignore_bits is not necessary, I think.
>
> More precisely, ignore_bits is only needed if guest EFER.NX=0 and we're
> not in this CR0.WP=0/CR4.SMEP=1 situation. In theory you could have
> guest EFER.NX=1 and host EFER.NX=0.

That does not happen with Linux: the kernel always sets EFER.NX if CPUID
reports it. From arch/x86/kernel/head_64.S:

204 /* Setup EFER (Extended Feature Enable Register) */
205 movl $MSR_EFER, %ecx
206 rdmsr
207 btsl $_EFER_SCE, %eax /* Enable System Call */
208 btl $20,%edi /* No Execute supported? */
209 jnc 1f
210 btsl $_EFER_NX, %eax
211 btsq $_PAGE_BIT_NX,early_pmd_flags(%rip)
212 1: wrmsr /* Make changes effective */

So if the guest sees NX in its CPUID, then host EFER.NX should be 1.

>
> This is what I came up with (plus some comments :)):
>
> u64 guest_efer = vmx->vcpu.arch.efer;
> u64 ignore_bits = 0;
>
> if (!enable_ept) {
> if (boot_cpu_has(X86_FEATURE_SMEP))
> guest_efer |= EFER_NX;
> else if (!(guest_efer & EFER_NX))
> ignore_bits |= EFER_NX;
> }

Your logic is very right.

My suggestion is that we can keep ignore_bits = EFER_NX | EFER_SCE
(there is no need to adjust it conditionally), because EFER_NX must be
the same between guest and host if we switch EFER manually.

> My patch is bigger but the resulting code is smaller and easier to follow:
>
> guest_efer = vmx->vcpu.arch.efer;
> if (!enable_ept)
> guest_efer |= EFER_NX;
> ...
> if (...) {
> ...
> } else {
> guest_efer &= ~ignore_bits;
> guest_efer |= host_efer & ignore_bits;
> }

Agreed. :)

2016-03-10 12:26:25

by Paolo Bonzini

Subject: Re: [PATCH 1/2] KVM: MMU: fix ept=0/pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0 combo



On 10/03/2016 13:14, Xiao Guangrong wrote:
>> More precisely, ignore_bits is only needed if guest EFER.NX=0 and we're
>> not in this CR0.WP=0/CR4.SMEP=1 situation. In theory you could have
>> guest EFER.NX=1 and host EFER.NX=0.
>
> That does not happen with Linux: the kernel always sets EFER.NX if CPUID
> reports it. From arch/x86/kernel/head_64.S:
>
> 204 /* Setup EFER (Extended Feature Enable Register) */
> 205 movl $MSR_EFER, %ecx
> 206 rdmsr
> 207 btsl $_EFER_SCE, %eax /* Enable System Call */
> 208 btl $20,%edi /* No Execute supported? */
> 209 jnc 1f
> 210 btsl $_EFER_NX, %eax
> 211 btsq $_PAGE_BIT_NX,early_pmd_flags(%rip)
> 212 1: wrmsr /* Make changes effective */
>
> So if the guest sees NX in its CPUID, then host EFER.NX should be 1.

You're right; it's just theoretical. But ignoring EFER.NX when it is 1
is technically not correct; since we have to add some special EFER_NX
logic anyway, I preferred to make it pedantically right. :)

Paolo