2017-12-14 14:12:20

by Lan Tianyu

Subject: [PATCH V3] KVM/x86: Check input paging mode when cs.l is set

Reported by syzkaller:
WARNING: CPU: 0 PID: 27962 at arch/x86/kvm/emulate.c:5631 x86_emulate_insn+0x557/0x15f0 [kvm]
Modules linked in: kvm_intel kvm [last unloaded: kvm]
CPU: 0 PID: 27962 Comm: syz-executor Tainted: G B W 4.15.0-rc2-next-20171208+ #32
Hardware name: Intel Corporation S1200SP/S1200SP, BIOS S1200SP.86B.01.03.0006.040720161253 04/07/2016
RIP: 0010:x86_emulate_insn+0x557/0x15f0 [kvm]
RSP: 0018:ffff8807234476d0 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff88072d0237a0 RCX: ffffffffa0065c4d
RDX: 1ffff100e5a046f9 RSI: 0000000000000003 RDI: ffff88072d0237c8
RBP: ffff880723447728 R08: ffff88072d020000 R09: ffffffffa008d240
R10: 0000000000000002 R11: ffffed00e7d87db3 R12: ffff88072d0237c8
R13: ffff88072d023870 R14: ffff88072d0238c2 R15: ffffffffa008d080
FS: 00007f8a68666700(0000) GS:ffff880802200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000002009506c CR3: 000000071fec4005 CR4: 00000000003626f0
Call Trace:
x86_emulate_instruction+0x3bc/0xb70 [kvm]
? reexecute_instruction.part.162+0x130/0x130 [kvm]
vmx_handle_exit+0x46d/0x14f0 [kvm_intel]
? trace_event_raw_event_kvm_entry+0xe7/0x150 [kvm]
? handle_vmfunc+0x2f0/0x2f0 [kvm_intel]
? wait_lapic_expire+0x25/0x270 [kvm]
vcpu_enter_guest+0x720/0x1ef0 [kvm]
...

When CS.L is set, the vcpu should be running in 64-bit long mode.
The current kvm set_sregs path does not check for this when userspace
provides sreg values, which leads to the unexpected behavior above.
Add consistency checks for CS.L, EFER.LME, EFER.LMA and CR4.PAE on
the SREG input received from userspace and reject inconsistent
values with -EINVAL.
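
For reference, a minimal userspace sketch (a hypothetical test program,
not part of this patch) that feeds such an inconsistent state to
KVM_SET_SREGS; with the check in place the ioctl is expected to fail
with EINVAL instead of leaving the vcpu in a state the emulator cannot
handle:

/* Hypothetical reproducer sketch; error handling mostly omitted. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	struct kvm_sregs sregs;

	/* Start from the reset state so the other fields stay sane. */
	ioctl(vcpu, KVM_GET_SREGS, &sregs);

	/* Claim a 64-bit code segment without enabling long mode. */
	sregs.cs.l = 1;
	sregs.efer = 0;

	/* Rejected with EINVAL once the new check is in place. */
	if (ioctl(vcpu, KVM_SET_SREGS, &sregs) < 0)
		perror("KVM_SET_SREGS");

	return 0;
}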

Suggested-by: Paolo Bonzini <[email protected]>
Reported-by: Dmitry Vyukov <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Jim Mattson <[email protected]>
Signed-off-by: Tianyu Lan <[email protected]>
---
Changes since v2:
Adjust coding style.
Changes since v1:
Add checks for EFER.LME, EFER.LMA and CR4.PAE.
---
arch/x86/kvm/x86.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1e1e617..9eb1217 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7485,6 +7485,29 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
 }
 EXPORT_SYMBOL_GPL(kvm_task_switch);
 
+int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+{
+	if ((sregs->efer & EFER_LME) && (sregs->cr0 & X86_CR0_PG)) {
+		/*
+		 * When EFER.LME and CR0.PG are set, the processor is in
+		 * 64-bit mode (though maybe in a 32-bit code segment).
+		 * CR4.PAE and EFER.LMA must be set.
+		 */
+		if (!(sregs->cr4 & X86_CR4_PAE)
+		    || !(sregs->efer & EFER_LMA))
+			return -EINVAL;
+	} else {
+		/*
+		 * Not in 64-bit mode: EFER.LMA is clear and the code
+		 * segment cannot be 64-bit.
+		 */
+		if (sregs->efer & EFER_LMA || sregs->cs.l)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
 int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 				  struct kvm_sregs *sregs)
 {
@@ -7497,6 +7520,9 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 	    (sregs->cr4 & X86_CR4_OSXSAVE))
 		return -EINVAL;
 
+	if (kvm_valid_sregs(vcpu, sregs))
+		return -EINVAL;
+
 	apic_base_msr.data = sregs->apic_base;
 	apic_base_msr.host_initiated = true;
 	if (kvm_set_apic_base(vcpu, &apic_base_msr))
--
1.8.3.1


2017-12-15 09:02:29

by Paolo Bonzini

Subject: Re: [PATCH V3] KVM/x86: Check input paging mode when cs.l is set

On 14/12/2017 09:01, Lan Tianyu wrote:
> [...]

Queued, thanks.

Paolo