2008-06-26 12:47:46

by Avi Kivity

Subject: [PATCH 00/50] KVM patches review for the 2.6.27 merge window

The following patchset contains the kvm updates for 2.6.27. Nothing
particularly exciting: NMI, MTRR, mmio coalescing, and the inevitable
x86 emulator work.

Anthony Liguori (1):
KVM: Handle vma regions with no backing page

Aurelien Jarno (1):
KVM: PIT: support mode 3

Avi Kivity (22):
KVM: VMX: Trivial vmcs_write64() code simplification
KVM: MMU: Fix false flooding when a pte points to page table
KVM: Handle virtualization instruction #UD faults during reboot
KVM: VMX: Add list of potentially locally cached vcpus
KVM: Remove decache_vcpus_on_cpu() and related callbacks
KVM: Remove unnecessary ->decache_regs() call
KVM: Order segment register constants in the same way as cpu operand
encoding
KVM: MTRR support
KVM: MMU: Optimize prefetch_page()
KVM: x86 emulator: simplify push imm8 emulation
KVM: x86 emulator: implement 'push imm' (opcode 0x68)
KVM: MMU: Move nonpaging_prefetch_page()
KVM: MMU: Avoid page prefetch on SVM
KVM: Use printk_ratelimit() instead of reporting emulation failures just
once
KVM: x86 emulator: handle undecoded rex.b with r/m = 5 in certain
cases
KVM: x86 emulator: simplify sib decoding
KVM: x86 emulator: simplify r/m decoding
KVM: x86 emulator: simplify rip relative decoding
KVM: x86 emulator: avoid segment base adjust for lea
KVM: x86 emulator: lazily evaluate segment registers
KVM: MMU: When debug is enabled, make it a run-time parameter
KVM: MMU: Fix printk format

Chris Lalancette (1):
KVM: SVM: Fake MSR_K7 performance counters

Christian Borntraeger (1):
KVM: remove long -> void *user -> long cast

Glauber Costa (1):
KVM: Do not calculate linear rip in emulation failure report

Guillaume Thouvenin (6):
KVM: Prefixes segment functions that will be exported with "kvm_"
KVM: x86 emulator: Update c->dst.bytes in decode instruction
KVM: x86 emulator: add support for jmp far 0xea
KVM: x86 emulator: adds support to mov r,imm (opcode 0xb8)
instruction
KVM: x86 emulator: Add support for mov seg, r (0x8e) instruction
KVM: x86 emulator: Add support for mov r, sreg (0x8c) instruction

Harvey Harrison (1):
KVM: add statics where possible, function definition in lapic.h

Joerg Roedel (6):
KVM: VMX: move APIC_ACCESS trace entry to generic code
KVM: SVM: implement dedicated NMI exit handler
KVM: SVM: implement dedicated INTR exit handler
KVM: add missing kvmtrace bits
KVM: SVM: add missing kvmtrace markers
KVM: SVM: add tracing support for TDP page faults

Laurent Vivier (5):
KVM: kvm_io_device: extend in_range() to manage len and write
attribute
KVM: Add coalesced MMIO support (common part)
KVM: Add coalesced MMIO support (x86 part)
KVM: Add coalesced MMIO support (powerpc part)
KVM: Add coalesced MMIO support (ia64 part)

Marcelo Tosatti (1):
KVM: only abort guest entry if timer count goes from 0->1

Mohammed Gamal (1):
KVM: x86 emulator: emulate nop and xchg reg, acc (opcodes 0x90 -
0x97)

Sheng Yang (2):
KVM: IOAPIC/LAPIC: Enable NMI support
KVM: VMX: Enable NMI with in-kernel irqchip

Tan, Li (1):
KVM: Support mixed endian machines

arch/ia64/kvm/Makefile | 3 +-
arch/ia64/kvm/kvm-ia64.c | 17 +--
arch/powerpc/kvm/Makefile | 2 +-
arch/powerpc/kvm/powerpc.c | 7 +-
arch/s390/kvm/kvm-s390.c | 4 -
arch/x86/kvm/Makefile | 3 +-
arch/x86/kvm/i8254.c | 24 ++--
arch/x86/kvm/i8259.c | 3 +-
arch/x86/kvm/lapic.c | 14 ++-
arch/x86/kvm/lapic.h | 1 +
arch/x86/kvm/mmu.c | 47 +++++--
arch/x86/kvm/mmu.h | 3 +-
arch/x86/kvm/paging_tmpl.h | 28 ++--
arch/x86/kvm/svm.c | 85 +++++++++----
arch/x86/kvm/vmx.c | 187 ++++++++++++++++++++------
arch/x86/kvm/vmx.h | 12 ++-
arch/x86/kvm/x86.c | 262 ++++++++++++++++++++-----------------
arch/x86/kvm/x86_emulate.c | 251 ++++++++++++++++++++++-------------
include/asm-ia64/kvm_host.h | 3 +
include/asm-powerpc/kvm_host.h | 2 +
include/asm-x86/kvm.h | 1 +
include/asm-x86/kvm_host.h | 43 ++++++-
include/asm-x86/kvm_x86_emulate.h | 11 +-
include/linux/kvm.h | 33 +++++-
include/linux/kvm_host.h | 10 +-
virt/kvm/coalesced_mmio.c | 156 ++++++++++++++++++++++
virt/kvm/coalesced_mmio.h | 23 ++++
virt/kvm/ioapic.c | 23 +++-
virt/kvm/iodev.h | 8 +-
virt/kvm/kvm_main.c | 130 ++++++++++++++++---
virt/kvm/kvm_trace.c | 18 ++-
31 files changed, 1019 insertions(+), 395 deletions(-)
create mode 100644 virt/kvm/coalesced_mmio.c
create mode 100644 virt/kvm/coalesced_mmio.h


2008-06-26 12:28:55

by Avi Kivity

Subject: [PATCH 01/50] KVM: remove long -> void *user -> long cast

From: Christian Borntraeger <[email protected]>

kvm_dev_ioctl casts the arg value to void __user *, just to recast it
again to long. This seems unnecessary.

According to objdump the binary code on x86 is unchanged by this patch.

Signed-off-by: Christian Borntraeger <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
virt/kvm/kvm_main.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2d29e26..764d702 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1179,7 +1179,6 @@ static int kvm_dev_ioctl_create_vm(void)
static long kvm_dev_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
- void __user *argp = (void __user *)arg;
long r = -EINVAL;

switch (ioctl) {
@@ -1196,7 +1195,7 @@ static long kvm_dev_ioctl(struct file *filp,
r = kvm_dev_ioctl_create_vm();
break;
case KVM_CHECK_EXTENSION:
- r = kvm_dev_ioctl_check_extension((long)argp);
+ r = kvm_dev_ioctl_check_extension(arg);
break;
case KVM_GET_VCPU_MMAP_SIZE:
r = -EINVAL;
--
1.5.6

2008-06-26 12:29:28

by Avi Kivity

Subject: [PATCH 03/50] KVM: VMX: move APIC_ACCESS trace entry to generic code

From: Joerg Roedel <[email protected]>

This patch moves the trace entry for APIC accesses from the VMX code to the
generic lapic code. This way APIC accesses from SVM will also be traced.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/lapic.c | 4 ++++
arch/x86/kvm/vmx.c | 2 --
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index ebc03f5..f9201fb 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -572,6 +572,8 @@ static u32 __apic_read(struct kvm_lapic *apic, unsigned int offset)
{
u32 val = 0;

+ KVMTRACE_1D(APIC_ACCESS, apic->vcpu, (u32)offset, handler);
+
if (offset >= LAPIC_MMIO_LENGTH)
return 0;

@@ -695,6 +697,8 @@ static void apic_mmio_write(struct kvm_io_device *this,

offset &= 0xff0;

+ KVMTRACE_1D(APIC_ACCESS, apic->vcpu, (u32)offset, handler);
+
switch (offset) {
case APIC_ID: /* Local APIC ID */
apic_set_reg(apic, APIC_ID, val);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bc3452a..158a7d8 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2554,8 +2554,6 @@ static int handle_apic_access(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
exit_qualification = vmcs_read64(EXIT_QUALIFICATION);
offset = exit_qualification & 0xffful;

- KVMTRACE_1D(APIC_ACCESS, vcpu, (u32)offset, handler);
-
er = emulate_instruction(vcpu, kvm_run, 0, 0, 0);

if (er != EMULATE_DONE) {
--
1.5.6

2008-06-26 12:29:53

by Avi Kivity

Subject: [PATCH 04/50] KVM: SVM: implement dedicated NMI exit handler

From: Joerg Roedel <[email protected]>

With an exit handler for NMI intercepts, it's possible to account for them
using kvmtrace.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/svm.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6b0d5fa..8a2118b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1081,6 +1081,11 @@ static int io_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
return kvm_emulate_pio(&svm->vcpu, kvm_run, in, size, port);
}

+static int nmi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+ return 1;
+}
+
static int nop_on_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
return 1;
@@ -1365,7 +1370,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
[SVM_EXIT_EXCP_BASE + NM_VECTOR] = nm_interception,
[SVM_EXIT_EXCP_BASE + MC_VECTOR] = mc_interception,
[SVM_EXIT_INTR] = nop_on_interception,
- [SVM_EXIT_NMI] = nop_on_interception,
+ [SVM_EXIT_NMI] = nmi_interception,
[SVM_EXIT_SMI] = nop_on_interception,
[SVM_EXIT_INIT] = nop_on_interception,
[SVM_EXIT_VINTR] = interrupt_window_interception,
--
1.5.6

2008-06-26 12:30:32

by Avi Kivity

Subject: [PATCH 02/50] KVM: add statics where possible, function definition in lapic.h

From: Harvey Harrison <[email protected]>

Noticed by sparse:
arch/x86/kvm/vmx.c:1583:6: warning: symbol 'vmx_disable_intercept_for_msr' was not declared. Should it be static?
arch/x86/kvm/x86.c:3406:5: warning: symbol 'kvm_task_switch_16' was not declared. Should it be static?
arch/x86/kvm/x86.c:3429:5: warning: symbol 'kvm_task_switch_32' was not declared. Should it be static?
arch/x86/kvm/mmu.c:1968:6: warning: symbol 'kvm_mmu_remove_one_alloc_mmu_page' was not declared. Should it be static?
arch/x86/kvm/mmu.c:2014:6: warning: symbol 'mmu_destroy_caches' was not declared. Should it be static?
arch/x86/kvm/lapic.c:862:5: warning: symbol 'kvm_lapic_get_base' was not declared. Should it be static?
arch/x86/kvm/i8254.c:94:5: warning: symbol 'pit_get_gate' was not declared. Should it be static?
arch/x86/kvm/i8254.c:196:5: warning: symbol '__pit_timer_fn' was not declared. Should it be static?
arch/x86/kvm/i8254.c:561:6: warning: symbol '__inject_pit_timer_intr' was not declared. Should it be static?
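
For anyone wanting to reproduce these, sparse is run through kbuild; a
minimal invocation (assuming sparse is installed) is:

	make C=2 arch/x86/kvm/

where C=1 would check only the files being re-compiled and C=2 forces a
check of every file in the target.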

Signed-off-by: Harvey Harrison <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/i8254.c | 6 +++---
arch/x86/kvm/lapic.h | 1 +
arch/x86/kvm/mmu.c | 2 +-
arch/x86/kvm/vmx.c | 2 +-
arch/x86/kvm/x86.c | 4 ++--
5 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 3829aa7..735ec9a 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -91,7 +91,7 @@ static void pit_set_gate(struct kvm *kvm, int channel, u32 val)
c->gate = val;
}

-int pit_get_gate(struct kvm *kvm, int channel)
+static int pit_get_gate(struct kvm *kvm, int channel)
{
WARN_ON(!mutex_is_locked(&kvm->arch.vpit->pit_state.lock));

@@ -193,7 +193,7 @@ static void pit_latch_status(struct kvm *kvm, int channel)
}
}

-int __pit_timer_fn(struct kvm_kpit_state *ps)
+static int __pit_timer_fn(struct kvm_kpit_state *ps)
{
struct kvm_vcpu *vcpu0 = ps->pit->kvm->vcpus[0];
struct kvm_kpit_timer *pt = &ps->pit_timer;
@@ -575,7 +575,7 @@ void kvm_free_pit(struct kvm *kvm)
}
}

-void __inject_pit_timer_intr(struct kvm *kvm)
+static void __inject_pit_timer_intr(struct kvm *kvm)
{
mutex_lock(&kvm->lock);
kvm_ioapic_set_irq(kvm->arch.vioapic, 0, 1);
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 676c396..8185888 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -31,6 +31,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu);
u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu);
void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8);
void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value);
+u64 kvm_lapic_get_base(struct kvm_vcpu *vcpu);

int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7e7c396..8e449db 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1948,7 +1948,7 @@ void kvm_mmu_zap_all(struct kvm *kvm)
kvm_flush_remote_tlbs(kvm);
}

-void kvm_mmu_remove_one_alloc_mmu_page(struct kvm *kvm)
+static void kvm_mmu_remove_one_alloc_mmu_page(struct kvm *kvm)
{
struct kvm_mmu_page *page;

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 540e951..bc3452a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1821,7 +1821,7 @@ static void allocate_vpid(struct vcpu_vmx *vmx)
spin_unlock(&vmx_vpid_lock);
}

-void vmx_disable_intercept_for_msr(struct page *msr_bitmap, u32 msr)
+static void vmx_disable_intercept_for_msr(struct page *msr_bitmap, u32 msr)
{
void *va;

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 63a77ca..9a85a29 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3449,7 +3449,7 @@ static int load_state_from_tss16(struct kvm_vcpu *vcpu,
return 0;
}

-int kvm_task_switch_16(struct kvm_vcpu *vcpu, u16 tss_selector,
+static int kvm_task_switch_16(struct kvm_vcpu *vcpu, u16 tss_selector,
struct desc_struct *cseg_desc,
struct desc_struct *nseg_desc)
{
@@ -3472,7 +3472,7 @@ out:
return ret;
}

-int kvm_task_switch_32(struct kvm_vcpu *vcpu, u16 tss_selector,
+static int kvm_task_switch_32(struct kvm_vcpu *vcpu, u16 tss_selector,
struct desc_struct *cseg_desc,
struct desc_struct *nseg_desc)
{
--
1.5.6

2008-06-26 12:30:51

by Avi Kivity

Subject: [PATCH 08/50] KVM: SVM: add tracing support for TDP page faults

From: Joerg Roedel <[email protected]>

To distinguish between real page faults and nested page faults, they should be
traced as different events. This patch implements that.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/svm.c | 4 ++++
include/asm-x86/kvm.h | 1 +
2 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8953292..218949c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1011,6 +1011,10 @@ static int pf_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
KVMTRACE_3D(PAGE_FAULT, &svm->vcpu, error_code,
(u32)fault_address, (u32)(fault_address >> 32),
handler);
+ else
+ KVMTRACE_3D(TDP_FAULT, &svm->vcpu, error_code,
+ (u32)fault_address, (u32)(fault_address >> 32),
+ handler);

return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code);
}
diff --git a/include/asm-x86/kvm.h b/include/asm-x86/kvm.h
index 80eefef..6f18408 100644
--- a/include/asm-x86/kvm.h
+++ b/include/asm-x86/kvm.h
@@ -228,5 +228,6 @@ struct kvm_pit_state {
#define KVM_TRC_CLTS (KVM_TRC_HANDLER + 0x12)
#define KVM_TRC_LMSW (KVM_TRC_HANDLER + 0x13)
#define KVM_TRC_APIC_ACCESS (KVM_TRC_HANDLER + 0x14)
+#define KVM_TRC_TDP_FAULT (KVM_TRC_HANDLER + 0x15)

#endif
--
1.5.6

2008-06-26 12:31:12

by Avi Kivity

Subject: [PATCH 07/50] KVM: SVM: add missing kvmtrace markers

From: Joerg Roedel <[email protected]>

This patch adds the missing kvmtrace markers to the svm
module of kvm.
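
A note on the last macro argument ("handler" or "entryexit" in the hunks
below): it is not payload. As the KVMTRACE_*D definitions in kvm_host.h
show, it is pasted into the marker name, roughly:

	/* sketch of the macro shape, cf. include/asm-x86/kvm_host.h */
	#define KVMTRACE_0D(evt, vcpu, name) \
		trace_mark(kvm_trace_##name, "%u %p %u %u %u %u %u %u", \
			   KVM_TRC_##evt, vcpu, 0, 0, 0, 0, 0, 0)

so exit-handler events and entry/exit events end up as distinct markers.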

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/svm.c | 26 +++++++++++++++++++++++++-
1 files changed, 25 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0eac1a5..8953292 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -949,7 +949,9 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *svm_data)

static unsigned long svm_get_dr(struct kvm_vcpu *vcpu, int dr)
{
- return to_svm(vcpu)->db_regs[dr];
+ unsigned long val = to_svm(vcpu)->db_regs[dr];
+ KVMTRACE_2D(DR_READ, vcpu, (u32)dr, (u32)val, handler);
+ return val;
}

static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
@@ -1004,6 +1006,12 @@ static int pf_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)

fault_address = svm->vmcb->control.exit_info_2;
error_code = svm->vmcb->control.exit_info_1;
+
+ if (!npt_enabled)
+ KVMTRACE_3D(PAGE_FAULT, &svm->vcpu, error_code,
+ (u32)fault_address, (u32)(fault_address >> 32),
+ handler);
+
return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code);
}

@@ -1083,12 +1091,14 @@ static int io_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)

static int nmi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
+ KVMTRACE_0D(NMI, &svm->vcpu, handler);
return 1;
}

static int intr_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
++svm->vcpu.stat.irq_exits;
+ KVMTRACE_0D(INTR, &svm->vcpu, handler);
return 1;
}

@@ -1230,6 +1240,9 @@ static int rdmsr_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
if (svm_get_msr(&svm->vcpu, ecx, &data))
kvm_inject_gp(&svm->vcpu, 0);
else {
+ KVMTRACE_3D(MSR_READ, &svm->vcpu, ecx, (u32)data,
+ (u32)(data >> 32), handler);
+
svm->vmcb->save.rax = data & 0xffffffff;
svm->vcpu.arch.regs[VCPU_REGS_RDX] = data >> 32;
svm->next_rip = svm->vmcb->save.rip + 2;
@@ -1315,6 +1328,10 @@ static int wrmsr_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
u32 ecx = svm->vcpu.arch.regs[VCPU_REGS_RCX];
u64 data = (svm->vmcb->save.rax & -1u)
| ((u64)(svm->vcpu.arch.regs[VCPU_REGS_RDX] & -1u) << 32);
+
+ KVMTRACE_3D(MSR_WRITE, &svm->vcpu, ecx, (u32)data, (u32)(data >> 32),
+ handler);
+
svm->next_rip = svm->vmcb->save.rip + 2;
if (svm_set_msr(&svm->vcpu, ecx, data))
kvm_inject_gp(&svm->vcpu, 0);
@@ -1334,6 +1351,8 @@ static int msr_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
static int interrupt_window_interception(struct vcpu_svm *svm,
struct kvm_run *kvm_run)
{
+ KVMTRACE_0D(PEND_INTR, &svm->vcpu, handler);
+
svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
/*
@@ -1408,6 +1427,9 @@ static int handle_exit(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
u32 exit_code = svm->vmcb->control.exit_code;

+ KVMTRACE_3D(VMEXIT, vcpu, exit_code, (u32)svm->vmcb->save.rip,
+ (u32)((u64)svm->vmcb->save.rip >> 32), entryexit);
+
if (npt_enabled) {
int mmu_reload = 0;
if ((vcpu->arch.cr0 ^ svm->vmcb->save.cr0) & X86_CR0_PG) {
@@ -1481,6 +1503,8 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
{
struct vmcb_control_area *control;

+ KVMTRACE_1D(INJ_VIRQ, &svm->vcpu, (u32)irq, handler);
+
control = &svm->vmcb->control;
control->int_vector = irq;
control->int_ctl &= ~V_INTR_PRIO_MASK;
--
1.5.6

2008-06-26 12:31:43

by Avi Kivity

Subject: [PATCH 12/50] KVM: VMX: Trivial vmcs_write64() code simplification

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/vmx.c | 4 +---
1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 158a7d8..1b286fa 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -431,10 +431,8 @@ static void vmcs_write32(unsigned long field, u32 value)

static void vmcs_write64(unsigned long field, u64 value)
{
-#ifdef CONFIG_X86_64
- vmcs_writel(field, value);
-#else
vmcs_writel(field, value);
+#ifndef CONFIG_X86_64
asm volatile ("");
vmcs_writel(field+1, value >> 32);
#endif
--
1.5.6

2008-06-26 12:31:31

by Avi Kivity

Subject: [PATCH 09/50] KVM: Handle vma regions with no backing page

From: Anthony Liguori <[email protected]>

This patch allows VMAs that contain no backing page to be used for guest
memory. This is useful for assigning mmio regions to a guest.
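
As a concrete (hypothetical) example of the use case, userspace would
mmap() a device BAR -- a VM_PFNMAP mapping with no struct page behind
it -- and hand it to the guest as an ordinary memory slot. The sysfs
path, slot number, guest address and the vm_fd/bar_size variables below
are illustrative only:

	int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
	void *bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	struct kvm_userspace_memory_region region = {
		.slot            = 8,
		.guest_phys_addr = 0xf0000000,
		.memory_size     = bar_size,
		.userspace_addr  = (unsigned long)bar,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

With this patch, gfn_to_pfn() on such a slot computes the pfn from the
vma instead of returning bad_page.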

Signed-off-by: Anthony Liguori <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
virt/kvm/kvm_main.c | 49 +++++++++++++++++++++++++++++++++++++------------
1 files changed, 37 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 764d702..4d30ac5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -532,6 +532,7 @@ pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
struct page *page[1];
unsigned long addr;
int npages;
+ pfn_t pfn;

might_sleep();

@@ -544,19 +545,38 @@ pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
npages = get_user_pages(current, current->mm, addr, 1, 1, 1, page,
NULL);

- if (npages != 1) {
- get_page(bad_page);
- return page_to_pfn(bad_page);
- }
+ if (unlikely(npages != 1)) {
+ struct vm_area_struct *vma;

- return page_to_pfn(page[0]);
+ vma = find_vma(current->mm, addr);
+ if (vma == NULL || addr < vma->vm_start ||
+ !(vma->vm_flags & VM_PFNMAP)) {
+ get_page(bad_page);
+ return page_to_pfn(bad_page);
+ }
+
+ pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+ BUG_ON(pfn_valid(pfn));
+ } else
+ pfn = page_to_pfn(page[0]);
+
+ return pfn;
}

EXPORT_SYMBOL_GPL(gfn_to_pfn);

struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
{
- return pfn_to_page(gfn_to_pfn(kvm, gfn));
+ pfn_t pfn;
+
+ pfn = gfn_to_pfn(kvm, gfn);
+ if (pfn_valid(pfn))
+ return pfn_to_page(pfn);
+
+ WARN_ON(!pfn_valid(pfn));
+
+ get_page(bad_page);
+ return bad_page;
}

EXPORT_SYMBOL_GPL(gfn_to_page);
@@ -569,7 +589,8 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);

void kvm_release_pfn_clean(pfn_t pfn)
{
- put_page(pfn_to_page(pfn));
+ if (pfn_valid(pfn))
+ put_page(pfn_to_page(pfn));
}
EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);

@@ -594,21 +615,25 @@ EXPORT_SYMBOL_GPL(kvm_set_page_dirty);

void kvm_set_pfn_dirty(pfn_t pfn)
{
- struct page *page = pfn_to_page(pfn);
- if (!PageReserved(page))
- SetPageDirty(page);
+ if (pfn_valid(pfn)) {
+ struct page *page = pfn_to_page(pfn);
+ if (!PageReserved(page))
+ SetPageDirty(page);
+ }
}
EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);

void kvm_set_pfn_accessed(pfn_t pfn)
{
- mark_page_accessed(pfn_to_page(pfn));
+ if (pfn_valid(pfn))
+ mark_page_accessed(pfn_to_page(pfn));
}
EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);

void kvm_get_pfn(pfn_t pfn)
{
- get_page(pfn_to_page(pfn));
+ if (pfn_valid(pfn))
+ get_page(pfn_to_page(pfn));
}
EXPORT_SYMBOL_GPL(kvm_get_pfn);

--
1.5.6

2008-06-26 12:32:03

by Avi Kivity

Subject: [PATCH 15/50] KVM: VMX: Add list of potentially locally cached vcpus

VMX hardware can cache the contents of a vcpu's vmcs. This cache needs
to be flushed when migrating a vcpu to another cpu, or (which is the case
that interests us here) when disabling hardware virtualization on a cpu.

The current implementation of decaching iterates over the list of all vcpus,
picks the ones that are potentially cached on the cpu that is being offlined,
and flushes the cache. The problem is that it uses mutex_trylock() to gain
exclusive access to the vcpu, which fires off a (benign) warning about using
the mutex in an interrupt context.

To avoid this, and to make things generally nicer, add a new per-cpu list
of potentially cached vcpus. This makes the decaching code much simpler. The
list is vmx-specific since other hardware doesn't have this issue.

[andrea: fix crash on suspend/resume]

Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/vmx.c | 24 ++++++++++++++++++++++--
arch/x86/kvm/x86.c | 27 ---------------------------
2 files changed, 22 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index fa4ea2e..b99d045 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -55,6 +55,7 @@ struct vmcs {

struct vcpu_vmx {
struct kvm_vcpu vcpu;
+ struct list_head local_vcpus_link;
int launched;
u8 fail;
u32 idt_vectoring_info;
@@ -93,6 +94,7 @@ static int init_rmode(struct kvm *kvm);

static DEFINE_PER_CPU(struct vmcs *, vmxarea);
static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
+static DEFINE_PER_CPU(struct list_head, vcpus_on_cpu);

static struct page *vmx_io_bitmap_a;
static struct page *vmx_io_bitmap_b;
@@ -331,6 +333,9 @@ static void __vcpu_clear(void *arg)
if (per_cpu(current_vmcs, cpu) == vmx->vmcs)
per_cpu(current_vmcs, cpu) = NULL;
rdtscll(vmx->vcpu.arch.host_tsc);
+ list_del(&vmx->local_vcpus_link);
+ vmx->vcpu.cpu = -1;
+ vmx->launched = 0;
}

static void vcpu_clear(struct vcpu_vmx *vmx)
@@ -338,7 +343,6 @@ static void vcpu_clear(struct vcpu_vmx *vmx)
if (vmx->vcpu.cpu == -1)
return;
smp_call_function_single(vmx->vcpu.cpu, __vcpu_clear, vmx, 0, 1);
- vmx->launched = 0;
}

static inline void vpid_sync_vcpu_all(struct vcpu_vmx *vmx)
@@ -617,6 +621,10 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu_clear(vmx);
kvm_migrate_timers(vcpu);
vpid_sync_vcpu_all(vmx);
+ local_irq_disable();
+ list_add(&vmx->local_vcpus_link,
+ &per_cpu(vcpus_on_cpu, cpu));
+ local_irq_enable();
}

if (per_cpu(current_vmcs, cpu) != vmx->vmcs) {
@@ -1022,6 +1030,7 @@ static void hardware_enable(void *garbage)
u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
u64 old;

+ INIT_LIST_HEAD(&per_cpu(vcpus_on_cpu, cpu));
rdmsrl(MSR_IA32_FEATURE_CONTROL, old);
if ((old & (MSR_IA32_FEATURE_CONTROL_LOCKED |
MSR_IA32_FEATURE_CONTROL_VMXON_ENABLED))
@@ -1037,8 +1046,19 @@ static void hardware_enable(void *garbage)
: "memory", "cc");
}

+static void vmclear_local_vcpus(void)
+{
+ int cpu = raw_smp_processor_id();
+ struct vcpu_vmx *vmx, *n;
+
+ list_for_each_entry_safe(vmx, n, &per_cpu(vcpus_on_cpu, cpu),
+ local_vcpus_link)
+ __vcpu_clear(vmx);
+}
+
static void hardware_disable(void *garbage)
{
+ vmclear_local_vcpus();
asm volatile (__ex(ASM_VMX_VMXOFF) : : : "cc");
write_cr4(read_cr4() & ~X86_CR4_VMXE);
}
@@ -2967,7 +2987,7 @@ static void vmx_free_vmcs(struct kvm_vcpu *vcpu)
struct vcpu_vmx *vmx = to_vmx(vcpu);

if (vmx->vmcs) {
- on_each_cpu(__vcpu_clear, vmx, 0, 1);
+ vcpu_clear(vmx);
free_vmcs(vmx->vmcs);
vmx->vmcs = NULL;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc5b0d3..111f432 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -823,33 +823,6 @@ out:
*/
void decache_vcpus_on_cpu(int cpu)
{
- struct kvm *vm;
- struct kvm_vcpu *vcpu;
- int i;
-
- spin_lock(&kvm_lock);
- list_for_each_entry(vm, &vm_list, vm_list)
- for (i = 0; i < KVM_MAX_VCPUS; ++i) {
- vcpu = vm->vcpus[i];
- if (!vcpu)
- continue;
- /*
- * If the vcpu is locked, then it is running on some
- * other cpu and therefore it is not cached on the
- * cpu in question.
- *
- * If it's not locked, check the last cpu it executed
- * on.
- */
- if (mutex_trylock(&vcpu->mutex)) {
- if (vcpu->cpu == cpu) {
- kvm_x86_ops->vcpu_decache(vcpu);
- vcpu->cpu = -1;
- }
- mutex_unlock(&vcpu->mutex);
- }
- }
- spin_unlock(&kvm_lock);
}

int kvm_dev_ioctl_check_extension(long ext)
--
1.5.6

2008-06-26 12:32:25

by Avi Kivity

Subject: [PATCH 14/50] KVM: Handle virtualization instruction #UD faults during reboot

KVM turns off hardware virtualization extensions during reboot, in order
to disassociate the memory used by the virtualization extensions from the
processor, and in order to have the system in a consistent state.
Unfortunately virtual machines may still be running while this goes on,
and once virtualization extensions are turned off, any virtualization
instruction will #UD on execution.

Fix by adding an exception handler to virtualization instructions; if we get
an exception during reboot, we simply spin waiting for the reset to complete.
If it's a true exception, BUG() so we can have our stack trace.
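
For reference, here is roughly what the __ex() wrapper introduced below
expands to around a single instruction (a sketch of the 64-bit case, not
the exact preprocessor output):

	asm volatile(
		"666:	vmxoff\n\t"		/* the wrapped instruction */
		".pushsection .text.fixup, \"ax\"\n"
		"667:\n\t"
		"push	$666b\n\t"		/* fake a return address: the fault site */
		"jmp	kvm_handle_fault_on_reboot\n\t"
		".popsection\n\t"
		".pushsection __ex_table, \"a\"\n\t"
		".quad	666b, 667b\n\t"		/* a fault at 666 resumes at 667 */
		".popsection" : : : "cc");

The __ex_table entry routes the #UD through the kernel's exception fixup
into the stub, which in turn lands in kvm_handle_fault_on_reboot().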

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/svm.c | 20 +++++++++++---------
arch/x86/kvm/vmx.c | 25 ++++++++++++++-----------
include/asm-x86/kvm_host.h | 24 ++++++++++++++++++++++++
virt/kvm/kvm_main.c | 15 +++++++++++++++
4 files changed, 64 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 992ab71..9390a31 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -27,6 +27,8 @@

#include <asm/desc.h>

+#define __ex(x) __kvm_handle_fault_on_reboot(x)
+
MODULE_AUTHOR("Qumranet");
MODULE_LICENSE("GPL");

@@ -129,17 +131,17 @@ static inline void push_irq(struct kvm_vcpu *vcpu, u8 irq)

static inline void clgi(void)
{
- asm volatile (SVM_CLGI);
+ asm volatile (__ex(SVM_CLGI));
}

static inline void stgi(void)
{
- asm volatile (SVM_STGI);
+ asm volatile (__ex(SVM_STGI));
}

static inline void invlpga(unsigned long addr, u32 asid)
{
- asm volatile (SVM_INVLPGA :: "a"(addr), "c"(asid));
+ asm volatile (__ex(SVM_INVLPGA) :: "a"(addr), "c"(asid));
}

static inline unsigned long kvm_read_cr2(void)
@@ -1758,17 +1760,17 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
/* Enter guest mode */
"push %%rax \n\t"
"mov %c[vmcb](%[svm]), %%rax \n\t"
- SVM_VMLOAD "\n\t"
- SVM_VMRUN "\n\t"
- SVM_VMSAVE "\n\t"
+ __ex(SVM_VMLOAD) "\n\t"
+ __ex(SVM_VMRUN) "\n\t"
+ __ex(SVM_VMSAVE) "\n\t"
"pop %%rax \n\t"
#else
/* Enter guest mode */
"push %%eax \n\t"
"mov %c[vmcb](%[svm]), %%eax \n\t"
- SVM_VMLOAD "\n\t"
- SVM_VMRUN "\n\t"
- SVM_VMSAVE "\n\t"
+ __ex(SVM_VMLOAD) "\n\t"
+ __ex(SVM_VMRUN) "\n\t"
+ __ex(SVM_VMSAVE) "\n\t"
"pop %%eax \n\t"
#endif

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 1b286fa..fa4ea2e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -30,6 +30,8 @@
#include <asm/io.h>
#include <asm/desc.h>

+#define __ex(x) __kvm_handle_fault_on_reboot(x)
+
MODULE_AUTHOR("Qumranet");
MODULE_LICENSE("GPL");

@@ -278,7 +280,7 @@ static inline void __invvpid(int ext, u16 vpid, gva_t gva)
u64 gva;
} operand = { vpid, 0, gva };

- asm volatile (ASM_VMX_INVVPID
+ asm volatile (__ex(ASM_VMX_INVVPID)
/* CF==1 or ZF==1 --> rc = -1 */
"; ja 1f ; ud2 ; 1:"
: : "a"(&operand), "c"(ext) : "cc", "memory");
@@ -290,7 +292,7 @@ static inline void __invept(int ext, u64 eptp, gpa_t gpa)
u64 eptp, gpa;
} operand = {eptp, gpa};

- asm volatile (ASM_VMX_INVEPT
+ asm volatile (__ex(ASM_VMX_INVEPT)
/* CF==1 or ZF==1 --> rc = -1 */
"; ja 1f ; ud2 ; 1:\n"
: : "a" (&operand), "c" (ext) : "cc", "memory");
@@ -311,7 +313,7 @@ static void vmcs_clear(struct vmcs *vmcs)
u64 phys_addr = __pa(vmcs);
u8 error;

- asm volatile (ASM_VMX_VMCLEAR_RAX "; setna %0"
+ asm volatile (__ex(ASM_VMX_VMCLEAR_RAX) "; setna %0"
: "=g"(error) : "a"(&phys_addr), "m"(phys_addr)
: "cc", "memory");
if (error)
@@ -378,7 +380,7 @@ static unsigned long vmcs_readl(unsigned long field)
{
unsigned long value;

- asm volatile (ASM_VMX_VMREAD_RDX_RAX
+ asm volatile (__ex(ASM_VMX_VMREAD_RDX_RAX)
: "=a"(value) : "d"(field) : "cc");
return value;
}
@@ -413,7 +415,7 @@ static void vmcs_writel(unsigned long field, unsigned long value)
{
u8 error;

- asm volatile (ASM_VMX_VMWRITE_RAX_RDX "; setna %0"
+ asm volatile (__ex(ASM_VMX_VMWRITE_RAX_RDX) "; setna %0"
: "=q"(error) : "a"(value), "d"(field) : "cc");
if (unlikely(error))
vmwrite_error(field, value);
@@ -621,7 +623,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
u8 error;

per_cpu(current_vmcs, cpu) = vmx->vmcs;
- asm volatile (ASM_VMX_VMPTRLD_RAX "; setna %0"
+ asm volatile (__ex(ASM_VMX_VMPTRLD_RAX) "; setna %0"
: "=g"(error) : "a"(&phys_addr), "m"(phys_addr)
: "cc");
if (error)
@@ -1030,13 +1032,14 @@ static void hardware_enable(void *garbage)
MSR_IA32_FEATURE_CONTROL_LOCKED |
MSR_IA32_FEATURE_CONTROL_VMXON_ENABLED);
write_cr4(read_cr4() | X86_CR4_VMXE); /* FIXME: not cpu hotplug safe */
- asm volatile (ASM_VMX_VMXON_RAX : : "a"(&phys_addr), "m"(phys_addr)
+ asm volatile (ASM_VMX_VMXON_RAX
+ : : "a"(&phys_addr), "m"(phys_addr)
: "memory", "cc");
}

static void hardware_disable(void *garbage)
{
- asm volatile (ASM_VMX_VMXOFF : : : "cc");
+ asm volatile (__ex(ASM_VMX_VMXOFF) : : : "cc");
write_cr4(read_cr4() & ~X86_CR4_VMXE);
}

@@ -2834,7 +2837,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
"push %%edx; push %%ebp;"
"push %%ecx \n\t"
#endif
- ASM_VMX_VMWRITE_RSP_RDX "\n\t"
+ __ex(ASM_VMX_VMWRITE_RSP_RDX) "\n\t"
/* Check if vmlaunch of vmresume is needed */
"cmpl $0, %c[launched](%0) \n\t"
/* Load guest registers. Don't clobber flags. */
@@ -2869,9 +2872,9 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
#endif
/* Enter guest mode */
"jne .Llaunched \n\t"
- ASM_VMX_VMLAUNCH "\n\t"
+ __ex(ASM_VMX_VMLAUNCH) "\n\t"
"jmp .Lkvm_vmx_return \n\t"
- ".Llaunched: " ASM_VMX_VMRESUME "\n\t"
+ ".Llaunched: " __ex(ASM_VMX_VMRESUME) "\n\t"
".Lkvm_vmx_return: "
/* Save guest registers, load host registers, keep flags */
#ifdef CONFIG_X86_64
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index c2d066e..0df9d5f 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -692,4 +692,28 @@ enum {
trace_mark(kvm_trace_##name, "%u %p %u %u %u %u %u %u", KVM_TRC_##evt, \
vcpu, 0, 0, 0, 0, 0, 0)

+#ifdef CONFIG_64BIT
+#define KVM_EX_ENTRY ".quad"
+#else
+#define KVM_EX_ENTRY ".long"
+#endif
+
+/*
+ * Hardware virtualization extension instructions may fault if a
+ * reboot turns off virtualization while processes are running.
+ * Trap the fault and ignore the instruction if that happens.
+ */
+asmlinkage void kvm_handle_fault_on_reboot(void);
+
+#define __kvm_handle_fault_on_reboot(insn) \
+ "666: " insn "\n\t" \
+ ".pushsection .text.fixup, \"ax\" \n" \
+ "667: \n\t" \
+ "push $666b \n\t" \
+ "jmp kvm_handle_fault_on_reboot \n\t" \
+ ".popsection \n\t" \
+ ".pushsection __ex_table, \"a\" \n\t" \
+ KVM_EX_ENTRY " 666b, 667b \n\t" \
+ ".popsection"
+
#endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4d30ac5..67af90c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -65,6 +65,8 @@ struct dentry *kvm_debugfs_dir;
static long kvm_vcpu_ioctl(struct file *file, unsigned int ioctl,
unsigned long arg);

+bool kvm_rebooting;
+
static inline int valid_vcpu(int n)
{
return likely(n >= 0 && n < KVM_MAX_VCPUS);
@@ -1301,6 +1303,18 @@ static int kvm_cpu_hotplug(struct notifier_block *notifier, unsigned long val,
return NOTIFY_OK;
}

+
+asmlinkage void kvm_handle_fault_on_reboot(void)
+{
+ if (kvm_rebooting)
+ /* spin while reset goes on */
+ while (true)
+ ;
+ /* Fault while not rebooting. We want the trace. */
+ BUG();
+}
+EXPORT_SYMBOL_GPL(kvm_handle_fault_on_reboot);
+
static int kvm_reboot(struct notifier_block *notifier, unsigned long val,
void *v)
{
@@ -1310,6 +1324,7 @@ static int kvm_reboot(struct notifier_block *notifier, unsigned long val,
* in vmx root mode.
*/
printk(KERN_INFO "kvm: exiting hardware virtualization\n");
+ kvm_rebooting = true;
on_each_cpu(hardware_disable, NULL, 0, 1);
}
return NOTIFY_OK;
--
1.5.6

2008-06-26 12:33:25

by Avi Kivity

Subject: [PATCH 33/50] KVM: kvm_io_device: extend in_range() to manage len and write attribute

From: Laurent Vivier <[email protected]>

Modify the in_range() member of struct kvm_io_device to pass the length and
the type of the I/O (write or read).

This modification allows kvm_io_device to be used with coalesced MMIO.
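
For illustration, a device that only wants to claim writes could now
implement the callback like this (my_dev_in_range() and the MY_DEV_*
constants are made-up names, not part of this patch):

	static int my_dev_in_range(struct kvm_io_device *this, gpa_t addr,
				   int len, int is_write)
	{
		if (!is_write)	/* let reads fall through to userspace */
			return 0;
		return addr >= MY_DEV_BASE &&
		       addr + len <= MY_DEV_BASE + MY_DEV_SIZE;
	}

This is the shape coalesced MMIO needs: it can only batch writes, and it
must know the access length to decide whether a zone fully covers it.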

Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/ia64/kvm/kvm-ia64.c | 6 +++---
arch/x86/kvm/i8254.c | 6 ++++--
arch/x86/kvm/i8259.c | 3 ++-
arch/x86/kvm/lapic.c | 3 ++-
arch/x86/kvm/x86.c | 28 +++++++++++++++++-----------
include/linux/kvm_host.h | 3 ++-
virt/kvm/ioapic.c | 3 ++-
virt/kvm/iodev.h | 8 +++++---
virt/kvm/kvm_main.c | 5 +++--
9 files changed, 40 insertions(+), 25 deletions(-)

diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index adb74f7..b59231b 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -195,11 +195,11 @@ int kvm_dev_ioctl_check_extension(long ext)
}

static struct kvm_io_device *vcpu_find_mmio_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
+ gpa_t addr, int len, int is_write)
{
struct kvm_io_device *dev;

- dev = kvm_io_bus_find_dev(&vcpu->kvm->mmio_bus, addr);
+ dev = kvm_io_bus_find_dev(&vcpu->kvm->mmio_bus, addr, len, is_write);

return dev;
}
@@ -231,7 +231,7 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
kvm_run->exit_reason = KVM_EXIT_MMIO;
return 0;
mmio:
- mmio_dev = vcpu_find_mmio_dev(vcpu, p->addr);
+ mmio_dev = vcpu_find_mmio_dev(vcpu, p->addr, p->size, !p->dir);
if (mmio_dev) {
if (!p->dir)
kvm_iodevice_write(mmio_dev, p->addr, p->size,
diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 60074dc..9e3391e 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -460,7 +460,8 @@ static void pit_ioport_read(struct kvm_io_device *this,
mutex_unlock(&pit_state->lock);
}

-static int pit_in_range(struct kvm_io_device *this, gpa_t addr)
+static int pit_in_range(struct kvm_io_device *this, gpa_t addr,
+ int len, int is_write)
{
return ((addr >= KVM_PIT_BASE_ADDRESS) &&
(addr < KVM_PIT_BASE_ADDRESS + KVM_PIT_MEM_LENGTH));
@@ -501,7 +502,8 @@ static void speaker_ioport_read(struct kvm_io_device *this,
mutex_unlock(&pit_state->lock);
}

-static int speaker_in_range(struct kvm_io_device *this, gpa_t addr)
+static int speaker_in_range(struct kvm_io_device *this, gpa_t addr,
+ int len, int is_write)
{
return (addr == KVM_SPEAKER_BASE_ADDRESS);
}
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index ab29cf2..5857f59 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -346,7 +346,8 @@ static u32 elcr_ioport_read(void *opaque, u32 addr1)
return s->elcr;
}

-static int picdev_in_range(struct kvm_io_device *this, gpa_t addr)
+static int picdev_in_range(struct kvm_io_device *this, gpa_t addr,
+ int len, int is_write)
{
switch (addr) {
case 0x20:
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index e48d193..180ba73 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -785,7 +785,8 @@ static void apic_mmio_write(struct kvm_io_device *this,

}

-static int apic_mmio_range(struct kvm_io_device *this, gpa_t addr)
+static int apic_mmio_range(struct kvm_io_device *this, gpa_t addr,
+ int len, int size)
{
struct kvm_lapic *apic = (struct kvm_lapic *)this->private;
int ret = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 51810d8..00333b6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1797,13 +1797,14 @@ static void kvm_init_msr_list(void)
* Only apic need an MMIO device hook, so shortcut now..
*/
static struct kvm_io_device *vcpu_find_pervcpu_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
+ gpa_t addr, int len,
+ int is_write)
{
struct kvm_io_device *dev;

if (vcpu->arch.apic) {
dev = &vcpu->arch.apic->dev;
- if (dev->in_range(dev, addr))
+ if (dev->in_range(dev, addr, len, is_write))
return dev;
}
return NULL;
@@ -1811,13 +1812,15 @@ static struct kvm_io_device *vcpu_find_pervcpu_dev(struct kvm_vcpu *vcpu,


static struct kvm_io_device *vcpu_find_mmio_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
+ gpa_t addr, int len,
+ int is_write)
{
struct kvm_io_device *dev;

- dev = vcpu_find_pervcpu_dev(vcpu, addr);
+ dev = vcpu_find_pervcpu_dev(vcpu, addr, len, is_write);
if (dev == NULL)
- dev = kvm_io_bus_find_dev(&vcpu->kvm->mmio_bus, addr);
+ dev = kvm_io_bus_find_dev(&vcpu->kvm->mmio_bus, addr, len,
+ is_write);
return dev;
}

@@ -1885,7 +1888,7 @@ mmio:
* Is this MMIO handled locally?
*/
mutex_lock(&vcpu->kvm->lock);
- mmio_dev = vcpu_find_mmio_dev(vcpu, gpa);
+ mmio_dev = vcpu_find_mmio_dev(vcpu, gpa, bytes, 0);
if (mmio_dev) {
kvm_iodevice_read(mmio_dev, gpa, bytes, val);
mutex_unlock(&vcpu->kvm->lock);
@@ -1940,7 +1943,7 @@ mmio:
* Is this MMIO handled locally?
*/
mutex_lock(&vcpu->kvm->lock);
- mmio_dev = vcpu_find_mmio_dev(vcpu, gpa);
+ mmio_dev = vcpu_find_mmio_dev(vcpu, gpa, bytes, 1);
if (mmio_dev) {
kvm_iodevice_write(mmio_dev, gpa, bytes, val);
mutex_unlock(&vcpu->kvm->lock);
@@ -2317,9 +2320,10 @@ static void pio_string_write(struct kvm_io_device *pio_dev,
}

static struct kvm_io_device *vcpu_find_pio_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
+ gpa_t addr, int len,
+ int is_write)
{
- return kvm_io_bus_find_dev(&vcpu->kvm->pio_bus, addr);
+ return kvm_io_bus_find_dev(&vcpu->kvm->pio_bus, addr, len, is_write);
}

int kvm_emulate_pio(struct kvm_vcpu *vcpu, struct kvm_run *run, int in,
@@ -2351,7 +2355,7 @@ int kvm_emulate_pio(struct kvm_vcpu *vcpu, struct kvm_run *run, int in,

kvm_x86_ops->skip_emulated_instruction(vcpu);

- pio_dev = vcpu_find_pio_dev(vcpu, port);
+ pio_dev = vcpu_find_pio_dev(vcpu, port, size, !in);
if (pio_dev) {
kernel_pio(pio_dev, vcpu, vcpu->arch.pio_data);
complete_pio(vcpu);
@@ -2433,7 +2437,9 @@ int kvm_emulate_pio_string(struct kvm_vcpu *vcpu, struct kvm_run *run, int in,
}
}

- pio_dev = vcpu_find_pio_dev(vcpu, port);
+ pio_dev = vcpu_find_pio_dev(vcpu, port,
+ vcpu->arch.pio.cur_count,
+ !vcpu->arch.pio.in);
if (!vcpu->arch.pio.in) {
/* string PIO write */
ret = pio_copy_data(vcpu);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 865dcbc..499ff06 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -52,7 +52,8 @@ struct kvm_io_bus {

void kvm_io_bus_init(struct kvm_io_bus *bus);
void kvm_io_bus_destroy(struct kvm_io_bus *bus);
-struct kvm_io_device *kvm_io_bus_find_dev(struct kvm_io_bus *bus, gpa_t addr);
+struct kvm_io_device *kvm_io_bus_find_dev(struct kvm_io_bus *bus,
+ gpa_t addr, int len, int is_write);
void kvm_io_bus_register_dev(struct kvm_io_bus *bus,
struct kvm_io_device *dev);

diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index cb8a20b..9d02136 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -307,7 +307,8 @@ void kvm_ioapic_update_eoi(struct kvm *kvm, int vector)
__kvm_ioapic_update_eoi(ioapic, i);
}

-static int ioapic_in_range(struct kvm_io_device *this, gpa_t addr)
+static int ioapic_in_range(struct kvm_io_device *this, gpa_t addr,
+ int len, int is_write)
{
struct kvm_ioapic *ioapic = (struct kvm_ioapic *)this->private;

diff --git a/virt/kvm/iodev.h b/virt/kvm/iodev.h
index c14e642..55e8846 100644
--- a/virt/kvm/iodev.h
+++ b/virt/kvm/iodev.h
@@ -27,7 +27,8 @@ struct kvm_io_device {
gpa_t addr,
int len,
const void *val);
- int (*in_range)(struct kvm_io_device *this, gpa_t addr);
+ int (*in_range)(struct kvm_io_device *this, gpa_t addr, int len,
+ int is_write);
void (*destructor)(struct kvm_io_device *this);

void *private;
@@ -49,9 +50,10 @@ static inline void kvm_iodevice_write(struct kvm_io_device *dev,
dev->write(dev, addr, len, val);
}

-static inline int kvm_iodevice_inrange(struct kvm_io_device *dev, gpa_t addr)
+static inline int kvm_iodevice_inrange(struct kvm_io_device *dev,
+ gpa_t addr, int len, int is_write)
{
- return dev->in_range(dev, addr);
+ return dev->in_range(dev, addr, len, is_write);
}

static inline void kvm_iodevice_destructor(struct kvm_io_device *dev)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e8f9fda..d602700 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1350,14 +1350,15 @@ void kvm_io_bus_destroy(struct kvm_io_bus *bus)
}
}

-struct kvm_io_device *kvm_io_bus_find_dev(struct kvm_io_bus *bus, gpa_t addr)
+struct kvm_io_device *kvm_io_bus_find_dev(struct kvm_io_bus *bus,
+ gpa_t addr, int len, int is_write)
{
int i;

for (i = 0; i < bus->dev_count; i++) {
struct kvm_io_device *pos = bus->devs[i];

- if (pos->in_range(pos, addr))
+ if (pos->in_range(pos, addr, len, is_write))
return pos;
}

--
1.5.6

2008-06-26 12:32:56

by Avi Kivity

Subject: [PATCH 31/50] KVM: MMU: Move nonpaging_prefetch_page()

In preparation for the next patch. No code change.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/mmu.c | 18 +++++++++---------
1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 53f1ed8..62741b7 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -776,6 +776,15 @@ static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
BUG();
}

+static void nonpaging_prefetch_page(struct kvm_vcpu *vcpu,
+ struct kvm_mmu_page *sp)
+{
+ int i;
+
+ for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
+ sp->spt[i] = shadow_trap_nonpresent_pte;
+}
+
static struct kvm_mmu_page *kvm_mmu_lookup_page(struct kvm *kvm, gfn_t gfn)
{
unsigned index;
@@ -1213,15 +1222,6 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, int write, gfn_t gfn)
}


-static void nonpaging_prefetch_page(struct kvm_vcpu *vcpu,
- struct kvm_mmu_page *sp)
-{
- int i;
-
- for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
- sp->spt[i] = shadow_trap_nonpresent_pte;
-}
-
static void mmu_free_roots(struct kvm_vcpu *vcpu)
{
int i;
--
1.5.6

2008-06-26 12:33:45

by Avi Kivity

Subject: [PATCH 30/50] KVM: x86 emulator: implement 'push imm' (opcode 0x68)

Encountered in FC6 boot sequence, now that we don't force ss.rpl = 0 during
the protected mode transition. Not really necessary, but nice to have.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 4e037ea..b90857c 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -121,7 +121,7 @@ static u16 opcode_table[256] = {
0, 0, 0, DstReg | SrcMem32 | ModRM | Mov /* movsxd (x86/64) */ ,
0, 0, 0, 0,
/* 0x68 - 0x6F */
- 0, 0, SrcImmByte | Mov | Stack, 0,
+ SrcImm | Mov | Stack, 0, SrcImmByte | Mov | Stack, 0,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps, /* insb, insw/insd */
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps, /* outsb, outsw/outsd */
/* 0x70 - 0x77 */
@@ -1424,6 +1424,7 @@ special_insn:
goto cannot_emulate;
c->dst.val = (s32) c->src.val;
break;
+ case 0x68: /* push imm */
case 0x6a: /* push imm8 */
emulate_push(ctxt);
break;
--
1.5.6

2008-06-26 12:32:42

by Avi Kivity

Subject: [PATCH 32/50] KVM: MMU: Avoid page prefetch on SVM

SVM cannot benefit from page prefetching since guest page fault bypass
cannot be made to work there. Avoid accessing the guest page table in
this case; the check below keys on the two nonpresent sentinel ptes being
equal, which is exactly when bypass (and hence prefetch) is unavailable.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/mmu.c | 5 ++++-
1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 62741b7..5ebb278 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -850,7 +850,10 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
hlist_add_head(&sp->hash_link, bucket);
if (!metaphysical)
rmap_write_protect(vcpu->kvm, gfn);
- vcpu->arch.mmu.prefetch_page(vcpu, sp);
+ if (shadow_trap_nonpresent_pte != shadow_notrap_nonpresent_pte)
+ vcpu->arch.mmu.prefetch_page(vcpu, sp);
+ else
+ nonpaging_prefetch_page(vcpu, sp);
return sp;
}

--
1.5.6

2008-06-26 12:34:06

by Avi Kivity

Subject: [PATCH 35/50] KVM: Add coalesced MMIO support (x86 part)

From: Laurent Vivier <[email protected]>

This patch enables coalesced MMIO for the x86 architecture.
It defines KVM_COALESCED_MMIO_PAGE_OFFSET and reports KVM_CAP_COALESCED_MMIO.
It enables the compilation of coalesced_mmio.c.

Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/Makefile | 3 ++-
arch/x86/kvm/x86.c | 3 +++
include/asm-x86/kvm_host.h | 1 +
3 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index c97d35c..d0e940b 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -2,7 +2,8 @@
# Makefile for Kernel-based Virtual Machine module
#

-common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o)
+common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o \
+ coalesced_mmio.o)
ifeq ($(CONFIG_KVM_TRACE),y)
common-objs += $(addprefix ../../../virt/kvm/, kvm_trace.o)
endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 00333b6..d568792 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -885,6 +885,9 @@ int kvm_dev_ioctl_check_extension(long ext)
case KVM_CAP_MP_STATE:
r = 1;
break;
+ case KVM_CAP_COALESCED_MMIO:
+ r = KVM_COALESCED_MMIO_PAGE_OFFSET;
+ break;
case KVM_CAP_VAPIC:
r = !kvm_x86_ops->cpu_has_accelerated_tpr();
break;
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index cd6a4bb..c64d124 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -27,6 +27,7 @@
#define KVM_PRIVATE_MEM_SLOTS 4

#define KVM_PIO_PAGE_OFFSET 1
+#define KVM_COALESCED_MMIO_PAGE_OFFSET 2

#define CR3_PAE_RESERVED_BITS ((X86_CR3_PWT | X86_CR3_PCD) - 1)
#define CR3_NONPAE_RESERVED_BITS ((PAGE_SIZE-1) & ~(X86_CR3_PWT | X86_CR3_PCD))
--
1.5.6

2008-06-26 12:34:43

by Avi Kivity

Subject: [PATCH 26/50] KVM: x86 emulator: Add support for mov seg, r (0x8e) instruction

From: Guillaume Thouvenin <[email protected]>

Add support for mov seg, r (0x8e) instruction.

[avi: drop the sreg decoding table in favor of 1:1 encoding]
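
The 1:1 encoding works because the ModRM reg field of 0x8c/0x8e uses the
architectural segment numbering, which another patch in this series
("Order segment register constants in the same way as cpu operand
encoding") made KVM's own segment constants follow; reg values 6 and 7
are reserved, hence the modrm_reg <= 5 check below:

	/* ModRM.reg -> segment register, as encoded by the CPU itself */
	enum { SEG_ES = 0, SEG_CS, SEG_SS, SEG_DS, SEG_FS, SEG_GS };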

Signed-off-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 25 ++++++++++++++++++++++++-
1 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 21d7ff6..b049b6b 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -138,7 +138,8 @@ static u16 opcode_table[256] = {
/* 0x88 - 0x8F */
ByteOp | DstMem | SrcReg | ModRM | Mov, DstMem | SrcReg | ModRM | Mov,
ByteOp | DstReg | SrcMem | ModRM | Mov, DstReg | SrcMem | ModRM | Mov,
- 0, ModRM | DstReg, 0, Group | Group1A,
+ 0, ModRM | DstReg,
+ DstReg | SrcMem | ModRM | Mov, Group | Group1A,
/* 0x90 - 0x9F */
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, ImplicitOps | Stack, ImplicitOps | Stack, 0, 0,
@@ -1520,6 +1521,28 @@ special_insn:
case 0x8d: /* lea r16/r32, m */
c->dst.val = c->modrm_ea;
break;
+ case 0x8e: { /* mov seg, r/m16 */
+ uint16_t sel;
+ int type_bits;
+ int err;
+
+ sel = c->src.val;
+ if (c->modrm_reg <= 5) {
+ type_bits = (c->modrm_reg == 1) ? 9 : 1;
+ err = kvm_load_segment_descriptor(ctxt->vcpu, sel,
+ type_bits, c->modrm_reg);
+ } else {
+ printk(KERN_INFO "Invalid segreg in modrm byte 0x%02x\n",
+ c->modrm);
+ goto cannot_emulate;
+ }
+
+ if (err < 0)
+ goto cannot_emulate;
+
+ c->dst.type = OP_NONE; /* Disable writeback. */
+ break;
+ }
case 0x8f: /* pop (sole member of Grp1a) */
rc = emulate_grp1a(ctxt, ops);
if (rc != 0)
--
1.5.6

2008-06-26 12:34:25

by Avi Kivity

Subject: [PATCH 27/50] KVM: x86 emulator: Add support for mov r, sreg (0x8c) instruction

From: Guillaume Thouvenin <[email protected]>

Add support for mov r, sreg (0x8c) instruction.

Signed-off-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 15 ++++++++++++++-
1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index b049b6b..2a9db4d 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -138,7 +138,7 @@ static u16 opcode_table[256] = {
/* 0x88 - 0x8F */
ByteOp | DstMem | SrcReg | ModRM | Mov, DstMem | SrcReg | ModRM | Mov,
ByteOp | DstReg | SrcMem | ModRM | Mov, DstReg | SrcMem | ModRM | Mov,
- 0, ModRM | DstReg,
+ DstMem | SrcReg | ModRM | Mov, ModRM | DstReg,
DstReg | SrcMem | ModRM | Mov, Group | Group1A,
/* 0x90 - 0x9F */
0, 0, 0, 0, 0, 0, 0, 0,
@@ -1518,6 +1518,19 @@ special_insn:
break;
case 0x88 ... 0x8b: /* mov */
goto mov;
+ case 0x8c: { /* mov r/m, sreg */
+ struct kvm_segment segreg;
+
+ if (c->modrm_reg <= 5)
+ kvm_get_segment(ctxt->vcpu, &segreg, c->modrm_reg);
+ else {
+ printk(KERN_INFO "0x8c: Invalid segreg in modrm byte 0x%02x\n",
+ c->modrm);
+ goto cannot_emulate;
+ }
+ c->dst.val = segreg.selector;
+ break;
+ }
case 0x8d: /* lea r16/r32, m */
c->dst.val = c->modrm_ea;
break;
--
1.5.6

2008-06-26 12:35:30

by Avi Kivity

Subject: [PATCH 34/50] KVM: Add coalesced MMIO support (common part)

From: Laurent Vivier <[email protected]>

This patch adds all needed structures to coalesce MMIOs.
Until an architecture uses it, it is not compiled.

Coalesced MMIO introduces two ioctls to define the MMIO zones that can be
coalesced:

- KVM_REGISTER_COALESCED_MMIO registers a coalesced MMIO zone.
It takes one parameter (struct kvm_coalesced_mmio_zone) which defines
a memory area where MMIOs can be coalesced until the next switch to
user space. The maximum number of MMIO zones is KVM_COALESCED_MMIO_ZONE_MAX.

- KVM_UNREGISTER_COALESCED_MMIO cancels all registered zones inside
the given bounds (bounds are also given by struct kvm_coalesced_mmio_zone).

The userspace client can check kernel coalesced MMIO availability by asking
ioctl(KVM_CHECK_EXTENSION) for the KVM_CAP_COALESCED_MMIO capability.
The ioctl() call for KVM_CAP_COALESCED_MMIO will return 0 if coalesced MMIO
is not supported, or the page offset where the ring buffer will be stored.
The page offset depends on the architecture.

After an ioctl(KVM_RUN), the first page of the mapped KVM memory points to
a kvm_run structure. The offset given by KVM_CAP_COALESCED_MMIO is
an offset to the coalesced MMIO ring, expressed in PAGE_SIZE units relative
to the start of the kvm_run structure. The MMIO ring buffer
is defined by the structure kvm_coalesced_mmio_ring.
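
A minimal userspace sketch of the intended flow (error handling omitted;
handle_mmio_write() and the VGA zone are illustrative, not part of the
patch):

	struct kvm_coalesced_mmio_zone zone = {
		.addr = 0xa0000, .size = 0x10000,	/* e.g. a VGA window */
	};
	struct kvm_coalesced_mmio_ring *ring;
	int off;

	off = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_COALESCED_MMIO);
	ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone);
	/* 'run' is the usual mmap() of the vcpu fd; the ring starts
	 * 'off' pages past it */
	ring = (struct kvm_coalesced_mmio_ring *)((char *)run + off * PAGE_SIZE);

	ioctl(vcpu_fd, KVM_RUN, 0);
	while (ring->first != ring->last) {	/* drain the batched writes */
		struct kvm_coalesced_mmio *m =
			&ring->coalesced_mmio[ring->first];
		handle_mmio_write(m->phys_addr, m->data, m->len);
		ring->first = (ring->first + 1) % KVM_COALESCED_MMIO_MAX;
	}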

[akio: fix oops during guest shutdown]

Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Akio Takebe <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
include/linux/kvm.h | 29 ++++++++
include/linux/kvm_host.h | 4 +
virt/kvm/coalesced_mmio.c | 156 +++++++++++++++++++++++++++++++++++++++++++++
virt/kvm/coalesced_mmio.h | 23 +++++++
virt/kvm/kvm_main.c | 57 ++++++++++++++++
5 files changed, 269 insertions(+), 0 deletions(-)
create mode 100644 virt/kvm/coalesced_mmio.c
create mode 100644 virt/kvm/coalesced_mmio.h

diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index a281afe..1c908ac 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -173,6 +173,30 @@ struct kvm_run {
};
};

+/* for KVM_REGISTER_COALESCED_MMIO / KVM_UNREGISTER_COALESCED_MMIO */
+
+struct kvm_coalesced_mmio_zone {
+ __u64 addr;
+ __u32 size;
+ __u32 pad;
+};
+
+struct kvm_coalesced_mmio {
+ __u64 phys_addr;
+ __u32 len;
+ __u32 pad;
+ __u8 data[8];
+};
+
+struct kvm_coalesced_mmio_ring {
+ __u32 first, last;
+ struct kvm_coalesced_mmio coalesced_mmio[0];
+};
+
+#define KVM_COALESCED_MMIO_MAX \
+ ((PAGE_SIZE - sizeof(struct kvm_coalesced_mmio_ring)) / \
+ sizeof(struct kvm_coalesced_mmio))
+
/* for KVM_TRANSLATE */
struct kvm_translation {
/* in */
@@ -346,6 +370,7 @@ struct kvm_trace_rec {
#define KVM_CAP_NOP_IO_DELAY 12
#define KVM_CAP_PV_MMU 13
#define KVM_CAP_MP_STATE 14
+#define KVM_CAP_COALESCED_MMIO 15

/*
* ioctls for VM fds
@@ -371,6 +396,10 @@ struct kvm_trace_rec {
#define KVM_CREATE_PIT _IO(KVMIO, 0x64)
#define KVM_GET_PIT _IOWR(KVMIO, 0x65, struct kvm_pit_state)
#define KVM_SET_PIT _IOR(KVMIO, 0x66, struct kvm_pit_state)
+#define KVM_REGISTER_COALESCED_MMIO \
+ _IOW(KVMIO, 0x67, struct kvm_coalesced_mmio_zone)
+#define KVM_UNREGISTER_COALESCED_MMIO \
+ _IOW(KVMIO, 0x68, struct kvm_coalesced_mmio_zone)

/*
* ioctls for vcpu fds
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 499ff06..d220b49 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -117,6 +117,10 @@ struct kvm {
struct kvm_vm_stat stat;
struct kvm_arch arch;
atomic_t users_count;
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ struct kvm_coalesced_mmio_dev *coalesced_mmio_dev;
+ struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
+#endif
};

/* The guest did something we don't support. */
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
new file mode 100644
index 0000000..5ae620d
--- /dev/null
+++ b/virt/kvm/coalesced_mmio.c
@@ -0,0 +1,156 @@
+/*
+ * KVM coalesced MMIO
+ *
+ * Copyright (c) 2008 Bull S.A.S.
+ *
+ * Author: Laurent Vivier <[email protected]>
+ *
+ */
+
+#include "iodev.h"
+
+#include <linux/kvm_host.h>
+#include <linux/kvm.h>
+
+#include "coalesced_mmio.h"
+
+static int coalesced_mmio_in_range(struct kvm_io_device *this,
+ gpa_t addr, int len, int is_write)
+{
+ struct kvm_coalesced_mmio_dev *dev =
+ (struct kvm_coalesced_mmio_dev*)this->private;
+ struct kvm_coalesced_mmio_zone *zone;
+ int next;
+ int i;
+
+ if (!is_write)
+ return 0;
+
+ /* kvm->lock is taken by the caller and must be not released before
+ * dev.read/write
+ */
+
+ /* Are we able to batch it ? */
+
+ /* last is the first free entry
+ * check if we don't meet the first used entry
+ * there is always one unused entry in the buffer
+ */
+
+ next = (dev->kvm->coalesced_mmio_ring->last + 1) %
+ KVM_COALESCED_MMIO_MAX;
+ if (next == dev->kvm->coalesced_mmio_ring->first) {
+ /* full */
+ return 0;
+ }
+
+ /* is it in a batchable area ? */
+
+ for (i = 0; i < dev->nb_zones; i++) {
+ zone = &dev->zone[i];
+
+ /* (addr,len) is fully included in
+ * (zone->addr, zone->size)
+ */
+
+ if (zone->addr <= addr &&
+ addr + len <= zone->addr + zone->size)
+ return 1;
+ }
+ return 0;
+}
+
+static void coalesced_mmio_write(struct kvm_io_device *this,
+ gpa_t addr, int len, const void *val)
+{
+ struct kvm_coalesced_mmio_dev *dev =
+ (struct kvm_coalesced_mmio_dev*)this->private;
+ struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+
+ /* kvm->lock must be taken by caller before call to in_range()*/
+
+ /* copy data in first free entry of the ring */
+
+ ring->coalesced_mmio[ring->last].phys_addr = addr;
+ ring->coalesced_mmio[ring->last].len = len;
+ memcpy(ring->coalesced_mmio[ring->last].data, val, len);
+ smp_wmb();
+ ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
+}
+
+static void coalesced_mmio_destructor(struct kvm_io_device *this)
+{
+ kfree(this);
+}
+
+int kvm_coalesced_mmio_init(struct kvm *kvm)
+{
+ struct kvm_coalesced_mmio_dev *dev;
+
+ dev = kzalloc(sizeof(struct kvm_coalesced_mmio_dev), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+ dev->dev.write = coalesced_mmio_write;
+ dev->dev.in_range = coalesced_mmio_in_range;
+ dev->dev.destructor = coalesced_mmio_destructor;
+ dev->dev.private = dev;
+ dev->kvm = kvm;
+ kvm->coalesced_mmio_dev = dev;
+ kvm_io_bus_register_dev(&kvm->mmio_bus, &dev->dev);
+
+ return 0;
+}
+
+int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
+ struct kvm_coalesced_mmio_zone *zone)
+{
+ struct kvm_coalesced_mmio_dev *dev = kvm->coalesced_mmio_dev;
+
+ if (dev == NULL)
+ return -EINVAL;
+
+ mutex_lock(&kvm->lock);
+ if (dev->nb_zones >= KVM_COALESCED_MMIO_ZONE_MAX) {
+ mutex_unlock(&kvm->lock);
+ return -ENOBUFS;
+ }
+
+ dev->zone[dev->nb_zones] = *zone;
+ dev->nb_zones++;
+
+ mutex_unlock(&kvm->lock);
+ return 0;
+}
+
+int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ struct kvm_coalesced_mmio_zone *zone)
+{
+ int i;
+ struct kvm_coalesced_mmio_dev *dev = kvm->coalesced_mmio_dev;
+ struct kvm_coalesced_mmio_zone *z;
+
+ if (dev == NULL)
+ return -EINVAL;
+
+ mutex_lock(&kvm->lock);
+
+ i = dev->nb_zones;
+ while (i) {
+ z = &dev->zone[i - 1];
+
+ /* unregister all zones
+ * included in (zone->addr, zone->size)
+ */
+
+ if (zone->addr <= z->addr &&
+ z->addr + z->size <= zone->addr + zone->size) {
+ dev->nb_zones--;
+ *z = dev->zone[dev->nb_zones];
+ }
+ i--;
+ }
+
+ mutex_unlock(&kvm->lock);
+
+ return 0;
+}
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
new file mode 100644
index 0000000..5ac0ec6
--- /dev/null
+++ b/virt/kvm/coalesced_mmio.h
@@ -0,0 +1,23 @@
+/*
+ * KVM coalesced MMIO
+ *
+ * Copyright (c) 2008 Bull S.A.S.
+ *
+ * Author: Laurent Vivier <[email protected]>
+ *
+ */
+
+#define KVM_COALESCED_MMIO_ZONE_MAX 100
+
+struct kvm_coalesced_mmio_dev {
+ struct kvm_io_device dev;
+ struct kvm *kvm;
+ int nb_zones;
+ struct kvm_coalesced_mmio_zone zone[KVM_COALESCED_MMIO_ZONE_MAX];
+};
+
+int kvm_coalesced_mmio_init(struct kvm *kvm);
+int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
+ struct kvm_coalesced_mmio_zone *zone);
+int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ struct kvm_coalesced_mmio_zone *zone);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d602700..f9427e2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -47,6 +47,10 @@
#include <asm/uaccess.h>
#include <asm/pgtable.h>

+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+#include "coalesced_mmio.h"
+#endif
+
MODULE_AUTHOR("Qumranet");
MODULE_LICENSE("GPL");

@@ -185,10 +189,23 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_uninit);
static struct kvm *kvm_create_vm(void)
{
struct kvm *kvm = kvm_arch_create_vm();
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ struct page *page;
+#endif

if (IS_ERR(kvm))
goto out;

+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!page) {
+ kfree(kvm);
+ return ERR_PTR(-ENOMEM);
+ }
+ kvm->coalesced_mmio_ring =
+ (struct kvm_coalesced_mmio_ring *)page_address(page);
+#endif
+
kvm->mm = current->mm;
atomic_inc(&kvm->mm->mm_count);
spin_lock_init(&kvm->mmu_lock);
@@ -200,6 +217,9 @@ static struct kvm *kvm_create_vm(void)
spin_lock(&kvm_lock);
list_add(&kvm->vm_list, &vm_list);
spin_unlock(&kvm_lock);
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ kvm_coalesced_mmio_init(kvm);
+#endif
out:
return kvm;
}
@@ -242,6 +262,10 @@ static void kvm_destroy_vm(struct kvm *kvm)
spin_unlock(&kvm_lock);
kvm_io_bus_destroy(&kvm->pio_bus);
kvm_io_bus_destroy(&kvm->mmio_bus);
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ if (kvm->coalesced_mmio_ring != NULL)
+ free_page((unsigned long)kvm->coalesced_mmio_ring);
+#endif
kvm_arch_destroy_vm(kvm);
mmdrop(mm);
}
@@ -826,6 +850,10 @@ static int kvm_vcpu_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
else if (vmf->pgoff == KVM_PIO_PAGE_OFFSET)
page = virt_to_page(vcpu->arch.pio_data);
#endif
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ else if (vmf->pgoff == KVM_COALESCED_MMIO_PAGE_OFFSET)
+ page = virt_to_page(vcpu->kvm->coalesced_mmio_ring);
+#endif
else
return VM_FAULT_SIGBUS;
get_page(page);
@@ -1148,6 +1176,32 @@ static long kvm_vm_ioctl(struct file *filp,
goto out;
break;
}
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ case KVM_REGISTER_COALESCED_MMIO: {
+ struct kvm_coalesced_mmio_zone zone;
+ r = -EFAULT;
+ if (copy_from_user(&zone, argp, sizeof zone))
+ goto out;
+ r = -ENXIO;
+ r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone);
+ if (r)
+ goto out;
+ r = 0;
+ break;
+ }
+ case KVM_UNREGISTER_COALESCED_MMIO: {
+ struct kvm_coalesced_mmio_zone zone;
+ r = -EFAULT;
+ if (copy_from_user(&zone, argp, sizeof zone))
+ goto out;
+ r = -ENXIO;
+ r = kvm_vm_ioctl_unregister_coalesced_mmio(kvm, &zone);
+ if (r)
+ goto out;
+ r = 0;
+ break;
+ }
+#endif
default:
r = kvm_arch_vm_ioctl(filp, ioctl, arg);
}
@@ -1232,6 +1286,9 @@ static long kvm_dev_ioctl(struct file *filp,
#ifdef CONFIG_X86
r += PAGE_SIZE; /* pio data page */
#endif
+#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
+ r += PAGE_SIZE; /* coalesced mmio ring page */
+#endif
break;
case KVM_TRACE_ENABLE:
case KVM_TRACE_PAUSE:
--
1.5.6
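
For reference, a minimal sketch of how userspace might drain the ring
introduced above. The structure layout follows the definitions in this
series; drain_coalesced_mmio() and the handle() callback are
illustrative names, ring_pg is the page offset returned by
KVM_CHECK_EXTENSION(KVM_CAP_COALESCED_MMIO), and memory barriers are
reduced to a comment:

#include <linux/kvm.h>

/* illustrative consumer, not part of the series */
static void drain_coalesced_mmio(void *vcpu_map, long ring_pg,
                                 long page_size,
                                 void (*handle)(__u64 addr, __u32 len,
                                                const void *data))
{
        struct kvm_coalesced_mmio_ring *ring =
                (void *)((char *)vcpu_map + ring_pg * page_size);
        long max = (page_size - sizeof(*ring)) /
                   sizeof(struct kvm_coalesced_mmio);

        while (ring->first != ring->last) {
                struct kvm_coalesced_mmio *ent =
                        &ring->coalesced_mmio[ring->first];

                handle(ent->phys_addr, ent->len, ent->data);
                /* a real consumer pairs this with barriers against
                 * the kernel's smp_wmb() */
                ring->first = (ring->first + 1) % max;
        }
}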

2008-06-26 12:35:00

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 29/50] KVM: x86 emulator: simplify push imm8 emulation

Instead of fetching the data explicitly, use SrcImmByte.
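
The win is that the decode table, rather than each opcode handler, now
states where the immediate operand lives. A toy, stand-alone
illustration of the idea (the flag name and table are mine, not the
emulator's):

#include <stdint.h>
#include <stdio.h>

#define SRC_IMM_BYTE 0x1

static const uint8_t flags[256] = {
        [0x6a] = SRC_IMM_BYTE,          /* push imm8 */
};

int main(void)
{
        const uint8_t insn[] = { 0x6a, 0xfe };  /* push -2 */
        long imm = 0;
        int ip = 1;                             /* past the opcode */

        /* the generic decode loop fetches the immediate... */
        if (flags[insn[0]] & SRC_IMM_BYTE)
                imm = (int8_t)insn[ip++];       /* sign-extend */
        /* ...so the 0x6a handler only has to push it */
        printf("imm=%ld, next ip=%d\n", imm, ip);
        return 0;
}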

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 4 +---
1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 2a9db4d..4e037ea 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -121,7 +121,7 @@ static u16 opcode_table[256] = {
0, 0, 0, DstReg | SrcMem32 | ModRM | Mov /* movsxd (x86/64) */ ,
0, 0, 0, 0,
/* 0x68 - 0x6F */
- 0, 0, ImplicitOps | Mov | Stack, 0,
+ 0, 0, SrcImmByte | Mov | Stack, 0,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps, /* insb, insw/insd */
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps, /* outsb, outsw/outsd */
/* 0x70 - 0x77 */
@@ -1425,8 +1425,6 @@ special_insn:
c->dst.val = (s32) c->src.val;
break;
case 0x6a: /* push imm8 */
- c->src.val = 0L;
- c->src.val = insn_fetch(s8, 1, c->eip);
emulate_push(ctxt);
break;
case 0x6c: /* insb */
--
1.5.6

2008-06-26 12:35:55

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 48/50] KVM: x86 emulator: lazily evaluate segment registers

Instead of prefetching all segment bases before emulation, read them at the
last moment. Since most of them are unneeded, we save some cycles on
Intel machines where this is a bit expensive.
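
A condensed sketch of the scheme (the enum mirrors the VCPU_SREG_*
ordering, and hw_read_seg_base() stands in for the real
kvm_x86_ops->get_segment_base()):

/* in long mode the ES/CS/SS/DS bases are architecturally zero, so
 * only FS/GS ever need the expensive hardware read, and only when an
 * instruction actually references them */
enum { SEG_ES, SEG_CS, SEG_SS, SEG_DS, SEG_FS, SEG_GS };

extern unsigned long hw_read_seg_base(int seg);  /* stand-in */

static unsigned long seg_base(int long_mode, int seg)
{
        if (long_mode && seg < SEG_FS)
                return 0;
        return hw_read_seg_base(seg);
}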

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 21 --------
arch/x86/kvm/x86_emulate.c | 96 ++++++++++++++++++++++---------------
include/asm-x86/kvm_x86_emulate.h | 10 +---
3 files changed, 60 insertions(+), 67 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ccb4e1e..0fbc032 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2126,27 +2126,6 @@ int emulate_instruction(struct kvm_vcpu *vcpu,
? X86EMUL_MODE_PROT64 : cs_db
? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;

- if (vcpu->arch.emulate_ctxt.mode == X86EMUL_MODE_PROT64) {
- vcpu->arch.emulate_ctxt.cs_base = 0;
- vcpu->arch.emulate_ctxt.ds_base = 0;
- vcpu->arch.emulate_ctxt.es_base = 0;
- vcpu->arch.emulate_ctxt.ss_base = 0;
- } else {
- vcpu->arch.emulate_ctxt.cs_base =
- get_segment_base(vcpu, VCPU_SREG_CS);
- vcpu->arch.emulate_ctxt.ds_base =
- get_segment_base(vcpu, VCPU_SREG_DS);
- vcpu->arch.emulate_ctxt.es_base =
- get_segment_base(vcpu, VCPU_SREG_ES);
- vcpu->arch.emulate_ctxt.ss_base =
- get_segment_base(vcpu, VCPU_SREG_SS);
- }
-
- vcpu->arch.emulate_ctxt.gs_base =
- get_segment_base(vcpu, VCPU_SREG_GS);
- vcpu->arch.emulate_ctxt.fs_base =
- get_segment_base(vcpu, VCPU_SREG_FS);
-
r = x86_decode_insn(&vcpu->arch.emulate_ctxt, &emulate_ops);

/* Reject the instructions other than VMCALL/VMMCALL when
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 38926b7..18ca25c 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -522,6 +522,39 @@ static inline void jmp_rel(struct decode_cache *c, int rel)
register_address_increment(c, &c->eip, rel);
}

+static void set_seg_override(struct decode_cache *c, int seg)
+{
+ c->has_seg_override = true;
+ c->seg_override = seg;
+}
+
+static unsigned long seg_base(struct x86_emulate_ctxt *ctxt, int seg)
+{
+ if (ctxt->mode == X86EMUL_MODE_PROT64 && seg < VCPU_SREG_FS)
+ return 0;
+
+ return kvm_x86_ops->get_segment_base(ctxt->vcpu, seg);
+}
+
+static unsigned long seg_override_base(struct x86_emulate_ctxt *ctxt,
+ struct decode_cache *c)
+{
+ if (!c->has_seg_override)
+ return 0;
+
+ return seg_base(ctxt, c->seg_override);
+}
+
+static unsigned long es_base(struct x86_emulate_ctxt *ctxt)
+{
+ return seg_base(ctxt, VCPU_SREG_ES);
+}
+
+static unsigned long ss_base(struct x86_emulate_ctxt *ctxt)
+{
+ return seg_base(ctxt, VCPU_SREG_SS);
+}
+
static int do_fetch_insn_byte(struct x86_emulate_ctxt *ctxt,
struct x86_emulate_ops *ops,
unsigned long linear, u8 *dest)
@@ -735,8 +768,8 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
}
if (c->modrm_rm == 2 || c->modrm_rm == 3 ||
(c->modrm_rm == 6 && c->modrm_mod != 0))
- if (!c->override_base)
- c->override_base = &ctxt->ss_base;
+ if (!c->has_seg_override)
+ set_seg_override(c, VCPU_SREG_SS);
c->modrm_ea = (u16)c->modrm_ea;
} else {
/* 32/64-bit ModR/M decode. */
@@ -807,6 +840,7 @@ x86_decode_insn(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)

memset(c, 0, sizeof(struct decode_cache));
c->eip = ctxt->vcpu->arch.rip;
+ ctxt->cs_base = seg_base(ctxt, VCPU_SREG_CS);
memcpy(c->regs, ctxt->vcpu->arch.regs, sizeof c->regs);

switch (mode) {
@@ -845,23 +879,15 @@ x86_decode_insn(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)
/* switch between 2/4 bytes */
c->ad_bytes = def_ad_bytes ^ 6;
break;
+ case 0x26: /* ES override */
case 0x2e: /* CS override */
- c->override_base = &ctxt->cs_base;
- break;
+ case 0x36: /* SS override */
case 0x3e: /* DS override */
- c->override_base = &ctxt->ds_base;
- break;
- case 0x26: /* ES override */
- c->override_base = &ctxt->es_base;
+ set_seg_override(c, (c->b >> 3) & 3);
break;
case 0x64: /* FS override */
- c->override_base = &ctxt->fs_base;
- break;
case 0x65: /* GS override */
- c->override_base = &ctxt->gs_base;
- break;
- case 0x36: /* SS override */
- c->override_base = &ctxt->ss_base;
+ set_seg_override(c, c->b & 7);
break;
case 0x40 ... 0x4f: /* REX */
if (mode != X86EMUL_MODE_PROT64)
@@ -933,15 +959,11 @@ done_prefixes:
if (rc)
goto done;

- if (!c->override_base)
- c->override_base = &ctxt->ds_base;
- if (mode == X86EMUL_MODE_PROT64 &&
- c->override_base != &ctxt->fs_base &&
- c->override_base != &ctxt->gs_base)
- c->override_base = NULL;
+ if (!c->has_seg_override)
+ set_seg_override(c, VCPU_SREG_DS);

- if (c->override_base && !(!c->twobyte && c->b == 0x8d))
- c->modrm_ea += *c->override_base;
+ if (!(!c->twobyte && c->b == 0x8d))
+ c->modrm_ea += seg_override_base(ctxt, c);

if (c->ad_bytes != 8)
c->modrm_ea = (u32)c->modrm_ea;
@@ -1043,7 +1065,7 @@ static inline void emulate_push(struct x86_emulate_ctxt *ctxt)
c->dst.bytes = c->op_bytes;
c->dst.val = c->src.val;
register_address_increment(c, &c->regs[VCPU_REGS_RSP], -c->op_bytes);
- c->dst.ptr = (void *) register_address(c, ctxt->ss_base,
+ c->dst.ptr = (void *) register_address(c, ss_base(ctxt),
c->regs[VCPU_REGS_RSP]);
}

@@ -1053,7 +1075,7 @@ static inline int emulate_grp1a(struct x86_emulate_ctxt *ctxt,
struct decode_cache *c = &ctxt->decode;
int rc;

- rc = ops->read_std(register_address(c, ctxt->ss_base,
+ rc = ops->read_std(register_address(c, ss_base(ctxt),
c->regs[VCPU_REGS_RSP]),
&c->dst.val, c->dst.bytes, ctxt->vcpu);
if (rc != 0)
@@ -1375,11 +1397,11 @@ special_insn:
register_address_increment(c, &c->regs[VCPU_REGS_RSP],
-c->op_bytes);
c->dst.ptr = (void *) register_address(
- c, ctxt->ss_base, c->regs[VCPU_REGS_RSP]);
+ c, ss_base(ctxt), c->regs[VCPU_REGS_RSP]);
break;
case 0x58 ... 0x5f: /* pop reg */
pop_instruction:
- if ((rc = ops->read_std(register_address(c, ctxt->ss_base,
+ if ((rc = ops->read_std(register_address(c, ss_base(ctxt),
c->regs[VCPU_REGS_RSP]), c->dst.ptr,
c->op_bytes, ctxt->vcpu)) != 0)
goto done;
@@ -1405,7 +1427,7 @@ special_insn:
c->rep_prefix ?
address_mask(c, c->regs[VCPU_REGS_RCX]) : 1,
(ctxt->eflags & EFLG_DF),
- register_address(c, ctxt->es_base,
+ register_address(c, es_base(ctxt),
c->regs[VCPU_REGS_RDI]),
c->rep_prefix,
c->regs[VCPU_REGS_RDX]) == 0) {
@@ -1421,9 +1443,8 @@ special_insn:
c->rep_prefix ?
address_mask(c, c->regs[VCPU_REGS_RCX]) : 1,
(ctxt->eflags & EFLG_DF),
- register_address(c, c->override_base ?
- *c->override_base :
- ctxt->ds_base,
+ register_address(c,
+ seg_override_base(ctxt, c),
c->regs[VCPU_REGS_RSI]),
c->rep_prefix,
c->regs[VCPU_REGS_RDX]) == 0) {
@@ -1559,11 +1580,10 @@ special_insn:
c->dst.type = OP_MEM;
c->dst.bytes = (c->d & ByteOp) ? 1 : c->op_bytes;
c->dst.ptr = (unsigned long *)register_address(c,
- ctxt->es_base,
+ es_base(ctxt),
c->regs[VCPU_REGS_RDI]);
if ((rc = ops->read_emulated(register_address(c,
- c->override_base ? *c->override_base :
- ctxt->ds_base,
+ seg_override_base(ctxt, c),
c->regs[VCPU_REGS_RSI]),
&c->dst.val,
c->dst.bytes, ctxt->vcpu)) != 0)
@@ -1579,8 +1599,7 @@ special_insn:
c->src.type = OP_NONE; /* Disable writeback. */
c->src.bytes = (c->d & ByteOp) ? 1 : c->op_bytes;
c->src.ptr = (unsigned long *)register_address(c,
- c->override_base ? *c->override_base :
- ctxt->ds_base,
+ seg_override_base(ctxt, c),
c->regs[VCPU_REGS_RSI]);
if ((rc = ops->read_emulated((unsigned long)c->src.ptr,
&c->src.val,
@@ -1591,7 +1610,7 @@ special_insn:
c->dst.type = OP_NONE; /* Disable writeback. */
c->dst.bytes = (c->d & ByteOp) ? 1 : c->op_bytes;
c->dst.ptr = (unsigned long *)register_address(c,
- ctxt->es_base,
+ es_base(ctxt),
c->regs[VCPU_REGS_RDI]);
if ((rc = ops->read_emulated((unsigned long)c->dst.ptr,
&c->dst.val,
@@ -1615,7 +1634,7 @@ special_insn:
c->dst.type = OP_MEM;
c->dst.bytes = (c->d & ByteOp) ? 1 : c->op_bytes;
c->dst.ptr = (unsigned long *)register_address(c,
- ctxt->es_base,
+ es_base(ctxt),
c->regs[VCPU_REGS_RDI]);
c->dst.val = c->regs[VCPU_REGS_RAX];
register_address_increment(c, &c->regs[VCPU_REGS_RDI],
@@ -1627,8 +1646,7 @@ special_insn:
c->dst.bytes = (c->d & ByteOp) ? 1 : c->op_bytes;
c->dst.ptr = (unsigned long *)&c->regs[VCPU_REGS_RAX];
if ((rc = ops->read_emulated(register_address(c,
- c->override_base ? *c->override_base :
- ctxt->ds_base,
+ seg_override_base(ctxt, c),
c->regs[VCPU_REGS_RSI]),
&c->dst.val,
c->dst.bytes,
diff --git a/include/asm-x86/kvm_x86_emulate.h b/include/asm-x86/kvm_x86_emulate.h
index 9fda4b3..4e8c1e4 100644
--- a/include/asm-x86/kvm_x86_emulate.h
+++ b/include/asm-x86/kvm_x86_emulate.h
@@ -124,7 +124,8 @@ struct decode_cache {
u8 rex_prefix;
struct operand src;
struct operand dst;
- unsigned long *override_base;
+ bool has_seg_override;
+ u8 seg_override;
unsigned int d;
unsigned long regs[NR_VCPU_REGS];
unsigned long eip;
@@ -151,12 +152,7 @@ struct x86_emulate_ctxt {
/* Emulated execution mode, represented by an X86EMUL_MODE value. */
int mode;

- unsigned long cs_base;
- unsigned long ds_base;
- unsigned long es_base;
- unsigned long ss_base;
- unsigned long gs_base;
- unsigned long fs_base;
+ u32 cs_base;

/* decode cache */

--
1.5.6

2008-06-26 12:36:38

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 49/50] KVM: MMU: When debug is enabled, make it a run-time parameter

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/mmu.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5ebb278..5994645 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -66,7 +66,8 @@ static void kvm_mmu_audit(struct kvm_vcpu *vcpu, const char *msg) {}
#endif

#if defined(MMU_DEBUG) || defined(AUDIT)
-static int dbg = 1;
+static int dbg = 0;
+module_param(dbg, bool, 0644);
#endif

#ifndef MMU_DEBUG
--
1.5.6

2008-06-26 12:36:21

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 50/50] KVM: MMU: Fix printk format

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/mmu.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5994645..1fd8e3b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1116,7 +1116,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
mark_page_dirty(vcpu->kvm, gfn);

pgprintk("%s: setting spte %llx\n", __func__, spte);
- pgprintk("instantiating %s PTE (%s) at %d (%llx) addr %llx\n",
+ pgprintk("instantiating %s PTE (%s) at %ld (%llx) addr %p\n",
(spte&PT_PAGE_SIZE_MASK)? "2MB" : "4kB",
(spte&PT_WRITABLE_MASK)?"RW":"R", gfn, spte, shadow_pte);
set_shadow_pte(shadow_pte, spte);
--
1.5.6

2008-06-26 12:36:54

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 47/50] KVM: x86 emulator: avoid segment base adjust for lea

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 20b6044..38926b7 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -940,7 +940,7 @@ done_prefixes:
c->override_base != &ctxt->gs_base)
c->override_base = NULL;

- if (c->override_base)
+ if (c->override_base && !(!c->twobyte && c->b == 0x8d))
c->modrm_ea += *c->override_base;

if (c->ad_bytes != 8)
--
1.5.6

2008-06-26 12:37:29

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 46/50] KVM: x86 emulator: simplify rip relative decoding

rip-relative addressing is relative to the instruction pointer of the
*next* instruction; by deferring the address adjustment until after
decoding is complete, the decoder's eip already points past the current
instruction, and we remove the need to determine the instruction size
by hand.
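
A worked example (mine, not from the patch): 48 8b 05 10 00 00 00 is
mov rax, [rip+0x10]; the effective address is the rip of the following
instruction plus the displacement, which after a full decode collapses
to one addition:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* 48 8b 05 10 00 00 00 = mov rax, [rip+0x10], 7 bytes */
        uint64_t rip = 0x400000, len = 7, disp = 0x10;

        /* once decoding has consumed the whole instruction, the
         * decoder's eip already equals rip + len, so no per-operand
         * size bookkeeping is needed */
        uint64_t ea = (rip + len) + disp;

        printf("ea = %#llx\n", (unsigned long long)ea); /* 0x400017 */
        return 0;
}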

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 23 +++++------------------
include/asm-x86/kvm_x86_emulate.h | 1 +
2 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index c3a8231..20b6044 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -664,7 +664,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
{
struct decode_cache *c = &ctxt->decode;
u8 sib;
- int index_reg = 0, base_reg = 0, scale, rip_relative = 0;
+ int index_reg = 0, base_reg = 0, scale;
int rc = 0;

if (c->rex_prefix) {
@@ -754,7 +754,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
c->modrm_ea += c->regs[index_reg] << scale;
} else if ((c->modrm_rm & 7) == 5 && c->modrm_mod == 0) {
if (ctxt->mode == X86EMUL_MODE_PROT64)
- rip_relative = 1;
+ c->rip_relative = 1;
} else
c->modrm_ea += c->regs[c->modrm_rm];
switch (c->modrm_mod) {
@@ -770,22 +770,6 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
break;
}
}
- if (rip_relative) {
- c->modrm_ea += c->eip;
- switch (c->d & SrcMask) {
- case SrcImmByte:
- c->modrm_ea += 1;
- break;
- case SrcImm:
- if (c->d & ByteOp)
- c->modrm_ea += 1;
- else
- if (c->op_bytes == 8)
- c->modrm_ea += 4;
- else
- c->modrm_ea += c->op_bytes;
- }
- }
done:
return rc;
}
@@ -1044,6 +1028,9 @@ done_prefixes:
break;
}

+ if (c->rip_relative)
+ c->modrm_ea += c->eip;
+
done:
return (rc == X86EMUL_UNHANDLEABLE) ? -1 : 0;
}
diff --git a/include/asm-x86/kvm_x86_emulate.h b/include/asm-x86/kvm_x86_emulate.h
index b877bbd..9fda4b3 100644
--- a/include/asm-x86/kvm_x86_emulate.h
+++ b/include/asm-x86/kvm_x86_emulate.h
@@ -134,6 +134,7 @@ struct decode_cache {
u8 modrm_reg;
u8 modrm_rm;
u8 use_modrm_ea;
+ bool rip_relative;
unsigned long modrm_ea;
void *modrm_ptr;
unsigned long modrm_val;
--
1.5.6

2008-06-26 12:38:24

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 45/50] KVM: x86 emulator: simplify r/m decoding

Consolidate the duplicated code for the common case, where no special
handling is needed.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 17 ++++-------------
1 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index ca7ab24..c3a8231 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -740,9 +740,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
c->modrm_ea = (u16)c->modrm_ea;
} else {
/* 32/64-bit ModR/M decode. */
- switch (c->modrm_rm) {
- case 4:
- case 12:
+ if ((c->modrm_rm & 7) == 4) {
sib = insn_fetch(u8, 1, c->eip);
index_reg |= (sib >> 3) & 7;
base_reg |= sib & 7;
@@ -754,18 +752,11 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
c->modrm_ea += c->regs[base_reg];
if (index_reg != 4)
c->modrm_ea += c->regs[index_reg] << scale;
- break;
- case 5:
- case 13:
- if (c->modrm_mod != 0)
- c->modrm_ea += c->regs[c->modrm_rm];
- else if (ctxt->mode == X86EMUL_MODE_PROT64)
+ } else if ((c->modrm_rm & 7) == 5 && c->modrm_mod == 0) {
+ if (ctxt->mode == X86EMUL_MODE_PROT64)
rip_relative = 1;
- break;
- default:
+ } else
c->modrm_ea += c->regs[c->modrm_rm];
- break;
- }
switch (c->modrm_mod) {
case 0:
if (c->modrm_rm == 5)
--
1.5.6

2008-06-26 12:37:55

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 44/50] KVM: x86 emulator: simplify sib decoding

Instead of using sparse switches, use simpler if/else sequences.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 20 ++++----------------
1 files changed, 4 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 3721cfd..ca7ab24 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -748,24 +748,12 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
base_reg |= sib & 7;
scale = sib >> 6;

- switch (base_reg) {
- case 5:
- case 13:
- if (c->modrm_mod != 0)
- c->modrm_ea += c->regs[base_reg];
- else
- c->modrm_ea +=
- insn_fetch(s32, 4, c->eip);
- break;
- default:
+ if ((base_reg & 7) == 5 && c->modrm_mod == 0)
+ c->modrm_ea += insn_fetch(s32, 4, c->eip);
+ else
c->modrm_ea += c->regs[base_reg];
- }
- switch (index_reg) {
- case 4:
- break;
- default:
+ if (index_reg != 4)
c->modrm_ea += c->regs[index_reg] << scale;
- }
break;
case 5:
case 13:
--
1.5.6

2008-06-26 12:38:43

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 43/50] KVM: x86 emulator: handle undecoded rex.b with r/m = 5 in certain cases

x86_64 ignores rex.b in certain cases where the r/m field is 5: with
mod = 0, an r/m (or SIB base) of 101b selects disp32 or rip-relative
addressing regardless of rex.b, so register 13 must be treated the same
way as register 5 there.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 2808291..3721cfd 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -750,6 +750,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,

switch (base_reg) {
case 5:
+ case 13:
if (c->modrm_mod != 0)
c->modrm_ea += c->regs[base_reg];
else
@@ -767,6 +768,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
}
break;
case 5:
+ case 13:
if (c->modrm_mod != 0)
c->modrm_ea += c->regs[c->modrm_rm];
else if (ctxt->mode == X86EMUL_MODE_PROT64)
--
1.5.6

2008-06-26 12:38:59

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 42/50] KVM: x86 emulator: emulate nop and xchg reg, acc (opcodes 0x90 - 0x97)

From: Mohammed Gamal <[email protected]>

Signed-off-by: Mohammed Gamal <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 17 +++++++++++++++--
1 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index b90857c..2808291 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -140,8 +140,9 @@ static u16 opcode_table[256] = {
ByteOp | DstReg | SrcMem | ModRM | Mov, DstReg | SrcMem | ModRM | Mov,
DstMem | SrcReg | ModRM | Mov, ModRM | DstReg,
DstReg | SrcMem | ModRM | Mov, Group | Group1A,
- /* 0x90 - 0x9F */
- 0, 0, 0, 0, 0, 0, 0, 0,
+ /* 0x90 - 0x97 */
+ DstReg, DstReg, DstReg, DstReg, DstReg, DstReg, DstReg, DstReg,
+ /* 0x98 - 0x9F */
0, 0, 0, 0, ImplicitOps | Stack, ImplicitOps | Stack, 0, 0,
/* 0xA0 - 0xA7 */
ByteOp | DstReg | SrcMem | Mov | MemAbs, DstReg | SrcMem | Mov | MemAbs,
@@ -1493,6 +1494,7 @@ special_insn:
emulate_2op_SrcV("test", c->src, c->dst, ctxt->eflags);
break;
case 0x86 ... 0x87: /* xchg */
+ xchg:
/* Write back the register source. */
switch (c->dst.bytes) {
case 1:
@@ -1560,6 +1562,17 @@ special_insn:
if (rc != 0)
goto done;
break;
+ case 0x90: /* nop / xchg r8,rax */
+ if (!(c->rex_prefix & 1)) { /* nop */
+ c->dst.type = OP_NONE;
+ break;
+ }
+ case 0x91 ... 0x97: /* xchg reg,rax */
+ c->src.type = c->dst.type = OP_REG;
+ c->src.bytes = c->dst.bytes = c->op_bytes;
+ c->src.ptr = (unsigned long *) &c->regs[VCPU_REGS_RAX];
+ c->src.val = *(c->src.ptr);
+ goto xchg;
case 0x9c: /* pushf */
c->src.val = (unsigned long) ctxt->eflags;
emulate_push(ctxt);
--
1.5.6

2008-06-26 12:39:43

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 40/50] KVM: Support mixed endian machines

From: Tan, Li <[email protected]>

Currently kvmtrace is not portable. This prevents copying a trace
file from a big-endian target to a little-endian workstation for
analysis. With this patch, the kernel writes metadata containing a
magic number to the trace log, and 64-bit words are stored as a single
u64 instead of a pair of u32s.
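
On the reader side the metadata makes the check trivial; a hypothetical
sketch (the function and macro names are mine, the magic value is the
0x12345678 written below):

#include <stdint.h>
#include <stdio.h>

#define KVM_TRACE_MAGIC 0x12345678u

static int trace_needs_bswap(uint32_t magic)
{
        if (magic == KVM_TRACE_MAGIC)
                return 0;       /* writer had the same endianness */
        if (magic == __builtin_bswap32(KVM_TRACE_MAGIC))
                return 1;       /* byte-swap every field while parsing */
        return -1;              /* not a kvmtrace log */
}

int main(void)
{
        printf("%d\n", trace_needs_bswap(0x78563412));  /* prints 1 */
        return 0;
}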

Signed-off-by: Tan Li <[email protected]>
Acked-by: Jerone Young <[email protected]>
Acked-by: Hollis Blanchard <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
include/linux/kvm.h | 4 ++--
virt/kvm/kvm_trace.c | 18 ++++++++++++------
2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 1c908ac..0ea064c 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -318,14 +318,14 @@ struct kvm_trace_rec {
__u32 vcpu_id;
union {
struct {
- __u32 cycle_lo, cycle_hi;
+ __u64 cycle_u64;
__u32 extra_u32[KVM_TRC_EXTRA_MAX];
} cycle;
struct {
__u32 extra_u32[KVM_TRC_EXTRA_MAX];
} nocycle;
} u;
-};
+} __attribute__((packed));

#define KVMIO 0xAE

diff --git a/virt/kvm/kvm_trace.c b/virt/kvm/kvm_trace.c
index 0e49547..58141f3 100644
--- a/virt/kvm/kvm_trace.c
+++ b/virt/kvm/kvm_trace.c
@@ -72,11 +72,7 @@ static void kvm_add_trace(void *probe_private, void *call_data,
rec.cycle_in = p->cycle_in;

if (rec.cycle_in) {
- u64 cycle = 0;
-
- cycle = get_cycles();
- rec.u.cycle.cycle_lo = (u32)cycle;
- rec.u.cycle.cycle_hi = (u32)(cycle >> 32);
+ rec.u.cycle.cycle_u64 = get_cycles();

for (i = 0; i < rec.extra_u32; i++)
rec.u.cycle.extra_u32[i] = va_arg(*args, u32);
@@ -114,8 +110,18 @@ static int kvm_subbuf_start_callback(struct rchan_buf *buf, void *subbuf,
{
struct kvm_trace *kt;

- if (!relay_buf_full(buf))
+ if (!relay_buf_full(buf)) {
+ if (!prev_subbuf) {
+ /*
+ * executed only once, when the channel is opened:
+ * save the metadata as the first record
+ */
+ subbuf_start_reserve(buf, sizeof(u32));
+ *(u32 *)subbuf = 0x12345678;
+ }
+
return 1;
+ }

kt = buf->chan->private_data;
atomic_inc(&kt->lost_records);
--
1.5.6

2008-06-26 12:39:26

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 41/50] KVM: Use printk_ratelimit() instead of reporting emulation failures just once

Emulation failure reports are useful, so allow more than one over the
lifetime of the module.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 4 +---
1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 82196c3..ccb4e1e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2076,12 +2076,11 @@ int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr, unsigned long value)

void kvm_report_emulation_failure(struct kvm_vcpu *vcpu, const char *context)
{
- static int reported;
u8 opcodes[4];
unsigned long rip = vcpu->arch.rip;
unsigned long rip_linear;

- if (reported)
+ if (!printk_ratelimit())
return;

rip_linear = rip + get_segment_base(vcpu, VCPU_SREG_CS);
@@ -2090,7 +2089,6 @@ void kvm_report_emulation_failure(struct kvm_vcpu *vcpu, const char *context)

printk(KERN_ERR "emulation failed (%s) rip %lx %02x %02x %02x %02x\n",
context, rip, opcodes[0], opcodes[1], opcodes[2], opcodes[3]);
- reported = 1;
}
EXPORT_SYMBOL_GPL(kvm_report_emulation_failure);

--
1.5.6

2008-06-26 12:40:01

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 39/50] KVM: Do not calculate linear rip in emulation failure report

From: Glauber Costa <[email protected]>

If we're not going to do anything (the case in which a failure has
already been reported), we need not even bother calculating the linear rip.

Signed-off-by: Glauber Costa <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d568792..82196c3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2081,11 +2081,11 @@ void kvm_report_emulation_failure(struct kvm_vcpu *vcpu, const char *context)
unsigned long rip = vcpu->arch.rip;
unsigned long rip_linear;

- rip_linear = rip + get_segment_base(vcpu, VCPU_SREG_CS);
-
if (reported)
return;

+ rip_linear = rip + get_segment_base(vcpu, VCPU_SREG_CS);
+
emulator_read_std(rip_linear, (void *)opcodes, 4, vcpu);

printk(KERN_ERR "emulation failed (%s) rip %lx %02x %02x %02x %02x\n",
--
1.5.6

2008-06-26 12:40:32

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 38/50] KVM: only abort guest entry if timer count goes from 0->1

From: Marcelo Tosatti <[email protected]>

Only abort guest entry if the timer count went from 0->1, since for 1->2
or larger the bit will either be set already or a timer irq will have
been injected.

Using atomic_inc_and_test() for it also introduces an SMP barrier
into the LAPIC version (we thought one was unnecessary because of timer
migration, but the guest can be scheduled to a different pCPU between
exit and kvm_vcpu_block(), so there is the possibility of a race).

Noticed by Avi.

Signed-off-by: Marcelo Tosatti <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/i8254.c | 11 ++++-------
arch/x86/kvm/lapic.c | 4 ++--
2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 9e3391e..c0f7872 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -198,14 +198,11 @@ static int __pit_timer_fn(struct kvm_kpit_state *ps)
struct kvm_vcpu *vcpu0 = ps->pit->kvm->vcpus[0];
struct kvm_kpit_timer *pt = &ps->pit_timer;

- atomic_inc(&pt->pending);
- smp_mb__after_atomic_inc();
- if (vcpu0) {
+ if (!atomic_inc_and_test(&pt->pending))
set_bit(KVM_REQ_PENDING_TIMER, &vcpu0->requests);
- if (waitqueue_active(&vcpu0->wq)) {
- vcpu0->arch.mp_state = KVM_MP_STATE_RUNNABLE;
- wake_up_interruptible(&vcpu0->wq);
- }
+ if (vcpu0 && waitqueue_active(&vcpu0->wq)) {
+ vcpu0->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+ wake_up_interruptible(&vcpu0->wq);
}

pt->timer.expires = ktime_add_ns(pt->timer.expires, pt->period);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 180ba73..73f43de 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -945,8 +945,8 @@ static int __apic_timer_fn(struct kvm_lapic *apic)
int result = 0;
wait_queue_head_t *q = &apic->vcpu->wq;

- atomic_inc(&apic->timer.pending);
- set_bit(KVM_REQ_PENDING_TIMER, &apic->vcpu->requests);
+ if (!atomic_inc_and_test(&apic->timer.pending))
+ set_bit(KVM_REQ_PENDING_TIMER, &apic->vcpu->requests);
if (waitqueue_active(q)) {
apic->vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
wake_up_interruptible(q);
--
1.5.6

2008-06-26 12:40:52

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 37/50] KVM: Add coalesced MMIO support (ia64 part)

From: Laurent Vivier <[email protected]>

This patch enables coalesced MMIO for the ia64 architecture.
It defines KVM_COALESCED_MMIO_PAGE_OFFSET, reports the
KVM_CAP_COALESCED_MMIO capability, and enables the compilation of
coalesced_mmio.c.

[akpm: fix compile error on ia64]

Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/ia64/kvm/Makefile | 3 ++-
arch/ia64/kvm/kvm-ia64.c | 3 +++
include/asm-ia64/kvm_host.h | 1 +
3 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/ia64/kvm/Makefile b/arch/ia64/kvm/Makefile
index 112791d..bf22fb9 100644
--- a/arch/ia64/kvm/Makefile
+++ b/arch/ia64/kvm/Makefile
@@ -43,7 +43,8 @@ $(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s
EXTRA_CFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/
EXTRA_AFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/

-common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o)
+common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o \
+ coalesced_mmio.o)

kvm-objs := $(common-objs) kvm-ia64.o kvm_fw.o
obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index b59231b..a4cf4a2 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -187,6 +187,9 @@ int kvm_dev_ioctl_check_extension(long ext)

r = 1;
break;
+ case KVM_CAP_COALESCED_MMIO:
+ r = KVM_COALESCED_MMIO_PAGE_OFFSET;
+ break;
default:
r = 0;
}
diff --git a/include/asm-ia64/kvm_host.h b/include/asm-ia64/kvm_host.h
index 5c958b0..1efe513 100644
--- a/include/asm-ia64/kvm_host.h
+++ b/include/asm-ia64/kvm_host.h
@@ -38,6 +38,7 @@
/* memory slots that does not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 4

+#define KVM_COALESCED_MMIO_PAGE_OFFSET 1

/* define exit reasons from vmm to kvm*/
#define EXIT_REASON_VM_PANIC 0
--
1.5.6

2008-06-26 12:41:28

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 36/50] KVM: Add coalesced MMIO support (powerpc part)

From: Laurent Vivier <[email protected]>

This patch enables coalesced MMIO for the powerpc architecture.
It defines KVM_COALESCED_MMIO_PAGE_OFFSET, reports the
KVM_CAP_COALESCED_MMIO capability, and enables the compilation of
coalesced_mmio.c.

Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/powerpc/kvm/Makefile | 2 +-
arch/powerpc/kvm/powerpc.c | 3 +++
include/asm-powerpc/kvm_host.h | 2 ++
3 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index d0d358d..04e3449 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -4,7 +4,7 @@

EXTRA_CFLAGS += -Ivirt/kvm -Iarch/powerpc/kvm

-common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o)
+common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)

kvm-objs := $(common-objs) powerpc.o emulate.o booke_guest.o
obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 0513b35..b850d24 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -145,6 +145,9 @@ int kvm_dev_ioctl_check_extension(long ext)
case KVM_CAP_USER_MEMORY:
r = 1;
break;
+ case KVM_CAP_COALESCED_MMIO:
+ r = KVM_COALESCED_MMIO_PAGE_OFFSET;
+ break;
default:
r = 0;
break;
diff --git a/include/asm-powerpc/kvm_host.h b/include/asm-powerpc/kvm_host.h
index 81a69d7..2655e2a 100644
--- a/include/asm-powerpc/kvm_host.h
+++ b/include/asm-powerpc/kvm_host.h
@@ -31,6 +31,8 @@
/* memory slots that does not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 4

+#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+
/* We don't currently support large pages. */
#define KVM_PAGES_PER_HPAGE (1<<31)

--
1.5.6

2008-06-26 12:41:55

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 28/50] KVM: MMU: Optimize prefetch_page()

Instead of reading each pte individually, read 256 bytes' worth of ptes
at a time and process them as a batch. With 8-byte ptes this fetches 32
entries per guest read, so a 512-entry page table takes 16 reads instead
of 512.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/paging_tmpl.h | 28 +++++++++++++++-------------
1 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 934c7b6..4d91822 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -460,8 +460,9 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr)
static void FNAME(prefetch_page)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *sp)
{
- int i, offset = 0, r = 0;
- pt_element_t pt;
+ int i, j, offset, r;
+ pt_element_t pt[256 / sizeof(pt_element_t)];
+ gpa_t pte_gpa;

if (sp->role.metaphysical
|| (PTTYPE == 32 && sp->role.level > PT_PAGE_TABLE_LEVEL)) {
@@ -469,19 +470,20 @@ static void FNAME(prefetch_page)(struct kvm_vcpu *vcpu,
return;
}

- if (PTTYPE == 32)
+ pte_gpa = gfn_to_gpa(sp->gfn);
+ if (PTTYPE == 32) {
offset = sp->role.quadrant << PT64_LEVEL_BITS;
+ pte_gpa += offset * sizeof(pt_element_t);
+ }

- for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
- gpa_t pte_gpa = gfn_to_gpa(sp->gfn);
- pte_gpa += (i+offset) * sizeof(pt_element_t);
-
- r = kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &pt,
- sizeof(pt_element_t));
- if (r || is_present_pte(pt))
- sp->spt[i] = shadow_trap_nonpresent_pte;
- else
- sp->spt[i] = shadow_notrap_nonpresent_pte;
+ for (i = 0; i < PT64_ENT_PER_PAGE; i += ARRAY_SIZE(pt)) {
+ r = kvm_read_guest_atomic(vcpu->kvm, pte_gpa, pt, sizeof pt);
+ pte_gpa += ARRAY_SIZE(pt) * sizeof(pt_element_t);
+ for (j = 0; j < ARRAY_SIZE(pt); ++j)
+ if (r || is_present_pte(pt[j]))
+ sp->spt[i+j] = shadow_trap_nonpresent_pte;
+ else
+ sp->spt[i+j] = shadow_notrap_nonpresent_pte;
}
}

--
1.5.6

2008-06-26 12:42:19

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 22/50] KVM: Prefixes segment functions that will be exported with "kvm_"

From: Guillaume Thouvenin <[email protected]>

Prefix functions that will be exported with kvm_.
We also prefix set_segment() even though it remains
static, to keep the naming coherent.

Signed-off-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 78 ++++++++++++++++++++++----------------------
include/asm-x86/kvm_host.h | 4 ++
2 files changed, 43 insertions(+), 39 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2095e79..51810d8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3100,8 +3100,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
return 0;
}

-static void get_segment(struct kvm_vcpu *vcpu,
- struct kvm_segment *var, int seg)
+void kvm_get_segment(struct kvm_vcpu *vcpu,
+ struct kvm_segment *var, int seg)
{
kvm_x86_ops->get_segment(vcpu, var, seg);
}
@@ -3110,7 +3110,7 @@ void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
{
struct kvm_segment cs;

- get_segment(vcpu, &cs, VCPU_SREG_CS);
+ kvm_get_segment(vcpu, &cs, VCPU_SREG_CS);
*db = cs.db;
*l = cs.l;
}
@@ -3124,15 +3124,15 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,

vcpu_load(vcpu);

- get_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
- get_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
- get_segment(vcpu, &sregs->es, VCPU_SREG_ES);
- get_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
- get_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
- get_segment(vcpu, &sregs->ss, VCPU_SREG_SS);
+ kvm_get_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
+ kvm_get_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
+ kvm_get_segment(vcpu, &sregs->es, VCPU_SREG_ES);
+ kvm_get_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
+ kvm_get_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
+ kvm_get_segment(vcpu, &sregs->ss, VCPU_SREG_SS);

- get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
- get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
+ kvm_get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
+ kvm_get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);

kvm_x86_ops->get_idt(vcpu, &dt);
sregs->idt.limit = dt.limit;
@@ -3184,7 +3184,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
return 0;
}

-static void set_segment(struct kvm_vcpu *vcpu,
+static void kvm_set_segment(struct kvm_vcpu *vcpu,
struct kvm_segment *var, int seg)
{
kvm_x86_ops->set_segment(vcpu, var, seg);
@@ -3221,7 +3221,7 @@ static void get_segment_descritptor_dtable(struct kvm_vcpu *vcpu,
if (selector & 1 << 2) {
struct kvm_segment kvm_seg;

- get_segment(vcpu, &kvm_seg, VCPU_SREG_LDTR);
+ kvm_get_segment(vcpu, &kvm_seg, VCPU_SREG_LDTR);

if (kvm_seg.unusable)
dtable->limit = 0;
@@ -3327,7 +3327,7 @@ static u16 get_segment_selector(struct kvm_vcpu *vcpu, int seg)
{
struct kvm_segment kvm_seg;

- get_segment(vcpu, &kvm_seg, seg);
+ kvm_get_segment(vcpu, &kvm_seg, seg);
return kvm_seg.selector;
}

@@ -3343,8 +3343,8 @@ static int load_segment_descriptor_to_kvm_desct(struct kvm_vcpu *vcpu,
return 0;
}

-static int load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector,
- int type_bits, int seg)
+int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector,
+ int type_bits, int seg)
{
struct kvm_segment kvm_seg;

@@ -3357,7 +3357,7 @@ static int load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector,
if (!kvm_seg.s)
kvm_seg.unusable = 1;

- set_segment(vcpu, &kvm_seg, seg);
+ kvm_set_segment(vcpu, &kvm_seg, seg);
return 0;
}

@@ -3403,25 +3403,25 @@ static int load_state_from_tss32(struct kvm_vcpu *vcpu,
vcpu->arch.regs[VCPU_REGS_RSI] = tss->esi;
vcpu->arch.regs[VCPU_REGS_RDI] = tss->edi;

- if (load_segment_descriptor(vcpu, tss->ldt_selector, 0, VCPU_SREG_LDTR))
+ if (kvm_load_segment_descriptor(vcpu, tss->ldt_selector, 0, VCPU_SREG_LDTR))
return 1;

- if (load_segment_descriptor(vcpu, tss->es, 1, VCPU_SREG_ES))
+ if (kvm_load_segment_descriptor(vcpu, tss->es, 1, VCPU_SREG_ES))
return 1;

- if (load_segment_descriptor(vcpu, tss->cs, 9, VCPU_SREG_CS))
+ if (kvm_load_segment_descriptor(vcpu, tss->cs, 9, VCPU_SREG_CS))
return 1;

- if (load_segment_descriptor(vcpu, tss->ss, 1, VCPU_SREG_SS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ss, 1, VCPU_SREG_SS))
return 1;

- if (load_segment_descriptor(vcpu, tss->ds, 1, VCPU_SREG_DS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ds, 1, VCPU_SREG_DS))
return 1;

- if (load_segment_descriptor(vcpu, tss->fs, 1, VCPU_SREG_FS))
+ if (kvm_load_segment_descriptor(vcpu, tss->fs, 1, VCPU_SREG_FS))
return 1;

- if (load_segment_descriptor(vcpu, tss->gs, 1, VCPU_SREG_GS))
+ if (kvm_load_segment_descriptor(vcpu, tss->gs, 1, VCPU_SREG_GS))
return 1;
return 0;
}
@@ -3462,19 +3462,19 @@ static int load_state_from_tss16(struct kvm_vcpu *vcpu,
vcpu->arch.regs[VCPU_REGS_RSI] = tss->si;
vcpu->arch.regs[VCPU_REGS_RDI] = tss->di;

- if (load_segment_descriptor(vcpu, tss->ldt, 0, VCPU_SREG_LDTR))
+ if (kvm_load_segment_descriptor(vcpu, tss->ldt, 0, VCPU_SREG_LDTR))
return 1;

- if (load_segment_descriptor(vcpu, tss->es, 1, VCPU_SREG_ES))
+ if (kvm_load_segment_descriptor(vcpu, tss->es, 1, VCPU_SREG_ES))
return 1;

- if (load_segment_descriptor(vcpu, tss->cs, 9, VCPU_SREG_CS))
+ if (kvm_load_segment_descriptor(vcpu, tss->cs, 9, VCPU_SREG_CS))
return 1;

- if (load_segment_descriptor(vcpu, tss->ss, 1, VCPU_SREG_SS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ss, 1, VCPU_SREG_SS))
return 1;

- if (load_segment_descriptor(vcpu, tss->ds, 1, VCPU_SREG_DS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ds, 1, VCPU_SREG_DS))
return 1;
return 0;
}
@@ -3532,7 +3532,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason)
struct desc_struct nseg_desc;
int ret = 0;

- get_segment(vcpu, &tr_seg, VCPU_SREG_TR);
+ kvm_get_segment(vcpu, &tr_seg, VCPU_SREG_TR);

if (load_guest_segment_descriptor(vcpu, tss_selector, &nseg_desc))
goto out;
@@ -3591,7 +3591,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason)
kvm_x86_ops->set_cr0(vcpu, vcpu->arch.cr0 | X86_CR0_TS);
seg_desct_to_kvm_desct(&nseg_desc, tss_selector, &tr_seg);
tr_seg.type = 11;
- set_segment(vcpu, &tr_seg, VCPU_SREG_TR);
+ kvm_set_segment(vcpu, &tr_seg, VCPU_SREG_TR);
out:
kvm_x86_ops->decache_regs(vcpu);
return ret;
@@ -3658,15 +3658,15 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
}
}

- set_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
- set_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
- set_segment(vcpu, &sregs->es, VCPU_SREG_ES);
- set_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
- set_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
- set_segment(vcpu, &sregs->ss, VCPU_SREG_SS);
+ kvm_set_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
+ kvm_set_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
+ kvm_set_segment(vcpu, &sregs->es, VCPU_SREG_ES);
+ kvm_set_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
+ kvm_set_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
+ kvm_set_segment(vcpu, &sregs->ss, VCPU_SREG_SS);

- set_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
- set_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
+ kvm_set_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
+ kvm_set_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);

vcpu_put(vcpu);

diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index fc72bad..cd6a4bb 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -503,6 +503,10 @@ int emulator_get_dr(struct x86_emulate_ctxt *ctxt, int dr,
int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr,
unsigned long value);

+void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector,
+ int type_bits, int seg);
+
int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason);

void kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
--
1.5.6

2008-06-26 12:42:58

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 25/50] KVM: x86 emulator: adds support to mov r,imm (opcode 0xb8) instruction

From: Guillaume Thouvenin <[email protected]>

Add support for the mov r, imm (0xb8) instruction.

Signed-off-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 5 ++++-
1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 48b62cc..21d7ff6 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -152,7 +152,8 @@ static u16 opcode_table[256] = {
ByteOp | ImplicitOps | Mov | String, ImplicitOps | Mov | String,
ByteOp | ImplicitOps | String, ImplicitOps | String,
/* 0xB0 - 0xBF */
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ DstReg | SrcImm | Mov, 0, 0, 0, 0, 0, 0, 0,
/* 0xC0 - 0xC7 */
ByteOp | DstMem | SrcImm | ModRM, DstMem | SrcImmByte | ModRM,
0, ImplicitOps | Stack, 0, 0,
@@ -1624,6 +1625,8 @@ special_insn:
case 0xae ... 0xaf: /* scas */
DPRINTF("Urk! I don't handle SCAS.\n");
goto cannot_emulate;
+ case 0xb8: /* mov r, imm */
+ goto mov;
case 0xc0 ... 0xc1:
emulate_grp2(ctxt);
break;
--
1.5.6

2008-06-26 12:42:38

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 24/50] KVM: x86 emulator: add support for jmp far 0xea

From: Guillaume Thouvenin <[email protected]>

Add support for the jmp far (opcode 0xea) instruction. The ptr16:32
operand encodes the offset first, followed by the 16-bit selector, so
e.g. ea 78 56 34 12 08 00 is jmp 0x0008:0x12345678.

Signed-off-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 31 +++++++++++++++++++++++++++++--
1 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index a928aa6..48b62cc 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -168,7 +168,8 @@ static u16 opcode_table[256] = {
/* 0xE0 - 0xE7 */
0, 0, 0, 0, 0, 0, 0, 0,
/* 0xE8 - 0xEF */
- ImplicitOps | Stack, SrcImm|ImplicitOps, 0, SrcImmByte|ImplicitOps,
+ ImplicitOps | Stack, SrcImm | ImplicitOps,
+ ImplicitOps, SrcImmByte | ImplicitOps,
0, 0, 0, 0,
/* 0xF0 - 0xF7 */
0, 0, 0, 0,
@@ -1661,7 +1662,33 @@ special_insn:
break;
}
case 0xe9: /* jmp rel */
- case 0xeb: /* jmp rel short */
+ goto jmp;
+ case 0xea: /* jmp far */ {
+ uint32_t eip;
+ uint16_t sel;
+
+ switch (c->op_bytes) {
+ case 2:
+ eip = insn_fetch(u16, 2, c->eip);
+ break;
+ case 4:
+ eip = insn_fetch(u32, 4, c->eip);
+ break;
+ default:
+ DPRINTF("jmp far: Invalid op_bytes\n");
+ goto cannot_emulate;
+ }
+ sel = insn_fetch(u16, 2, c->eip);
+ if (kvm_load_segment_descriptor(ctxt->vcpu, sel, 9, VCPU_SREG_CS) < 0) {
+ DPRINTF("jmp far: Failed to load CS descriptor\n");
+ goto cannot_emulate;
+ }
+
+ c->eip = eip;
+ break;
+ }
+ case 0xeb:
+ jmp: /* jmp rel short */
jmp_rel(c, c->src.val);
c->dst.type = OP_NONE; /* Disable writeback. */
break;
--
1.5.6

2008-06-26 12:43:25

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 23/50] KVM: x86 emulator: Update c->dst.bytes in decode instruction

From: Guillaume Thouvenin <[email protected]>

Update c->dst.bytes during instruction decode instead of in the
instruction emulation itself. This is needed because if c->dst.bytes
is equal to 0, the instruction is not emulated.

Signed-off-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Laurent Vivier <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86_emulate.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 932f216..a928aa6 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -1049,6 +1049,7 @@ done_prefixes:
break;
case DstMem:
if ((c->d & ModRM) && c->modrm_mod == 3) {
+ c->dst.bytes = (c->d & ByteOp) ? 1 : c->op_bytes;
c->dst.type = OP_REG;
c->dst.val = c->dst.orig_val = c->modrm_val;
c->dst.ptr = c->modrm_ptr;
--
1.5.6

2008-06-26 12:43:43

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 21/50] KVM: MTRR support

Add emulation for the memory type range registers, needed by VMware
ESX 3.5 and by PCI device assignment.
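
With this patch a guest reading MTRRcap (MSR 0xfe) sees 0x508: eight
variable ranges plus fixed-range and write-combining support. A sketch
of a guest-side probe (it must run at CPL 0 inside the guest; the
rdmsr() helper is illustrative, not a kernel API):

#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdmsr(uint32_t msr)
{
        uint32_t lo, hi;

        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        uint64_t cap = rdmsr(0xfe);     /* MTRRcap */

        printf("VCNT=%llu FIX=%llu WC=%llu\n",
               (unsigned long long)(cap & 0xff),        /* 8 */
               (unsigned long long)((cap >> 8) & 1),    /* 1 */
               (unsigned long long)((cap >> 10) & 1));  /* 1 */
        return 0;
}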

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 52 ++++++++++++++++++++++++++++++++++++++++---
include/asm-x86/kvm_host.h | 3 ++
2 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cab5f92..2095e79 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -611,6 +611,38 @@ static void kvm_write_guest_time(struct kvm_vcpu *v)
mark_page_dirty(v->kvm, vcpu->time >> PAGE_SHIFT);
}

+static bool msr_mtrr_valid(unsigned msr)
+{
+ switch (msr) {
+ case 0x200 ... 0x200 + 2 * KVM_NR_VAR_MTRR - 1:
+ case MSR_MTRRfix64K_00000:
+ case MSR_MTRRfix16K_80000:
+ case MSR_MTRRfix16K_A0000:
+ case MSR_MTRRfix4K_C0000:
+ case MSR_MTRRfix4K_C8000:
+ case MSR_MTRRfix4K_D0000:
+ case MSR_MTRRfix4K_D8000:
+ case MSR_MTRRfix4K_E0000:
+ case MSR_MTRRfix4K_E8000:
+ case MSR_MTRRfix4K_F0000:
+ case MSR_MTRRfix4K_F8000:
+ case MSR_MTRRdefType:
+ case MSR_IA32_CR_PAT:
+ return true;
+ case 0x2f8:
+ return true;
+ }
+ return false;
+}
+
+static int set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
+{
+ if (!msr_mtrr_valid(msr))
+ return 1;
+
+ vcpu->arch.mtrr[msr - 0x200] = data;
+ return 0;
+}

int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
{
@@ -632,8 +664,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
break;
case MSR_IA32_UCODE_REV:
case MSR_IA32_UCODE_WRITE:
- case 0x200 ... 0x2ff: /* MTRRs */
break;
+ case 0x200 ... 0x2ff:
+ return set_msr_mtrr(vcpu, msr, data);
case MSR_IA32_APICBASE:
kvm_set_apic_base(vcpu, data);
break;
@@ -691,6 +724,15 @@ int kvm_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
return kvm_x86_ops->get_msr(vcpu, msr_index, pdata);
}

+static int get_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+{
+ if (!msr_mtrr_valid(msr))
+ return 1;
+
+ *pdata = vcpu->arch.mtrr[msr - 0x200];
+ return 0;
+}
+
int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
{
u64 data;
@@ -712,11 +754,13 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
case MSR_IA32_MC0_MISC+16:
case MSR_IA32_UCODE_REV:
case MSR_IA32_EBL_CR_POWERON:
- /* MTRR registers */
- case 0xfe:
- case 0x200 ... 0x2ff:
data = 0;
break;
+ case MSR_MTRRcap:
+ data = 0x500 | KVM_NR_VAR_MTRR;
+ break;
+ case 0x200 ... 0x2ff:
+ return get_msr_mtrr(vcpu, msr, pdata);
case 0xcd: /* fsb frequency */
data = 3;
break;
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 075598b..fc72bad 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -79,6 +79,7 @@
#define KVM_MIN_FREE_MMU_PAGES 5
#define KVM_REFILL_PAGES 25
#define KVM_MAX_CPUID_ENTRIES 40
+#define KVM_NR_VAR_MTRR 8

extern spinlock_t kvm_lock;
extern struct list_head vm_list;
@@ -290,6 +291,8 @@ struct kvm_vcpu_arch {
struct page *time_page;

bool nmi_pending;
+
+ u64 mtrr[0x100];
};

struct kvm_mem_alias {
--
1.5.6

2008-06-26 12:44:29

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 19/50] KVM: VMX: Enable NMI with in-kernel irqchip

From: Sheng Yang <[email protected]>

Signed-off-by: Sheng Yang <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/vmx.c | 124 +++++++++++++++++++++++++++++++++++++------
arch/x86/kvm/vmx.h | 12 ++++-
arch/x86/kvm/x86.c | 1 +
include/asm-x86/kvm_host.h | 1 +
4 files changed, 119 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 96377bf..332ed24 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -264,6 +264,11 @@ static inline int cpu_has_vmx_vpid(void)
SECONDARY_EXEC_ENABLE_VPID);
}

+static inline int cpu_has_virtual_nmis(void)
+{
+ return vmcs_config.pin_based_exec_ctrl & PIN_BASED_VIRTUAL_NMIS;
+}
+
static int __find_msr_index(struct vcpu_vmx *vmx, u32 msr)
{
int i;
@@ -1088,7 +1093,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
u32 _vmentry_control = 0;

min = PIN_BASED_EXT_INTR_MASK | PIN_BASED_NMI_EXITING;
- opt = 0;
+ opt = PIN_BASED_VIRTUAL_NMIS;
if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PINBASED_CTLS,
&_pin_based_exec_control) < 0)
return -EIO;
@@ -2130,6 +2135,13 @@ static void vmx_inject_irq(struct kvm_vcpu *vcpu, int irq)
irq | INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}

+static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
+{
+ vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
+ INTR_TYPE_NMI_INTR | INTR_INFO_VALID_MASK | NMI_VECTOR);
+ vcpu->arch.nmi_pending = 0;
+}
+
static void kvm_do_inject_irq(struct kvm_vcpu *vcpu)
{
int word_index = __ffs(vcpu->arch.irq_summary);
@@ -2653,6 +2665,19 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return 1;
}

+static int handle_nmi_window(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+{
+ u32 cpu_based_vm_exec_control;
+
+ /* clear pending NMI */
+ cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
+ cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
+ vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
+ ++vcpu->stat.nmi_window_exits;
+
+ return 1;
+}
+
/*
* The exit handlers return 1 if the exit was handled fully and guest execution
* may resume. Otherwise they set the kvm_run parameter to indicate what needs
@@ -2663,6 +2688,7 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu,
[EXIT_REASON_EXCEPTION_NMI] = handle_exception,
[EXIT_REASON_EXTERNAL_INTERRUPT] = handle_external_interrupt,
[EXIT_REASON_TRIPLE_FAULT] = handle_triple_fault,
+ [EXIT_REASON_NMI_WINDOW] = handle_nmi_window,
[EXIT_REASON_IO_INSTRUCTION] = handle_io,
[EXIT_REASON_CR_ACCESS] = handle_cr,
[EXIT_REASON_DR_ACCESS] = handle_dr,
@@ -2750,17 +2776,52 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
}

+static void enable_nmi_window(struct kvm_vcpu *vcpu)
+{
+ u32 cpu_based_vm_exec_control;
+
+ if (!cpu_has_virtual_nmis())
+ return;
+
+ cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
+ cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_NMI_PENDING;
+ vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
+}
+
+static int vmx_nmi_enabled(struct kvm_vcpu *vcpu)
+{
+ u32 guest_intr = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
+ return !(guest_intr & (GUEST_INTR_STATE_NMI |
+ GUEST_INTR_STATE_MOV_SS |
+ GUEST_INTR_STATE_STI));
+}
+
+static int vmx_irq_enabled(struct kvm_vcpu *vcpu)
+{
+ u32 guest_intr = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
+ return (!(guest_intr & (GUEST_INTR_STATE_MOV_SS |
+ GUEST_INTR_STATE_STI)) &&
+ (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF));
+}
+
+static void enable_intr_window(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->arch.nmi_pending)
+ enable_nmi_window(vcpu);
+ else if (kvm_cpu_has_interrupt(vcpu))
+ enable_irq_window(vcpu);
+}
+
static void vmx_intr_assist(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
- u32 idtv_info_field, intr_info_field;
- int has_ext_irq, interrupt_window_open;
+ u32 idtv_info_field, intr_info_field, exit_intr_info_field;
int vector;

update_tpr_threshold(vcpu);

- has_ext_irq = kvm_cpu_has_interrupt(vcpu);
intr_info_field = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);
+ exit_intr_info_field = vmcs_read32(VM_EXIT_INTR_INFO);
idtv_info_field = vmx->idt_vectoring_info;
if (intr_info_field & INTR_INFO_VALID_MASK) {
if (idtv_info_field & INTR_INFO_VALID_MASK) {
@@ -2768,8 +2829,7 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu)
if (printk_ratelimit())
printk(KERN_ERR "Fault when IDT_Vectoring\n");
}
- if (has_ext_irq)
- enable_irq_window(vcpu);
+ enable_intr_window(vcpu);
return;
}
if (unlikely(idtv_info_field & INTR_INFO_VALID_MASK)) {
@@ -2779,30 +2839,56 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu)
u8 vect = idtv_info_field & VECTORING_INFO_VECTOR_MASK;

vmx_inject_irq(vcpu, vect);
- if (unlikely(has_ext_irq))
- enable_irq_window(vcpu);
+ enable_intr_window(vcpu);
return;
}

KVMTRACE_1D(REDELIVER_EVT, vcpu, idtv_info_field, handler);

- vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, idtv_info_field);
+ /*
+ * SDM 3: 25.7.1.2
+ * Clear bit "block by NMI" before VM entry if a NMI delivery
+ * faulted.
+ */
+ if ((idtv_info_field & VECTORING_INFO_TYPE_MASK)
+ == INTR_TYPE_NMI_INTR && cpu_has_virtual_nmis())
+ vmcs_write32(GUEST_INTERRUPTIBILITY_INFO,
+ vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+ ~GUEST_INTR_STATE_NMI);
+
+ vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, idtv_info_field
+ & ~INTR_INFO_RESVD_BITS_MASK);
vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
vmcs_read32(VM_EXIT_INSTRUCTION_LEN));

if (unlikely(idtv_info_field & INTR_INFO_DELIVER_CODE_MASK))
vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
vmcs_read32(IDT_VECTORING_ERROR_CODE));
- if (unlikely(has_ext_irq))
- enable_irq_window(vcpu);
+ enable_intr_window(vcpu);
return;
}
- if (!has_ext_irq)
+ if (cpu_has_virtual_nmis()) {
+ /*
+ * SDM 3: 25.7.1.2
+ * Re-set bit "block by NMI" before VM entry if vmexit caused by
+ * a guest IRET fault.
+ */
+ if ((exit_intr_info_field & INTR_INFO_UNBLOCK_NMI) &&
+ (exit_intr_info_field & INTR_INFO_VECTOR_MASK) != 8)
+ vmcs_write32(GUEST_INTERRUPTIBILITY_INFO,
+ vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) |
+ GUEST_INTR_STATE_NMI);
+ else if (vcpu->arch.nmi_pending) {
+ if (vmx_nmi_enabled(vcpu))
+ vmx_inject_nmi(vcpu);
+ enable_intr_window(vcpu);
+ return;
+ }
+
+ }
+ if (!kvm_cpu_has_interrupt(vcpu))
return;
- interrupt_window_open =
- ((vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
- (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & 3) == 0);
- if (interrupt_window_open) {
+ if (vmx_irq_enabled(vcpu)) {
vector = kvm_cpu_get_interrupt(vcpu);
vmx_inject_irq(vcpu, vector);
kvm_timer_intr_post(vcpu, vector);
@@ -2963,7 +3049,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
fixup_rmode_irq(vmx);

vcpu->arch.interrupt_window_open =
- (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & 3) == 0;
+ (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+ (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)) == 0;

asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
vmx->launched = 1;
@@ -2971,7 +3058,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
intr_info = vmcs_read32(VM_EXIT_INTR_INFO);

/* We need to handle NMIs before interrupts are enabled */
- if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == 0x200) { /* nmi */
+ if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == 0x200 &&
+ (intr_info & INTR_INFO_VALID_MASK)) {
KVMTRACE_0D(NMI, vcpu, handler);
asm("int $2");
}
diff --git a/arch/x86/kvm/vmx.h b/arch/x86/kvm/vmx.h
index 79d94c6..425a134 100644
--- a/arch/x86/kvm/vmx.h
+++ b/arch/x86/kvm/vmx.h
@@ -40,6 +40,7 @@
#define CPU_BASED_CR8_LOAD_EXITING 0x00080000
#define CPU_BASED_CR8_STORE_EXITING 0x00100000
#define CPU_BASED_TPR_SHADOW 0x00200000
+#define CPU_BASED_VIRTUAL_NMI_PENDING 0x00400000
#define CPU_BASED_MOV_DR_EXITING 0x00800000
#define CPU_BASED_UNCOND_IO_EXITING 0x01000000
#define CPU_BASED_USE_IO_BITMAPS 0x02000000
@@ -216,7 +217,7 @@ enum vmcs_field {
#define EXIT_REASON_TRIPLE_FAULT 2

#define EXIT_REASON_PENDING_INTERRUPT 7
-
+#define EXIT_REASON_NMI_WINDOW 8
#define EXIT_REASON_TASK_SWITCH 9
#define EXIT_REASON_CPUID 10
#define EXIT_REASON_HLT 12
@@ -251,7 +252,9 @@ enum vmcs_field {
#define INTR_INFO_VECTOR_MASK 0xff /* 7:0 */
#define INTR_INFO_INTR_TYPE_MASK 0x700 /* 10:8 */
#define INTR_INFO_DELIVER_CODE_MASK 0x800 /* 11 */
+#define INTR_INFO_UNBLOCK_NMI 0x1000 /* 12 */
#define INTR_INFO_VALID_MASK 0x80000000 /* 31 */
+#define INTR_INFO_RESVD_BITS_MASK 0x7ffff000

#define VECTORING_INFO_VECTOR_MASK INTR_INFO_VECTOR_MASK
#define VECTORING_INFO_TYPE_MASK INTR_INFO_INTR_TYPE_MASK
@@ -259,9 +262,16 @@ enum vmcs_field {
#define VECTORING_INFO_VALID_MASK INTR_INFO_VALID_MASK

#define INTR_TYPE_EXT_INTR (0 << 8) /* external interrupt */
+#define INTR_TYPE_NMI_INTR (2 << 8) /* NMI */
#define INTR_TYPE_EXCEPTION (3 << 8) /* processor exception */
#define INTR_TYPE_SOFT_INTR (4 << 8) /* software interrupt */

+/* GUEST_INTERRUPTIBILITY_INFO flags. */
+#define GUEST_INTR_STATE_STI 0x00000001
+#define GUEST_INTR_STATE_MOV_SS 0x00000002
+#define GUEST_INTR_STATE_SMI 0x00000004
+#define GUEST_INTR_STATE_NMI 0x00000008
+
/*
* Exit Qualifications for MOV for Control Register Access
*/
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2c308f0..cab5f92 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -72,6 +72,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "mmio_exits", VCPU_STAT(mmio_exits) },
{ "signal_exits", VCPU_STAT(signal_exits) },
{ "irq_window", VCPU_STAT(irq_window_exits) },
+ { "nmi_window", VCPU_STAT(nmi_window_exits) },
{ "halt_exits", VCPU_STAT(halt_exits) },
{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
{ "hypercalls", VCPU_STAT(hypercalls) },
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index b666219..bacb1e2 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -347,6 +347,7 @@ struct kvm_vcpu_stat {
u32 mmio_exits;
u32 signal_exits;
u32 irq_window_exits;
+ u32 nmi_window_exits;
u32 halt_exits;
u32 halt_wakeup;
u32 request_irq_exits;
--
1.5.6

2008-06-26 12:43:58

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 20/50] KVM: Order segment register constants in the same way as cpu operand encoding

This can be used to simplify the x86 instruction decoder.
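
With ES=0, CS=1, SS=2, DS=3, FS=4 and GS=5, the enum now matches the
cpu's segment operand encoding, so the decoder can use the ModRM reg
field as a VCPU_SREG_* index directly. A minimal sketch (the helper name
is hypothetical):

	/* The reg field of the ModRM byte in mov r/m,sreg (0x8c) and
	 * mov sreg,r/m (0x8e) uses the cpu's segment encoding, which
	 * the reordered enum now matches. */
	static int decode_sreg(u8 modrm)
	{
		return (modrm >> 3) & 7;	/* usable as VCPU_SREG_* */
	}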

Signed-off-by: Avi Kivity <[email protected]>
---
include/asm-x86/kvm_host.h | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index bacb1e2..075598b 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -109,12 +109,12 @@ enum {
};

enum {
+ VCPU_SREG_ES,
VCPU_SREG_CS,
+ VCPU_SREG_SS,
VCPU_SREG_DS,
- VCPU_SREG_ES,
VCPU_SREG_FS,
VCPU_SREG_GS,
- VCPU_SREG_SS,
VCPU_SREG_TR,
VCPU_SREG_LDTR,
};
--
1.5.6

2008-06-26 12:44:48

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 16/50] KVM: Remove decache_vcpus_on_cpu() and related callbacks

Obsoleted by the vmx-specific per-cpu list.
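
The per-cpu list lets VMX itself track which vcpus may have a VMCS
cached on each cpu and clear them locally on hardware disable, so the
generic callback is no longer needed. Roughly (a sketch using the names
from this series; details may differ):

	static DEFINE_PER_CPU(struct list_head, vcpus_on_cpu);

	static void vmclear_local_vcpus(void)
	{
		int cpu = raw_smp_processor_id();
		struct vcpu_vmx *vmx, *n;

		list_for_each_entry_safe(vmx, n, &per_cpu(vcpus_on_cpu, cpu),
					 local_vcpus_link)
			__vcpu_clear(vmx);	/* VMCLEAR and unlink */
	}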

Signed-off-by: Avi Kivity <[email protected]>
---
arch/ia64/kvm/kvm-ia64.c | 8 --------
arch/powerpc/kvm/powerpc.c | 4 ----
arch/s390/kvm/kvm-s390.c | 4 ----
arch/x86/kvm/svm.c | 5 -----
arch/x86/kvm/vmx.c | 6 ------
arch/x86/kvm/x86.c | 8 --------
include/asm-x86/kvm_host.h | 1 -
include/linux/kvm_host.h | 3 ---
virt/kvm/kvm_main.c | 1 -
9 files changed, 0 insertions(+), 40 deletions(-)

diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 318b811..adb74f7 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1035,14 +1035,6 @@ static void kvm_free_vmm_area(void)
}
}

-/*
- * Make sure that a cpu that is being hot-unplugged does not have any vcpus
- * cached on it. Leave it as blank for IA64.
- */
-void decache_vcpus_on_cpu(int cpu)
-{
-}
-
static void vti_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 777e0f3..0513b35 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -240,10 +240,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
}

-void decache_vcpus_on_cpu(int cpu)
-{
-}
-
int kvm_arch_vcpu_ioctl_debug_guest(struct kvm_vcpu *vcpu,
struct kvm_debug_guest *dbg)
{
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 6558b09..4585c8a 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -79,10 +79,6 @@ void kvm_arch_hardware_disable(void *garbage)
{
}

-void decache_vcpus_on_cpu(int cpu)
-{
-}
-
int kvm_arch_hardware_setup(void)
{
return 0;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9390a31..238e8f3 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -709,10 +709,6 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
rdtscll(vcpu->arch.host_tsc);
}

-static void svm_vcpu_decache(struct kvm_vcpu *vcpu)
-{
-}
-
static void svm_cache_regs(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -1933,7 +1929,6 @@ static struct kvm_x86_ops svm_x86_ops = {
.prepare_guest_switch = svm_prepare_guest_switch,
.vcpu_load = svm_vcpu_load,
.vcpu_put = svm_vcpu_put,
- .vcpu_decache = svm_vcpu_decache,

.set_guest_debug = svm_guest_debug,
.get_msr = svm_get_msr,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b99d045..96377bf 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -692,11 +692,6 @@ static void vmx_fpu_deactivate(struct kvm_vcpu *vcpu)
update_exception_bitmap(vcpu);
}

-static void vmx_vcpu_decache(struct kvm_vcpu *vcpu)
-{
- vcpu_clear(to_vmx(vcpu));
-}
-
static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
{
return vmcs_readl(GUEST_RFLAGS);
@@ -3114,7 +3109,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
.prepare_guest_switch = vmx_save_host_state,
.vcpu_load = vmx_vcpu_load,
.vcpu_put = vmx_vcpu_put,
- .vcpu_decache = vmx_vcpu_decache,

.set_guest_debug = set_guest_debug,
.guest_debug_pre = kvm_guest_debug_pre,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 111f432..9156d63 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -817,14 +817,6 @@ out:
return r;
}

-/*
- * Make sure that a cpu that is being hot-unplugged does not have any vcpus
- * cached on it.
- */
-void decache_vcpus_on_cpu(int cpu)
-{
-}
-
int kvm_dev_ioctl_check_extension(long ext)
{
int r;
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 0df9d5f..4bcdc7d 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -380,7 +380,6 @@ struct kvm_x86_ops {
void (*prepare_guest_switch)(struct kvm_vcpu *vcpu);
void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
void (*vcpu_put)(struct kvm_vcpu *vcpu);
- void (*vcpu_decache)(struct kvm_vcpu *vcpu);

int (*set_guest_debug)(struct kvm_vcpu *vcpu,
struct kvm_debug_guest *dbg);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index de9d1df..865dcbc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -135,9 +135,6 @@ void kvm_vcpu_uninit(struct kvm_vcpu *vcpu);
void vcpu_load(struct kvm_vcpu *vcpu);
void vcpu_put(struct kvm_vcpu *vcpu);

-void decache_vcpus_on_cpu(int cpu);
-
-
int kvm_init(void *opaque, unsigned int vcpu_size,
struct module *module);
void kvm_exit(void);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 67af90c..e8f9fda 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1273,7 +1273,6 @@ static void hardware_disable(void *junk)
if (!cpu_isset(cpu, cpus_hardware_enabled))
return;
cpu_clear(cpu, cpus_hardware_enabled);
- decache_vcpus_on_cpu(cpu);
kvm_arch_hardware_disable(NULL);
}

--
1.5.6

2008-06-26 12:45:07

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 17/50] KVM: Remove unnecessary ->decache_regs() call

Since we aren't modifying any register, there's no need to decache
the register state.

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9156d63..28642c4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2297,7 +2297,6 @@ int kvm_emulate_pio(struct kvm_vcpu *vcpu, struct kvm_run *run, int in,

kvm_x86_ops->cache_regs(vcpu);
memcpy(vcpu->arch.pio_data, &vcpu->arch.regs[VCPU_REGS_RAX], 4);
- kvm_x86_ops->decache_regs(vcpu);

kvm_x86_ops->skip_emulated_instruction(vcpu);

--
1.5.6

2008-06-26 12:46:05

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 13/50] KVM: MMU: Fix false flooding when a pte points to page table

The KVM MMU tries to detect when a speculative pte update is not actually
used by demand fault, by checking the accessed bit of the shadow pte. If
the shadow pte has not been accessed, we deem that page table flooded and
remove the shadow page table, allowing further pte updates to proceed
without emulation.

However, if the pte itself points at a page table and is only used for
write operations, the accessed bit will never be set, since all accesses
happen through the emulator.

This is exactly what happens with kscand on old (2.4.x) HIGHMEM kernels.
The kernel points a kmap_atomic() pte at a page table, and then
proceeds with read-modify-write operations to look at the dirty and accessed
bits. We get a false flood trigger on the kmap ptes, which results in the
mmu spending all its time setting up and tearing down shadows.

Fix by setting the shadow accessed bit on emulated accesses.
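
For reference, the flood heuristic consults that accessed bit through a
helper along these lines (a sketch of the mmu.c logic; details may
differ):

	static bool last_updated_pte_accessed(struct kvm_vcpu *vcpu)
	{
		u64 *spte = vcpu->arch.last_pte_updated;

		/* A still-unaccessed speculative spte suggests flooding. */
		return spte && (*spte & shadow_accessed_mask);
	}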

Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/mmu.c | 17 ++++++++++++++++-
arch/x86/kvm/mmu.h | 3 ++-
include/asm-x86/kvm_host.h | 1 +
3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8e449db..53f1ed8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1122,8 +1122,10 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
else
kvm_release_pfn_clean(pfn);
}
- if (!ptwrite || !*ptwrite)
+ if (speculative) {
vcpu->arch.last_pte_updated = shadow_pte;
+ vcpu->arch.last_pte_gfn = gfn;
+ }
}

static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
@@ -1671,6 +1673,18 @@ static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
vcpu->arch.update_pte.pfn = pfn;
}

+static void kvm_mmu_access_page(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+ u64 *spte = vcpu->arch.last_pte_updated;
+
+ if (spte
+ && vcpu->arch.last_pte_gfn == gfn
+ && shadow_accessed_mask
+ && !(*spte & shadow_accessed_mask)
+ && is_shadow_present_pte(*spte))
+ set_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
+}
+
void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
const u8 *new, int bytes)
{
@@ -1694,6 +1708,7 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
mmu_guess_page_from_pte_write(vcpu, gpa, new, bytes);
spin_lock(&vcpu->kvm->mmu_lock);
+ kvm_mmu_access_page(vcpu, gfn);
kvm_mmu_free_some_pages(vcpu);
++vcpu->kvm->stat.mmu_pte_write;
kvm_mmu_audit(vcpu, "pre pte write");
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 1730757..258e5d5 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -15,7 +15,8 @@
#define PT_USER_MASK (1ULL << 2)
#define PT_PWT_MASK (1ULL << 3)
#define PT_PCD_MASK (1ULL << 4)
-#define PT_ACCESSED_MASK (1ULL << 5)
+#define PT_ACCESSED_SHIFT 5
+#define PT_ACCESSED_MASK (1ULL << PT_ACCESSED_SHIFT)
#define PT_DIRTY_MASK (1ULL << 6)
#define PT_PAGE_SIZE_MASK (1ULL << 7)
#define PT_PAT_MASK (1ULL << 7)
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 844f2a8..c2d066e 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -243,6 +243,7 @@ struct kvm_vcpu_arch {
gfn_t last_pt_write_gfn;
int last_pt_write_count;
u64 *last_pte_updated;
+ gfn_t last_pte_gfn;

struct {
gfn_t gfn; /* presumed gfn during guest pte update */
--
1.5.6

2008-06-26 12:45:36

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 18/50] KVM: IOAPIC/LAPIC: Enable NMI support

From: Sheng Yang <[email protected]>

[avi: fix ia64 build breakage]
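
With this in place, an NMI-mode interrupt from the ioapic/lapic only
latches vcpu->arch.nmi_pending; the actual injection happens on the next
VM entry. On VMX (see the vmx_intr_assist() changes earlier in this
series) the decision is roughly:

	/* Condensed sketch of the VM-entry decision in vmx_intr_assist();
	 * the real code also re-arms the window after injecting. */
	if (vcpu->arch.nmi_pending) {
		if (vmx_nmi_enabled(vcpu))	/* no NMI/MOV SS/STI blocking */
			vmx_inject_nmi(vcpu);	/* clears nmi_pending */
		else
			enable_nmi_window(vcpu); /* exit once unblocked */
	}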

Signed-off-by: Sheng Yang <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/lapic.c | 3 ++-
arch/x86/kvm/x86.c | 6 ++++++
include/asm-ia64/kvm_host.h | 2 ++
include/asm-x86/kvm_host.h | 4 ++++
virt/kvm/ioapic.c | 20 ++++++++++++++++++--
5 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index f9201fb..e48d193 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -356,8 +356,9 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
case APIC_DM_SMI:
printk(KERN_DEBUG "Ignoring guest SMI\n");
break;
+
case APIC_DM_NMI:
- printk(KERN_DEBUG "Ignoring guest NMI\n");
+ kvm_inject_nmi(vcpu);
break;

case APIC_DM_INIT:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 28642c4..2c308f0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -173,6 +173,12 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr,
kvm_queue_exception_e(vcpu, PF_VECTOR, error_code);
}

+void kvm_inject_nmi(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.nmi_pending = 1;
+}
+EXPORT_SYMBOL_GPL(kvm_inject_nmi);
+
void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
{
WARN_ON(vcpu->arch.exception.pending);
diff --git a/include/asm-ia64/kvm_host.h b/include/asm-ia64/kvm_host.h
index c082c20..5c958b0 100644
--- a/include/asm-ia64/kvm_host.h
+++ b/include/asm-ia64/kvm_host.h
@@ -521,4 +521,6 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu);
int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
void kvm_sal_emul(struct kvm_vcpu *vcpu);

+static inline void kvm_inject_nmi(struct kvm_vcpu *vcpu) {}
+
#endif
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 4bcdc7d..b666219 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -288,6 +288,8 @@ struct kvm_vcpu_arch {
unsigned int hv_clock_tsc_khz;
unsigned int time_offset;
struct page *time_page;
+
+ bool nmi_pending;
};

struct kvm_mem_alias {
@@ -515,6 +517,8 @@ void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long cr2,
u32 error_code);

+void kvm_inject_nmi(struct kvm_vcpu *vcpu);
+
void fx_init(struct kvm_vcpu *vcpu);

int emulator_read_std(unsigned long addr,
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index 1dcf9f3..cb8a20b 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -146,6 +146,11 @@ static int ioapic_inj_irq(struct kvm_ioapic *ioapic,
return kvm_apic_set_irq(vcpu, vector, trig_mode);
}

+static void ioapic_inj_nmi(struct kvm_vcpu *vcpu)
+{
+ kvm_inject_nmi(vcpu);
+}
+
static u32 ioapic_get_delivery_bitmask(struct kvm_ioapic *ioapic, u8 dest,
u8 dest_mode)
{
@@ -239,8 +244,19 @@ static int ioapic_deliver(struct kvm_ioapic *ioapic, int irq)
}
}
break;
-
- /* TODO: NMI */
+ case IOAPIC_NMI:
+ for (vcpu_id = 0; deliver_bitmask != 0; vcpu_id++) {
+ if (!(deliver_bitmask & (1 << vcpu_id)))
+ continue;
+ deliver_bitmask &= ~(1 << vcpu_id);
+ vcpu = ioapic->kvm->vcpus[vcpu_id];
+ if (vcpu)
+ ioapic_inj_nmi(vcpu);
+ else
+ ioapic_debug("NMI to vcpu %d failed\n",
+ vcpu_id);
+ }
+ break;
default:
printk(KERN_WARNING "Unsupported delivery mode %d\n",
delivery_mode);
--
1.5.6

2008-06-26 12:46:27

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 10/50] KVM: PIT: support mode 3

From: Aurelien Jarno <[email protected]>

The in-kernel PIT emulation ignores pending timers when operating in
mode 3, which Hurd, for example, uses.

This mode should output a square wave, high for (N+1)/2 counts and low
for (N-1)/2 counts. As we only care about the resulting interrupts, the
period is N, and mode 3 is the same as mode 2 with regard to
interrupts.
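
Concretely, both modes deliver one interrupt every N input-clock ticks; a
minimal sketch of the resulting timer period (assuming the standard
1193182 Hz PIT input clock, which kvm keeps in a constant):

	#include <linux/math64.h>
	#include <linux/time.h>

	/* One interrupt per 'count' ticks in mode 2 and mode 3 alike;
	 * only the output waveform differs. */
	static u64 pit_period_ns(u32 count)
	{
		return div_u64((u64)count * NSEC_PER_SEC, 1193182);
	}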

Signed-off-by: Aurelien Jarno <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/i8254.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 735ec9a..60074dc 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -308,6 +308,7 @@ static void pit_load_count(struct kvm *kvm, int channel, u32 val)
create_pit_timer(&ps->pit_timer, val, 0);
break;
case 2:
+ case 3:
create_pit_timer(&ps->pit_timer, val, 1);
break;
default:
--
1.5.6

2008-06-26 12:46:59

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 11/50] KVM: SVM: Fake MSR_K7 performance counters

From: Chris Lalancette <[email protected]>

Attached is a patch that fixes a guest crash when booting older Linux kernels.
The problem stems from the fact that we are currently emulating
MSR_K7_EVNTSEL[0-3], but not emulating MSR_K7_PERFCTR[0-3]. Because of this,
setup_k7_watchdog() in the Linux kernel receives a GPF when it attempts to
write into MSR_K7_PERFCTR, which causes an oops.

The patch fixes this by "fake"-emulating the appropriate MSRs, throwing
away the written data. This means the guest's NMI watchdog does not
actually work, but that is not such a big deal in a virtualized
environment.
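
For illustration, the guest-side access that used to fault is an ordinary
MSR write; roughly (the exact value programmed by setup_k7_watchdog() is
version-dependent, so the operand here is hypothetical):

	/* Older guests arm the counter to overflow after one watchdog
	 * period; with fake emulation the write is now discarded instead
	 * of raising #GP. */
	wrmsrl(MSR_K7_PERFCTR0, 0 - (u64)watchdog_period_ticks);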

When we get a write to one of these counters, we print a rate-limited
warning via printk_ratelimit(). I decided to print it for all writes,
even if the data is 0; it doesn't seem worthwhile to special-case
data == 0.

Tested by myself on a RHEL-4 guest, and Joerg Roedel on a Windows XP 64-bit
guest.

Signed-off-by: Chris Lalancette <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/svm.c | 15 +++++++++------
1 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 218949c..992ab71 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1312,16 +1312,19 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
case MSR_K7_EVNTSEL1:
case MSR_K7_EVNTSEL2:
case MSR_K7_EVNTSEL3:
+ case MSR_K7_PERFCTR0:
+ case MSR_K7_PERFCTR1:
+ case MSR_K7_PERFCTR2:
+ case MSR_K7_PERFCTR3:
/*
- * only support writing 0 to the performance counters for now
- * to make Windows happy. Should be replaced by a real
- * performance counter emulation later.
+ * Just discard all writes to the performance counters; this
+ * should keep both older linux and windows 64-bit guests
+ * happy
*/
- if (data != 0)
- goto unhandled;
+ pr_unimpl(vcpu, "unimplemented perfctr wrmsr: 0x%x data 0x%llx\n", ecx, data);
+
break;
default:
- unhandled:
return kvm_set_msr_common(vcpu, ecx, data);
}
return 0;
--
1.5.6

2008-06-26 12:46:44

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 05/50] KVM: SVM: implement dedicated INTR exit handler

From: Joerg Roedel <[email protected]>

With a dedicated exit handler for INTR intercepts, it is possible to
account for them using kvmtrace.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/svm.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8a2118b..0eac1a5 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1086,6 +1086,12 @@ static int nmi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
return 1;
}

+static int intr_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+ ++svm->vcpu.stat.irq_exits;
+ return 1;
+}
+
static int nop_on_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
return 1;
@@ -1369,7 +1375,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
[SVM_EXIT_EXCP_BASE + PF_VECTOR] = pf_interception,
[SVM_EXIT_EXCP_BASE + NM_VECTOR] = nm_interception,
[SVM_EXIT_EXCP_BASE + MC_VECTOR] = mc_interception,
- [SVM_EXIT_INTR] = nop_on_interception,
+ [SVM_EXIT_INTR] = intr_interception,
[SVM_EXIT_NMI] = nmi_interception,
[SVM_EXIT_SMI] = nop_on_interception,
[SVM_EXIT_INIT] = nop_on_interception,
--
1.5.6

2008-06-26 12:47:29

by Avi Kivity

[permalink] [raw]
Subject: [PATCH 06/50] KVM: add missing kvmtrace bits

From: Joerg Roedel <[email protected]>

This patch adds the missing kvmtrace bits to the generic x86 code that
SVM relies on for its instrumentation.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
---
arch/x86/kvm/x86.c | 25 ++++++++++++++++++++-----
1 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a85a29..cc5b0d3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2020,6 +2020,7 @@ int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address)

int emulate_clts(struct kvm_vcpu *vcpu)
{
+ KVMTRACE_0D(CLTS, vcpu, handler);
kvm_x86_ops->set_cr0(vcpu, vcpu->arch.cr0 & ~X86_CR0_TS);
return X86EMUL_CONTINUE;
}
@@ -2600,27 +2601,41 @@ void realmode_lmsw(struct kvm_vcpu *vcpu, unsigned long msw,

unsigned long realmode_get_cr(struct kvm_vcpu *vcpu, int cr)
{
+ unsigned long value;
+
kvm_x86_ops->decache_cr4_guest_bits(vcpu);
switch (cr) {
case 0:
- return vcpu->arch.cr0;
+ value = vcpu->arch.cr0;
+ break;
case 2:
- return vcpu->arch.cr2;
+ value = vcpu->arch.cr2;
+ break;
case 3:
- return vcpu->arch.cr3;
+ value = vcpu->arch.cr3;
+ break;
case 4:
- return vcpu->arch.cr4;
+ value = vcpu->arch.cr4;
+ break;
case 8:
- return kvm_get_cr8(vcpu);
+ value = kvm_get_cr8(vcpu);
+ break;
default:
vcpu_printf(vcpu, "%s: unexpected cr %u\n", __func__, cr);
return 0;
}
+ KVMTRACE_3D(CR_READ, vcpu, (u32)cr, (u32)value,
+ (u32)((u64)value >> 32), handler);
+
+ return value;
}

void realmode_set_cr(struct kvm_vcpu *vcpu, int cr, unsigned long val,
unsigned long *rflags)
{
+ KVMTRACE_3D(CR_WRITE, vcpu, (u32)cr, (u32)val,
+ (u32)((u64)val >> 32), handler);
+
switch (cr) {
case 0:
kvm_set_cr0(vcpu, mk_cr_64(vcpu->arch.cr0, val));
--
1.5.6