Changes since v5:
- Rebase to the latest kvm/queue [55371f1d0c01].
- "KVM: nVMX: hyper-v: Cache VP assist page in 'struct kvm_vcpu_hv'"
patch added. It is later used by both nSVM and nVMX to avoid
reading the VP assist page from atomic contexts.
Original description:
Currently, KVM handles HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} requests
by flushing the whole VPID, which is sub-optimal. This series introduces
the required mechanism to make handling of these requests more
fine-grained by flushing individual GVAs only (when requested). On this
foundation, the "Direct Virtual Flush" Hyper-V feature is implemented.
The feature allows L0 to handle Hyper-V TLB flush hypercalls directly,
without the need to reflect the exit to L1. This has at least two
benefits: the reflected vmexit and the consequent vmenter are avoided,
and L0 has precise information on whether the target vCPU is actually
running (and thus requires a kick).
Sean Christopherson (1):
KVM: x86: hyper-v: Add helper to read hypercall data for array
Vitaly Kuznetsov (37):
KVM: x86: Rename 'enable_direct_tlbflush' to 'enable_l2_tlb_flush'
KVM: x86: hyper-v: Resurrect dedicated KVM_REQ_HV_TLB_FLUSH flag
KVM: x86: hyper-v: Introduce TLB flush fifo
KVM: x86: hyper-v: Handle HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls
gently
KVM: x86: hyper-v: Expose support for extended gva ranges for flush
hypercalls
KVM: x86: Prepare kvm_hv_flush_tlb() to handle L2's GPAs
x86/hyperv: Introduce
HV_MAX_SPARSE_VCPU_BANKS/HV_VCPUS_PER_SPARSE_BANK constants
KVM: x86: hyper-v: Use
HV_MAX_SPARSE_VCPU_BANKS/HV_VCPUS_PER_SPARSE_BANK instead of raw
'64'
KVM: x86: hyper-v: Don't use sparse_set_to_vcpu_mask() in
kvm_hv_send_ipi()
KVM: x86: hyper-v: Create a separate fifo for L2 TLB flush
KVM: x86: hyper-v: Use preallocated buffer in 'struct kvm_vcpu_hv'
instead of on-stack 'sparse_banks'
KVM: nVMX: Keep track of hv_vm_id/hv_vp_id when eVMCS is in use
KVM: nSVM: Keep track of Hyper-V hv_vm_id/hv_vp_id
KVM: x86: Introduce .hv_inject_synthetic_vmexit_post_tlb_flush()
nested hook
KVM: x86: hyper-v: Introduce kvm_hv_is_tlb_flush_hcall()
KVM: x86: hyper-v: L2 TLB flush
KVM: x86: hyper-v: Introduce fast guest_hv_cpuid_has_l2_tlb_flush()
check
x86/hyperv: Fix 'struct hv_enlightened_vmcs' definition
KVM: nVMX: hyper-v: Cache VP assist page in 'struct kvm_vcpu_hv'
KVM: nVMX: hyper-v: Enable L2 TLB flush
KVM: nSVM: hyper-v: Enable L2 TLB flush
KVM: x86: Expose Hyper-V L2 TLB flush feature
KVM: selftests: Better XMM read/write helpers
KVM: selftests: Move HYPERV_LINUX_OS_ID definition to a common header
KVM: selftests: Move the function doing Hyper-V hypercall to a common
header
KVM: selftests: Hyper-V PV IPI selftest
KVM: selftests: Fill in vm->vpages_mapped bitmap in virt_map() too
KVM: selftests: Export vm_vaddr_unused_gap() to make it possible to
request unmapped ranges
KVM: selftests: Export _vm_get_page_table_entry()
KVM: selftests: Hyper-V PV TLB flush selftest
KVM: selftests: Sync 'struct hv_enlightened_vmcs' definition with
hyperv-tlfs.h
KVM: selftests: nVMX: Allocate Hyper-V partition assist page
KVM: selftests: nSVM: Allocate Hyper-V partition assist and VP assist
pages
KVM: selftests: Sync 'struct hv_vp_assist_page' definition with
hyperv-tlfs.h
KVM: selftests: evmcs_test: Introduce L2 TLB flush test
KVM: selftests: Move Hyper-V VP assist page enablement out of evmcs.h
KVM: selftests: hyperv_svm_test: Introduce L2 TLB flush test
arch/x86/include/asm/hyperv-tlfs.h | 6 +-
arch/x86/include/asm/kvm-x86-ops.h | 2 +-
arch/x86/include/asm/kvm_host.h | 43 +-
arch/x86/kvm/Makefile | 3 +-
arch/x86/kvm/hyperv.c | 334 +++++++--
arch/x86/kvm/hyperv.h | 53 +-
arch/x86/kvm/svm/hyperv.c | 18 +
arch/x86/kvm/svm/hyperv.h | 48 ++
arch/x86/kvm/svm/nested.c | 39 +-
arch/x86/kvm/svm/svm_onhyperv.c | 2 +-
arch/x86/kvm/svm/svm_onhyperv.h | 6 +-
arch/x86/kvm/trace.h | 21 +-
arch/x86/kvm/vmx/evmcs.c | 42 +-
arch/x86/kvm/vmx/evmcs.h | 13 +-
arch/x86/kvm/vmx/nested.c | 44 +-
arch/x86/kvm/vmx/vmx.c | 6 +-
arch/x86/kvm/x86.c | 18 +-
arch/x86/kvm/x86.h | 1 +
include/asm-generic/hyperv-tlfs.h | 5 +
include/asm-generic/mshyperv.h | 11 +-
tools/testing/selftests/kvm/.gitignore | 2 +
tools/testing/selftests/kvm/Makefile | 4 +-
.../selftests/kvm/include/kvm_util_base.h | 1 +
.../selftests/kvm/include/x86_64/evmcs.h | 40 +-
.../selftests/kvm/include/x86_64/hyperv.h | 62 ++
.../selftests/kvm/include/x86_64/processor.h | 71 +-
.../selftests/kvm/include/x86_64/svm_util.h | 10 +
.../selftests/kvm/include/x86_64/vmx.h | 4 +
tools/testing/selftests/kvm/lib/kvm_util.c | 7 +-
.../testing/selftests/kvm/lib/x86_64/hyperv.c | 21 +
.../selftests/kvm/lib/x86_64/processor.c | 3 +-
tools/testing/selftests/kvm/lib/x86_64/svm.c | 10 +
tools/testing/selftests/kvm/lib/x86_64/vmx.c | 7 +
.../testing/selftests/kvm/x86_64/evmcs_test.c | 43 +-
.../selftests/kvm/x86_64/hyperv_features.c | 22 +-
.../testing/selftests/kvm/x86_64/hyperv_ipi.c | 352 ++++++++++
.../selftests/kvm/x86_64/hyperv_svm_test.c | 54 +-
.../selftests/kvm/x86_64/hyperv_tlb_flush.c | 660 ++++++++++++++++++
38 files changed, 1887 insertions(+), 201 deletions(-)
create mode 100644 arch/x86/kvm/svm/hyperv.c
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/hyperv.c
create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
--
2.35.3
To allow flushing individual GVAs instead of always flushing the whole
VPID, a per-vCPU structure to pass the requests is needed. Use the
standard 'kfifo' to queue two types of entries: an individual GVA (GFN +
up to 4095 following GFNs encoded in the lower 12 bits) and 'flush all'.
The size of the fifo is arbitrarily set to '16'.
Note, kvm_hv_flush_tlb() only queues 'flush all' entries for now and
kvm_hv_vcpu_flush_tlb() doesn't actually read the fifo, it just resets
the queue before doing a full TLB flush, so the functional change is
very small but the infrastructure is prepared to handle individual GVA
flush requests.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 20 +++++++++++++++
arch/x86/kvm/hyperv.c | 45 +++++++++++++++++++++++++++++++++
arch/x86/kvm/hyperv.h | 16 ++++++++++++
arch/x86/kvm/x86.c | 8 +++---
arch/x86/kvm/x86.h | 1 +
5 files changed, 86 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7a6a6f47b603..cf3748be236d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
#include <linux/clocksource.h>
#include <linux/irqbypass.h>
#include <linux/hyperv.h>
+#include <linux/kfifo.h>
#include <asm/apic.h>
#include <asm/pvclock-abi.h>
@@ -600,6 +601,23 @@ struct kvm_vcpu_hv_synic {
bool dont_zero_synic_pages;
};
+/* The maximum number of entries on the TLB flush fifo. */
+#define KVM_HV_TLB_FLUSH_FIFO_SIZE (16)
+/*
+ * Note: the following 'magic' entry is made up by KVM to avoid putting
+ * anything besides GVA on the TLB flush fifo. It is theoretically possible
+ * to observe a request to flush 4095 PFNs starting from 0xfffffffffffff000
+ * which will look identical. KVM's action to 'flush everything' instead of
+ * flushing these particular addresses is, however, fully legitimate as
+ * flushing more than requested is always OK.
+ */
+#define KVM_HV_TLB_FLUSHALL_ENTRY ((u64)-1)
+
+struct kvm_vcpu_hv_tlb_flush_fifo {
+ spinlock_t write_lock;
+ DECLARE_KFIFO(entries, u64, KVM_HV_TLB_FLUSH_FIFO_SIZE);
+};
+
/* Hyper-V per vcpu emulation context */
struct kvm_vcpu_hv {
struct kvm_vcpu *vcpu;
@@ -619,6 +637,8 @@ struct kvm_vcpu_hv {
u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
} cpuid_cache;
+
+ struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo;
};
/* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index b402ad059eb9..c8b22bf67577 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -29,6 +29,7 @@
#include <linux/kvm_host.h>
#include <linux/highmem.h>
#include <linux/sched/cputime.h>
+#include <linux/spinlock.h>
#include <linux/eventfd.h>
#include <asm/apicdef.h>
@@ -954,6 +955,9 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
hv_vcpu->vp_index = vcpu->vcpu_idx;
+ INIT_KFIFO(hv_vcpu->tlb_flush_fifo.entries);
+ spin_lock_init(&hv_vcpu->tlb_flush_fifo.write_lock);
+
return 0;
}
@@ -1789,6 +1793,35 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
var_cnt * sizeof(*sparse_banks));
}
+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
+
+ if (!hv_vcpu)
+ return;
+
+ tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+
+ kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
+}
+
+void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+ kvm_vcpu_flush_tlb_guest(vcpu);
+
+ if (!hv_vcpu)
+ return;
+
+ tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+
+ kfifo_reset_out(&tlb_flush_fifo->entries);
+}
+
static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
{
struct kvm *kvm = vcpu->kvm;
@@ -1797,6 +1830,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
u64 valid_bank_mask;
u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
+ struct kvm_vcpu *v;
+ unsigned long i;
bool all_cpus;
/*
@@ -1876,10 +1911,20 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
* analyze it here, flush TLB regardless of the specified address space.
*/
if (all_cpus) {
+ kvm_for_each_vcpu(i, v, kvm)
+ hv_tlb_flush_enqueue(v);
+
kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
} else {
sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
+ for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
+ v = kvm_get_vcpu(kvm, i);
+ if (!v)
+ continue;
+ hv_tlb_flush_enqueue(v);
+ }
+
kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
}
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index da2737f2a956..e5b32266ff7d 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -147,4 +147,20 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
struct kvm_cpuid_entry2 __user *entries);
+
+static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+ if (!hv_vcpu || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
+ return;
+
+ tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+
+ kfifo_reset_out(&tlb_flush_fifo->entries);
+}
+void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
+
+
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 80cd3eb5e7de..805db43c2829 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3320,7 +3320,7 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
static_call(kvm_x86_flush_tlb_all)(vcpu);
}
-static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
+void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
{
++vcpu->stat.tlb_flush;
@@ -3355,14 +3355,14 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
{
if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) {
kvm_vcpu_flush_tlb_current(vcpu);
- kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+ kvm_hv_vcpu_empty_flush_tlb(vcpu);
}
if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
kvm_vcpu_flush_tlb_guest(vcpu);
- kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+ kvm_hv_vcpu_empty_flush_tlb(vcpu);
} else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
- kvm_vcpu_flush_tlb_guest(vcpu);
+ kvm_hv_vcpu_flush_tlb(vcpu);
}
}
EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 501b884b8cc4..9f7989f2c6d4 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -79,6 +79,7 @@ static inline unsigned int __shrink_ple_window(unsigned int val,
#define MSR_IA32_CR_PAT_DEFAULT 0x0007040600070406ULL
+void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu);
void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu);
int kvm_check_nested_events(struct kvm_vcpu *vcpu);
--
2.35.3
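The entry encoding can be illustrated with a minimal standalone C sketch
(userspace, illustrative only; the macro names merely mirror the patch).
It also shows why the ambiguity mentioned in the comment is harmless: a
request for 4095 extra pages starting at 0xfffffffffffff000 encodes to
the same value as the 'flush all' sentinel, and treating it as 'flush
everything' merely over-flushes.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define FLUSH_ALL_ENTRY	((uint64_t)-1)

/* Encode a request: page-aligned GVA plus up to 4095 extra pages. */
static uint64_t encode_entry(uint64_t gva, uint64_t extra_pages)
{
	return (gva & PAGE_MASK) | (extra_pages & ~PAGE_MASK);
}

static void decode_entry(uint64_t entry)
{
	if (entry == FLUSH_ALL_ENTRY) {
		printf("flush everything\n");
		return;
	}
	/* The lower 12 bits encode the number of *additional* pages. */
	printf("flush %llu page(s) at 0x%llx\n",
	       (unsigned long long)((entry & ~PAGE_MASK) + 1),
	       (unsigned long long)(entry & PAGE_MASK));
}

int main(void)
{
	decode_entry(encode_entry(0x7f0000001000ULL, 3));	  /* 4 pages */
	decode_entry(encode_entry(0xfffffffffffff000ULL, 4095)); /* == sentinel */
	return 0;
}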
Currently, HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls are handled
the exact same way as HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE{,EX}: by
flushing the whole VPID and this is sub-optimal. Switch to handling
these requests with 'flush_tlb_gva()' hooks instead. Use the newly
introduced TLB flush fifo to queue the requests.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 100 +++++++++++++++++++++++++++++++++++++-----
1 file changed, 88 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 762b0b699fdf..956072592e2f 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1806,32 +1806,82 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
sparse_banks, consumed_xmm_halves, offset);
}
-static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
+static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[],
+ int consumed_xmm_halves, gpa_t offset)
+{
+ return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt,
+ entries, consumed_xmm_halves, offset);
+}
+
+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
{
struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
+ unsigned long flags;
if (!hv_vcpu)
return;
tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
- kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
+ spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
+
+ /*
+ * All entries should fit on the fifo leaving one free for 'flush all'
+ * entry in case another request comes in. In case there's not enough
+ * space, just put 'flush all' entry there.
+ */
+ if (count && entries && count < kfifo_avail(&tlb_flush_fifo->entries)) {
+ WARN_ON(kfifo_in(&tlb_flush_fifo->entries, entries, count) != count);
+ goto out_unlock;
+ }
+
+ /*
+ * Note: full fifo always contains 'flush all' entry, no need to check the
+ * return value.
+ */
+ kfifo_in(&tlb_flush_fifo->entries, &entry, 1);
+
+out_unlock:
+ spin_unlock_irqrestore(&tlb_flush_fifo->write_lock, flags);
}
void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
+ int i, j, count;
+ gva_t gva;
- kvm_vcpu_flush_tlb_guest(vcpu);
-
- if (!hv_vcpu)
+ if (!tdp_enabled || !hv_vcpu) {
+ kvm_vcpu_flush_tlb_guest(vcpu);
return;
+ }
tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+ count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
+
+ for (i = 0; i < count; i++) {
+ if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
+ goto out_flush_all;
+
+ /*
+ * Lower 12 bits of 'address' encode the number of additional
+ * pages to flush.
+ */
+ gva = entries[i] & PAGE_MASK;
+ for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
+ static_call(kvm_x86_flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
+
+ ++vcpu->stat.tlb_flush;
+ }
+ return;
+
+out_flush_all:
+ kvm_vcpu_flush_tlb_guest(vcpu);
kfifo_reset_out(&tlb_flush_fifo->entries);
}
@@ -1841,11 +1891,21 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
struct hv_tlb_flush_ex flush_ex;
struct hv_tlb_flush flush;
DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+ /*
+ * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
+ * entries on the TLB flush fifo. The last entry, however, needs to be
+ * always left free for 'flush all' entry which gets placed when
+ * there is not enough space to put all the requested entries.
+ */
+ u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
+ u64 *tlb_flush_entries;
u64 valid_bank_mask;
u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
struct kvm_vcpu *v;
unsigned long i;
bool all_cpus;
+ int consumed_xmm_halves = 0;
+ gpa_t data_offset;
/*
* The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
@@ -1861,10 +1921,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
flush.address_space = hc->ingpa;
flush.flags = hc->outgpa;
flush.processor_mask = sse128_lo(hc->xmm[0]);
+ consumed_xmm_halves = 1;
} else {
if (unlikely(kvm_read_guest(kvm, hc->ingpa,
&flush, sizeof(flush))))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ data_offset = sizeof(flush);
}
trace_kvm_hv_flush_tlb(flush.processor_mask,
@@ -1888,10 +1950,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
flush_ex.flags = hc->outgpa;
memcpy(&flush_ex.hv_vp_set,
&hc->xmm[0], sizeof(hc->xmm[0]));
+ consumed_xmm_halves = 2;
} else {
if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
sizeof(flush_ex))))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ data_offset = sizeof(flush_ex);
}
trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
@@ -1907,25 +1971,37 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
return HV_STATUS_INVALID_HYPERCALL_INPUT;
if (all_cpus)
- goto do_flush;
+ goto read_flush_entries;
if (!hc->var_cnt)
goto ret_success;
- if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
- offsetof(struct hv_tlb_flush_ex,
- hv_vp_set.bank_contents)))
+ if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, consumed_xmm_halves,
+ data_offset))
+ return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
+ consumed_xmm_halves += hc->var_cnt;
+ }
+
+read_flush_entries:
+ if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
+ hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
+ hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
+ tlb_flush_entries = NULL;
+ } else {
+ if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries,
+ consumed_xmm_halves, data_offset))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ tlb_flush_entries = __tlb_flush_entries;
}
-do_flush:
/*
* vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
* analyze it here, flush TLB regardless of the specified address space.
*/
if (all_cpus) {
kvm_for_each_vcpu(i, v, kvm)
- hv_tlb_flush_enqueue(v);
+ hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
} else {
@@ -1935,7 +2011,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
v = kvm_get_vcpu(kvm, i);
if (!v)
continue;
- hv_tlb_flush_enqueue(v);
+ hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
}
kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
--
2.35.3
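The enqueue policy above can be sketched with a plain array standing in
for the kfifo (a userspace illustration under that assumption, not the
kernel code): precise entries are queued only while at least one slot
remains free, so a 'flush all' fallback can always be placed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FIFO_SIZE	16
#define FLUSH_ALL_ENTRY	((uint64_t)-1)

static uint64_t fifo[FIFO_SIZE];
static unsigned int fifo_len;

static void enqueue(const uint64_t *entries, unsigned int count)
{
	unsigned int avail = FIFO_SIZE - fifo_len;

	/* Queue precise entries only if one slot stays free for 'flush all'. */
	if (count && entries && count < avail) {
		memcpy(&fifo[fifo_len], entries, count * sizeof(*entries));
		fifo_len += count;
		return;
	}

	/* A fifo that became full this way already contains 'flush all'. */
	if (fifo_len < FIFO_SIZE)
		fifo[fifo_len++] = FLUSH_ALL_ENTRY;
}

int main(void)
{
	uint64_t precise[4] = { 0x1000, 0x2000, 0x3000, 0x4000 };

	enqueue(precise, 4);	/* fits: four precise entries are queued */
	enqueue(NULL, 0);	/* no entry list given: queue 'flush all' */
	printf("%u entries queued, last = 0x%llx\n", fifo_len,
	       (unsigned long long)fifo[fifo_len - 1]);
	return 0;
}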
It may not be clear where the magic '64' value used in
__cpumask_to_vpset() comes from. Moreover, '64' means both the maximum
sparse bank number and the number of vCPUs per bank. Add defines
to make things clear. These defines are also going to be used by KVM.
No functional change.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
include/asm-generic/hyperv-tlfs.h | 5 +++++
include/asm-generic/mshyperv.h | 11 ++++++-----
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h
index fdce7a4cfc6f..020ca9bdbb79 100644
--- a/include/asm-generic/hyperv-tlfs.h
+++ b/include/asm-generic/hyperv-tlfs.h
@@ -399,6 +399,11 @@ struct hv_vpset {
u64 bank_contents[];
} __packed;
+/* The maximum number of sparse vCPU banks which can be encoded by 'struct hv_vpset' */
+#define HV_MAX_SPARSE_VCPU_BANKS (64)
+/* The number of vCPUs in one sparse bank */
+#define HV_VCPUS_PER_SPARSE_BANK (64)
+
/* HvCallSendSyntheticClusterIpi hypercall */
struct hv_send_ipi {
u32 vector;
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index c05d2ce9b6cd..89a529093042 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -214,9 +214,10 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
{
int cpu, vcpu, vcpu_bank, vcpu_offset, nr_bank = 1;
int this_cpu = smp_processor_id();
+ int max_vcpu_bank = hv_max_vp_index / HV_VCPUS_PER_SPARSE_BANK;
- /* valid_bank_mask can represent up to 64 banks */
- if (hv_max_vp_index / 64 >= 64)
+ /* vpset.valid_bank_mask can represent up to HV_MAX_SPARSE_VCPU_BANKS banks */
+ if (max_vcpu_bank >= HV_MAX_SPARSE_VCPU_BANKS)
return 0;
/*
@@ -224,7 +225,7 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
* structs are not cleared between calls, we risk flushing unneeded
* vCPUs otherwise.
*/
- for (vcpu_bank = 0; vcpu_bank <= hv_max_vp_index / 64; vcpu_bank++)
+ for (vcpu_bank = 0; vcpu_bank <= max_vcpu_bank; vcpu_bank++)
vpset->bank_contents[vcpu_bank] = 0;
/*
@@ -236,8 +237,8 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
vcpu = hv_cpu_number_to_vp_number(cpu);
if (vcpu == VP_INVAL)
return -1;
- vcpu_bank = vcpu / 64;
- vcpu_offset = vcpu % 64;
+ vcpu_bank = vcpu / HV_VCPUS_PER_SPARSE_BANK;
+ vcpu_offset = vcpu % HV_VCPUS_PER_SPARSE_BANK;
__set_bit(vcpu_offset, (unsigned long *)
&vpset->bank_contents[vcpu_bank]);
if (vcpu_bank >= nr_bank)
--
2.35.3
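The arithmetic behind the new constants fits in a few lines of
illustrative userspace C: a VP index maps to bank 'vp / 64' and bit
'vp % 64', and 64 banks of 64 vCPUs bound the addressable VP space:

#include <stdio.h>

#define HV_MAX_SPARSE_VCPU_BANKS	64
#define HV_VCPUS_PER_SPARSE_BANK	64

int main(void)
{
	unsigned int vp_index = 130;

	printf("VP %u -> bank %u, bit %u\n", vp_index,
	       vp_index / HV_VCPUS_PER_SPARSE_BANK,
	       vp_index % HV_VCPUS_PER_SPARSE_BANK);
	/* 64 banks x 64 vCPUs = at most 4096 addressable VP indices. */
	printf("max addressable VPs: %u\n",
	       HV_MAX_SPARSE_VCPU_BANKS * HV_VCPUS_PER_SPARSE_BANK);
	return 0;
}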
From: Sean Christopherson <[email protected]>
Move the guts of kvm_get_sparse_vp_set() to a helper so that the code for
reading a guest-provided array can be reused in the future, e.g. for
getting a list of virtual addresses whose TLB entries need to be flushed.
Opportunistically swap the order of the data and XMM adjustment so that
the XMM/gpa offsets are bundled together.
No functional change intended.
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 53 +++++++++++++++++++++++++++----------------
1 file changed, 33 insertions(+), 20 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index c8b22bf67577..762b0b699fdf 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1759,38 +1759,51 @@ struct kvm_hv_hcall {
sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
};
-static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
- int consumed_xmm_halves,
- u64 *sparse_banks, gpa_t offset)
-{
- u16 var_cnt;
- int i;
- if (hc->var_cnt > 64)
- return -EINVAL;
-
- /* Ignore banks that cannot possibly contain a legal VP index. */
- var_cnt = min_t(u16, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS);
+static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
+ u16 orig_cnt, u16 cnt_cap, u64 *data,
+ int consumed_xmm_halves, gpa_t offset)
+{
+ /*
+ * Preserve the original count when ignoring entries via a "cap", KVM
+ * still needs to validate the guest input (though the non-XMM path
+ * punts on the checks).
+ */
+ u16 cnt = min(orig_cnt, cnt_cap);
+ int i, j;
if (hc->fast) {
/*
* Each XMM holds two sparse banks, but do not count halves that
* have already been consumed for hypercall parameters.
*/
- if (hc->var_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
+ if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
return HV_STATUS_INVALID_HYPERCALL_INPUT;
- for (i = 0; i < var_cnt; i++) {
- int j = i + consumed_xmm_halves;
+
+ for (i = 0; i < cnt; i++) {
+ j = i + consumed_xmm_halves;
if (j % 2)
- sparse_banks[i] = sse128_hi(hc->xmm[j / 2]);
+ data[i] = sse128_hi(hc->xmm[j / 2]);
else
- sparse_banks[i] = sse128_lo(hc->xmm[j / 2]);
+ data[i] = sse128_lo(hc->xmm[j / 2]);
}
return 0;
}
- return kvm_read_guest(kvm, hc->ingpa + offset, sparse_banks,
- var_cnt * sizeof(*sparse_banks));
+ return kvm_read_guest(kvm, hc->ingpa + offset, data,
+ cnt * sizeof(*data));
+}
+
+static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
+ u64 *sparse_banks, int consumed_xmm_halves,
+ gpa_t offset)
+{
+ if (hc->var_cnt > 64)
+ return -EINVAL;
+
+ /* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
+ return kvm_hv_get_hc_data(kvm, hc, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS,
+ sparse_banks, consumed_xmm_halves, offset);
}
static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
@@ -1899,7 +1912,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
if (!hc->var_cnt)
goto ret_success;
- if (kvm_get_sparse_vp_set(kvm, hc, 2, sparse_banks,
+ if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
offsetof(struct hv_tlb_flush_ex,
hv_vp_set.bank_contents)))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
@@ -2010,7 +2023,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
if (!hc->var_cnt)
goto ret_success;
- if (kvm_get_sparse_vp_set(kvm, hc, 1, sparse_banks,
+ if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 1,
offsetof(struct hv_send_ipi_ex,
vp_set.bank_contents)))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
--
2.35.3
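The XMM-half indexing kvm_hv_get_hc_data() performs can be sketched with
a plain two-member struct standing in for sse128_t (an assumption for
illustration; this is not the kernel type):

#include <stdint.h>
#include <stdio.h>

#define HV_HYPERCALL_MAX_XMM_REGISTERS	6

/* Stand-in for sse128_t: each XMM register carries two 64-bit halves. */
struct xmm_reg { uint64_t lo, hi; };

/*
 * Element i of the guest-provided array lives in half
 * (i + consumed_xmm_halves) of the XMM register file, low half first.
 */
static uint64_t get_hc_elem(const struct xmm_reg *xmm,
			    int consumed_xmm_halves, int i)
{
	int j = i + consumed_xmm_halves;

	return (j % 2) ? xmm[j / 2].hi : xmm[j / 2].lo;
}

int main(void)
{
	struct xmm_reg xmm[HV_HYPERCALL_MAX_XMM_REGISTERS] = {
		{ .lo = 100, .hi = 101 }, { .lo = 102, .hi = 103 },
	};

	/* Two halves were already consumed by fixed hypercall parameters. */
	printf("%llu %llu\n",
	       (unsigned long long)get_hc_elem(xmm, 2, 0),  /* xmm[1].lo = 102 */
	       (unsigned long long)get_hc_elem(xmm, 2, 1)); /* xmm[1].hi = 103 */
	return 0;
}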
Get rid of the on-stack allocation of vcpu_mask and optimize
kvm_hv_send_ipi() for a smaller number of vCPUs in the request. When
Hyper-V TLB flush is in use, HvCallSendSyntheticClusterIpi{,Ex} calls
are not commonly used to send IPIs to a large number of vCPUs (and are
rarely used in general).
Introduce hv_is_vp_in_sparse_set() to directly check whether the
specified VP_ID is present in the sparse vCPU set.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 37 ++++++++++++++++++++++++++-----------
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index c43355cb60bc..b347971b3924 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1747,6 +1747,25 @@ static void sparse_set_to_vcpu_mask(struct kvm *kvm, u64 *sparse_banks,
}
}
+static bool hv_is_vp_in_sparse_set(u32 vp_id, u64 valid_bank_mask, u64 sparse_banks[])
+{
+ int bank, sbank = 0;
+
+ if (!test_bit(vp_id / HV_VCPUS_PER_SPARSE_BANK,
+ (unsigned long *)&valid_bank_mask))
+ return false;
+
+ for_each_set_bit(bank, (unsigned long *)&valid_bank_mask,
+ KVM_HV_MAX_SPARSE_VCPU_SET_BITS) {
+ if (bank == vp_id / HV_VCPUS_PER_SPARSE_BANK)
+ break;
+ sbank++;
+ }
+
+ return test_bit(vp_id % HV_VCPUS_PER_SPARSE_BANK,
+ (unsigned long *)&sparse_banks[sbank]);
+}
+
struct kvm_hv_hcall {
u64 param;
u64 ingpa;
@@ -2029,8 +2048,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
((u64)hc->rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
}
-static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
- unsigned long *vcpu_bitmap)
+static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector,
+ u64 *sparse_banks, u64 valid_bank_mask)
{
struct kvm_lapic_irq irq = {
.delivery_mode = APIC_DM_FIXED,
@@ -2040,7 +2059,10 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
unsigned long i;
kvm_for_each_vcpu(i, vcpu, kvm) {
- if (vcpu_bitmap && !test_bit(i, vcpu_bitmap))
+ if (sparse_banks &&
+ !hv_is_vp_in_sparse_set(kvm_hv_get_vpindex(vcpu),
+ valid_bank_mask,
+ sparse_banks))
continue;
/* We fail only when APIC is disabled */
@@ -2053,7 +2075,6 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
struct kvm *kvm = vcpu->kvm;
struct hv_send_ipi_ex send_ipi_ex;
struct hv_send_ipi send_ipi;
- DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
unsigned long valid_bank_mask;
u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
u32 vector;
@@ -2115,13 +2136,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
- if (all_cpus) {
- kvm_send_ipi_to_many(kvm, vector, NULL);
- } else {
- sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
-
- kvm_send_ipi_to_many(kvm, vector, vcpu_mask);
- }
+ kvm_hv_send_ipi_to_many(kvm, vector, all_cpus ? NULL : sparse_banks, valid_bank_mask);
ret_success:
return HV_STATUS_SUCCESS;
--
2.35.3
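The bank walk in hv_is_vp_in_sparse_set() is equivalent to counting the
valid banks below the target bank; a minimal userspace sketch (the
kernel iterates with for_each_set_bit() where this sketch uses
__builtin_popcountll()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HV_VCPUS_PER_SPARSE_BANK	64

/*
 * sparse_banks[] only has entries for banks whose bit is set in
 * valid_bank_mask, so a VP's slot in the array is the number of valid
 * banks preceding its own bank.
 */
static bool vp_in_sparse_set(uint32_t vp_id, uint64_t valid_bank_mask,
			     const uint64_t *sparse_banks)
{
	uint32_t bank = vp_id / HV_VCPUS_PER_SPARSE_BANK;
	uint32_t bit = vp_id % HV_VCPUS_PER_SPARSE_BANK;
	int sbank;

	if (!(valid_bank_mask & (1ULL << bank)))
		return false;

	sbank = __builtin_popcountll(valid_bank_mask & ((1ULL << bank) - 1));

	return sparse_banks[sbank] & (1ULL << bit);
}

int main(void)
{
	/* Banks 0 and 2 are valid; bank 2 is sparse_banks[1]. */
	uint64_t sparse_banks[2] = { 0x1, 1ULL << 2 };

	printf("%d\n", vp_in_sparse_set(130, 0x5, sparse_banks)); /* 1 */
	printf("%d\n", vp_in_sparse_set(64, 0x5, sparse_banks));  /* 0 */
	return 0;
}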
It may not be clear where the '64' limit for the maximum sparse
bank number comes from; use the HV_MAX_SPARSE_VCPU_BANKS define instead.
Use HV_VCPUS_PER_SPARSE_BANK in the definition of
KVM_HV_MAX_SPARSE_VCPU_SET_BITS. Opportunistically adjust the comment
around BUILD_BUG_ON().
No functional change.
Reviewed-by: Maxim Levitsky <[email protected]>
Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 992972e0f5de..c43355cb60bc 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -43,7 +43,7 @@
/* "Hv#1" signature */
#define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
-#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, 64)
+#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, HV_VCPUS_PER_SPARSE_BANK)
static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
bool vcpu_kick);
@@ -1799,7 +1799,7 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
u64 *sparse_banks, int consumed_xmm_halves,
gpa_t offset)
{
- if (hc->var_cnt > 64)
+ if (hc->var_cnt > HV_MAX_SPARSE_VCPU_BANKS)
return -EINVAL;
/* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
@@ -1909,12 +1909,11 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
gpa_t data_offset;
/*
- * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
- * valid mask is a u64. Fail the build if KVM's max allowed number of
- * vCPUs (>4096) would exceed this limit, KVM will additional changes
- * for Hyper-V support to avoid setting the guest up to fail.
+ * The Hyper-V TLFS doesn't allow more than HV_MAX_SPARSE_VCPU_BANKS
+ * sparse banks. Fail the build if KVM's max allowed number of
+ * vCPUs (>4096) exceeds this limit.
*/
- BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > 64);
+ BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > HV_MAX_SPARSE_VCPU_BANKS);
if (!hc->fast && is_guest_mode(vcpu)) {
hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa, 0, NULL);
--
2.35.3
To handle L2 TLB flush requests, KVM needs to translate the specified
L2 GPA to L1 GPA to read hypercall arguments from there.
No functional change as KVM doesn't handle VMCALL/VMMCALL from L2 yet.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index d6abc5265f55..992972e0f5de 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -23,6 +23,7 @@
#include "ioapic.h"
#include "cpuid.h"
#include "hyperv.h"
+#include "mmu.h"
#include "xen.h"
#include <linux/cpu.h>
@@ -1915,6 +1916,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
*/
BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > 64);
+ if (!hc->fast && is_guest_mode(vcpu)) {
+ hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa, 0, NULL);
+ if (unlikely(hc->ingpa == UNMAPPED_GVA))
+ return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ }
+
if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE) {
if (hc->fast) {
--
2.35.3
The extended GVA ranges support bit indicates whether the lower 12
bits of a GVA can be used to specify up to 4095 additional consecutive
GVAs to flush. This is somewhat loosely described in the TLFS.
Previously, KVM was handling HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX}
requests by flushing the whole VPID, so technically, extended GVA
ranges were already supported. As such requests are handled more
gently now, advertising support for extended ranges starts making
sense to reduce the size of TLB flush requests.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/hyperv-tlfs.h | 2 ++
arch/x86/kvm/hyperv.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 0a9407dc0859..5225a85c08c3 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -61,6 +61,8 @@
#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE BIT(10)
/* Support for debug MSRs available */
#define HV_FEATURE_DEBUG_MSRS_AVAILABLE BIT(11)
+/* Support for extended gva ranges for flush hypercalls available */
+#define HV_FEATURE_EXT_GVA_RANGES_FLUSH BIT(14)
/*
* Support for returning hypercall output block via XMM
* registers is available
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 956072592e2f..d6abc5265f55 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2642,6 +2642,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
ent->ebx |= HV_DEBUGGING;
ent->edx |= HV_X64_GUEST_DEBUGGING_AVAILABLE;
ent->edx |= HV_FEATURE_DEBUG_MSRS_AVAILABLE;
+ ent->edx |= HV_FEATURE_EXT_GVA_RANGES_FLUSH;
/*
* Direct Synthetic timers only make sense with in-kernel
--
2.35.3
To handle L2 TLB flush requests, KVM needs to use a separate fifo from
regular (L1) Hyper-V TLB flush requests: e.g. when a request to flush
something in L2 is made, the target vCPU can transition from L2 to L1,
receive a request to flush a GVA for L1 and then try to re-enter L2.
The first request needs to be processed at this point. Similarly,
requests to flush GVAs in L1 must wait until L2 exits to L1.
No functional change as KVM doesn't handle L2 TLB flush requests from
L2 yet.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 8 +++++++-
arch/x86/kvm/hyperv.c | 11 +++++++----
arch/x86/kvm/hyperv.h | 17 ++++++++++++++---
3 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cf3748be236d..0e58ab00dff0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -613,6 +613,12 @@ struct kvm_vcpu_hv_synic {
*/
#define KVM_HV_TLB_FLUSHALL_ENTRY ((u64)-1)
+enum hv_tlb_flush_fifos {
+ HV_L1_TLB_FLUSH_FIFO,
+ HV_L2_TLB_FLUSH_FIFO,
+ HV_NR_TLB_FLUSH_FIFOS,
+};
+
struct kvm_vcpu_hv_tlb_flush_fifo {
spinlock_t write_lock;
DECLARE_KFIFO(entries, u64, KVM_HV_TLB_FLUSH_FIFO_SIZE);
@@ -638,7 +644,7 @@ struct kvm_vcpu_hv {
u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
} cpuid_cache;
- struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo;
+ struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
};
/* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index b347971b3924..32f223bbea6b 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -956,8 +956,10 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
hv_vcpu->vp_index = vcpu->vcpu_idx;
- INIT_KFIFO(hv_vcpu->tlb_flush_fifo.entries);
- spin_lock_init(&hv_vcpu->tlb_flush_fifo.write_lock);
+ for (i = 0; i < HV_NR_TLB_FLUSH_FIFOS; i++) {
+ INIT_KFIFO(hv_vcpu->tlb_flush_fifo[i].entries);
+ spin_lock_init(&hv_vcpu->tlb_flush_fifo[i].write_lock);
+ }
return 0;
}
@@ -1843,7 +1845,8 @@ static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
if (!hv_vcpu)
return;
- tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+ /* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
+ tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo[HV_L1_TLB_FLUSH_FIFO];
spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
@@ -1880,7 +1883,7 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
return;
}
- tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index e5b32266ff7d..207d24efdc5a 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -22,6 +22,7 @@
#define __ARCH_X86_KVM_HYPERV_H__
#include <linux/kvm_host.h>
+#include "x86.h"
/*
* The #defines related to the synthetic debugger are required by KDNet, but
@@ -147,16 +148,26 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
struct kvm_cpuid_entry2 __user *entries);
+static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ int i = !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_FIFO :
+ HV_L2_TLB_FLUSH_FIFO;
+
+ /* KVM does not handle L2 TLB flush requests yet */
+ WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_FIFO);
+
+ return &hv_vcpu->tlb_flush_fifo[i];
+}
static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
- struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
- if (!hv_vcpu || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
+ if (!to_hv_vcpu(vcpu) || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
return;
- tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
kfifo_reset_out(&tlb_flush_fifo->entries);
}
--
2.35.3
Section 1.9 of TLFS v6.0b says:
"All structures are padded in such a way that fields are aligned
naturally (that is, an 8-byte field is aligned to an offset of 8 bytes
and so on)".
'struct hv_enlightened_vmcs' has a glitch:
...
struct {
u32 nested_flush_hypercall:1; /* 836: 0 4 */
u32 msr_bitmap:1; /* 836: 1 4 */
u32 reserved:30; /* 836: 2 4 */
} hv_enlightenments_control; /* 836 4 */
u32 hv_vp_id; /* 840 4 */
u64 hv_vm_id; /* 844 8 */
u64 partition_assist_page; /* 852 8 */
...
And the observed values in 'partition_assist_page' make no sense at
all. Fix the layout by padding the structure properly.
Fixes: 68d1eb72ee99 ("x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits")
Reviewed-by: Maxim Levitsky <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/hyperv-tlfs.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 5225a85c08c3..e7ddae8e02c6 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -548,7 +548,7 @@ struct hv_enlightened_vmcs {
u64 guest_rip;
u32 hv_clean_fields;
- u32 hv_padding_32;
+ u32 padding32_1;
u32 hv_synthetic_controls;
struct {
u32 nested_flush_hypercall:1;
@@ -556,7 +556,7 @@ struct hv_enlightened_vmcs {
u32 reserved:30;
} __packed hv_enlightenments_control;
u32 hv_vp_id;
-
+ u32 padding32_2;
u64 hv_vm_id;
u64 partition_assist_page;
u64 padding64_4[4];
--
2.35.3
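The effect of the missing padding is easy to reproduce with reduced
stand-in structs (illustrative only, not the real 'struct
hv_enlightened_vmcs'; the offsets here are relative to hv_vp_id, the
TLFS-level offsets being 844 vs. 848):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct evmcs_tail_broken {
	uint32_t hv_vp_id;
	uint64_t hv_vm_id;		/* offset 4: misaligned */
	uint64_t partition_assist_page;
} __attribute__((packed));

struct evmcs_tail_fixed {
	uint32_t hv_vp_id;
	uint32_t padding32_2;		/* explicit pad restores alignment */
	uint64_t hv_vm_id;		/* offset 8: naturally aligned */
	uint64_t partition_assist_page;
} __attribute__((packed));

int main(void)
{
	printf("broken: hv_vm_id at %zu, partition_assist_page at %zu\n",
	       offsetof(struct evmcs_tail_broken, hv_vm_id),
	       offsetof(struct evmcs_tail_broken, partition_assist_page));
	printf("fixed:  hv_vm_id at %zu, partition_assist_page at %zu\n",
	       offsetof(struct evmcs_tail_fixed, hv_vm_id),
	       offsetof(struct evmcs_tail_fixed, partition_assist_page));
	return 0;
}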
Introduce a helper to quickly check if KVM needs to handle VMCALL/VMMCALL
from L2 in L0 to process L2 TLB flush requests.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/hyperv.c | 6 ++++++
arch/x86/kvm/hyperv.h | 7 +++++++
3 files changed, 14 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5d60c66ee0de..f9a34af0a5cc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -642,6 +642,7 @@ struct kvm_vcpu_hv {
u32 enlightenments_eax; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EAX */
u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
+ u32 nested_features_eax; /* HYPERV_CPUID_NESTED_FEATURES.EAX */
} cpuid_cache;
struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 740190917c1c..4396d75588d8 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2229,6 +2229,12 @@ void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu)
hv_vcpu->cpuid_cache.syndbg_cap_eax = entry->eax;
else
hv_vcpu->cpuid_cache.syndbg_cap_eax = 0;
+
+ entry = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_NESTED_FEATURES, 0);
+ if (entry)
+ hv_vcpu->cpuid_cache.nested_features_eax = entry->eax;
+ else
+ hv_vcpu->cpuid_cache.nested_features_eax = 0;
}
int kvm_hv_set_enforce_cpuid(struct kvm_vcpu *vcpu, bool enforce)
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 7778b3a5913c..2aa6fb7fc599 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -170,6 +170,13 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu, bool is_gu
kfifo_reset_out(&tlb_flush_fifo->entries);
}
+static inline bool guest_hv_cpuid_has_l2_tlb_flush(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+ return hv_vcpu && (hv_vcpu->cpuid_cache.nested_features_eax & HV_X64_NESTED_DIRECT_FLUSH);
+}
+
static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
--
2.35.3
The newly introduced helper checks whether a vCPU is performing a
Hyper-V TLB flush hypercall. This is required to filter out L2 TLB
flush hypercalls which need handling in L0.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 207d24efdc5a..dc46c5ed5d18 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -171,6 +171,24 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
kfifo_reset_out(&tlb_flush_fifo->entries);
}
+
+static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ u16 code;
+
+ if (!hv_vcpu)
+ return false;
+
+ code = is_64_bit_hypercall(vcpu) ? kvm_rcx_read(vcpu) :
+ kvm_rax_read(vcpu);
+
+ return (code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
+ code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
+ code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
+ code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX);
+}
+
void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
--
2.35.3
To make kvm_hv_flush_tlb() ready to handle L2 TLB flush requests, KVM
needs to allow for all 64 sparse vCPU banks regardless of KVM_MAX_VCPUS,
as L1 may use vCPU overcommit for L2. To avoid growing the on-stack
allocation, make 'sparse_banks' part of the per-vCPU 'struct
kvm_vcpu_hv', which is allocated dynamically.
Note: sparse_set_to_vcpu_mask() can't currently be used to handle L2
requests as KVM does not keep L2 VM_ID -> L2 VCPU_ID -> L1 vCPU mappings,
i.e. its vp_bitmap array is still bounded by the number of L1 vCPUs and so
can remain an on-stack allocation.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 3 +++
arch/x86/kvm/hyperv.c | 6 ++++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0e58ab00dff0..278d8d3ee994 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -645,6 +645,9 @@ struct kvm_vcpu_hv {
} cpuid_cache;
struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
+
+ /* Preallocated buffer for handling hypercalls passing sparse vCPU set */
+ u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
};
/* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 32f223bbea6b..3075e9661696 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1910,6 +1910,8 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ u64 *sparse_banks = hv_vcpu->sparse_banks;
struct kvm *kvm = vcpu->kvm;
struct hv_tlb_flush_ex flush_ex;
struct hv_tlb_flush flush;
@@ -1923,7 +1925,6 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
u64 *tlb_flush_entries;
u64 valid_bank_mask;
- u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
struct kvm_vcpu *v;
unsigned long i;
bool all_cpus;
@@ -2075,11 +2076,12 @@ static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector,
static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ u64 *sparse_banks = hv_vcpu->sparse_banks;
struct kvm *kvm = vcpu->kvm;
struct hv_send_ipi_ex send_ipi_ex;
struct hv_send_ipi send_ipi;
unsigned long valid_bank_mask;
- u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
u32 vector;
bool all_cpus;
--
2.35.3
Similar to nVMX, KVM needs to know L2's VM_ID/VP_ID and Partition
assist page address to handle L2 TLB flush requests.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
arch/x86/kvm/svm/nested.c | 2 ++
2 files changed, 18 insertions(+)
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 7d6d97968fb9..8cf702fed7e5 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -9,6 +9,7 @@
#include <asm/mshyperv.h>
#include "../hyperv.h"
+#include "svm.h"
/*
* Hyper-V uses the software reserved 32 bytes in VMCB
@@ -32,4 +33,19 @@ struct hv_enlightenments {
*/
#define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
+static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct hv_enlightenments *hve =
+ (struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+ if (!hv_vcpu)
+ return;
+
+ hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
+ hv_vcpu->nested.vm_id = hve->hv_vm_id;
+ hv_vcpu->nested.vp_id = hve->hv_vp_id;
+}
+
#endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 88da8edbe1e1..e8908cc56e22 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -811,6 +811,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
if (kvm_vcpu_apicv_active(vcpu))
kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
+ nested_svm_hv_update_vm_vp_ids(vcpu);
+
return 0;
}
--
2.35.3
Hyper-V supports injecting a synthetic L2->L1 exit after performing an
L2 TLB flush operation, but the procedure is vendor specific. Introduce
the .hv_inject_synthetic_vmexit_post_tlb_flush nested hook for it.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/Makefile | 3 ++-
arch/x86/kvm/svm/hyperv.c | 11 +++++++++++
arch/x86/kvm/svm/hyperv.h | 2 ++
arch/x86/kvm/svm/nested.c | 1 +
arch/x86/kvm/vmx/evmcs.c | 4 ++++
arch/x86/kvm/vmx/evmcs.h | 1 +
arch/x86/kvm/vmx/nested.c | 1 +
8 files changed, 23 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/kvm/svm/hyperv.c
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 02bef551dafb..5d60c66ee0de 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1603,6 +1603,7 @@ struct kvm_x86_nested_ops {
int (*enable_evmcs)(struct kvm_vcpu *vcpu,
uint16_t *vmcs_version);
uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
+ void (*hv_inject_synthetic_vmexit_post_tlb_flush)(struct kvm_vcpu *vcpu);
};
struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 30f244b64523..b6d53b045692 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -25,7 +25,8 @@ kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
-kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
+kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o \
+ svm/sev.o svm/hyperv.o
ifdef CONFIG_HYPERV
kvm-amd-y += svm/svm_onhyperv.o
diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
new file mode 100644
index 000000000000..911f51021af1
--- /dev/null
+++ b/arch/x86/kvm/svm/hyperv.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD SVM specific code for Hyper-V on KVM.
+ *
+ * Copyright 2022 Red Hat, Inc. and/or its affiliates.
+ */
+#include "hyperv.h"
+
+void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
+{
+}
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 8cf702fed7e5..dd2e393f84a0 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -48,4 +48,6 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
hv_vcpu->nested.vp_id = hve->hv_vp_id;
}
+void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
+
#endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e8908cc56e22..28b63663e1d9 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1721,4 +1721,5 @@ struct kvm_x86_nested_ops svm_nested_ops = {
.get_nested_state_pages = svm_get_nested_state_pages,
.get_state = svm_get_nested_state,
.set_state = svm_set_nested_state,
+ .hv_inject_synthetic_vmexit_post_tlb_flush = svm_hv_inject_synthetic_vmexit_post_tlb_flush,
};
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 6a61b1ae7942..805afc170b5b 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -439,3 +439,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
return 0;
}
+
+void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
+{
+}
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index f886a8ff0342..584741b85eb6 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -245,5 +245,6 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
uint16_t *vmcs_version);
void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
+void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
#endif /* __KVM_X86_VMX_EVMCS_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 6e264a7f205b..4a827b3d929a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6863,4 +6863,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
.write_log_dirty = nested_vmx_write_pml_buffer,
.enable_evmcs = nested_enable_evmcs,
.get_evmcs_version = nested_get_evmcs_version,
+ .hv_inject_synthetic_vmexit_post_tlb_flush = vmx_hv_inject_synthetic_vmexit_post_tlb_flush,
};
--
2.35.3
Enable L2 TLB flush feature on nVMX when:
- Enlightened VMCS is in use.
- The feature flag is enabled in eVMCS.
- The feature flag is enabled in partition assist page.
Perform a synthetic vmexit to L1 after processing a TLB flush call upon
request (HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH).
Note: nested_evmcs_l2_tlb_flush_enabled() uses the cached VP assist page
copy which gets updated from nested_vmx_handle_enlightened_vmptrld().
This is also guaranteed to happen post-migration with an eVMCS-backed
L2 running.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/vmx/evmcs.c | 17 +++++++++++++++++
arch/x86/kvm/vmx/evmcs.h | 10 ++++++++++
arch/x86/kvm/vmx/nested.c | 22 ++++++++++++++++++++++
3 files changed, 49 insertions(+)
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 7cd7b16942c6..870de69172be 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -6,6 +6,7 @@
#include "../hyperv.h"
#include "../cpuid.h"
#include "evmcs.h"
+#include "nested.h"
#include "vmcs.h"
#include "vmx.h"
#include "trace.h"
@@ -433,6 +434,22 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
return 0;
}
+bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
+
+ if (!hv_vcpu || !evmcs)
+ return false;
+
+ if (!evmcs->hv_enlightenments_control.nested_flush_hypercall)
+ return false;
+
+ return hv_vcpu->vp_assist_page.nested_control.features.directhypercall;
+}
+
void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
{
+ nested_vmx_vmexit(vcpu, HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH, 0, 0);
}
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index 22d238b36238..0267b6191e6c 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -66,6 +66,15 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
#define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
#define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
+/*
+ * Note, Hyper-V isn't actually stealing bit 28 from Intel, just abusing it by
+ * pairing it with architecturally impossible exit reasons. Bit 28 is set only
+ * on SMI exits to a SMI transfer monitor (STM) and if and only if a MTF VM-Exit
+ * is pending. I.e. it will never be set by hardware for non-SMI exits (there
+ * are only three), nor will it ever be set unless the VMM is an STM.
+ */
+#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
+
struct evmcs_field {
u16 offset;
u16 clean_field;
@@ -245,6 +254,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
uint16_t *vmcs_version);
void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
+bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu);
void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
#endif /* __KVM_X86_VMX_EVMCS_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 87bff81f7f3e..69d06f77d7b4 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1170,6 +1170,17 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
+ /*
+ * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
+ * L2's VP_ID upon request from the guest. Make sure we check for
+ * pending entries for the case when the request got misplaced (e.g.
+ * a transition from L2->L1 happened while processing L2 TLB flush
+ * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
+ * anything if there are no requests in the corresponding buffer.
+ */
+ if (to_hv_vcpu(vcpu))
+ kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+
/*
* If vmcs12 doesn't use VPID, L1 expects linear and combined mappings
* for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a
@@ -3278,6 +3289,12 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
{
+ /*
+ * Note: nested_get_evmcs_page() also updates 'vp_assist_page' copy
+ * in 'struct kvm_vcpu_hv' in case eVMCS is in use, this is mandatory
+ * to make nested_evmcs_l2_tlb_flush_enabled() work correctly post
+ * migration.
+ */
if (!nested_get_evmcs_page(vcpu)) {
pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
__func__);
@@ -6007,6 +6024,11 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
* Handle L2's bus locks in L0 directly.
*/
return true;
+ case EXIT_REASON_VMCALL:
+ /* Hyper-V L2 TLB flush hypercall is handled by L0 */
+ return guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
+ nested_evmcs_l2_tlb_flush_enabled(vcpu) &&
+ kvm_hv_is_tlb_flush_hcall(vcpu);
default:
break;
}
--
2.35.3
All Hyper-V specific tests issuing hypercalls need this.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
.../selftests/kvm/include/x86_64/hyperv.h | 15 +++++++++++++++
.../selftests/kvm/x86_64/hyperv_features.c | 17 +----------------
2 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index f0a8a93694b2..e0a1b4c2fbbc 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -185,6 +185,21 @@
/* hypercall options */
#define HV_HYPERCALL_FAST_BIT BIT(16)
+static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
+ vm_vaddr_t output_address)
+{
+ u64 hv_status;
+
+ asm volatile("mov %3, %%r8\n"
+ "vmcall"
+ : "=a" (hv_status),
+ "+c" (control), "+d" (input_address)
+ : "r" (output_address)
+ : "cc", "memory", "r8", "r9", "r10", "r11");
+
+ return hv_status;
+}
+
/* Proper HV_X64_MSR_GUEST_OS_ID value */
#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 98c020356925..788d570e991e 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -48,21 +48,6 @@ static void do_wrmsr(u32 idx, u64 val)
static int nr_gp;
static int nr_ud;
-static inline u64 hypercall(u64 control, vm_vaddr_t input_address,
- vm_vaddr_t output_address)
-{
- u64 hv_status;
-
- asm volatile("mov %3, %%r8\n"
- "vmcall"
- : "=a" (hv_status),
- "+c" (control), "+d" (input_address)
- : "r" (output_address)
- : "cc", "memory", "r8", "r9", "r10", "r11");
-
- return hv_status;
-}
-
static void guest_gp_handler(struct ex_regs *regs)
{
unsigned char *rip = (unsigned char *)regs->rip;
@@ -138,7 +123,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
input = output = 0;
}
- res = hypercall(hcall->control, input, output);
+ res = hyperv_hypercall(hcall->control, input, output);
if (hcall->ud_expected)
GUEST_ASSERT(nr_ud == 1);
else
--
2.35.3
Introduce a selftest for Hyper-V PV IPI hypercalls
(HvCallSendSyntheticClusterIpi, HvCallSendSyntheticClusterIpiEx).
The test creates one 'sender' vCPU and two 'receiver' vCPUs and then
issues various combinations of send-IPI hypercalls in both 'normal'
and 'fast' (with XMM input where necessary) mode. Later, the test
checks whether IPIs were delivered to the expected destination vCPU(s).
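Since vCPU 65 sits above the first 64-vCPU bank, the sparse VP-set math
matters here; a minimal sketch, using the 'hv_send_ipi_ex'/'hv_vpset'
fields defined in the test below:

	/* Target VP 65: bank index = 65 / 64 = 1, bit in bank = 65 % 64 = 1 */
	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
	ipi_ex->vp_set.valid_bank_mask = BIT(65 / 64);  /* only bank 1 is valid */
	ipi_ex->vp_set.bank_contents[0] = BIT(65 % 64); /* the first valid bank */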
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/x86_64/hyperv.h | 12 +
.../testing/selftests/kvm/x86_64/hyperv_ipi.c | 352 ++++++++++++++++++
4 files changed, 366 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dd5c88c11059..19a8454e3760 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -24,6 +24,7 @@
/x86_64/hyperv_clock
/x86_64/hyperv_cpuid
/x86_64/hyperv_features
+/x86_64/hyperv_ipi
/x86_64/hyperv_svm_test
/x86_64/max_vcpuid_cap_test
/x86_64/mmio_warning_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 27e432273180..cf433073fb64 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/fix_hypercall_test
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_clock
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
+TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index e0a1b4c2fbbc..1b467626be58 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -9,6 +9,8 @@
#ifndef SELFTEST_KVM_HYPERV_H
#define SELFTEST_KVM_HYPERV_H
+#include "processor.h"
+
#define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS 0x40000000
#define HYPERV_CPUID_INTERFACE 0x40000001
#define HYPERV_CPUID_VERSION 0x40000002
@@ -184,6 +186,7 @@
/* hypercall options */
#define HV_HYPERCALL_FAST_BIT BIT(16)
+#define HV_HYPERCALL_VARHEAD_OFFSET 17
static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
vm_vaddr_t output_address)
@@ -200,6 +203,15 @@ static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
return hv_status;
}
+/* Write 'Fast' hypercall input 'data' to the first 'n_sse_regs' SSE regs */
+static inline void hyperv_write_xmm_input(void *data, int n_sse_regs)
+{
+ int i;
+
+ for (i = 0; i < n_sse_regs; i++)
+ write_sse_reg(i, (sse128_t *)(data + sizeof(sse128_t) * i));
+}
+
/* Proper HV_X64_MSR_GUEST_OS_ID value */
#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
new file mode 100644
index 000000000000..a8e834be62bc
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
@@ -0,0 +1,352 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V HvCallSendSyntheticClusterIpi{,Ex} tests
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include <inttypes.h>
+
+#include "kvm_util.h"
+#include "hyperv.h"
+#include "test_util.h"
+#include "vmx.h"
+
+#define SENDER_VCPU_ID 1
+#define RECEIVER_VCPU_ID_1 2
+#define RECEIVER_VCPU_ID_2 65
+
+#define IPI_VECTOR 0xfe
+
+static volatile uint64_t ipis_rcvd[RECEIVER_VCPU_ID_2 + 1];
+
+struct thread_params {
+ struct kvm_vm *vm;
+ uint32_t vcpu_id;
+};
+
+struct hv_vpset {
+ u64 format;
+ u64 valid_bank_mask;
+ u64 bank_contents[2];
+};
+
+enum HV_GENERIC_SET_FORMAT {
+ HV_GENERIC_SET_SPARSE_4K,
+ HV_GENERIC_SET_ALL,
+};
+
+/* HvCallSendSyntheticClusterIpi hypercall */
+struct hv_send_ipi {
+ u32 vector;
+ u32 reserved;
+ u64 cpu_mask;
+};
+
+/* HvCallSendSyntheticClusterIpiEx hypercall */
+struct hv_send_ipi_ex {
+ u32 vector;
+ u32 reserved;
+ struct hv_vpset vp_set;
+};
+
+static inline void hv_init(vm_vaddr_t pgs_gpa)
+{
+ wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+ wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+}
+
+static void receiver_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+{
+ u32 vcpu_id;
+
+ x2apic_enable();
+ hv_init(pgs_gpa);
+
+ vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+
+ /* Signal sender vCPU we're ready */
+ ipis_rcvd[vcpu_id] = (u64)-1;
+
+ for (;;)
+ asm volatile("sti; hlt; cli");
+}
+
+static void guest_ipi_handler(struct ex_regs *regs)
+{
+ u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+
+ ipis_rcvd[vcpu_id]++;
+ wrmsr(HV_X64_MSR_EOI, 1);
+}
+
+static inline void nop_loop(void)
+{
+ int i;
+
+ for (i = 0; i < 100000000; i++)
+ asm volatile("nop");
+}
+
+static void sender_guest_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+{
+ struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
+ struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
+ int stage = 1, ipis_expected[2] = {0};
+ u64 res;
+
+ hv_init(pgs_gpa);
+ GUEST_SYNC(stage++);
+
+ /* Wait for receiver vCPUs to come up */
+ while (!ipis_rcvd[RECEIVER_VCPU_ID_1] || !ipis_rcvd[RECEIVER_VCPU_ID_2])
+ nop_loop();
+ ipis_rcvd[RECEIVER_VCPU_ID_1] = ipis_rcvd[RECEIVER_VCPU_ID_2] = 0;
+
+ /* 'Slow' HvCallSendSyntheticClusterIpi to RECEIVER_VCPU_ID_1 */
+ ipi->vector = IPI_VECTOR;
+ ipi->cpu_mask = 1 << RECEIVER_VCPU_ID_1;
+ res = hyperv_hypercall(HVCALL_SEND_IPI, pgs_gpa, pgs_gpa + 4096);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+ GUEST_SYNC(stage++);
+ /* 'Fast' HvCallSendSyntheticClusterIpi to RECEIVER_VCPU_ID_1 */
+ res = hyperv_hypercall(HVCALL_SEND_IPI | HV_HYPERCALL_FAST_BIT,
+ IPI_VECTOR, 1 << RECEIVER_VCPU_ID_1);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+ GUEST_SYNC(stage++);
+
+ /* 'Slow' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_1 */
+ memset(hcall_page, 0, 4096);
+ ipi_ex->vector = IPI_VECTOR;
+ ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ ipi_ex->vp_set.valid_bank_mask = 1 << 0;
+ ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_1);
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+ pgs_gpa, pgs_gpa + 4096);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+ GUEST_SYNC(stage++);
+ /* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_1 */
+ hyperv_write_xmm_input(&ipi_ex->vp_set.valid_bank_mask, 1);
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+ (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+ IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+ GUEST_SYNC(stage++);
+
+ /* 'Slow' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_2 */
+ memset(hcall_page, 0, 4096);
+ ipi_ex->vector = IPI_VECTOR;
+ ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ ipi_ex->vp_set.valid_bank_mask = 1 << 1;
+ ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_2 - 64);
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+ pgs_gpa, pgs_gpa + 4096);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+ GUEST_SYNC(stage++);
+ /* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_2 */
+ hyperv_write_xmm_input(&ipi_ex->vp_set.valid_bank_mask, 1);
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+ (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+ IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+ GUEST_SYNC(stage++);
+
+ /* 'Slow' HvCallSendSyntheticClusterIpiEx to both RECEIVER_VCPU_ID_{1,2} */
+ memset(hcall_page, 0, 4096);
+ ipi_ex->vector = IPI_VECTOR;
+ ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ ipi_ex->vp_set.valid_bank_mask = 1 << 1 | 1;
+ ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_1);
+ ipi_ex->vp_set.bank_contents[1] = BIT(RECEIVER_VCPU_ID_2 - 64);
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+ pgs_gpa, pgs_gpa + 4096);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+ GUEST_SYNC(stage++);
+ /* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to both RECEIVER_VCPU_ID_{1, 2} */
+ hyperv_write_xmm_input(&ipi_ex->vp_set.valid_bank_mask, 2);
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+ (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+ IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+ GUEST_SYNC(stage++);
+
+ /* 'Slow' HvCallSendSyntheticClusterIpiEx to HV_GENERIC_SET_ALL */
+ memset(hcall_page, 0, 4096);
+ ipi_ex->vector = IPI_VECTOR;
+ ipi_ex->vp_set.format = HV_GENERIC_SET_ALL;
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX, pgs_gpa, pgs_gpa + 4096);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+ GUEST_SYNC(stage++);
+ /*
+ * 'XMM Fast' HvCallSendSyntheticClusterIpiEx to HV_GENERIC_SET_ALL.
+	 * Nothing to write to XMM regs.
+ */
+ res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT,
+ IPI_VECTOR, HV_GENERIC_SET_ALL);
+ GUEST_ASSERT((res & 0xffff) == 0);
+ nop_loop();
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+ GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+ GUEST_SYNC(stage++);
+
+ GUEST_DONE();
+}
+
+static void *vcpu_thread(void *arg)
+{
+ struct thread_params *params = (struct thread_params *)arg;
+ struct ucall uc;
+ int old;
+ int r;
+ unsigned int exit_reason;
+
+ r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
+ TEST_ASSERT(r == 0,
+ "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
+ params->vcpu_id, r);
+
+ vcpu_run(params->vm, params->vcpu_id);
+ exit_reason = vcpu_state(params->vm, params->vcpu_id)->exit_reason;
+
+ TEST_ASSERT(exit_reason == KVM_EXIT_IO,
+ "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT_IO",
+ params->vcpu_id, exit_reason, exit_reason_str(exit_reason));
+
+ if (get_ucall(params->vm, params->vcpu_id, &uc) == UCALL_ABORT) {
+ TEST_ASSERT(false,
+ "vCPU %u exited with error: %s.\n",
+ params->vcpu_id, (const char *)uc.args[0]);
+ }
+
+ return NULL;
+}
+
+static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id)
+{
+ void *retval;
+ int r;
+
+ r = pthread_cancel(thread);
+ TEST_ASSERT(r == 0,
+ "pthread_cancel on vcpu_id=%d failed with errno=%d",
+ vcpu_id, r);
+
+ r = pthread_join(thread, &retval);
+ TEST_ASSERT(r == 0,
+ "pthread_join on vcpu_id=%d failed with errno=%d",
+ vcpu_id, r);
+ TEST_ASSERT(retval == PTHREAD_CANCELED,
+ "expected retval=%p, got %p", PTHREAD_CANCELED,
+ retval);
+}
+
+int main(int argc, char *argv[])
+{
+ int r;
+ pthread_t threads[2];
+ struct thread_params params[2];
+ struct kvm_vm *vm;
+ struct kvm_run *run;
+ vm_vaddr_t hcall_page;
+ struct ucall uc;
+ int stage = 1;
+
+ vm = vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code);
+ params[0].vm = vm;
+ params[1].vm = vm;
+
+ /* Hypercall input/output */
+ hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
+
+ vm_init_descriptor_tables(vm);
+
+ vm_vcpu_add_default(vm, RECEIVER_VCPU_ID_1, receiver_code);
+ vcpu_init_descriptor_tables(vm, RECEIVER_VCPU_ID_1);
+ vcpu_args_set(vm, RECEIVER_VCPU_ID_1, 2, hcall_page, addr_gva2gpa(vm, hcall_page));
+ vcpu_set_msr(vm, RECEIVER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_1);
+ vcpu_set_hv_cpuid(vm, RECEIVER_VCPU_ID_1);
+
+ vm_vcpu_add_default(vm, RECEIVER_VCPU_ID_2, receiver_code);
+ vcpu_init_descriptor_tables(vm, RECEIVER_VCPU_ID_2);
+ vcpu_args_set(vm, RECEIVER_VCPU_ID_2, 2, hcall_page, addr_gva2gpa(vm, hcall_page));
+ vcpu_set_msr(vm, RECEIVER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_2);
+ vcpu_set_hv_cpuid(vm, RECEIVER_VCPU_ID_2);
+
+ vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
+
+ vcpu_args_set(vm, SENDER_VCPU_ID, 2, hcall_page, addr_gva2gpa(vm, hcall_page));
+ vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID);
+
+ params[0].vcpu_id = RECEIVER_VCPU_ID_1;
+	r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
+ TEST_ASSERT(r == 0,
+		    "pthread_create receiver failed errno=%d", errno);
+
+ params[1].vcpu_id = RECEIVER_VCPU_ID_2;
+	r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
+ TEST_ASSERT(r == 0,
+		    "pthread_create receiver failed errno=%d", errno);
+
+ run = vcpu_state(vm, SENDER_VCPU_ID);
+
+ while (true) {
+ r = _vcpu_run(vm, SENDER_VCPU_ID);
+ TEST_ASSERT(!r, "vcpu_run failed: %d\n", r);
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+ "unexpected exit reason: %u (%s)",
+ run->exit_reason, exit_reason_str(run->exit_reason));
+
+ switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) {
+ case UCALL_SYNC:
+ TEST_ASSERT(uc.args[1] == stage,
+ "Unexpected stage: %ld (%d expected)\n",
+ uc.args[1], stage);
+ break;
+ case UCALL_ABORT:
+ TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
+ __FILE__, uc.args[1]);
+			/* NOT REACHED */
+		case UCALL_DONE:
+			goto done;
+ }
+
+ stage++;
+ }
+
+done:
+	cancel_join_vcpu_thread(threads[0], RECEIVER_VCPU_ID_1);
+ cancel_join_vcpu_thread(threads[1], RECEIVER_VCPU_ID_2);
+ kvm_vm_free(vm);
+
+ return 0;
+}
--
2.35.3
Currently, tests can only request a new vaddr range by using
vm_vaddr_alloc()/vm_vaddr_alloc_page()/vm_vaddr_alloc_pages(), but
these functions allocate and map physical pages too. Make it possible
to request an unmapped range too.
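A minimal usage sketch (hypothetical; 'gpa' stands for some
already-allocated guest physical page):

	/* Reserve 4 pages of guest VA; nothing is allocated or mapped yet */
	vm_vaddr_t gva = vm_vaddr_unused_gap(vm, 4 * getpagesize(),
					     KVM_UTIL_MIN_VADDR);

	/* ... later, back just the first page with an existing GPA */
	virt_map(vm, gva, gpa, 1);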
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
tools/testing/selftests/kvm/lib/kvm_util.c | 4 ++--
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 92cef0ffb19e..8273fe93c4f6 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -169,6 +169,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
+vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 936be9c9f870..37df67780787 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1263,8 +1263,8 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid)
* TEST_ASSERT failure occurs for invalid input or no area of at least
* sz unallocated bytes >= vaddr_min is available.
*/
-static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min)
+vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
+ vm_vaddr_t vaddr_min)
{
uint64_t pages = (sz + vm->page_size - 1) >> vm->page_shift;
--
2.35.3
Similar to vm_vaddr_alloc(), virt_map() needs to reflect the mapping
in vm->vpages_mapped.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/lib/kvm_util.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..936be9c9f870 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1445,6 +1445,9 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
virt_pg_map(vm, vaddr, paddr);
vaddr += page_size;
paddr += page_size;
+
+ sparsebit_set(vm->vpages_mapped,
+ vaddr >> vm->page_shift);
}
}
--
2.35.3
Make it possible for tests to mangle the guest's page table entries in
addition to just getting them (available with vm_get_page_table_entry()).
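A possible use, as a hypothetical sketch ('gva' is some guest virtual
address the test cares about):

	/* Make the page temporarily non-present by clearing PTE bit 0 */
	uint64_t *pte = _vm_get_page_table_entry(vm, VCPU_ID, gva);
	*pte &= ~1ULL;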
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/processor.h | 1 +
tools/testing/selftests/kvm/lib/x86_64/processor.c | 3 +--
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 9ac244fdce73..76cfd7f70add 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -475,6 +475,7 @@ void vcpu_init_descriptor_tables(struct kvm_vm *vm, uint32_t vcpuid);
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
void (*handler)(struct ex_regs *));
+uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr);
uint64_t vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr);
void vm_set_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr,
uint64_t pte);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 33ea5e9955d9..dfe54a970410 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -246,8 +246,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
}
-static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
- uint64_t vaddr)
+uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr)
{
uint16_t index[4];
uint64_t *pml4e, *pdpe, *pde;
--
2.35.3
Handle L2 TLB flush requests by going through all vCPUs and checking
whether there are vCPUs running the same VM_ID with a VP_ID specified
in the requests. Perform a synthetic exit to L1 upon finish.
Note, while checking VM_ID/VP_ID of running vCPUs seems to be a bit
racy, we count on the fact that KVM flushes the whole L2 VPID upon
transition. Also, the KVM_REQ_HV_TLB_FLUSH request needs to be made upon
transition between L1 and L2 to make sure all pending requests are
always processed.
For reference, Hyper-V TLFS refers to the feature as "Direct
Virtual Flush".
Note, nVMX/nSVM code does not handle VMCALL/VMMCALL from L2 yet.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 84 +++++++++++++++++++++++++++++++++++--------
arch/x86/kvm/hyperv.h | 14 ++++----
arch/x86/kvm/trace.h | 21 ++++++-----
arch/x86/kvm/x86.c | 4 +--
4 files changed, 91 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 3075e9661696..740190917c1c 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -34,6 +34,7 @@
#include <linux/eventfd.h>
#include <asm/apicdef.h>
+#include <asm/mshyperv.h>
#include <trace/events/kvm.h>
#include "trace.h"
@@ -1835,9 +1836,10 @@ static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc
entries, consumed_xmm_halves, offset);
}
-static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo,
+ u64 *entries, int count)
{
- struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
unsigned long flags;
@@ -1845,9 +1847,6 @@ static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
if (!hv_vcpu)
return;
- /* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
- tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo[HV_L1_TLB_FLUSH_FIFO];
-
spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
/*
@@ -1883,7 +1882,7 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
return;
}
- tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu, is_guest_mode(vcpu));
count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
@@ -1916,6 +1915,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
struct hv_tlb_flush_ex flush_ex;
struct hv_tlb_flush flush;
DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+ struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
/*
* Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
* entries on the TLB flush fifo. The last entry, however, needs to be
@@ -1959,7 +1959,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
}
trace_kvm_hv_flush_tlb(flush.processor_mask,
- flush.address_space, flush.flags);
+ flush.address_space, flush.flags,
+ is_guest_mode(vcpu));
valid_bank_mask = BIT_ULL(0);
sparse_banks[0] = flush.processor_mask;
@@ -1990,7 +1991,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
flush_ex.hv_vp_set.format,
flush_ex.address_space,
- flush_ex.flags);
+ flush_ex.flags, is_guest_mode(vcpu));
valid_bank_mask = flush_ex.hv_vp_set.valid_bank_mask;
all_cpus = flush_ex.hv_vp_set.format !=
@@ -2028,19 +2029,57 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
* vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
* analyze it here, flush TLB regardless of the specified address space.
*/
- if (all_cpus) {
- kvm_for_each_vcpu(i, v, kvm)
- hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
+ if (all_cpus && !is_guest_mode(vcpu)) {
+ kvm_for_each_vcpu(i, v, kvm) {
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, false);
+ hv_tlb_flush_enqueue(v, tlb_flush_fifo,
+ tlb_flush_entries, hc->rep_cnt);
+ }
kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
- } else {
+ } else if (!is_guest_mode(vcpu)) {
sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
v = kvm_get_vcpu(kvm, i);
if (!v)
continue;
- hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, false);
+ hv_tlb_flush_enqueue(v, tlb_flush_fifo,
+ tlb_flush_entries, hc->rep_cnt);
+ }
+
+ kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
+ } else {
+ struct kvm_vcpu_hv *hv_v;
+
+ bitmap_zero(vcpu_mask, KVM_MAX_VCPUS);
+
+ kvm_for_each_vcpu(i, v, kvm) {
+ hv_v = to_hv_vcpu(v);
+
+ /*
+ * The following check races with nested vCPUs entering/exiting
+ * and/or migrating between L1's vCPUs, however the only case when
+ * KVM *must* flush the TLB is when the target L2 vCPU keeps
+ * running on the same L1 vCPU from the moment of the request until
+ * kvm_hv_flush_tlb() returns. TLB is fully flushed in all other
+ * cases, e.g. when the target L2 vCPU migrates to a different L1
+ * vCPU or when the corresponding L1 vCPU temporary switches to a
+ * different L2 vCPU while the request is being processed.
+ */
+ if (!hv_v || hv_v->nested.vm_id != hv_vcpu->nested.vm_id)
+ continue;
+
+ if (!all_cpus &&
+ !hv_is_vp_in_sparse_set(hv_v->nested.vp_id, valid_bank_mask,
+ sparse_banks))
+ continue;
+
+ __set_bit(i, vcpu_mask);
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, true);
+ hv_tlb_flush_enqueue(v, tlb_flush_fifo,
+ tlb_flush_entries, hc->rep_cnt);
}
kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
@@ -2228,10 +2267,27 @@ static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
static int kvm_hv_hypercall_complete(struct kvm_vcpu *vcpu, u64 result)
{
+ int ret;
+
trace_kvm_hv_hypercall_done(result);
kvm_hv_hypercall_set_result(vcpu, result);
++vcpu->stat.hypercalls;
- return kvm_skip_emulated_instruction(vcpu);
+ ret = kvm_skip_emulated_instruction(vcpu);
+
+	if (unlikely(hv_result_success(result) && is_guest_mode(vcpu) &&
+		     kvm_hv_is_tlb_flush_hcall(vcpu))) {
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+		u32 tlb_lock_count = 0;
+
+ if (unlikely(kvm_read_guest(vcpu->kvm, hv_vcpu->nested.pa_page_gpa,
+ &tlb_lock_count, sizeof(tlb_lock_count))))
+ kvm_inject_gp(vcpu, 0);
+
+ if (tlb_lock_count)
+ kvm_x86_ops.nested_ops->hv_inject_synthetic_vmexit_post_tlb_flush(vcpu);
+ }
+
+ return ret;
}
static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index dc46c5ed5d18..7778b3a5913c 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -148,26 +148,24 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
struct kvm_cpuid_entry2 __user *entries);
-static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu)
+static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu,
+ bool is_guest_mode)
{
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
- int i = !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_FIFO :
- HV_L2_TLB_FLUSH_FIFO;
-
- /* KVM does not handle L2 TLB flush requests yet */
- WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_FIFO);
+ int i = is_guest_mode ? HV_L2_TLB_FLUSH_FIFO :
+ HV_L1_TLB_FLUSH_FIFO;
return &hv_vcpu->tlb_flush_fifo[i];
}
-static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
+static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu, bool is_guest_mode)
{
struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
if (!to_hv_vcpu(vcpu) || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
return;
- tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu, is_guest_mode);
kfifo_reset_out(&tlb_flush_fifo->entries);
}
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index fd28dd40b813..f5e5b8f0342c 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -1510,38 +1510,41 @@ TRACE_EVENT(kvm_hv_timer_state,
* Tracepoint for kvm_hv_flush_tlb.
*/
TRACE_EVENT(kvm_hv_flush_tlb,
- TP_PROTO(u64 processor_mask, u64 address_space, u64 flags),
- TP_ARGS(processor_mask, address_space, flags),
+ TP_PROTO(u64 processor_mask, u64 address_space, u64 flags, bool guest_mode),
+ TP_ARGS(processor_mask, address_space, flags, guest_mode),
TP_STRUCT__entry(
__field(u64, processor_mask)
__field(u64, address_space)
__field(u64, flags)
+ __field(bool, guest_mode)
),
TP_fast_assign(
__entry->processor_mask = processor_mask;
__entry->address_space = address_space;
__entry->flags = flags;
+ __entry->guest_mode = guest_mode;
),
- TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx",
+ TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx %s",
__entry->processor_mask, __entry->address_space,
- __entry->flags)
+ __entry->flags, __entry->guest_mode ? "(L2)" : "")
);
/*
* Tracepoint for kvm_hv_flush_tlb_ex.
*/
TRACE_EVENT(kvm_hv_flush_tlb_ex,
- TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags),
- TP_ARGS(valid_bank_mask, format, address_space, flags),
+ TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags, bool guest_mode),
+ TP_ARGS(valid_bank_mask, format, address_space, flags, guest_mode),
TP_STRUCT__entry(
__field(u64, valid_bank_mask)
__field(u64, format)
__field(u64, address_space)
__field(u64, flags)
+ __field(bool, guest_mode)
),
TP_fast_assign(
@@ -1549,12 +1552,14 @@ TRACE_EVENT(kvm_hv_flush_tlb_ex,
__entry->format = format;
__entry->address_space = address_space;
__entry->flags = flags;
+ __entry->guest_mode = guest_mode;
),
TP_printk("valid_bank_mask 0x%llx format 0x%llx "
- "address_space 0x%llx flags 0x%llx",
+ "address_space 0x%llx flags 0x%llx %s",
__entry->valid_bank_mask, __entry->format,
- __entry->address_space, __entry->flags)
+ __entry->address_space, __entry->flags,
+ __entry->guest_mode ? "(L2)" : "")
);
/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 805db43c2829..8e945500ef50 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3355,12 +3355,12 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
{
if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) {
kvm_vcpu_flush_tlb_current(vcpu);
- kvm_hv_vcpu_empty_flush_tlb(vcpu);
+ kvm_hv_vcpu_empty_flush_tlb(vcpu, is_guest_mode(vcpu));
}
if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
kvm_vcpu_flush_tlb_guest(vcpu);
- kvm_hv_vcpu_empty_flush_tlb(vcpu);
+ kvm_hv_vcpu_empty_flush_tlb(vcpu, is_guest_mode(vcpu));
} else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
kvm_hv_vcpu_flush_tlb(vcpu);
}
--
2.35.3
The 'struct hv_enlightened_vmcs' definition in selftests is not '__packed'
and so we rely on the compiler doing the right padding. This is not
obvious, so it seems beneficial to use the same definition as in the kernel.
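To illustrate the hazard with a hypothetical pair of structs: without
'__packed' the compiler inserts invisible alignment padding, shifting
every subsequent field:

	struct loose { u16 sel; u64 pat; };           /* sizeof() == 16 on x86_64 */
	struct tight { u16 sel; u64 pat; } __packed;  /* sizeof() == 10 */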
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/evmcs.h | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index cc5d14a45702..b6067b555110 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -41,6 +41,8 @@ struct hv_enlightened_vmcs {
u16 host_gs_selector;
u16 host_tr_selector;
+ u16 padding16_1;
+
u64 host_ia32_pat;
u64 host_ia32_efer;
@@ -159,7 +161,7 @@ struct hv_enlightened_vmcs {
u64 ept_pointer;
u16 virtual_processor_id;
- u16 padding16[3];
+ u16 padding16_2[3];
u64 padding64_2[5];
u64 guest_physical_address;
@@ -195,15 +197,15 @@ struct hv_enlightened_vmcs {
u64 guest_rip;
u32 hv_clean_fields;
- u32 hv_padding_32;
+ u32 padding32_1;
u32 hv_synthetic_controls;
struct {
u32 nested_flush_hypercall:1;
u32 msr_bitmap:1;
u32 reserved:30;
- } hv_enlightenments_control;
+ } __packed hv_enlightenments_control;
u32 hv_vp_id;
-
+ u32 padding32_2;
u64 hv_vm_id;
u64 partition_assist_page;
u64 padding64_4[4];
@@ -211,7 +213,7 @@ struct hv_enlightened_vmcs {
u64 padding64_5[7];
u64 xss_exit_bitmap;
u64 padding64_6[7];
-};
+} __packed;
#define HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE 0
#define HV_VMX_ENLIGHTENED_CLEAN_FIELD_IO_BITMAP BIT(0)
--
2.35.3
To handle L2 TLB flush requests, KVM needs to keep track of L2's VM_ID/
VP_IDs, which are set by the L1 hypervisor. The 'Partition assist page'
address is also needed to handle the post-flush exit to L1 upon request.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 6 ++++++
arch/x86/kvm/vmx/nested.c | 15 +++++++++++++++
2 files changed, 21 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 278d8d3ee994..02bef551dafb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -648,6 +648,12 @@ struct kvm_vcpu_hv {
/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
+
+ struct {
+ u64 pa_page_gpa;
+ u64 vm_id;
+ u32 vp_id;
+ } nested;
};
/* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 7d8cd0ebcc75..6e264a7f205b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -225,6 +225,7 @@ static void vmx_disable_shadow_vmcs(struct vcpu_vmx *vmx)
static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
{
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
struct vcpu_vmx *vmx = to_vmx(vcpu);
if (evmptr_is_valid(vmx->nested.hv_evmcs_vmptr)) {
@@ -233,6 +234,12 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
}
vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
+
+ if (hv_vcpu) {
+ hv_vcpu->nested.pa_page_gpa = INVALID_GPA;
+ hv_vcpu->nested.vm_id = 0;
+ hv_vcpu->nested.vp_id = 0;
+ }
}
static void vmx_sync_vmcs_host_state(struct vcpu_vmx *vmx,
@@ -1591,11 +1598,19 @@ static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx, u32 hv_clean_fields
{
struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(&vmx->vcpu);
/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
vmcs12->tpr_threshold = evmcs->tpr_threshold;
vmcs12->guest_rip = evmcs->guest_rip;
+ if (unlikely(!(hv_clean_fields &
+ HV_VMX_ENLIGHTENED_CLEAN_FIELD_ENLIGHTENMENTSCONTROL))) {
+ hv_vcpu->nested.pa_page_gpa = evmcs->partition_assist_page;
+ hv_vcpu->nested.vm_id = evmcs->hv_vm_id;
+ hv_vcpu->nested.vp_id = evmcs->hv_vp_id;
+ }
+
if (unlikely(!(hv_clean_fields &
HV_VMX_ENLIGHTENED_CLEAN_FIELD_GUEST_BASIC))) {
vmcs12->guest_rsp = evmcs->guest_rsp;
--
2.35.3
In preparation for testing Hyper-V L2 TLB flush hypercalls, allocate
the so-called 'Partition assist page' and link it to 'struct vmx_pages'.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/vmx.h | 4 ++++
tools/testing/selftests/kvm/lib/x86_64/vmx.c | 7 +++++++
2 files changed, 11 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h
index 583ceb0d1457..f99922ca8259 100644
--- a/tools/testing/selftests/kvm/include/x86_64/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h
@@ -567,6 +567,10 @@ struct vmx_pages {
uint64_t enlightened_vmcs_gpa;
void *enlightened_vmcs;
+ void *partition_assist_hva;
+ uint64_t partition_assist_gpa;
+ void *partition_assist;
+
void *eptp_hva;
uint64_t eptp_gpa;
void *eptp;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index d089d8b850b5..3db21e0e1a8f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -124,6 +124,13 @@ vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva)
vmx->enlightened_vmcs_gpa =
addr_gva2gpa(vm, (uintptr_t)vmx->enlightened_vmcs);
+ /* Setup of a region of guest memory for the partition assist page. */
+ vmx->partition_assist = (void *)vm_vaddr_alloc_page(vm);
+ vmx->partition_assist_hva =
+ addr_gva2hva(vm, (uintptr_t)vmx->partition_assist);
+ vmx->partition_assist_gpa =
+ addr_gva2gpa(vm, (uintptr_t)vmx->partition_assist);
+
*p_vmx_gva = vmx_gva;
return vmx;
}
--
2.35.3
With both nSVM and nVMX implementations in place, KVM can now expose
Hyper-V L2 TLB flush feature to userspace.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/hyperv.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 91310774c0b9..c74f02668bde 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2776,6 +2776,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
case HYPERV_CPUID_NESTED_FEATURES:
ent->eax = evmcs_ver;
+ ent->eax |= HV_X64_NESTED_DIRECT_FLUSH;
ent->eax |= HV_X64_NESTED_MSR_BITMAP;
break;
--
2.35.3
In preparation for enabling L2 TLB flush, cache the VP assist page in
'struct kvm_vcpu_hv'. While at it, rename nested_enlightened_vmentry()
to nested_get_evmptr() and make it return the eVMCS GPA directly.
No functional change intended.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/hyperv.c | 10 ++++++----
arch/x86/kvm/hyperv.h | 3 +--
arch/x86/kvm/vmx/evmcs.c | 21 +++++++--------------
arch/x86/kvm/vmx/evmcs.h | 2 +-
arch/x86/kvm/vmx/nested.c | 6 +++---
6 files changed, 20 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f9a34af0a5cc..e62db76c8d37 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -650,6 +650,8 @@ struct kvm_vcpu_hv {
/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
+ struct hv_vp_assist_page vp_assist_page;
+
struct {
u64 pa_page_gpa;
u64 vm_id;
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 4396d75588d8..91310774c0b9 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -903,13 +903,15 @@ bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_hv_assist_page_enabled);
-bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
- struct hv_vp_assist_page *assist_page)
+bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu)
{
- if (!kvm_hv_assist_page_enabled(vcpu))
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+ if (!hv_vcpu || !kvm_hv_assist_page_enabled(vcpu))
return false;
+
return !kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
- assist_page, sizeof(*assist_page));
+ &hv_vcpu->vp_assist_page, sizeof(struct hv_vp_assist_page));
}
EXPORT_SYMBOL_GPL(kvm_hv_get_assist_page);
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 2aa6fb7fc599..139beb55b781 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -105,8 +105,7 @@ int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages);
void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu);
bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu);
-bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
- struct hv_vp_assist_page *assist_page);
+bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu);
static inline struct kvm_vcpu_hv_stimer *to_hv_stimer(struct kvm_vcpu *vcpu,
int timer_index)
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 805afc170b5b..7cd7b16942c6 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -307,24 +307,17 @@ __init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
}
#endif
-bool nested_enlightened_vmentry(struct kvm_vcpu *vcpu, u64 *evmcs_gpa)
+u64 nested_get_evmptr(struct kvm_vcpu *vcpu)
{
- struct hv_vp_assist_page assist_page;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
- *evmcs_gpa = -1ull;
+ if (unlikely(!kvm_hv_get_assist_page(vcpu)))
+ return EVMPTR_INVALID;
- if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page)))
- return false;
+ if (unlikely(!hv_vcpu->vp_assist_page.enlighten_vmentry))
+ return EVMPTR_INVALID;
- if (unlikely(!assist_page.enlighten_vmentry))
- return false;
-
- if (unlikely(!evmptr_is_valid(assist_page.current_nested_vmcs)))
- return false;
-
- *evmcs_gpa = assist_page.current_nested_vmcs;
-
- return true;
+ return hv_vcpu->vp_assist_page.current_nested_vmcs;
}
uint16_t nested_get_evmcs_version(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index 584741b85eb6..22d238b36238 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -239,7 +239,7 @@ enum nested_evmptrld_status {
EVMPTRLD_ERROR,
};
-bool nested_enlightened_vmentry(struct kvm_vcpu *vcpu, u64 *evmcs_gpa);
+u64 nested_get_evmptr(struct kvm_vcpu *vcpu);
uint16_t nested_get_evmcs_version(struct kvm_vcpu *vcpu);
int nested_enable_evmcs(struct kvm_vcpu *vcpu,
uint16_t *vmcs_version);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 4a827b3d929a..87bff81f7f3e 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1995,7 +1995,8 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
if (likely(!vmx->nested.enlightened_vmcs_enabled))
return EVMPTRLD_DISABLED;
- if (!nested_enlightened_vmentry(vcpu, &evmcs_gpa)) {
+ evmcs_gpa = nested_get_evmptr(vcpu);
+ if (!evmptr_is_valid(evmcs_gpa)) {
nested_release_evmcs(vcpu);
return EVMPTRLD_DISABLED;
}
@@ -5084,7 +5085,6 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
struct vcpu_vmx *vmx = to_vmx(vcpu);
u32 zero = 0;
gpa_t vmptr;
- u64 evmcs_gpa;
int r;
if (!nested_vmx_check_permission(vcpu))
@@ -5110,7 +5110,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
* vmx->nested.hv_evmcs but this shouldn't be a problem.
*/
if (likely(!vmx->nested.enlightened_vmcs_enabled ||
- !nested_enlightened_vmentry(vcpu, &evmcs_gpa))) {
+ !evmptr_is_valid(nested_get_evmptr(vcpu)))) {
if (vmptr == vmx->nested.current_vmptr)
nested_release_vmcs12(vcpu);
--
2.35.3
The 'struct hv_vp_assist_page' definition doesn't match the TLFS. Also,
define 'struct hv_nested_enlightenments_control' and use it instead of
an opaque '__u64'.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
.../selftests/kvm/include/x86_64/evmcs.h | 22 ++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index b6067b555110..9c965ba73dec 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -20,14 +20,26 @@
extern bool enable_evmcs;
+struct hv_nested_enlightenments_control {
+ struct {
+ __u32 directhypercall:1;
+ __u32 reserved:31;
+ } features;
+ struct {
+ __u32 reserved;
+ } hypercallControls;
+} __packed;
+
+/* Define virtual processor assist page structure. */
struct hv_vp_assist_page {
__u32 apic_assist;
- __u32 reserved;
- __u64 vtl_control[2];
- __u64 nested_enlightenments_control[2];
- __u32 enlighten_vmentry;
+ __u32 reserved1;
+ __u64 vtl_control[3];
+ struct hv_nested_enlightenments_control nested_control;
+ __u8 enlighten_vmentry;
+ __u8 reserved2[7];
__u64 current_nested_vmcs;
-};
+} __packed;
struct hv_enlightened_vmcs {
u32 revision_id;
--
2.35.3
HYPERV_LINUX_OS_ID needs to be written to HV_X64_MSR_GUEST_OS_ID by
each Hyper-V specific selftest.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/hyperv.h | 3 +++
tools/testing/selftests/kvm/x86_64/hyperv_features.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index b66910702c0a..f0a8a93694b2 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -185,4 +185,7 @@
/* hypercall options */
#define HV_HYPERCALL_FAST_BIT BIT(16)
+/* Proper HV_X64_MSR_GUEST_OS_ID value */
+#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
+
#endif /* !SELFTEST_KVM_HYPERV_H */
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 672915ce73d8..98c020356925 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -14,7 +14,6 @@
#include "hyperv.h"
#define VCPU_ID 0
-#define LINUX_OS_ID ((u64)0x8100 << 48)
extern unsigned char rdmsr_start;
extern unsigned char rdmsr_end;
@@ -127,7 +126,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
int i = 0;
u64 res, input, output;
- wrmsr(HV_X64_MSR_GUEST_OS_ID, LINUX_OS_ID);
+ wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
while (hcall->control) {
@@ -230,7 +229,7 @@ static void guest_test_msrs_access(void)
*/
msr->idx = HV_X64_MSR_GUEST_OS_ID;
msr->write = 1;
- msr->write_val = LINUX_OS_ID;
+ msr->write_val = HYPERV_LINUX_OS_ID;
msr->available = 1;
break;
case 3:
--
2.35.3
Implement Hyper-V L2 TLB flush for nSVM. The feature needs to be enabled
both in the extended 'nested controls' in the VMCB and in the VP assist
page.
According to the Hyper-V TLFS, the synthetic vmexit to L1 is performed with
- HV_SVM_EXITCODE_ENL exit_code.
- HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH exit_info_1.
Note: the VP assist page is cached in 'struct kvm_vcpu_hv' so
recalc_intercepts() doesn't need to read from guest memory. KVM needs
to update the cache upon each VMRUN and after svm_set_nested_state()
(in svm_get_nested_state_pages()) to handle the case when the guest got
migrated while L2 was running.
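For illustration, the L1-side enablement amounts to flipping two bits
(a sketch using the structure names from the hyperv_svm_test changes
later in this series); nested_svm_l2_tlb_flush_enabled() below checks
for exactly this pair:

	/* In the VMCB's Hyper-V enlightenments area ... */
	hve->hv_enlightenments_control.nested_flush_hypercall = 1;
	/* ... and in the VP assist page */
	current_vp_assist->nested_control.features.directhypercall = 1;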
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/svm/hyperv.c | 7 +++++++
arch/x86/kvm/svm/hyperv.h | 30 ++++++++++++++++++++++++++++++
arch/x86/kvm/svm/nested.c | 36 ++++++++++++++++++++++++++++++++++--
3 files changed, 71 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
index 911f51021af1..088f6429b24c 100644
--- a/arch/x86/kvm/svm/hyperv.c
+++ b/arch/x86/kvm/svm/hyperv.c
@@ -8,4 +8,11 @@
void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ svm->vmcb->control.exit_code = HV_SVM_EXITCODE_ENL;
+ svm->vmcb->control.exit_code_hi = 0;
+ svm->vmcb->control.exit_info_1 = HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH;
+ svm->vmcb->control.exit_info_2 = 0;
+ nested_svm_vmexit(svm);
}
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index dd2e393f84a0..7b01722838bf 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -33,6 +33,9 @@ struct hv_enlightenments {
*/
#define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
+#define HV_SVM_EXITCODE_ENL 0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH (1)
+
static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -48,6 +51,33 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
hv_vcpu->nested.vp_id = hve->hv_vp_id;
}
+static inline bool nested_svm_hv_update_vp_assist(struct kvm_vcpu *vcpu)
+{
+ if (!to_hv_vcpu(vcpu))
+ return true;
+
+ if (!kvm_hv_assist_page_enabled(vcpu))
+ return true;
+
+ return kvm_hv_get_assist_page(vcpu);
+}
+
+static inline bool nested_svm_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct hv_enlightenments *hve =
+ (struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+ if (!hv_vcpu)
+ return false;
+
+ if (!hve->hv_enlightenments_control.nested_flush_hypercall)
+ return false;
+
+ return hv_vcpu->vp_assist_page.nested_control.features.directhypercall;
+}
+
void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
#endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 28b63663e1d9..369b92aaf1ad 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -171,8 +171,12 @@ void recalc_intercepts(struct vcpu_svm *svm)
vmcb_clr_intercept(c, INTERCEPT_VINTR);
}
- /* We don't want to see VMMCALLs from a nested guest */
- vmcb_clr_intercept(c, INTERCEPT_VMMCALL);
+ /*
+ * We want to see VMMCALLs from a nested guest only when Hyper-V L2 TLB
+ * flush feature is enabled.
+ */
+ if (!nested_svm_l2_tlb_flush_enabled(&svm->vcpu))
+ vmcb_clr_intercept(c, INTERCEPT_VMMCALL);
for (i = 0; i < MAX_INTERCEPT; i++)
c->intercepts[i] |= g->intercepts[i];
@@ -489,6 +493,17 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
{
+ /*
+ * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
+ * L2's VP_ID upon request from the guest. Make sure we check for
+ * pending entries for the case when the request got misplaced (e.g.
+ * a transition from L2->L1 happened while processing L2 TLB flush
+ * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
+ * anything if there are no requests in the corresponding buffer.
+ */
+ if (to_hv_vcpu(vcpu))
+ kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+
/*
* TODO: optimize unconditional TLB flush/MMU sync. A partial list of
* things to fix before this can be conditional:
@@ -835,6 +850,12 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
return 1;
}
+ /* This fails when VP assist page is enabled but the supplied GPA is bogus */
+ if (!nested_svm_hv_update_vp_assist(vcpu)) {
+ kvm_inject_gp(vcpu, 0);
+ return 1;
+ }
+
vmcb12_gpa = svm->vmcb->save.rax;
ret = kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map);
if (ret == -EINVAL) {
@@ -1412,6 +1433,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
int nested_svm_exit_special(struct vcpu_svm *svm)
{
u32 exit_code = svm->vmcb->control.exit_code;
+ struct kvm_vcpu *vcpu = &svm->vcpu;
switch (exit_code) {
case SVM_EXIT_INTR:
@@ -1430,6 +1452,13 @@ int nested_svm_exit_special(struct vcpu_svm *svm)
return NESTED_EXIT_HOST;
break;
}
+ case SVM_EXIT_VMMCALL:
+ /* Hyper-V L2 TLB flush hypercall is handled by L0 */
+ if (guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
+ nested_svm_l2_tlb_flush_enabled(vcpu) &&
+ kvm_hv_is_tlb_flush_hcall(vcpu))
+ return NESTED_EXIT_HOST;
+ break;
default:
break;
}
@@ -1710,6 +1739,9 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
return false;
}
+ if (!nested_svm_hv_update_vp_assist(vcpu))
+ return false;
+
return true;
}
--
2.35.3
Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls
from L2 don't exit to L1 unless 'TlbLockCount' is set in the
Partition assist page.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
.../selftests/kvm/include/x86_64/evmcs.h | 2 +
.../testing/selftests/kvm/x86_64/evmcs_test.c | 42 ++++++++++++++++++-
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index 9c965ba73dec..36c0a67d8602 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -252,6 +252,8 @@ struct hv_enlightened_vmcs {
#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
+#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
+
extern struct hv_enlightened_vmcs *current_evmcs;
extern struct hv_vp_assist_page *current_vp_assist;
diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
index e161c6dd7a02..e644622e4b51 100644
--- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
@@ -16,6 +16,7 @@
#include "kvm_util.h"
+#include "hyperv.h"
#include "vmx.h"
#define VCPU_ID 5
@@ -66,15 +67,27 @@ void l2_guest_code(void)
vmcall();
rdmsr_gs_base(); /* intercepted */
+ /* L2 TLB flush tests */
+ hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+ rdmsr_fs_base();
+ hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	/* Make sure we're not issuing Hyper-V TLB flush call again */
+ __asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
/* Done, exit to L1 and never come back. */
vmcall();
}
-void guest_code(struct vmx_pages *vmx_pages)
+void guest_code(struct vmx_pages *vmx_pages, vm_vaddr_t pgs_gpa)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+ wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+ wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+
x2apic_enable();
GUEST_SYNC(1);
@@ -104,6 +117,14 @@ void guest_code(struct vmx_pages *vmx_pages)
vmwrite(PIN_BASED_VM_EXEC_CONTROL, vmreadz(PIN_BASED_VM_EXEC_CONTROL) |
PIN_BASED_NMI_EXITING);
+ /* L2 TLB flush setup */
+ current_evmcs->partition_assist_page = vmx_pages->partition_assist_gpa;
+ current_evmcs->hv_enlightenments_control.nested_flush_hypercall = 1;
+ current_evmcs->hv_vm_id = 1;
+ current_evmcs->hv_vp_id = 1;
+ current_vp_assist->nested_control.features.directhypercall = 1;
+ *(u32 *)(vmx_pages->partition_assist) = 0;
+
GUEST_ASSERT(!vmlaunch());
GUEST_ASSERT(vmptrstz() == vmx_pages->enlightened_vmcs_gpa);
@@ -148,6 +169,18 @@ void guest_code(struct vmx_pages *vmx_pages)
GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_MSR_READ);
current_evmcs->guest_rip += 2; /* rdmsr */
+ /*
+ * L2 TLB flush test. First VMCALL should be handled directly by L0,
+ * no VMCALL exit expected.
+ */
+ GUEST_ASSERT(!vmresume());
+ GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_MSR_READ);
+ current_evmcs->guest_rip += 2; /* rdmsr */
+ /* Enable synthetic vmexit */
+ *(u32 *)(vmx_pages->partition_assist) = 1;
+ GUEST_ASSERT(!vmresume());
+ GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH);
+
GUEST_ASSERT(!vmresume());
GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL);
GUEST_SYNC(11);
@@ -200,6 +233,7 @@ static void save_restore_vm(struct kvm_vm *vm)
int main(int argc, char *argv[])
{
vm_vaddr_t vmx_pages_gva = 0;
+ vm_vaddr_t hcall_page;
struct kvm_vm *vm;
struct kvm_run *run;
@@ -216,11 +250,15 @@ int main(int argc, char *argv[])
exit(KSFT_SKIP);
}
+ hcall_page = vm_vaddr_alloc_pages(vm, 1);
+ memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
vcpu_set_hv_cpuid(vm, VCPU_ID);
vcpu_enable_evmcs(vm, VCPU_ID);
vcpu_alloc_vmx(vm, &vmx_pages_gva);
- vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva);
+ vcpu_args_set(vm, VCPU_ID, 2, vmx_pages_gva, addr_gva2gpa(vm, hcall_page));
+ vcpu_set_msr(vm, VCPU_ID, HV_X64_MSR_VP_INDEX, VCPU_ID);
vm_init_descriptor_tables(vm);
vcpu_init_descriptor_tables(vm, VCPU_ID);
--
2.35.3
Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls
from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition
assist page.
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
.../selftests/kvm/x86_64/hyperv_svm_test.c | 54 +++++++++++++++++--
1 file changed, 50 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
index 994b33fd8724..2d938523ae27 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
@@ -42,6 +42,9 @@ struct hv_enlightenments {
*/
#define VMCB_HV_NESTED_ENLIGHTENMENTS (1U << 31)
+#define HV_SVM_EXITCODE_ENL 0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH (1)
+
void l2_guest_code(void)
{
GUEST_SYNC(3);
@@ -57,11 +60,25 @@ void l2_guest_code(void)
GUEST_SYNC(5);
+ /* L2 TLB flush tests */
+ hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+ HV_HYPERCALL_FAST_BIT, 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+ HV_FLUSH_ALL_PROCESSORS);
+ rdmsr(MSR_FS_BASE);
+ hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+ HV_HYPERCALL_FAST_BIT, 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+ HV_FLUSH_ALL_PROCESSORS);
+ /* Make sure we're not issuing Hyper-V TLB flush call again */
+ __asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
/* Done, exit to L1 and never come back. */
vmmcall();
}
-static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
+static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
+ vm_vaddr_t pgs_gpa)
{
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
struct vmcb *vmcb = svm->vmcb;
@@ -70,13 +87,23 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
GUEST_SYNC(1);
- wrmsr(HV_X64_MSR_GUEST_OS_ID, (u64)0x8100 << 48);
+ wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+ wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+ enable_vp_assist(svm->vp_assist_gpa, svm->vp_assist);
GUEST_ASSERT(svm->vmcb_gpa);
/* Prepare for L2 execution. */
generic_svm_setup(svm, l2_guest_code,
&l2_guest_stack[L2_GUEST_STACK_SIZE]);
+ /* L2 TLB flush setup */
+ hve->partition_assist_page = svm->partition_assist_gpa;
+ hve->hv_enlightenments_control.nested_flush_hypercall = 1;
+ hve->hv_vm_id = 1;
+ hve->hv_vp_id = 1;
+ current_vp_assist->nested_control.features.directhypercall = 1;
+ *(u32 *)(svm->partition_assist) = 0;
+
GUEST_SYNC(2);
run_guest(vmcb, svm->vmcb_gpa);
GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
@@ -111,6 +138,20 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
vmcb->save.rip += 2; /* rdmsr */
+
+ /*
+ * L2 TLB flush test. First VMCALL should be handled directly by L0,
+ * no VMCALL exit expected.
+ */
+ run_guest(vmcb, svm->vmcb_gpa);
+ GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
+ vmcb->save.rip += 2; /* rdmsr */
+ /* Enable synthetic vmexit */
+ *(u32 *)(svm->partition_assist) = 1;
+ run_guest(vmcb, svm->vmcb_gpa);
+ GUEST_ASSERT(vmcb->control.exit_code == HV_SVM_EXITCODE_ENL);
+ GUEST_ASSERT(vmcb->control.exit_info_1 == HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH);
+
run_guest(vmcb, svm->vmcb_gpa);
GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
GUEST_SYNC(6);
@@ -121,7 +162,7 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
int main(int argc, char *argv[])
{
vm_vaddr_t nested_gva = 0;
-
+ vm_vaddr_t hcall_page;
struct kvm_vm *vm;
struct kvm_run *run;
struct ucall uc;
@@ -136,7 +177,12 @@ int main(int argc, char *argv[])
vcpu_set_hv_cpuid(vm, VCPU_ID);
run = vcpu_state(vm, VCPU_ID);
vcpu_alloc_svm(vm, &nested_gva);
- vcpu_args_set(vm, VCPU_ID, 1, nested_gva);
+
+ hcall_page = vm_vaddr_alloc_pages(vm, 1);
+ memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
+ vcpu_args_set(vm, VCPU_ID, 2, nested_gva, addr_gva2gpa(vm, hcall_page));
+ vcpu_set_msr(vm, VCPU_ID, HV_X64_MSR_VP_INDEX, VCPU_ID);
for (stage = 1;; stage++) {
_vcpu_run(vm, VCPU_ID);
--
2.35.3
set_xmm()/get_xmm() helpers are fairly useless as they only access 64 bits
of the 128-bit registers. Moreover, these helpers are not used. Borrow
_kvm_read_sse_reg()/_kvm_write_sse_reg() from KVM, limiting them to
XMM0-XMM7 for now.
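
For illustration, hypothetical guest-side usage of the new helpers (not
part of this patch):

	sse128_t val = { 1, 2, 3, 4 };

	write_sse_reg(0, &val);
	read_sse_reg(0, &val);
	GUEST_ASSERT(sse128_lo(val) == (1ull | (2ull << 32)));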
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
.../selftests/kvm/include/x86_64/processor.h | 70 ++++++++++---------
1 file changed, 36 insertions(+), 34 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 4fd870f37b9e..9ac244fdce73 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -315,71 +315,73 @@ static inline void cpuid(uint32_t *eax, uint32_t *ebx,
: "memory");
}
-#define SET_XMM(__var, __xmm) \
- asm volatile("movq %0, %%"#__xmm : : "r"(__var) : #__xmm)
+typedef u32 __attribute__((vector_size(16))) sse128_t;
+#define __sse128_u union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
+#define sse128_lo(x) ({ __sse128_u t; t.vec = x; t.as_u64[0]; })
+#define sse128_hi(x) ({ __sse128_u t; t.vec = x; t.as_u64[1]; })
-static inline void set_xmm(int n, unsigned long val)
+static inline void read_sse_reg(int reg, sse128_t *data)
{
- switch (n) {
+ switch (reg) {
case 0:
- SET_XMM(val, xmm0);
+ asm("movdqa %%xmm0, %0" : "=m"(*data));
break;
case 1:
- SET_XMM(val, xmm1);
+ asm("movdqa %%xmm1, %0" : "=m"(*data));
break;
case 2:
- SET_XMM(val, xmm2);
+ asm("movdqa %%xmm2, %0" : "=m"(*data));
break;
case 3:
- SET_XMM(val, xmm3);
+ asm("movdqa %%xmm3, %0" : "=m"(*data));
break;
case 4:
- SET_XMM(val, xmm4);
+ asm("movdqa %%xmm4, %0" : "=m"(*data));
break;
case 5:
- SET_XMM(val, xmm5);
+ asm("movdqa %%xmm5, %0" : "=m"(*data));
break;
case 6:
- SET_XMM(val, xmm6);
+ asm("movdqa %%xmm6, %0" : "=m"(*data));
break;
case 7:
- SET_XMM(val, xmm7);
+ asm("movdqa %%xmm7, %0" : "=m"(*data));
break;
+ default:
+ BUG();
}
}
-#define GET_XMM(__xmm) \
-({ \
- unsigned long __val; \
- asm volatile("movq %%"#__xmm", %0" : "=r"(__val)); \
- __val; \
-})
-
-static inline unsigned long get_xmm(int n)
+static inline void write_sse_reg(int reg, const sse128_t *data)
{
- assert(n >= 0 && n <= 7);
-
- switch (n) {
+ switch (reg) {
case 0:
- return GET_XMM(xmm0);
+ asm("movdqa %0, %%xmm0" : : "m"(*data));
+ break;
case 1:
- return GET_XMM(xmm1);
+ asm("movdqa %0, %%xmm1" : : "m"(*data));
+ break;
case 2:
- return GET_XMM(xmm2);
+ asm("movdqa %0, %%xmm2" : : "m"(*data));
+ break;
case 3:
- return GET_XMM(xmm3);
+ asm("movdqa %0, %%xmm3" : : "m"(*data));
+ break;
case 4:
- return GET_XMM(xmm4);
+ asm("movdqa %0, %%xmm4" : : "m"(*data));
+ break;
case 5:
- return GET_XMM(xmm5);
+ asm("movdqa %0, %%xmm5" : : "m"(*data));
+ break;
case 6:
- return GET_XMM(xmm6);
+ asm("movdqa %0, %%xmm6" : : "m"(*data));
+ break;
case 7:
- return GET_XMM(xmm7);
+ asm("movdqa %0, %%xmm7" : : "m"(*data));
+ break;
+ default:
+ BUG();
}
-
- /* never reached */
- return 0;
}
static inline void cpu_relax(void)
--
2.35.3
Introduce a selftest for Hyper-V PV TLB flush hypercalls
(HvFlushVirtualAddressSpace/HvFlushVirtualAddressSpaceEx,
HvFlushVirtualAddressList/HvFlushVirtualAddressListEx).
The test creates one 'sender' vCPU and two 'worker' vCPUs which run a
busy loop reading from a certain GVA and checking the observed value.
The sender vCPU drops to the host to swap the data page with another
page filled with a different value and alters the expectation for the
workers accordingly. Without a TLB flush on the worker vCPUs, they may
continue to observe the old value. To guard against accidental TLB
flushes on the worker vCPUs, the test is repeated 100 times.
Hyper-V TLB flush hypercalls are tested in both 'normal' and 'XMM
fast' modes.
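
Each test iteration follows the same pattern; a sketch of one 'slow'
iteration using the helpers introduced below:

	prepare_to_test(data);		/* expectation <- 0, swap test page PTEs */
	flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
	flush->processor_mask = BIT(WORKER_VCPU_ID_1);
	res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE,
			       hcall_gpa, hcall_gpa + PAGE_SIZE);
	post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);	/* new expectation */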
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/x86_64/hyperv.h | 1 +
.../selftests/kvm/x86_64/hyperv_tlb_flush.c | 660 ++++++++++++++++++
4 files changed, 663 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 19a8454e3760..7f086656f3e0 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -26,6 +26,7 @@
/x86_64/hyperv_features
/x86_64/hyperv_ipi
/x86_64/hyperv_svm_test
+/x86_64/hyperv_tlb_flush
/x86_64/max_vcpuid_cap_test
/x86_64/mmio_warning_test
/x86_64/mmu_role_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index cf433073fb64..1e61ccc0da4d 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -54,6 +54,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
+TEST_GEN_PROGS_x86_64 += x86_64/hyperv_tlb_flush
TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index 1b467626be58..c302027fa6d5 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -187,6 +187,7 @@
/* hypercall options */
#define HV_HYPERCALL_FAST_BIT BIT(16)
#define HV_HYPERCALL_VARHEAD_OFFSET 17
+#define HV_HYPERCALL_REP_COMP_OFFSET 32
static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
vm_vaddr_t output_address)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
new file mode 100644
index 000000000000..d23e40d3b480
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
@@ -0,0 +1,660 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V HvFlushVirtualAddress{List,Space}{,Ex} tests
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include <inttypes.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "hyperv.h"
+#include "test_util.h"
+#include "vmx.h"
+
+#define SENDER_VCPU_ID 1
+#define WORKER_VCPU_ID_1 2
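+/* '65' exercises the second sparse vCPU bank (each bank covers 64 VPs) */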
+#define WORKER_VCPU_ID_2 65
+
+#define NTRY 100
+#define NTEST_PAGES 2
+
+struct thread_params {
+ struct kvm_vm *vm;
+ uint32_t vcpu_id;
+};
+
+struct hv_vpset {
+ u64 format;
+ u64 valid_bank_mask;
+ u64 bank_contents[];
+};
+
+enum HV_GENERIC_SET_FORMAT {
+ HV_GENERIC_SET_SPARSE_4K,
+ HV_GENERIC_SET_ALL,
+};
+
+#define HV_FLUSH_ALL_PROCESSORS BIT(0)
+#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES BIT(1)
+#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY BIT(2)
+#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT BIT(3)
+
+/* HvFlushVirtualAddressSpace, HvFlushVirtualAddressList hypercalls */
+struct hv_tlb_flush {
+ u64 address_space;
+ u64 flags;
+ u64 processor_mask;
+ u64 gva_list[];
+} __packed;
+
+/* HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressListEx hypercalls */
+struct hv_tlb_flush_ex {
+ u64 address_space;
+ u64 flags;
+ struct hv_vpset hv_vp_set;
+ u64 gva_list[];
+} __packed;
+
+/*
+ * Pass the following info to 'workers' and 'sender'
+ * - Hypercall page's GVA
+ * - Hypercall page's GPA
+ * - Test pages GVA
+ * - GVAs of the test pages' PTEs
+ */
+struct test_data {
+ vm_vaddr_t hcall_gva;
+ vm_paddr_t hcall_gpa;
+ vm_vaddr_t test_pages;
+ vm_vaddr_t test_pages_pte[NTEST_PAGES];
+};
+
+/* 'Worker' vCPU code checking the contents of the test page */
+static void worker_guest_code(vm_vaddr_t test_data)
+{
+ struct test_data *data = (struct test_data *)test_data;
+ u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+ unsigned char chr_exp1, chr_exp2, chr_cur;
+
+ x2apic_enable();
+ wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+
+ for (;;) {
+ /*
+ * Read the expected char, then check what's in the test pages and then
+ * check the expectation again to make sure it wasn't updated in the
+ * meantime.
+ */
+ chr_exp1 = READ_ONCE(*(unsigned char *)
+ (data->test_pages + PAGE_SIZE * NTEST_PAGES + vcpu_id));
+ asm volatile("lfence");
+ chr_cur = *(unsigned char *)data->test_pages;
+ asm volatile("lfence");
+ chr_exp2 = READ_ONCE(*(unsigned char *)
+ (data->test_pages + PAGE_SIZE * NTEST_PAGES + vcpu_id));
+ if (chr_exp1 && chr_exp1 == chr_exp2)
+ GUEST_ASSERT(chr_cur == chr_exp1);
+ asm volatile("nop");
+ }
+}
+
+/*
+ * Write per-CPU info indicating what each 'worker' vCPU is supposed to see
+ * in the test page. '0' means don't check.
+ */
+static void set_expected_char(void *addr, unsigned char chr, int vcpu_id)
+{
+ asm volatile("mfence");
+ *(unsigned char *)(addr + NTEST_PAGES * PAGE_SIZE + vcpu_id) = chr;
+}
+
+/* Update PTEs swapping two test pages */
+static void swap_two_test_pages(vm_vaddr_t pte_gva1, vm_vaddr_t pte_gva2)
+{
+ uint64_t pte[2];
+
+ pte[0] = *(uint64_t *)pte_gva1;
+ pte[1] = *(uint64_t *)pte_gva2;
+
+ *(uint64_t *)pte_gva1 = pte[1];
+ *(uint64_t *)pte_gva2 = pte[0];
+}
+
+/* Delay to give worker vCPUs time to react to an updated expectation */
+static inline void rep_nop(void)
+{
+ int i;
+
+ for (i = 0; i < 1000000; i++)
+ asm volatile("nop");
+}
+
+/*
+ * Prepare to test: 'disable' workers by setting the expectation to '0',
+ * clear the hypercall input page and then swap the two test pages.
+ */
+static inline void prepare_to_test(struct test_data *data)
+{
+ /* Clear hypercall input page */
+ memset((void *)data->hcall_gva, 0, PAGE_SIZE);
+
+ /* 'Disable' workers */
+ set_expected_char((void *)data->test_pages, 0x0, WORKER_VCPU_ID_1);
+ set_expected_char((void *)data->test_pages, 0x0, WORKER_VCPU_ID_2);
+
+ /* Make sure workers have enough time to notice */
+ asm volatile("mfence");
+ rep_nop();
+
+ /* Swap test page mappings */
+ swap_two_test_pages(data->test_pages_pte[0], data->test_pages_pte[1]);
+}
+
+/*
+ * Finalize the test: check the hypercall result, set the expected char for
+ * 'worker' vCPUs and give them some time to test.
+ */
+static inline void post_test(struct test_data *data, u64 res,
+ char exp_char1, char exp_char2)
+{
+ /* Check hypercall return code */
+ GUEST_ASSERT((res & 0xffff) == 0);
+
+ /* Set the expectation for workers, '0' means don't test */
+ set_expected_char((void *)data->test_pages, exp_char1, WORKER_VCPU_ID_1);
+ set_expected_char((void *)data->test_pages, exp_char2, WORKER_VCPU_ID_2);
+
+ /* Make sure workers have enough time to test */
+ asm volatile("mfence");
+ rep_nop();
+}
+
+/* Main vCPU doing the test */
+static void sender_guest_code(vm_vaddr_t test_data)
+{
+ struct test_data *data = (struct test_data *)test_data;
+ struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
+ struct hv_tlb_flush_ex *flush_ex = (struct hv_tlb_flush_ex *)data->hcall_gva;
+ vm_paddr_t hcall_gpa = data->hcall_gpa;
+ u64 res;
+ int i, stage = 1;
+
+ wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+ wrmsr(HV_X64_MSR_HYPERCALL, data->hcall_gpa);
+
+ /* "Slow" hypercalls */
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, hcall_gpa,
+ hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+ flush->gva_list[0] = (u64)data->test_pages;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
+ flush->processor_mask = 0;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, hcall_gpa,
+ hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
+ flush->gva_list[0] = (u64)data->test_pages;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+ (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ /* bank_contents and gva_list occupy the same space, thus [1] */
+ flush_ex->gva_list[1] = (u64)data->test_pages;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+ (1 << HV_HYPERCALL_VARHEAD_OFFSET) |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
+ BIT_ULL(WORKER_VCPU_ID_1 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+ flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+ (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_1 / 64) |
+ BIT_ULL(WORKER_VCPU_ID_2 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+ flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ /* bank_contents and gva_list occupy the same space, thus [2] */
+ flush_ex->gva_list[2] = (u64)data->test_pages;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+ (2 << HV_HYPERCALL_VARHEAD_OFFSET) |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX,
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+ flush_ex->gva_list[0] = (u64)data->test_pages;
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ hcall_gpa, hcall_gpa + PAGE_SIZE);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ /* "Fast" hypercalls */
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+ hyperv_write_xmm_input(&flush->processor_mask, 1);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+ HV_HYPERCALL_FAST_BIT, 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+ flush->gva_list[0] = (u64)data->test_pages;
+ hyperv_write_xmm_input(&flush->processor_mask, 1);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+ HV_HYPERCALL_FAST_BIT |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ hyperv_write_xmm_input(&flush->processor_mask, 1);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+ HV_HYPERCALL_FAST_BIT, 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+ HV_FLUSH_ALL_PROCESSORS);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush->gva_list[0] = (u64)data->test_pages;
+ hyperv_write_xmm_input(&flush->processor_mask, 1);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+ HV_HYPERCALL_FAST_BIT |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET), 0x0,
+ HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+ HV_FLUSH_ALL_PROCESSORS);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+ HV_HYPERCALL_FAST_BIT |
+ (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ /* bank_contents and gva_list occupy the same space, thus [1] */
+ flush_ex->gva_list[1] = (u64)data->test_pages;
+ hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+ HV_HYPERCALL_FAST_BIT |
+ (1 << HV_HYPERCALL_VARHEAD_OFFSET) |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
+ BIT_ULL(WORKER_VCPU_ID_1 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+ flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+ HV_HYPERCALL_FAST_BIT |
+ (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+ flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_1 / 64) |
+ BIT_ULL(WORKER_VCPU_ID_2 / 64);
+ flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+ flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+ /* bank_contents and gva_list occupy the same space, thus [2] */
+ flush_ex->gva_list[2] = (u64)data->test_pages;
+ hyperv_write_xmm_input(&flush_ex->hv_vp_set, 3);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+ HV_HYPERCALL_FAST_BIT |
+ (2 << HV_HYPERCALL_VARHEAD_OFFSET) |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+ hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+ HV_HYPERCALL_FAST_BIT,
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_SYNC(stage++);
+
+ /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */
+ for (i = 0; i < NTRY; i++) {
+ prepare_to_test(data);
+ flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+ flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+ flush_ex->gva_list[0] = (u64)data->test_pages;
+ hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
+ res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+ HV_HYPERCALL_FAST_BIT |
+ (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+ 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+ post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+ }
+
+ GUEST_DONE();
+}
+
+static void *vcpu_thread(void *arg)
+{
+ struct thread_params *params = (struct thread_params *)arg;
+ struct ucall uc;
+ int old;
+ int r;
+ unsigned int exit_reason;
+
+ r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
+ TEST_ASSERT(r == 0,
+ "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
+ params->vcpu_id, r);
+
+ vcpu_run(params->vm, params->vcpu_id);
+ exit_reason = vcpu_state(params->vm, params->vcpu_id)->exit_reason;
+
+ TEST_ASSERT(exit_reason == KVM_EXIT_IO,
+ "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT_IO",
+ params->vcpu_id, exit_reason, exit_reason_str(exit_reason));
+
+ if (get_ucall(params->vm, params->vcpu_id, &uc) == UCALL_ABORT) {
+ TEST_ASSERT(false,
+ "vCPU %u exited with error: %s.\n",
+ params->vcpu_id, (const char *)uc.args[0]);
+ }
+
+ return NULL;
+}
+
+static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id)
+{
+ void *retval;
+ int r;
+
+ r = pthread_cancel(thread);
+ TEST_ASSERT(r == 0,
+ "pthread_cancel on vcpu_id=%d failed with errno=%d",
+ vcpu_id, r);
+
+ r = pthread_join(thread, &retval);
+ TEST_ASSERT(r == 0,
+ "pthread_join on vcpu_id=%d failed with errno=%d",
+ vcpu_id, r);
+ TEST_ASSERT(retval == PTHREAD_CANCELED,
+ "expected retval=%p, got %p", PTHREAD_CANCELED,
+ retval);
+}
+
+int main(int argc, char *argv[])
+{
+ pthread_t threads[2];
+ struct thread_params params[2];
+ struct kvm_vm *vm;
+ struct kvm_run *run;
+ vm_vaddr_t test_data_page, gva;
+ vm_paddr_t gpa;
+ uint64_t *pte;
+ struct test_data *data;
+ struct ucall uc;
+ int stage = 1, r, i;
+
+ vm = vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code);
+ params[0].vm = vm;
+ params[1].vm = vm;
+
+ /* Test data page */
+ test_data_page = vm_vaddr_alloc_page(vm);
+ data = (struct test_data *)addr_gva2hva(vm, test_data_page);
+
+ /* Hypercall input/output */
+ data->hcall_gva = vm_vaddr_alloc_pages(vm, 2);
+ data->hcall_gpa = addr_gva2gpa(vm, data->hcall_gva);
+ memset(addr_gva2hva(vm, data->hcall_gva), 0x0, 2 * PAGE_SIZE);
+
+ /*
+ * Test pages: the first one is filled with '0x1's, the second with '0x2's
+ * and the test will swap their mappings. The third page records what each
+ * worker vCPU is currently expected to observe.
+ */
+ data->test_pages = vm_vaddr_alloc_pages(vm, NTEST_PAGES + 1);
+ for (i = 0; i < NTEST_PAGES; i++)
+ memset(addr_gva2hva(vm, data->test_pages + PAGE_SIZE * i),
+ (char)(i + 1), PAGE_SIZE);
+ set_expected_char(addr_gva2hva(vm, data->test_pages), 0x0, WORKER_VCPU_ID_1);
+ set_expected_char(addr_gva2hva(vm, data->test_pages), 0x0, WORKER_VCPU_ID_2);
+
+ /*
+ * Get PTE pointers for test pages and map them inside the guest.
+ * Use separate page for each PTE for simplicity.
+ */
+ gva = vm_vaddr_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
+ for (i = 0; i < NTEST_PAGES; i++) {
+ pte = _vm_get_page_table_entry(vm, SENDER_VCPU_ID,
+ data->test_pages + i * PAGE_SIZE);
+ gpa = addr_hva2gpa(vm, pte);
+ __virt_pg_map(vm, gva + PAGE_SIZE * i, gpa & PAGE_MASK, X86_PAGE_SIZE_4K);
+ data->test_pages_pte[i] = gva + (gpa & ~PAGE_MASK);
+ }
+
+ /*
+ * Sender vCPU which performs the test: swaps test pages, sets expectation
+ * for 'workers' and issues TLB flush hypercalls.
+ */
+ vcpu_args_set(vm, SENDER_VCPU_ID, 1, test_data_page);
+ vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID);
+
+ /* Create worker vCPUs which check the contents of the test pages */
+ vm_vcpu_add_default(vm, WORKER_VCPU_ID_1, worker_guest_code);
+ vcpu_args_set(vm, WORKER_VCPU_ID_1, 1, test_data_page);
+ vcpu_set_msr(vm, WORKER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_1);
+ vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_1);
+
+ vm_vcpu_add_default(vm, WORKER_VCPU_ID_2, worker_guest_code);
+ vcpu_args_set(vm, WORKER_VCPU_ID_2, 1, test_data_page);
+ vcpu_set_msr(vm, WORKER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_2);
+ vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_2);
+
+ params[0].vcpu_id = WORKER_VCPU_ID_1;
+ r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
+ TEST_ASSERT(r == 0,
+ "pthread_create failed with error %d", r);
+
+ params[1].vcpu_id = WORKER_VCPU_ID_2;
+ r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
+ TEST_ASSERT(r == 0,
+ "pthread_create failed with error %d", r);
+
+ run = vcpu_state(vm, SENDER_VCPU_ID);
+
+ while (true) {
+ r = _vcpu_run(vm, SENDER_VCPU_ID);
+ TEST_ASSERT(!r, "vcpu_run failed: %d\n", r);
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+ "unexpected exit reason: %u (%s)",
+ run->exit_reason, exit_reason_str(run->exit_reason));
+
+ switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) {
+ case UCALL_SYNC:
+ TEST_ASSERT(uc.args[1] == stage,
+ "Unexpected stage: %ld (%d expected)\n",
+ uc.args[1], stage);
+ break;
+ case UCALL_ABORT:
+ TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
+ __FILE__, uc.args[1]);
+ return 1;
+ case UCALL_DONE:
+ goto done;
+ }
+
+ stage++;
+ }
+
+done:
+ cancel_join_vcpu_thread(threads[0], WORKER_VCPU_ID_1);
+ cancel_join_vcpu_thread(threads[1], WORKER_VCPU_ID_2);
+ kvm_vm_free(vm);
+
+ return 0;
+}
--
2.35.3
In preparation for testing Hyper-V L2 TLB flush hypercalls, allocate VP
assist and Partition assist pages and link them to 'struct svm_test_data'.
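
A test can then enable the VP assist page for L1 with (as
hyperv_svm_test does):

	enable_vp_assist(svm->vp_assist_gpa, svm->vp_assist);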
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/svm_util.h | 10 ++++++++++
tools/testing/selftests/kvm/lib/x86_64/svm.c | 10 ++++++++++
2 files changed, 20 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
index 136ba6a5d027..3922e4842c68 100644
--- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
@@ -36,6 +36,16 @@ struct svm_test_data {
void *msr; /* gva */
void *msr_hva;
uint64_t msr_gpa;
+
+ /* Hyper-V VP assist page */
+ void *vp_assist; /* gva */
+ void *vp_assist_hva;
+ uint64_t vp_assist_gpa;
+
+ /* Hyper-V Partition assist page */
+ void *partition_assist; /* gva */
+ void *partition_assist_hva;
+ uint64_t partition_assist_gpa;
};
#define stgi() \
diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
index 736ee4a23df6..c284e8f87f5c 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
@@ -48,6 +48,16 @@ vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva)
svm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr);
memset(svm->msr_hva, 0, getpagesize());
+ svm->vp_assist = (void *)vm_vaddr_alloc_page(vm);
+ svm->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)svm->vp_assist);
+ svm->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vp_assist);
+ memset(svm->vp_assist_hva, 0, getpagesize());
+
+ svm->partition_assist = (void *)vm_vaddr_alloc_page(vm);
+ svm->partition_assist_hva = addr_gva2hva(vm, (uintptr_t)svm->partition_assist);
+ svm->partition_assist_gpa = addr_gva2gpa(vm, (uintptr_t)svm->partition_assist);
+ memset(svm->partition_assist_hva, 0, getpagesize());
+
*p_svm_gva = svm_gva;
return svm;
}
--
2.35.3
The Hyper-V VP assist page is not eVMCS specific; it is also used for
enlightened nSVM. Move the code to a vendor-neutral place.
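
Note, eVMCS users now have to enable the VP assist page and eVMCS
separately, e.g. (matching the evmcs_test change below):

	enable_vp_assist(vmx_pages->vp_assist_gpa, vmx_pages->vp_assist);
	evmcs_enable();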
Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 2 +-
.../selftests/kvm/include/x86_64/evmcs.h | 40 +------------------
.../selftests/kvm/include/x86_64/hyperv.h | 31 ++++++++++++++
.../testing/selftests/kvm/lib/x86_64/hyperv.c | 21 ++++++++++
.../testing/selftests/kvm/x86_64/evmcs_test.c | 1 +
5 files changed, 56 insertions(+), 39 deletions(-)
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/hyperv.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1e61ccc0da4d..eb7b51af5683 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -38,7 +38,7 @@ ifeq ($(ARCH),riscv)
endif
LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
-LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
+LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/hyperv.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index 36c0a67d8602..026586b53013 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -10,6 +10,7 @@
#define SELFTEST_KVM_EVMCS_H
#include <stdint.h>
+#include "hyperv.h"
#include "vmx.h"
#define u16 uint16_t
@@ -20,27 +21,6 @@
extern bool enable_evmcs;
-struct hv_nested_enlightenments_control {
- struct {
- __u32 directhypercall:1;
- __u32 reserved:31;
- } features;
- struct {
- __u32 reserved;
- } hypercallControls;
-} __packed;
-
-/* Define virtual processor assist page structure. */
-struct hv_vp_assist_page {
- __u32 apic_assist;
- __u32 reserved1;
- __u64 vtl_control[3];
- struct hv_nested_enlightenments_control nested_control;
- __u8 enlighten_vmentry;
- __u8 reserved2[7];
- __u64 current_nested_vmcs;
-} __packed;
-
struct hv_enlightened_vmcs {
u32 revision_id;
u32 abort;
@@ -246,31 +226,15 @@ struct hv_enlightened_vmcs {
#define HV_VMX_ENLIGHTENED_CLEAN_FIELD_ENLIGHTENMENTSCONTROL BIT(15)
#define HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL 0xFFFF
-#define HV_X64_MSR_VP_ASSIST_PAGE 0x40000073
-#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE 0x00000001
-#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
-#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
- (~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
-
#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
extern struct hv_enlightened_vmcs *current_evmcs;
-extern struct hv_vp_assist_page *current_vp_assist;
int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id);
-static inline int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+static inline void evmcs_enable(void)
{
- u64 val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
- HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
-
- wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
-
- current_vp_assist = vp_assist;
-
enable_evmcs = true;
-
- return 0;
}
static inline int evmcs_vmptrld(uint64_t vmcs_pa, void *vmcs)
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index c302027fa6d5..a2561f31dabb 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -216,4 +216,35 @@ static inline void hyperv_write_xmm_input(void *data, int n_sse_regs)
/* Proper HV_X64_MSR_GUEST_OS_ID value */
#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
+#define HV_X64_MSR_VP_ASSIST_PAGE 0x40000073
+#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE 0x00000001
+#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
+#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
+ (~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
+
+struct hv_nested_enlightenments_control {
+ struct {
+ __u32 directhypercall:1;
+ __u32 reserved:31;
+ } features;
+ struct {
+ __u32 reserved;
+ } hypercallControls;
+} __packed;
+
+/* Define virtual processor assist page structure. */
+struct hv_vp_assist_page {
+ __u32 apic_assist;
+ __u32 reserved1;
+ __u64 vtl_control[3];
+ struct hv_nested_enlightenments_control nested_control;
+ __u8 enlighten_vmentry;
+ __u8 reserved2[7];
+ __u64 current_nested_vmcs;
+} __packed;
+
+extern struct hv_vp_assist_page *current_vp_assist;
+
+int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist);
+
#endif /* !SELFTEST_KVM_HYPERV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/hyperv.c b/tools/testing/selftests/kvm/lib/x86_64/hyperv.c
new file mode 100644
index 000000000000..32dc0afd9e5b
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/hyperv.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Hyper-V specific functions.
+ *
+ * Copyright (C) 2021, Red Hat Inc.
+ */
+#include <stdint.h>
+#include "processor.h"
+#include "hyperv.h"
+
+int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+{
+ uint64_t val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
+ HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
+
+ wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
+
+ current_vp_assist = vp_assist;
+
+ return 0;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
index e644622e4b51..d23b1218e3c3 100644
--- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
@@ -94,6 +94,7 @@ void guest_code(struct vmx_pages *vmx_pages, vm_vaddr_t pgs_gpa)
GUEST_SYNC(2);
enable_vp_assist(vmx_pages->vp_assist_gpa, vmx_pages->vp_assist);
+ evmcs_enable();
GUEST_ASSERT(vmx_pages->vmcs_gpa);
GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages));
--
2.35.3
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Enable L2 TLB flush feature on nVMX when:
> - Enlightened VMCS is in use.
> - The feature flag is enabled in eVMCS.
> - The feature flag is enabled in partition assist page.
>
> Perform synthetic vmexit to L1 after processing TLB flush call upon
> request (HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH).
>
> Note: nested_evmcs_l2_tlb_flush_enabled() uses cached VP assist page copy
> which gets updated from nested_vmx_handle_enlightened_vmptrld(). This is
> also guaranteed to happen post migration with eVMCS backed L2 running.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/kvm/vmx/evmcs.c | 17 +++++++++++++++++
> arch/x86/kvm/vmx/evmcs.h | 10 ++++++++++
> arch/x86/kvm/vmx/nested.c | 22 ++++++++++++++++++++++
> 3 files changed, 49 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
> index 7cd7b16942c6..870de69172be 100644
> --- a/arch/x86/kvm/vmx/evmcs.c
> +++ b/arch/x86/kvm/vmx/evmcs.c
> @@ -6,6 +6,7 @@
> #include "../hyperv.h"
> #include "../cpuid.h"
> #include "evmcs.h"
> +#include "nested.h"
> #include "vmcs.h"
> #include "vmx.h"
> #include "trace.h"
> @@ -433,6 +434,22 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
> return 0;
> }
>
> +bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
> + struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
> +
> + if (!hv_vcpu || !evmcs)
> + return false;
> +
> + if (!evmcs->hv_enlightenments_control.nested_flush_hypercall)
> + return false;
> +
> + return hv_vcpu->vp_assist_page.nested_control.features.directhypercall;
> +}
> +
> void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
> {
> + nested_vmx_vmexit(vcpu, HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH, 0, 0);
> }
> diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
> index 22d238b36238..0267b6191e6c 100644
> --- a/arch/x86/kvm/vmx/evmcs.h
> +++ b/arch/x86/kvm/vmx/evmcs.h
> @@ -66,6 +66,15 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
> #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
> #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
>
> +/*
> + * Note, Hyper-V isn't actually stealing bit 28 from Intel, just abusing it by
> + * pairing it with architecturally impossible exit reasons. Bit 28 is set only
> + * on SMI exits to a SMI transfer monitor (STM) and if and only if a MTF VM-Exit
> + * is pending. I.e. it will never be set by hardware for non-SMI exits (there
> + * are only three), nor will it ever be set unless the VMM is an STM.
> + */
> +#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
> +
> struct evmcs_field {
> u16 offset;
> u16 clean_field;
> @@ -245,6 +254,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
> uint16_t *vmcs_version);
> void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
> int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
> +bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu);
> void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
>
> #endif /* __KVM_X86_VMX_EVMCS_H */
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 87bff81f7f3e..69d06f77d7b4 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1170,6 +1170,17 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> + /*
> + * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
> + * L2's VP_ID upon request from the guest. Make sure we check for
> + * pending entries for the case when the request got misplaced (e.g.
> + * a transition from L2->L1 happened while processing L2 TLB flush
> + * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
> + * anything if there are no requests in the corresponding buffer.
> + */
> + if (to_hv_vcpu(vcpu))
> + kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> +
> /*
> * If vmcs12 doesn't use VPID, L1 expects linear and combined mappings
> * for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a
> @@ -3278,6 +3289,12 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>
> static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
> {
> + /*
> + * Note: nested_get_evmcs_page() also updates 'vp_assist_page' copy
> + * in 'struct kvm_vcpu_hv' in case eVMCS is in use, this is mandatory
> + * to make nested_evmcs_l2_tlb_flush_enabled() work correctly post
> + * migration.
> + */
> if (!nested_get_evmcs_page(vcpu)) {
> pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
> __func__);
> @@ -6007,6 +6024,11 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
> * Handle L2's bus locks in L0 directly.
> */
> return true;
> + case EXIT_REASON_VMCALL:
> + /* Hyper-V L2 TLB flush hypercall is handled by L0 */
> + return guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
> + nested_evmcs_l2_tlb_flush_enabled(vcpu) &&
> + kvm_hv_is_tlb_flush_hcall(vcpu);
> default:
> break;
> }
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Handle L2 TLB flush requests by going through all vCPUs and checking
> whether there are vCPUs running the same VM_ID with a VP_ID specified
> in the requests. Perform synthetic exit to L2 upon finish.
>
> Note, while checking VM_ID/VP_ID of running vCPUs seems to be a bit
> racy, we count on the fact that KVM flushes the whole L2 VPID upon
> transition. Also, KVM_REQ_HV_TLB_FLUSH request needs to be done upon
> transition between L1 and L2 to make sure all pending requests are
> always processed.
>
> For reference, the Hyper-V TLFS refers to the feature as "Direct
> Virtual Flush".
>
> Note, nVMX/nSVM code does not handle VMCALL/VMMCALL from L2 yet.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/kvm/hyperv.c | 84 +++++++++++++++++++++++++++++++++++--------
> arch/x86/kvm/hyperv.h | 14 ++++----
> arch/x86/kvm/trace.h | 21 ++++++-----
> arch/x86/kvm/x86.c | 4 +--
> 4 files changed, 91 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 3075e9661696..740190917c1c 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -34,6 +34,7 @@
> #include <linux/eventfd.h>
>
> #include <asm/apicdef.h>
> +#include <asm/mshyperv.h>
> #include <trace/events/kvm.h>
>
> #include "trace.h"
> @@ -1835,9 +1836,10 @@ static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc
> entries, consumed_xmm_halves, offset);
> }
>
> -static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
> +static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu,
> + struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo,
> + u64 *entries, int count)
> {
> - struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
> unsigned long flags;
> @@ -1845,9 +1847,6 @@ static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
> if (!hv_vcpu)
> return;
>
> - /* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
> - tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo[HV_L1_TLB_FLUSH_FIFO];
> -
> spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
>
> /*
> @@ -1883,7 +1882,7 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> return;
> }
>
> - tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu, is_guest_mode(vcpu));
>
> count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
>
> @@ -1916,6 +1915,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> struct hv_tlb_flush_ex flush_ex;
> struct hv_tlb_flush flush;
> DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
> + struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> /*
> * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
> * entries on the TLB flush fifo. The last entry, however, needs to be
> @@ -1959,7 +1959,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> }
>
> trace_kvm_hv_flush_tlb(flush.processor_mask,
> - flush.address_space, flush.flags);
> + flush.address_space, flush.flags,
> + is_guest_mode(vcpu));
>
> valid_bank_mask = BIT_ULL(0);
> sparse_banks[0] = flush.processor_mask;
> @@ -1990,7 +1991,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
> flush_ex.hv_vp_set.format,
> flush_ex.address_space,
> - flush_ex.flags);
> + flush_ex.flags, is_guest_mode(vcpu));
>
> valid_bank_mask = flush_ex.hv_vp_set.valid_bank_mask;
> all_cpus = flush_ex.hv_vp_set.format !=
> @@ -2028,19 +2029,57 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
> * analyze it here, flush TLB regardless of the specified address space.
> */
> - if (all_cpus) {
> - kvm_for_each_vcpu(i, v, kvm)
> - hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
> + if (all_cpus && !is_guest_mode(vcpu)) {
> + kvm_for_each_vcpu(i, v, kvm) {
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, false);
> + hv_tlb_flush_enqueue(v, tlb_flush_fifo,
> + tlb_flush_entries, hc->rep_cnt);
> + }
>
> kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
> - } else {
> + } else if (!is_guest_mode(vcpu)) {
> sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
>
> for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
> v = kvm_get_vcpu(kvm, i);
> if (!v)
> continue;
> - hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, false);
> + hv_tlb_flush_enqueue(v, tlb_flush_fifo,
> + tlb_flush_entries, hc->rep_cnt);
> + }
> +
> + kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
> + } else {
> + struct kvm_vcpu_hv *hv_v;
> +
> + bitmap_zero(vcpu_mask, KVM_MAX_VCPUS);
> +
> + kvm_for_each_vcpu(i, v, kvm) {
> + hv_v = to_hv_vcpu(v);
> +
> + /*
> + * The following check races with nested vCPUs entering/exiting
> + * and/or migrating between L1's vCPUs, however the only case when
> + * KVM *must* flush the TLB is when the target L2 vCPU keeps
> + * running on the same L1 vCPU from the moment of the request until
> + * kvm_hv_flush_tlb() returns. TLB is fully flushed in all other
> + * cases, e.g. when the target L2 vCPU migrates to a different L1
> + * vCPU or when the corresponding L1 vCPU temporarily switches to a
> + * different L2 vCPU while the request is being processed.
Looks great!
> + */
> + if (!hv_v || hv_v->nested.vm_id != hv_vcpu->nested.vm_id)
> + continue;
> +
> + if (!all_cpus &&
> + !hv_is_vp_in_sparse_set(hv_v->nested.vp_id, valid_bank_mask,
> + sparse_banks))
> + continue;
> +
> + __set_bit(i, vcpu_mask);
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, true);
> + hv_tlb_flush_enqueue(v, tlb_flush_fifo,
> + tlb_flush_entries, hc->rep_cnt);
> }
>
> kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
> @@ -2228,10 +2267,27 @@ static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
>
> static int kvm_hv_hypercall_complete(struct kvm_vcpu *vcpu, u64 result)
> {
> + int ret;
> +
> trace_kvm_hv_hypercall_done(result);
> kvm_hv_hypercall_set_result(vcpu, result);
> ++vcpu->stat.hypercalls;
> - return kvm_skip_emulated_instruction(vcpu);
> + ret = kvm_skip_emulated_instruction(vcpu);
> +
> + if (unlikely(hv_result_success(result) && is_guest_mode(vcpu)
> + && kvm_hv_is_tlb_flush_hcall(vcpu))) {
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> + u32 tlb_lock_count;
> +
> + if (unlikely(kvm_read_guest(vcpu->kvm, hv_vcpu->nested.pa_page_gpa,
> + &tlb_lock_count, sizeof(tlb_lock_count))))
> + kvm_inject_gp(vcpu, 0);
> +
> + if (tlb_lock_count)
> + kvm_x86_ops.nested_ops->hv_inject_synthetic_vmexit_post_tlb_flush(vcpu);
> + }
> +
> + return ret;
> }
>
> static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index dc46c5ed5d18..7778b3a5913c 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -148,26 +148,24 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
> int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
> struct kvm_cpuid_entry2 __user *entries);
>
> -static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu)
> +static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu,
> + bool is_guest_mode)
> {
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> - int i = !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_FIFO :
> - HV_L2_TLB_FLUSH_FIFO;
> -
> - /* KVM does not handle L2 TLB flush requests yet */
> - WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_FIFO);
> + int i = is_guest_mode ? HV_L2_TLB_FLUSH_FIFO :
> + HV_L1_TLB_FLUSH_FIFO;
>
> return &hv_vcpu->tlb_flush_fifo[i];
> }
>
> -static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
> +static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu, bool is_guest_mode)
> {
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
>
> if (!to_hv_vcpu(vcpu) || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
> return;
>
> - tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu, is_guest_mode);
>
> kfifo_reset_out(&tlb_flush_fifo->entries);
> }
> diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
> index fd28dd40b813..f5e5b8f0342c 100644
> --- a/arch/x86/kvm/trace.h
> +++ b/arch/x86/kvm/trace.h
> @@ -1510,38 +1510,41 @@ TRACE_EVENT(kvm_hv_timer_state,
> * Tracepoint for kvm_hv_flush_tlb.
> */
> TRACE_EVENT(kvm_hv_flush_tlb,
> - TP_PROTO(u64 processor_mask, u64 address_space, u64 flags),
> - TP_ARGS(processor_mask, address_space, flags),
> + TP_PROTO(u64 processor_mask, u64 address_space, u64 flags, bool guest_mode),
> + TP_ARGS(processor_mask, address_space, flags, guest_mode),
>
> TP_STRUCT__entry(
> __field(u64, processor_mask)
> __field(u64, address_space)
> __field(u64, flags)
> + __field(bool, guest_mode)
> ),
>
> TP_fast_assign(
> __entry->processor_mask = processor_mask;
> __entry->address_space = address_space;
> __entry->flags = flags;
> + __entry->guest_mode = guest_mode;
> ),
>
> - TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx",
> + TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx %s",
> __entry->processor_mask, __entry->address_space,
> - __entry->flags)
> + __entry->flags, __entry->guest_mode ? "(L2)" : "")
> );
>
> /*
> * Tracepoint for kvm_hv_flush_tlb_ex.
> */
> TRACE_EVENT(kvm_hv_flush_tlb_ex,
> - TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags),
> - TP_ARGS(valid_bank_mask, format, address_space, flags),
> + TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags, bool guest_mode),
> + TP_ARGS(valid_bank_mask, format, address_space, flags, guest_mode),
>
> TP_STRUCT__entry(
> __field(u64, valid_bank_mask)
> __field(u64, format)
> __field(u64, address_space)
> __field(u64, flags)
> + __field(bool, guest_mode)
> ),
>
> TP_fast_assign(
> @@ -1549,12 +1552,14 @@ TRACE_EVENT(kvm_hv_flush_tlb_ex,
> __entry->format = format;
> __entry->address_space = address_space;
> __entry->flags = flags;
> + __entry->guest_mode = guest_mode;
> ),
>
> TP_printk("valid_bank_mask 0x%llx format 0x%llx "
> - "address_space 0x%llx flags 0x%llx",
> + "address_space 0x%llx flags 0x%llx %s",
> __entry->valid_bank_mask, __entry->format,
> - __entry->address_space, __entry->flags)
> + __entry->address_space, __entry->flags,
> + __entry->guest_mode ? "(L2)" : "")
> );
>
> /*
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 805db43c2829..8e945500ef50 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3355,12 +3355,12 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
> {
> if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) {
> kvm_vcpu_flush_tlb_current(vcpu);
> - kvm_hv_vcpu_empty_flush_tlb(vcpu);
> + kvm_hv_vcpu_empty_flush_tlb(vcpu, is_guest_mode(vcpu));
> }
>
> if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
> kvm_vcpu_flush_tlb_guest(vcpu);
> - kvm_hv_vcpu_empty_flush_tlb(vcpu);
> + kvm_hv_vcpu_empty_flush_tlb(vcpu, is_guest_mode(vcpu));
> } else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
> kvm_hv_vcpu_flush_tlb(vcpu);
> }
Looks good,
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Currently, HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls are handled
> the exact same way as HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE{,EX}: by
> flushing the whole VPID and this is sub-optimal. Switch to handling
> these requests with 'flush_tlb_gva()' hooks instead. Use the newly
> introduced TLB flush fifo to queue the requests.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/kvm/hyperv.c | 100 +++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 88 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 762b0b699fdf..956072592e2f 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1806,32 +1806,82 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
> sparse_banks, consumed_xmm_halves, offset);
> }
>
> -static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
> +static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[],
> + int consumed_xmm_halves, gpa_t offset)
> +{
> + return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt,
> + entries, consumed_xmm_halves, offset);
> +}
> +
> +static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
> {
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
> + unsigned long flags;
>
> if (!hv_vcpu)
> return;
>
> tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
>
> - kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
> + spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
> +
> + /*
> + * All entries should fit on the fifo leaving one free for 'flush all'
> + * entry in case another request comes in. In case there's not enough
> + * space, just put 'flush all' entry there.
> + */
> + if (count && entries && count < kfifo_avail(&tlb_flush_fifo->entries)) {
> + WARN_ON(kfifo_in(&tlb_flush_fifo->entries, entries, count) != count);
> + goto out_unlock;
> + }
> +
> + /*
> + * Note: full fifo always contains 'flush all' entry, no need to check the
> + * return value.
> + */
> + kfifo_in(&tlb_flush_fifo->entries, &entry, 1);
Very tiny nitpick: maybe call this flush_all_entry instead,
just so that it is a tiny bit easier to notice.
> +
> +out_unlock:
> + spin_unlock_irqrestore(&tlb_flush_fifo->write_lock, flags);
> }
>
> void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> {
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> + u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
> + int i, j, count;
> + gva_t gva;
>
> - kvm_vcpu_flush_tlb_guest(vcpu);
> -
> - if (!hv_vcpu)
> + if (!tdp_enabled || !hv_vcpu) {
I hadn't noticed this in the review I did back then, but is there any
reason for the !tdp_enabled check? Just curious - my guess would be that
with shadow paging flushing individual GVAs isn't sufficient and a full
guest TLB flush is needed anyway, but the patch doesn't say.
> + kvm_vcpu_flush_tlb_guest(vcpu);
> return;
> + }
>
> tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
>
> + count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
> +
> + for (i = 0; i < count; i++) {
> + if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
> + goto out_flush_all;
> +
> + /*
> + * Lower 12 bits of 'address' encode the number of additional
> + * pages to flush.
> + */
> + gva = entries[i] & PAGE_MASK;
> + for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> + static_call(kvm_x86_flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
> +
> + ++vcpu->stat.tlb_flush;
> + }
> + return;
> +
> +out_flush_all:
> + kvm_vcpu_flush_tlb_guest(vcpu);
> kfifo_reset_out(&tlb_flush_fifo->entries);
> }
>
> @@ -1841,11 +1891,21 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> struct hv_tlb_flush_ex flush_ex;
> struct hv_tlb_flush flush;
> DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
> + /*
> + * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
> + * entries on the TLB flush fifo. The last entry, however, needs to be
> + * always left free for 'flush all' entry which gets placed when
> + * there is not enough space to put all the requested entries.
> + */
> + u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
> + u64 *tlb_flush_entries;
> u64 valid_bank_mask;
> u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
> struct kvm_vcpu *v;
> unsigned long i;
> bool all_cpus;
> + int consumed_xmm_halves = 0;
> + gpa_t data_offset;
>
> /*
> * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
> @@ -1861,10 +1921,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> flush.address_space = hc->ingpa;
> flush.flags = hc->outgpa;
> flush.processor_mask = sse128_lo(hc->xmm[0]);
> + consumed_xmm_halves = 1;
> } else {
> if (unlikely(kvm_read_guest(kvm, hc->ingpa,
> &flush, sizeof(flush))))
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + data_offset = sizeof(flush);
> }
>
> trace_kvm_hv_flush_tlb(flush.processor_mask,
> @@ -1888,10 +1950,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> flush_ex.flags = hc->outgpa;
> memcpy(&flush_ex.hv_vp_set,
> &hc->xmm[0], sizeof(hc->xmm[0]));
> + consumed_xmm_halves = 2;
> } else {
> if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> sizeof(flush_ex))))
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + data_offset = sizeof(flush_ex);
> }
>
> trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
> @@ -1907,25 +1971,37 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
>
> if (all_cpus)
> - goto do_flush;
> + goto read_flush_entries;
>
> if (!hc->var_cnt)
> goto ret_success;
>
> - if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
> - offsetof(struct hv_tlb_flush_ex,
> - hv_vp_set.bank_contents)))
> + if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, consumed_xmm_halves,
> + data_offset))
> + return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
> + consumed_xmm_halves += hc->var_cnt;
> + }
> +
> +read_flush_entries:
> + if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
> + hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
> + hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
> + tlb_flush_entries = NULL;
> + } else {
> + if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries,
> + consumed_xmm_halves, data_offset))
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + tlb_flush_entries = __tlb_flush_entries;
> }
>
> -do_flush:
> /*
> * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
> * analyze it here, flush TLB regardless of the specified address space.
> */
> if (all_cpus) {
> kvm_for_each_vcpu(i, v, kvm)
> - hv_tlb_flush_enqueue(v);
> + hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
>
> kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
> } else {
> @@ -1935,7 +2011,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> v = kvm_get_vcpu(kvm, i);
> if (!v)
> continue;
> - hv_tlb_flush_enqueue(v);
> + hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
> }
>
> kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
Besides the nitpick, I don't see anything wrong, but I might have missed something.
Best regards,
Maxim Levitsky
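One more illustration for readers following along: the gva_list entry
encoding consumed by kvm_hv_vcpu_flush_tlb() above packs the base GVA and
the number of additional pages into a single u64. A made-up example, not
from the patch:

    /* Flush 3 pages starting at GVA 0x7f0000000000: base | (npages - 1) */
    u64 entry = 0x7f0000000000ULL | (3 - 1);
    /* The consumer splits it back apart: */
    gva_t gva = entry & PAGE_MASK;      /* 0x7f0000000000 */
    int extra = entry & ~PAGE_MASK;     /* 2 additional pages follow */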
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> HYPERV_LINUX_OS_ID needs to be written to HV_X64_MSR_GUEST_OS_ID by
> each Hyper-V specific selftest.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> tools/testing/selftests/kvm/include/x86_64/hyperv.h | 3 +++
> tools/testing/selftests/kvm/x86_64/hyperv_features.c | 5 ++---
> 2 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> index b66910702c0a..f0a8a93694b2 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> @@ -185,4 +185,7 @@
> /* hypercall options */
> #define HV_HYPERCALL_FAST_BIT BIT(16)
>
> +/* Proper HV_X64_MSR_GUEST_OS_ID value */
> +#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
> +
> #endif /* !SELFTEST_KVM_HYPERV_H */
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> index 672915ce73d8..98c020356925 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> @@ -14,7 +14,6 @@
> #include "hyperv.h"
>
> #define VCPU_ID 0
> -#define LINUX_OS_ID ((u64)0x8100 << 48)
>
> extern unsigned char rdmsr_start;
> extern unsigned char rdmsr_end;
> @@ -127,7 +126,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
> int i = 0;
> u64 res, input, output;
>
> - wrmsr(HV_X64_MSR_GUEST_OS_ID, LINUX_OS_ID);
> + wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
> wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
>
> while (hcall->control) {
> @@ -230,7 +229,7 @@ static void guest_test_msrs_access(void)
> */
> msr->idx = HV_X64_MSR_GUEST_OS_ID;
> msr->write = 1;
> - msr->write_val = LINUX_OS_ID;
> + msr->write_val = HYPERV_LINUX_OS_ID;
> msr->available = 1;
> break;
> case 3:
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Introduce a helper to quickly check if KVM needs to handle VMCALL/VMMCALL
> from L2 in L0 to process L2 TLB flush requests.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/hyperv.c | 6 ++++++
> arch/x86/kvm/hyperv.h | 7 +++++++
> 3 files changed, 14 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5d60c66ee0de..f9a34af0a5cc 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -642,6 +642,7 @@ struct kvm_vcpu_hv {
> u32 enlightenments_eax; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EAX */
> u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
> u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
> + u32 nested_features_eax; /* HYPERV_CPUID_NESTED_FEATURES.EAX */
> } cpuid_cache;
>
> struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 740190917c1c..4396d75588d8 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -2229,6 +2229,12 @@ void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu)
> hv_vcpu->cpuid_cache.syndbg_cap_eax = entry->eax;
> else
> hv_vcpu->cpuid_cache.syndbg_cap_eax = 0;
> +
> + entry = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_NESTED_FEATURES, 0);
> + if (entry)
> + hv_vcpu->cpuid_cache.nested_features_eax = entry->eax;
> + else
> + hv_vcpu->cpuid_cache.nested_features_eax = 0;
> }
>
> int kvm_hv_set_enforce_cpuid(struct kvm_vcpu *vcpu, bool enforce)
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index 7778b3a5913c..2aa6fb7fc599 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -170,6 +170,13 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu, bool is_gu
> kfifo_reset_out(&tlb_flush_fifo->entries);
> }
>
> +static inline bool guest_hv_cpuid_has_l2_tlb_flush(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> + return hv_vcpu && (hv_vcpu->cpuid_cache.nested_features_eax & HV_X64_NESTED_DIRECT_FLUSH);
> +}
> +
> static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu)
> {
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
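A usage sketch of how these two checks are meant to compose on an L2
hypercall vmexit; the surrounding shape is my assumption (borrowing nSVM's
NESTED_EXIT_HOST for illustration), the vendor patches later in the thread
wire it up for real:

    /* Let L0 handle the hypercall instead of reflecting it to L1 */
    if (guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
        kvm_hv_is_tlb_flush_hcall(vcpu))
            return NESTED_EXIT_HOST;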
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Hyper-V supports injecting a synthetic L2->L1 exit after performing an
> L2 TLB flush operation, but the procedure is vendor specific. Introduce
> .hv_inject_synthetic_vmexit_post_tlb_flush nested hook for it.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/Makefile | 3 ++-
> arch/x86/kvm/svm/hyperv.c | 11 +++++++++++
> arch/x86/kvm/svm/hyperv.h | 2 ++
> arch/x86/kvm/svm/nested.c | 1 +
> arch/x86/kvm/vmx/evmcs.c | 4 ++++
> arch/x86/kvm/vmx/evmcs.h | 1 +
> arch/x86/kvm/vmx/nested.c | 1 +
> 8 files changed, 23 insertions(+), 1 deletion(-)
> create mode 100644 arch/x86/kvm/svm/hyperv.c
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 02bef551dafb..5d60c66ee0de 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1603,6 +1603,7 @@ struct kvm_x86_nested_ops {
> int (*enable_evmcs)(struct kvm_vcpu *vcpu,
> uint16_t *vmcs_version);
> uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
> + void (*hv_inject_synthetic_vmexit_post_tlb_flush)(struct kvm_vcpu *vcpu);
> };
>
> struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
> index 30f244b64523..b6d53b045692 100644
> --- a/arch/x86/kvm/Makefile
> +++ b/arch/x86/kvm/Makefile
> @@ -25,7 +25,8 @@ kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
> vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
> kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
>
> -kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
> +kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o \
> + svm/sev.o svm/hyperv.o
>
> ifdef CONFIG_HYPERV
> kvm-amd-y += svm/svm_onhyperv.o
> diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
> new file mode 100644
> index 000000000000..911f51021af1
> --- /dev/null
> +++ b/arch/x86/kvm/svm/hyperv.c
> @@ -0,0 +1,11 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * AMD SVM specific code for Hyper-V on KVM.
> + *
> + * Copyright 2022 Red Hat, Inc. and/or its affiliates.
> + */
> +#include "hyperv.h"
> +
> +void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
> +{
> +}
> diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
> index 8cf702fed7e5..dd2e393f84a0 100644
> --- a/arch/x86/kvm/svm/hyperv.h
> +++ b/arch/x86/kvm/svm/hyperv.h
> @@ -48,4 +48,6 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> hv_vcpu->nested.vp_id = hve->hv_vp_id;
> }
>
> +void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
> +
> #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index e8908cc56e22..28b63663e1d9 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1721,4 +1721,5 @@ struct kvm_x86_nested_ops svm_nested_ops = {
> .get_nested_state_pages = svm_get_nested_state_pages,
> .get_state = svm_get_nested_state,
> .set_state = svm_set_nested_state,
> + .hv_inject_synthetic_vmexit_post_tlb_flush = svm_hv_inject_synthetic_vmexit_post_tlb_flush,
> };
> diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
> index 6a61b1ae7942..805afc170b5b 100644
> --- a/arch/x86/kvm/vmx/evmcs.c
> +++ b/arch/x86/kvm/vmx/evmcs.c
> @@ -439,3 +439,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
>
> return 0;
> }
> +
> +void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
> +{
> +}
> diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
> index f886a8ff0342..584741b85eb6 100644
> --- a/arch/x86/kvm/vmx/evmcs.h
> +++ b/arch/x86/kvm/vmx/evmcs.h
> @@ -245,5 +245,6 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
> uint16_t *vmcs_version);
> void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
> int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
> +void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
>
> #endif /* __KVM_X86_VMX_EVMCS_H */
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 6e264a7f205b..4a827b3d929a 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -6863,4 +6863,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
> .write_log_dirty = nested_vmx_write_pml_buffer,
> .enable_evmcs = nested_enable_evmcs,
> .get_evmcs_version = nested_get_evmcs_version,
> + .hv_inject_synthetic_vmexit_post_tlb_flush = vmx_hv_inject_synthetic_vmexit_post_tlb_flush,
> };
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
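For context, a sketch of where this hook presumably ends up being invoked
once L2 TLB flush handling is enabled; the call site below is an assumption,
not part of this patch:

    /* After L0 has handled an L2-originated TLB flush hypercall */
    if (is_guest_mode(vcpu))
            kvm_x86_ops.nested_ops->hv_inject_synthetic_vmexit_post_tlb_flush(vcpu);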
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Implement Hyper-V L2 TLB flush for nSVM. The feature needs to be enabled
> both in extended 'nested controls' in VMCB and VP assist page.
> According to the Hyper-V TLFS, a synthetic vmexit to L1 is performed with
> - HV_SVM_EXITCODE_ENL exit_code.
> - HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH exit_info_1.
>
> Note: VP assist page is cached in 'struct kvm_vcpu_hv' so
> recalc_intercepts() doesn't need to read from guest's memory. KVM
> needs to update the cache upon each VMRUN and after svm_set_nested_state
> (svm_get_nested_state_pages()) to handle the case when the guest got
> migrated while L2 was running.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/kvm/svm/hyperv.c | 7 +++++++
> arch/x86/kvm/svm/hyperv.h | 30 ++++++++++++++++++++++++++++++
> arch/x86/kvm/svm/nested.c | 36 ++++++++++++++++++++++++++++++++++--
> 3 files changed, 71 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
> index 911f51021af1..088f6429b24c 100644
> --- a/arch/x86/kvm/svm/hyperv.c
> +++ b/arch/x86/kvm/svm/hyperv.c
> @@ -8,4 +8,11 @@
>
> void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
> {
> + struct vcpu_svm *svm = to_svm(vcpu);
> +
> + svm->vmcb->control.exit_code = HV_SVM_EXITCODE_ENL;
> + svm->vmcb->control.exit_code_hi = 0;
> + svm->vmcb->control.exit_info_1 = HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH;
> + svm->vmcb->control.exit_info_2 = 0;
> + nested_svm_vmexit(svm);
> }
> diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
> index dd2e393f84a0..7b01722838bf 100644
> --- a/arch/x86/kvm/svm/hyperv.h
> +++ b/arch/x86/kvm/svm/hyperv.h
> @@ -33,6 +33,9 @@ struct hv_enlightenments {
> */
> #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
>
> +#define HV_SVM_EXITCODE_ENL 0xF0000000
> +#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH (1)
> +
> static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
> @@ -48,6 +51,33 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> hv_vcpu->nested.vp_id = hve->hv_vp_id;
> }
>
> +static inline bool nested_svm_hv_update_vp_assist(struct kvm_vcpu *vcpu)
> +{
> + if (!to_hv_vcpu(vcpu))
> + return true;
> +
> + if (!kvm_hv_assist_page_enabled(vcpu))
> + return true;
> +
> + return kvm_hv_get_assist_page(vcpu);
> +}
> +
> +static inline bool nested_svm_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
> +{
> + struct vcpu_svm *svm = to_svm(vcpu);
> + struct hv_enlightenments *hve =
> + (struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> + if (!hv_vcpu)
> + return false;
> +
> + if (!hve->hv_enlightenments_control.nested_flush_hypercall)
> + return false;
> +
> + return hv_vcpu->vp_assist_page.nested_control.features.directhypercall;
> +}
> +
> void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
>
> #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 28b63663e1d9..369b92aaf1ad 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -171,8 +171,12 @@ void recalc_intercepts(struct vcpu_svm *svm)
> vmcb_clr_intercept(c, INTERCEPT_VINTR);
> }
>
> - /* We don't want to see VMMCALLs from a nested guest */
> - vmcb_clr_intercept(c, INTERCEPT_VMMCALL);
> + /*
> + * We want to see VMMCALLs from a nested guest only when Hyper-V L2 TLB
> + * flush feature is enabled.
> + */
> + if (!nested_svm_l2_tlb_flush_enabled(&svm->vcpu))
> + vmcb_clr_intercept(c, INTERCEPT_VMMCALL);
>
> for (i = 0; i < MAX_INTERCEPT; i++)
> c->intercepts[i] |= g->intercepts[i];
> @@ -489,6 +493,17 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
>
> static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
> {
> + /*
> + * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
> + * L2's VP_ID upon request from the guest. Make sure we check for
> + * pending entries for the case when the request got misplaced (e.g.
> + * a transition from L2->L1 happened while processing L2 TLB flush
> + * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
> + * anything if there are no requests in the corresponding buffer.
> + */
> + if (to_hv_vcpu(vcpu))
> + kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> +
> /*
> * TODO: optimize unconditional TLB flush/MMU sync. A partial list of
> * things to fix before this can be conditional:
> @@ -835,6 +850,12 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> return 1;
> }
>
> + /* This fails when VP assist page is enabled but the supplied GPA is bogus */
> + if (!nested_svm_hv_update_vp_assist(vcpu)) {
> + kvm_inject_gp(vcpu, 0);
> + return 1;
> + }
> +
> vmcb12_gpa = svm->vmcb->save.rax;
> ret = kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map);
> if (ret == -EINVAL) {
> @@ -1412,6 +1433,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
> int nested_svm_exit_special(struct vcpu_svm *svm)
> {
> u32 exit_code = svm->vmcb->control.exit_code;
> + struct kvm_vcpu *vcpu = &svm->vcpu;
>
> switch (exit_code) {
> case SVM_EXIT_INTR:
> @@ -1430,6 +1452,13 @@ int nested_svm_exit_special(struct vcpu_svm *svm)
> return NESTED_EXIT_HOST;
> break;
> }
> + case SVM_EXIT_VMMCALL:
> + /* Hyper-V L2 TLB flush hypercall is handled by L0 */
> + if (guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
> + nested_svm_l2_tlb_flush_enabled(vcpu) &&
> + kvm_hv_is_tlb_flush_hcall(vcpu))
> + return NESTED_EXIT_HOST;
> + break;
> default:
> break;
> }
> @@ -1710,6 +1739,9 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
> return false;
> }
>
> + if (!nested_svm_hv_update_vp_assist(vcpu))
> + return false;
> +
> return true;
> }
>
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
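To make the new exit code concrete, here is a sketch of what an enlightened
L1 would see on the synthetic exit (L1-side pseudo-handler, not part of this
patch):

    /* In L1's #VMEXIT handler */
    if (vmcb->control.exit_code == HV_SVM_EXITCODE_ENL &&
        vmcb->control.exit_info_1 == HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH) {
            /* L0 already performed the TLB flush on L1's behalf */
    }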
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> All Hyper-V specific tests issuing hypercalls need this.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/hyperv.h | 15 +++++++++++++++
> .../selftests/kvm/x86_64/hyperv_features.c | 17 +----------------
> 2 files changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> index f0a8a93694b2..e0a1b4c2fbbc 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> @@ -185,6 +185,21 @@
> /* hypercall options */
> #define HV_HYPERCALL_FAST_BIT BIT(16)
>
> +static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
> + vm_vaddr_t output_address)
> +{
> + u64 hv_status;
> +
> + asm volatile("mov %3, %%r8\n"
> + "vmcall"
> + : "=a" (hv_status),
> + "+c" (control), "+d" (input_address)
> + : "r" (output_address)
> + : "cc", "memory", "r8", "r9", "r10", "r11");
> +
> + return hv_status;
> +}
> +
> /* Proper HV_X64_MSR_GUEST_OS_ID value */
> #define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> index 98c020356925..788d570e991e 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> @@ -48,21 +48,6 @@ static void do_wrmsr(u32 idx, u64 val)
> static int nr_gp;
> static int nr_ud;
>
> -static inline u64 hypercall(u64 control, vm_vaddr_t input_address,
> - vm_vaddr_t output_address)
> -{
> - u64 hv_status;
> -
> - asm volatile("mov %3, %%r8\n"
> - "vmcall"
> - : "=a" (hv_status),
> - "+c" (control), "+d" (input_address)
> - : "r" (output_address)
> - : "cc", "memory", "r8", "r9", "r10", "r11");
> -
> - return hv_status;
> -}
> -
> static void guest_gp_handler(struct ex_regs *regs)
> {
> unsigned char *rip = (unsigned char *)regs->rip;
> @@ -138,7 +123,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
> input = output = 0;
> }
>
> - res = hypercall(hcall->control, input, output);
> + res = hyperv_hypercall(hcall->control, input, output);
> if (hcall->ud_expected)
> GUEST_ASSERT(nr_ud == 1);
> else
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
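A note for anyone new to this helper: per the Hyper-V TLFS x64 hypercall
calling convention, RCX carries the hypercall control word, RDX the input
parameters GPA, R8 the output parameters GPA, and the status comes back in
RAX, which is exactly what the asm above wires up. A minimal usage sketch
(variable names hypothetical):

    u64 status = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE,
                                  input_gpa, output_gpa);
    GUEST_ASSERT((status & 0xffff) == 0); /* bits 15:0 hold the HV_STATUS_* code */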
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Similar to nSVM, KVM needs to know L2's VM_ID/VP_ID and Partition
> assist page address to handle L2 TLB flush requests.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
> arch/x86/kvm/svm/nested.c | 2 ++
> 2 files changed, 18 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
> index 7d6d97968fb9..8cf702fed7e5 100644
> --- a/arch/x86/kvm/svm/hyperv.h
> +++ b/arch/x86/kvm/svm/hyperv.h
> @@ -9,6 +9,7 @@
> #include <asm/mshyperv.h>
>
> #include "../hyperv.h"
> +#include "svm.h"
>
> /*
> * Hyper-V uses the software reserved 32 bytes in VMCB
> @@ -32,4 +33,19 @@ struct hv_enlightenments {
> */
> #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
>
> +static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> +{
> + struct vcpu_svm *svm = to_svm(vcpu);
> + struct hv_enlightenments *hve =
> + (struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> + if (!hv_vcpu)
> + return;
> +
> + hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
> + hv_vcpu->nested.vm_id = hve->hv_vm_id;
> + hv_vcpu->nested.vp_id = hve->hv_vp_id;
> +}
> +
> #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 88da8edbe1e1..e8908cc56e22 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -811,6 +811,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
> if (kvm_vcpu_apicv_active(vcpu))
> kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
>
> + nested_svm_hv_update_vm_vp_ids(vcpu);
> +
> return 0;
> }
>
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Introduce a selftest for Hyper-V PV TLB flush hypercalls
> (HvFlushVirtualAddressSpace/HvFlushVirtualAddressSpaceEx,
> HvFlushVirtualAddressList/HvFlushVirtualAddressListEx).
>
> The test creates one 'sender' vCPU and two 'worker' vCPUs which busy-loop
> reading from a certain GVA, checking the observed value. The sender
> vCPU drops to the host to swap the data page with another page filled
> with a different value. The expectation for workers is also
> altered. Without TLB flush on worker vCPUs, they may continue to
> observe old value. To guard against accidental TLB flushes for worker
> vCPUs the test is repeated 100 times.
>
> Hyper-V TLB flush hypercalls are tested in both 'normal' and 'XMM
> fast' modes.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/Makefile | 1 +
> .../selftests/kvm/include/x86_64/hyperv.h | 1 +
> .../selftests/kvm/x86_64/hyperv_tlb_flush.c | 660 ++++++++++++++++++
> 4 files changed, 663 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index 19a8454e3760..7f086656f3e0 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -26,6 +26,7 @@
> /x86_64/hyperv_features
> /x86_64/hyperv_ipi
> /x86_64/hyperv_svm_test
> +/x86_64/hyperv_tlb_flush
> /x86_64/max_vcpuid_cap_test
> /x86_64/mmio_warning_test
> /x86_64/mmu_role_test
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index cf433073fb64..1e61ccc0da4d 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -54,6 +54,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
> TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
> TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
> TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
> +TEST_GEN_PROGS_x86_64 += x86_64/hyperv_tlb_flush
> TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
> TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
> TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
> diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> index 1b467626be58..c302027fa6d5 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
> @@ -187,6 +187,7 @@
> /* hypercall options */
> #define HV_HYPERCALL_FAST_BIT BIT(16)
> #define HV_HYPERCALL_VARHEAD_OFFSET 17
> +#define HV_HYPERCALL_REP_COMP_OFFSET 32
>
> static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
> vm_vaddr_t output_address)
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
> new file mode 100644
> index 000000000000..d23e40d3b480
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
> @@ -0,0 +1,660 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Hyper-V HvFlushVirtualAddress{List,Space}{,Ex} tests
> + *
> + * Copyright (C) 2022, Red Hat, Inc.
> + *
> + */
> +
> +#define _GNU_SOURCE /* for program_invocation_short_name */
> +#include <pthread.h>
> +#include <inttypes.h>
> +
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "hyperv.h"
> +#include "test_util.h"
> +#include "vmx.h"
> +
> +#define SENDER_VCPU_ID 1
> +#define WORKER_VCPU_ID_1 2
> +#define WORKER_VCPU_ID_2 65
> +
> +#define NTRY 100
> +#define NTEST_PAGES 2
> +
> +struct thread_params {
> + struct kvm_vm *vm;
> + uint32_t vcpu_id;
> +};
> +
> +struct hv_vpset {
> + u64 format;
> + u64 valid_bank_mask;
> + u64 bank_contents[];
> +};
> +
> +enum HV_GENERIC_SET_FORMAT {
> + HV_GENERIC_SET_SPARSE_4K,
> + HV_GENERIC_SET_ALL,
> +};
> +
> +#define HV_FLUSH_ALL_PROCESSORS BIT(0)
> +#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES BIT(1)
> +#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY BIT(2)
> +#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT BIT(3)
> +
> +/* HvFlushVirtualAddressSpace, HvFlushVirtualAddressList hypercalls */
> +struct hv_tlb_flush {
> + u64 address_space;
> + u64 flags;
> + u64 processor_mask;
> + u64 gva_list[];
> +} __packed;
> +
> +/* HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressListEx hypercalls */
> +struct hv_tlb_flush_ex {
> + u64 address_space;
> + u64 flags;
> + struct hv_vpset hv_vp_set;
> + u64 gva_list[];
> +} __packed;
> +
> +/*
> + * Pass the following info to 'workers' and 'sender'
> + * - Hypercall page's GVA
> + * - Hypercall page's GPA
> + * - Test pages GVA
> + * - GVAs of the test pages' PTEs
> + */
> +struct test_data {
> + vm_vaddr_t hcall_gva;
> + vm_paddr_t hcall_gpa;
> + vm_vaddr_t test_pages;
> + vm_vaddr_t test_pages_pte[NTEST_PAGES];
> +};
> +
> +/* 'Worker' vCPU code checking the contents of the test page */
> +static void worker_guest_code(vm_vaddr_t test_data)
> +{
> + struct test_data *data = (struct test_data *)test_data;
> + u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
> + unsigned char chr_exp1, chr_exp2, chr_cur;
> +
> + x2apic_enable();
> + wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
> +
> + for (;;) {
> + /* Read the expected char, then check what's in the test pages and then
> + * check the expectation again to make sure it wasn't updated in the meantime.
> + */
> + chr_exp1 = READ_ONCE(*(unsigned char *)
> + (data->test_pages + PAGE_SIZE * NTEST_PAGES + vcpu_id));
> + asm volatile("lfence");
> + chr_cur = *(unsigned char *)data->test_pages;
> + asm volatile("lfence");
> + chr_exp2 = READ_ONCE(*(unsigned char *)
> + (data->test_pages + PAGE_SIZE * NTEST_PAGES + vcpu_id));
> + if (chr_exp1 && chr_exp1 == chr_exp2)
> + GUEST_ASSERT(chr_cur == chr_exp1);
> + asm volatile("nop");
> + }
> +}
> +
> +/*
> + * Write per-CPU info indicating what each 'worker' CPU is supposed to see in
> + * test page. '0' means don't check.
> + */
> +static void set_expected_char(void *addr, unsigned char chr, int vcpu_id)
> +{
> + asm volatile("mfence");
> + *(unsigned char *)(addr + NTEST_PAGES * PAGE_SIZE + vcpu_id) = chr;
> +}
> +
> +/* Update PTEs swapping two test pages */
> +static void swap_two_test_pages(vm_paddr_t pte_gva1, vm_paddr_t pte_gva2)
> +{
> + uint64_t pte[2];
> +
> + pte[0] = *(uint64_t *)pte_gva1;
> + pte[1] = *(uint64_t *)pte_gva2;
> +
> + *(uint64_t *)pte_gva1 = pte[1];
> + *(uint64_t *)pte_gva2 = pte[0];
> +}
> +
> +/* Delay */
> +static inline void rep_nop(void)
> +{
> + int i;
> +
> + for (i = 0; i < 1000000; i++)
> + asm volatile("nop");
> +}
> +
> +/*
> + * Prepare to test: 'disable' workers by setting the expectation to '0',
> + * clear hypercall input page and then swap two test pages.
> + */
> +static inline void prepare_to_test(struct test_data *data)
> +{
> + /* Clear hypercall input page */
> + memset((void *)data->hcall_gva, 0, PAGE_SIZE);
> +
> + /* 'Disable' workers */
> + set_expected_char((void *)data->test_pages, 0x0, WORKER_VCPU_ID_1);
> + set_expected_char((void *)data->test_pages, 0x0, WORKER_VCPU_ID_2);
> +
> + /* Make sure workers have enough time to notice */
> + asm volatile("mfence");
> + rep_nop();
> +
> + /* Swap test page mappings */
> + swap_two_test_pages(data->test_pages_pte[0], data->test_pages_pte[1]);
> +}
> +
> +/*
> + * Finalize the test: check the hypercall result, set the expected char for
> + * 'worker' CPUs and give them some time to test.
> + */
> +static inline void post_test(struct test_data *data, u64 res,
> + char exp_char1, char exp_char2)
> +{
> + /* Check hypercall return code */
> + GUEST_ASSERT((res & 0xffff) == 0);
> +
> + /* Set the expectation for workers, '0' means don't test */
> + set_expected_char((void *)data->test_pages, exp_char1, WORKER_VCPU_ID_1);
> + set_expected_char((void *)data->test_pages, exp_char2, WORKER_VCPU_ID_2);
> +
> + /* Make sure workers have enough time to test */
> + asm volatile("mfence");
> + rep_nop();
> +}
> +
> +/* Main vCPU doing the test */
> +static void sender_guest_code(vm_vaddr_t test_data)
> +{
> + struct test_data *data = (struct test_data *)test_data;
> + struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
> + struct hv_tlb_flush_ex *flush_ex = (struct hv_tlb_flush_ex *)data->hcall_gva;
> + vm_paddr_t hcall_gpa = data->hcall_gpa;
> + u64 res;
> + int i, stage = 1;
> +
> + wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
> + wrmsr(HV_X64_MSR_HYPERCALL, data->hcall_gpa);
> +
> + /* "Slow" hypercalls */
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush->processor_mask = BIT(WORKER_VCPU_ID_1);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, hcall_gpa,
> + hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush->processor_mask = BIT(WORKER_VCPU_ID_1);
> + flush->gva_list[0] = (u64)data->test_pages;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
> + flush->processor_mask = 0;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, hcall_gpa,
> + hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
> + flush->gva_list[0] = (u64)data->test_pages;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
> + (1 << HV_HYPERCALL_VARHEAD_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + /* bank_contents and gva_list occupy the same space, thus [1] */
> + flush_ex->gva_list[1] = (u64)data->test_pages;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
> + (1 << HV_HYPERCALL_VARHEAD_OFFSET) |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
> + BIT_ULL(WORKER_VCPU_ID_1 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
> + flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
> + (2 << HV_HYPERCALL_VARHEAD_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_1 / 64) |
> + BIT_ULL(WORKER_VCPU_ID_2 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
> + flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + /* bank_contents and gva_list occupy the same space, thus [2] */
> + flush_ex->gva_list[2] = (u64)data->test_pages;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
> + (2 << HV_HYPERCALL_VARHEAD_OFFSET) |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX,
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
> + flush_ex->gva_list[0] = (u64)data->test_pages;
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + hcall_gpa, hcall_gpa + PAGE_SIZE);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + /* "Fast" hypercalls */
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->processor_mask = BIT(WORKER_VCPU_ID_1);
> + hyperv_write_xmm_input(&flush->processor_mask, 1);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
> + HV_HYPERCALL_FAST_BIT, 0x0,
> + HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->processor_mask = BIT(WORKER_VCPU_ID_1);
> + flush->gva_list[0] = (u64)data->test_pages;
> + hyperv_write_xmm_input(&flush->processor_mask, 1);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
> + HV_HYPERCALL_FAST_BIT |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + hyperv_write_xmm_input(&flush->processor_mask, 1);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
> + HV_HYPERCALL_FAST_BIT, 0x0,
> + HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
> + HV_FLUSH_ALL_PROCESSORS);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush->gva_list[0] = (u64)data->test_pages;
> + hyperv_write_xmm_input(&flush->processor_mask, 1);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
> + HV_HYPERCALL_FAST_BIT |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), 0x0,
> + HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
> + HV_FLUSH_ALL_PROCESSORS);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
> + HV_HYPERCALL_FAST_BIT |
> + (1 << HV_HYPERCALL_VARHEAD_OFFSET),
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + /* bank_contents and gva_list occupy the same space, thus [1] */
> + flush_ex->gva_list[1] = (u64)data->test_pages;
> + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
> + HV_HYPERCALL_FAST_BIT |
> + (1 << HV_HYPERCALL_VARHEAD_OFFSET) |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
> + BIT_ULL(WORKER_VCPU_ID_1 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
> + flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
> + HV_HYPERCALL_FAST_BIT |
> + (2 << HV_HYPERCALL_VARHEAD_OFFSET),
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> + flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_1 / 64) |
> + BIT_ULL(WORKER_VCPU_ID_2 / 64);
> + flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
> + flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
> + /* bank_contents and gva_list occupy the same space, thus [2] */
> + flush_ex->gva_list[2] = (u64)data->test_pages;
> + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 3);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
> + HV_HYPERCALL_FAST_BIT |
> + (2 << HV_HYPERCALL_VARHEAD_OFFSET) |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
> + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
> + HV_HYPERCALL_FAST_BIT,
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_SYNC(stage++);
> +
> + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */
> + for (i = 0; i < NTRY; i++) {
> + prepare_to_test(data);
> + flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> + flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
> + flush_ex->gva_list[0] = (u64)data->test_pages;
> + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2);
> + res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
> + HV_HYPERCALL_FAST_BIT |
> + (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
> + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
> + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
> + }
> +
> + GUEST_DONE();
> +}
> +
> +static void *vcpu_thread(void *arg)
> +{
> + struct thread_params *params = (struct thread_params *)arg;
> + struct ucall uc;
> + int old;
> + int r;
> + unsigned int exit_reason;
> +
> + r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
> + TEST_ASSERT(r == 0,
> + "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
> + params->vcpu_id, r);
> +
> + vcpu_run(params->vm, params->vcpu_id);
> + exit_reason = vcpu_state(params->vm, params->vcpu_id)->exit_reason;
> +
> + TEST_ASSERT(exit_reason == KVM_EXIT_IO,
> + "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT_IO",
> + params->vcpu_id, exit_reason, exit_reason_str(exit_reason));
> +
> + if (get_ucall(params->vm, params->vcpu_id, &uc) == UCALL_ABORT) {
> + TEST_ASSERT(false,
> + "vCPU %u exited with error: %s.\n",
> + params->vcpu_id, (const char *)uc.args[0]);
> + }
> +
> + return NULL;
> +}
> +
> +static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id)
> +{
> + void *retval;
> + int r;
> +
> + r = pthread_cancel(thread);
> + TEST_ASSERT(r == 0,
> + "pthread_cancel on vcpu_id=%d failed with errno=%d",
> + vcpu_id, r);
> +
> + r = pthread_join(thread, &retval);
> + TEST_ASSERT(r == 0,
> + "pthread_join on vcpu_id=%d failed with errno=%d",
> + vcpu_id, r);
> + TEST_ASSERT(retval == PTHREAD_CANCELED,
> + "expected retval=%p, got %p", PTHREAD_CANCELED,
> + retval);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> + pthread_t threads[2];
> + struct thread_params params[2];
> + struct kvm_vm *vm;
> + struct kvm_run *run;
> + vm_vaddr_t test_data_page, gva;
> + vm_paddr_t gpa;
> + uint64_t *pte;
> + struct test_data *data;
> + struct ucall uc;
> + int stage = 1, r, i;
> +
> + vm = vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code);
> + params[0].vm = vm;
> + params[1].vm = vm;
> +
> + /* Test data page */
> + test_data_page = vm_vaddr_alloc_page(vm);
> + data = (struct test_data *)addr_gva2hva(vm, test_data_page);
> +
> + /* Hypercall input/output */
> + data->hcall_gva = vm_vaddr_alloc_pages(vm, 2);
> + data->hcall_gpa = addr_gva2gpa(vm, data->hcall_gva);
> + memset(addr_gva2hva(vm, data->hcall_gva), 0x0, 2 * PAGE_SIZE);
> +
> + /*
> + * Test pages: the first one is filled with '0x1's, the second with '0x2's
> + * and the test will swap their mappings. The third page keeps the indication
> + * about the current state of mappings.
> + */
> + data->test_pages = vm_vaddr_alloc_pages(vm, NTEST_PAGES + 1);
> + for (i = 0; i < NTEST_PAGES; i++)
> + memset(addr_gva2hva(vm, data->test_pages + PAGE_SIZE * i),
> + (char)(i + 1), PAGE_SIZE);
> + set_expected_char(addr_gva2hva(vm, data->test_pages), 0x0, WORKER_VCPU_ID_1);
> + set_expected_char(addr_gva2hva(vm, data->test_pages), 0x0, WORKER_VCPU_ID_2);
> +
> + /*
> + * Get PTE pointers for test pages and map them inside the guest.
> + * Use separate page for each PTE for simplicity.
> + */
> + gva = vm_vaddr_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
> + for (i = 0; i < NTEST_PAGES; i++) {
> + pte = _vm_get_page_table_entry(vm, SENDER_VCPU_ID,
> + data->test_pages + i * PAGE_SIZE);
> + gpa = addr_hva2gpa(vm, pte);
> + __virt_pg_map(vm, gva + PAGE_SIZE * i, gpa & PAGE_MASK, X86_PAGE_SIZE_4K);
> + data->test_pages_pte[i] = gva + (gpa & ~PAGE_MASK);
> + }
> +
> + /*
> + * Sender vCPU which performs the test: swaps test pages, sets expectation
> + * for 'workers' and issues TLB flush hypercalls.
> + */
> + vcpu_args_set(vm, SENDER_VCPU_ID, 1, test_data_page);
> + vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID);
> +
> + /* Create worker vCPUs which check the contents of the test pages */
> + vm_vcpu_add_default(vm, WORKER_VCPU_ID_1, worker_guest_code);
> + vcpu_args_set(vm, WORKER_VCPU_ID_1, 1, test_data_page);
> + vcpu_set_msr(vm, WORKER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_1);
> + vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_1);
> +
> + vm_vcpu_add_default(vm, WORKER_VCPU_ID_2, worker_guest_code);
> + vcpu_args_set(vm, WORKER_VCPU_ID_2, 1, test_data_page);
> + vcpu_set_msr(vm, WORKER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_2);
> + vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_2);
> +
> + params[0].vcpu_id = WORKER_VCPU_ID_1;
> + r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
> + TEST_ASSERT(r == 0,
> + "pthread_create failed errno=%d", errno);
> +
> + params[1].vcpu_id = WORKER_VCPU_ID_2;
> + r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
> + TEST_ASSERT(r == 0,
> + "pthread_create failed errno=%d", errno);
> +
> + run = vcpu_state(vm, SENDER_VCPU_ID);
> +
> + while (true) {
> + r = _vcpu_run(vm, SENDER_VCPU_ID);
> + TEST_ASSERT(!r, "vcpu_run failed: %d\n", r);
> + TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
> + "unexpected exit reason: %u (%s)",
> + run->exit_reason, exit_reason_str(run->exit_reason));
> +
> + switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) {
> + case UCALL_SYNC:
> + TEST_ASSERT(uc.args[1] == stage,
> + "Unexpected stage: %ld (%d expected)\n",
> + uc.args[1], stage);
> + break;
> + case UCALL_ABORT:
> + TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
> + __FILE__, uc.args[1]);
> + return 1;
> + case UCALL_DONE:
> + return 0;
> + }
> +
> + stage++;
> + }
> +
> + cancel_join_vcpu_thread(threads[0], WORKER_VCPU_ID_1);
> + cancel_join_vcpu_thread(threads[1], WORKER_VCPU_ID_2);
> + kvm_vm_free(vm);
> +
> + return 0;
> +}
Looks good overall. I didn't check everything, so I could have missed something.
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> In preparation to enabling L2 TLB flush, cache VP assist page in
> 'struct kvm_vcpu_hv'. While at it, rename nested_enlightened_vmentry()
> to nested_get_evmptr() and make it return eVMCS GPA directly.
>
> No functional change intended.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 2 ++
> arch/x86/kvm/hyperv.c | 10 ++++++----
> arch/x86/kvm/hyperv.h | 3 +--
> arch/x86/kvm/vmx/evmcs.c | 21 +++++++--------------
> arch/x86/kvm/vmx/evmcs.h | 2 +-
> arch/x86/kvm/vmx/nested.c | 6 +++---
> 6 files changed, 20 insertions(+), 24 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f9a34af0a5cc..e62db76c8d37 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -650,6 +650,8 @@ struct kvm_vcpu_hv {
> /* Preallocated buffer for handling hypercalls passing sparse vCPU set */
> u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
>
> + struct hv_vp_assist_page vp_assist_page;
> +
> struct {
> u64 pa_page_gpa;
> u64 vm_id;
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 4396d75588d8..91310774c0b9 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -903,13 +903,15 @@ bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_hv_assist_page_enabled);
>
> -bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
> - struct hv_vp_assist_page *assist_page)
> +bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu)
> {
> - if (!kvm_hv_assist_page_enabled(vcpu))
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> + if (!hv_vcpu || !kvm_hv_assist_page_enabled(vcpu))
> return false;
> +
> return !kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
> - assist_page, sizeof(*assist_page));
> + &hv_vcpu->vp_assist_page, sizeof(struct hv_vp_assist_page));
> }
> EXPORT_SYMBOL_GPL(kvm_hv_get_assist_page);
>
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index 2aa6fb7fc599..139beb55b781 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -105,8 +105,7 @@ int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages);
> void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu);
>
> bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu);
> -bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
> - struct hv_vp_assist_page *assist_page);
> +bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu);
>
> static inline struct kvm_vcpu_hv_stimer *to_hv_stimer(struct kvm_vcpu *vcpu,
> int timer_index)
> diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
> index 805afc170b5b..7cd7b16942c6 100644
> --- a/arch/x86/kvm/vmx/evmcs.c
> +++ b/arch/x86/kvm/vmx/evmcs.c
> @@ -307,24 +307,17 @@ __init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
> }
> #endif
>
> -bool nested_enlightened_vmentry(struct kvm_vcpu *vcpu, u64 *evmcs_gpa)
> +u64 nested_get_evmptr(struct kvm_vcpu *vcpu)
> {
> - struct hv_vp_assist_page assist_page;
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
>
> - *evmcs_gpa = -1ull;
> + if (unlikely(!kvm_hv_get_assist_page(vcpu)))
> + return EVMPTR_INVALID;
>
> - if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page)))
> - return false;
> + if (unlikely(!hv_vcpu->vp_assist_page.enlighten_vmentry))
> + return EVMPTR_INVALID;
>
> - if (unlikely(!assist_page.enlighten_vmentry))
> - return false;
> -
> - if (unlikely(!evmptr_is_valid(assist_page.current_nested_vmcs)))
> - return false;
> -
> - *evmcs_gpa = assist_page.current_nested_vmcs;
> -
> - return true;
> + return hv_vcpu->vp_assist_page.current_nested_vmcs;
> }
>
> uint16_t nested_get_evmcs_version(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
> index 584741b85eb6..22d238b36238 100644
> --- a/arch/x86/kvm/vmx/evmcs.h
> +++ b/arch/x86/kvm/vmx/evmcs.h
> @@ -239,7 +239,7 @@ enum nested_evmptrld_status {
> EVMPTRLD_ERROR,
> };
>
> -bool nested_enlightened_vmentry(struct kvm_vcpu *vcpu, u64 *evmcs_gpa);
> +u64 nested_get_evmptr(struct kvm_vcpu *vcpu);
> uint16_t nested_get_evmcs_version(struct kvm_vcpu *vcpu);
> int nested_enable_evmcs(struct kvm_vcpu *vcpu,
> uint16_t *vmcs_version);
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 4a827b3d929a..87bff81f7f3e 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1995,7 +1995,8 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
> if (likely(!vmx->nested.enlightened_vmcs_enabled))
> return EVMPTRLD_DISABLED;
>
> - if (!nested_enlightened_vmentry(vcpu, &evmcs_gpa)) {
> + evmcs_gpa = nested_get_evmptr(vcpu);
> + if (!evmptr_is_valid(evmcs_gpa)) {
> nested_release_evmcs(vcpu);
> return EVMPTRLD_DISABLED;
> }
> @@ -5084,7 +5085,6 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> u32 zero = 0;
> gpa_t vmptr;
> - u64 evmcs_gpa;
> int r;
>
> if (!nested_vmx_check_permission(vcpu))
> @@ -5110,7 +5110,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
> * vmx->nested.hv_evmcs but this shouldn't be a problem.
> */
> if (likely(!vmx->nested.enlightened_vmcs_enabled ||
> - !nested_enlightened_vmentry(vcpu, &evmcs_gpa))) {
> + !evmptr_is_valid(nested_get_evmptr(vcpu)))) {
> if (vmptr == vmx->nested.current_vmptr)
> nested_release_vmcs12(vcpu);
>
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> To handle L2 TLB flush requests, KVM needs to use a separate fifo from
> regular (L1) Hyper-V TLB flush requests: e.g. when a request to flush
> something in L2 is made, the target vCPU can transition from L2 to L1,
> receive a request to flush a GVA for L1 and then try to enter L2 back.
> The first request needs to be processed at this point. Similarly,
> requests to flush GVAs in L1 must wait until L2 exits to L1.
>
> No functional change as KVM doesn't handle L2 TLB flush requests from
> L2 yet.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 8 +++++++-
> arch/x86/kvm/hyperv.c | 11 +++++++----
> arch/x86/kvm/hyperv.h | 17 ++++++++++++++---
> 3 files changed, 28 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index cf3748be236d..0e58ab00dff0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -613,6 +613,12 @@ struct kvm_vcpu_hv_synic {
> */
> #define KVM_HV_TLB_FLUSHALL_ENTRY ((u64)-1)
>
> +enum hv_tlb_flush_fifos {
> + HV_L1_TLB_FLUSH_FIFO,
> + HV_L2_TLB_FLUSH_FIFO,
> + HV_NR_TLB_FLUSH_FIFOS,
> +};
> +
> struct kvm_vcpu_hv_tlb_flush_fifo {
> spinlock_t write_lock;
> DECLARE_KFIFO(entries, u64, KVM_HV_TLB_FLUSH_FIFO_SIZE);
> @@ -638,7 +644,7 @@ struct kvm_vcpu_hv {
> u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
> } cpuid_cache;
>
> - struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo;
> + struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
> };
>
> /* Xen HVM per vcpu emulation context */
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index b347971b3924..32f223bbea6b 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -956,8 +956,10 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
>
> hv_vcpu->vp_index = vcpu->vcpu_idx;
>
> - INIT_KFIFO(hv_vcpu->tlb_flush_fifo.entries);
> - spin_lock_init(&hv_vcpu->tlb_flush_fifo.write_lock);
> + for (i = 0; i < HV_NR_TLB_FLUSH_FIFOS; i++) {
> + INIT_KFIFO(hv_vcpu->tlb_flush_fifo[i].entries);
> + spin_lock_init(&hv_vcpu->tlb_flush_fifo[i].write_lock);
> + }
>
> return 0;
> }
> @@ -1843,7 +1845,8 @@ static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
> if (!hv_vcpu)
> return;
>
> - tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
> + /* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
> + tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo[HV_L1_TLB_FLUSH_FIFO];
Yes, as expected, the local variable starts to make sense here.
>
> spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
>
> @@ -1880,7 +1883,7 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> return;
> }
>
> - tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
>
> count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
>
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index e5b32266ff7d..207d24efdc5a 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -22,6 +22,7 @@
> #define __ARCH_X86_KVM_HYPERV_H__
>
> #include <linux/kvm_host.h>
> +#include "x86.h"
>
> /*
> * The #defines related to the synthetic debugger are required by KDNet, but
> @@ -147,16 +148,26 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
> int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
> struct kvm_cpuid_entry2 __user *entries);
>
> +static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> + int i = !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_FIFO :
> + HV_L2_TLB_FLUSH_FIFO;
> +
> + /* KVM does not handle L2 TLB flush requests yet */
> + WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_FIFO);
> +
> + return &hv_vcpu->tlb_flush_fifo[i];
> +}
>
> static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
> {
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> - struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
>
> - if (!hv_vcpu || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
> + if (!to_hv_vcpu(vcpu) || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
> return;
>
> - tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
> + tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
>
> kfifo_reset_out(&tlb_flush_fifo->entries);
> }
Looks great,
Reviewed-by: Maxim Levitsky <[email protected]>
Best regards,
Maxim Levitsky
On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> To allow flushing individual GVAs instead of always flushing the whole
> VPID, a per-vCPU structure to pass the requests is needed. Use standard
> 'kfifo' to queue two types of entries: individual GVA (GFN + up to 4095
> following GFNs in the lower 12 bits) and 'flush all'.
Honestly I still don't think I understand why we can't just
raise KVM_REQ_TLB_FLUSH_GUEST when the guest uses this interface
to flush everything, and then we won't need to touch the ring
at all.
But I am just curious - if there is a reason for that,
then no objections from my side.
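
As an aside, the 'GFN + up to 4095 following GFNs' encoding from the
changelog fits one request per u64; a minimal sketch with made-up helper
names (the actual patch open-codes this when draining the fifo):

	/* Hypothetical helpers illustrating the fifo entry encoding. */
	static inline u64 hv_tlb_flush_entry(gva_t gva, unsigned int extra)
	{
		/* page-aligned GVA, up to 4095 extra pages in the low 12 bits */
		return (gva & PAGE_MASK) | (extra & ~PAGE_MASK);
	}

	static inline unsigned int hv_tlb_flush_entry_pages(u64 entry)
	{
		return (entry & ~PAGE_MASK) + 1;	/* 1..4096 pages to flush */
	}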
>
> The size of the fifo is arbitrarily set to '16'.
>
> Note, kvm_hv_flush_tlb() only queues 'flush all' entries for now and
> kvm_hv_vcpu_flush_tlb() doesn't actually read the fifo, it just resets
> the queue before doing a full TLB flush, so the functional change is
> very small but the infrastructure is prepared to handle individual GVA
> flush requests.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 20 +++++++++++++++
>  arch/x86/kvm/hyperv.c           | 45 +++++++++++++++++++++++++++++++++
> arch/x86/kvm/hyperv.h | 16 ++++++++++++
> arch/x86/kvm/x86.c | 8 +++---
> arch/x86/kvm/x86.h | 1 +
> 5 files changed, 86 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 7a6a6f47b603..cf3748be236d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -25,6 +25,7 @@
> #include <linux/clocksource.h>
> #include <linux/irqbypass.h>
> #include <linux/hyperv.h>
> +#include <linux/kfifo.h>
>
> #include <asm/apic.h>
> #include <asm/pvclock-abi.h>
> @@ -600,6 +601,23 @@ struct kvm_vcpu_hv_synic {
> bool dont_zero_synic_pages;
> };
>
> +/* The maximum number of entries on the TLB flush fifo. */
> +#define KVM_HV_TLB_FLUSH_FIFO_SIZE (16)
> +/*
> + * Note: the following 'magic' entry is made up by KVM to avoid putting
> + * anything besides GVA on the TLB flush fifo. It is theoretically possible
> + * to observe a request to flush 4095 PFNs starting from 0xfffffffffffff000
> + * which will look identical. KVM's action to 'flush everything' instead of
> + * flushing these particular addresses is, however, fully legitimate as
> + * flushing more than requested is always OK.
> + */
> +#define KVM_HV_TLB_FLUSHALL_ENTRY ((u64)-1)
> +
> +struct kvm_vcpu_hv_tlb_flush_fifo {
> + spinlock_t write_lock;
> + DECLARE_KFIFO(entries, u64, KVM_HV_TLB_FLUSH_FIFO_SIZE);
> +};
> +
> /* Hyper-V per vcpu emulation context */
> struct kvm_vcpu_hv {
> struct kvm_vcpu *vcpu;
> @@ -619,6 +637,8 @@ struct kvm_vcpu_hv {
> 	u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
> 	u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
> } cpuid_cache;
> +
> + struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo;
> };
>
> /* Xen HVM per vcpu emulation context */
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index b402ad059eb9..c8b22bf67577 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -29,6 +29,7 @@
> #include <linux/kvm_host.h>
> #include <linux/highmem.h>
> #include <linux/sched/cputime.h>
> +#include <linux/spinlock.h>
> #include <linux/eventfd.h>
>
> #include <asm/apicdef.h>
> @@ -954,6 +955,9 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
>
> hv_vcpu->vp_index = vcpu->vcpu_idx;
>
> + INIT_KFIFO(hv_vcpu->tlb_flush_fifo.entries);
> + spin_lock_init(&hv_vcpu->tlb_flush_fifo.write_lock);
> +
> return 0;
> }
>
> @@ -1789,6 +1793,35 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
> var_cnt * sizeof(*sparse_banks));
> }
>
> +static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> + u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
> +
> + if (!hv_vcpu)
> + return;
> +
> + tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
> +
> +	kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
Tiny nitpick: the 'tlb_flush_fifo' isn't really needed here I think,
but probably will be needed in later patches, so feel free to ignore.
> +}
> +
> +void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> + kvm_vcpu_flush_tlb_guest(vcpu);
> +
> + if (!hv_vcpu)
> + return;
> +
> + tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
> +
> + kfifo_reset_out(&tlb_flush_fifo->entries);
Same here.
> +}
> +
>  static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> {
> struct kvm *kvm = vcpu->kvm;
> @@ -1797,6 +1830,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
> u64 valid_bank_mask;
> u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
> + struct kvm_vcpu *v;
> + unsigned long i;
> bool all_cpus;
>
> /*
> @@ -1876,10 +1911,20 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu
> *vcpu, struct kvm_hv_hcall *hc)
> * analyze it here, flush TLB regardless of the specified
> address space.
> */
> if (all_cpus) {
> + kvm_for_each_vcpu(i, v, kvm)
> + hv_tlb_flush_enqueue(v);
> +
> kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
> } else {
> 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
>
> + for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
> + v = kvm_get_vcpu(kvm, i);
> + if (!v)
> + continue;
> + hv_tlb_flush_enqueue(v);
> + }
> +
> 		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
> }
>
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index da2737f2a956..e5b32266ff7d 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -147,4 +147,20 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
>  int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
>  		     struct kvm_cpuid_entry2 __user *entries);
>
> +
> +static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> + struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> +	if (!hv_vcpu || !kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
> + return;
> +
> + tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
> +
> + kfifo_reset_out(&tlb_flush_fifo->entries);
> +}
> +void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
> +
> +
> #endif
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 80cd3eb5e7de..805db43c2829 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3320,7 +3320,7 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
> static_call(kvm_x86_flush_tlb_all)(vcpu);
> }
>
> -static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
> +void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
> {
> ++vcpu->stat.tlb_flush;
>
> @@ -3355,14 +3355,14 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
> {
> if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) {
> kvm_vcpu_flush_tlb_current(vcpu);
> - kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> + kvm_hv_vcpu_empty_flush_tlb(vcpu);
> }
>
> if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
> kvm_vcpu_flush_tlb_guest(vcpu);
> - kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> + kvm_hv_vcpu_empty_flush_tlb(vcpu);
> } else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
> - kvm_vcpu_flush_tlb_guest(vcpu);
> + kvm_hv_vcpu_flush_tlb(vcpu);
> }
> }
> EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 501b884b8cc4..9f7989f2c6d4 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -79,6 +79,7 @@ static inline unsigned int __shrink_ple_window(unsigned int val,
>
> #define MSR_IA32_CR_PAT_DEFAULT 0x0007040600070406ULL
>
> +void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu);
> void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu);
> int kvm_check_nested_events(struct kvm_vcpu *vcpu);
>
Best regards,
Maxim Levitsky
Maxim Levitsky <[email protected]> writes:
...
> Honestly I still don't think I understand why we can't just
> raise KVM_REQ_TLB_FLUSH_GUEST when the guest uses this interface
> to flush everything, and then we won't need to touch the ring
> at all.
The main reason is that we need to know what to flush: L1 or
L2. E.g. for VMX, KVM_REQ_TLB_FLUSH_GUEST is basically
vpid_sync_context(vmx_get_current_vpid(vcpu));
which means that if the target vCPU transitions from L1 to L2 or vice
versa before KVM_REQ_TLB_FLUSH_GUEST gets processed we will flush the
wrong VPID. And actually the writer (the vCPU which processes the TLB
flush hypercall) is not synchronized in any way with the reader (the vCPU
whose TLB needs to be flushed) here, so we can't even know whether the
target vCPU is in guest mode or not.
With the newly added KVM_REQ_HV_TLB_FLUSH, we always look at the
corresponding FIFO and process 'flush all' accordingly. In case the vCPU
switches between modes, we always raise KVM_REQ_HV_TLB_FLUSH request to
make sure we check. Note: we can't be raising KVM_REQ_TLB_FLUSH_GUEST
instead as it always means 'full tlb flush' and we certainly don't want
that.
--
Vitaly
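For reference, the VPID selection Vitaly refers to is roughly the
following vmx.h helper (a sketch; the exact upstream form is an
assumption here):

	/* Sketch: which VPID a 'guest' TLB flush targets on VMX. */
	static inline u16 vmx_get_current_vpid(struct kvm_vcpu *vcpu)
	{
		if (is_guest_mode(vcpu))
			return nested_get_vpid02(vcpu);	/* L2's VPID */

		return to_vmx(vcpu)->vpid;		/* L1's VPID */
	}

A KVM_REQ_TLB_FLUSH_GUEST processed after an L1<->L2 switch would thus
sync a different VPID than the one the hypercall targeted.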
On Wed, 2022-06-08 at 09:47 +0200, Vitaly Kuznetsov wrote:
...
OK, that makes sense! Let it be then.
Best regards,
Maxim Levitsky
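
The 'always raise KVM_REQ_HV_TLB_FLUSH on mode switch' idea from the
answer above can be sketched as follows (where exactly the request is
raised on nested transitions is an assumption here; the real hook comes
later in the series):

	/* Hypothetical: force a fifo re-check after an L1<->L2 transition. */
	static void hv_nested_transition_tlb_flush(struct kvm_vcpu *vcpu)
	{
		if (to_hv_vcpu(vcpu))
			kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
	}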
On Mon, Jun 06, 2022 at 10:36:22AM +0200, Vitaly Kuznetsov wrote:
> Currently, HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls are handled
> the exact same way as HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE{,EX}: by
> flushing the whole VPID and this is sub-optimal. Switch to handling
> these requests with 'flush_tlb_gva()' hooks instead. Use the newly
> introduced TLB flush fifo to queue the requests.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> arch/x86/kvm/hyperv.c | 100 +++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 88 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 762b0b699fdf..956072592e2f 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1806,32 +1806,82 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
> sparse_banks, consumed_xmm_halves, offset);
> }
>
> -static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
> +static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[],
> + int consumed_xmm_halves, gpa_t offset)
> +{
> + return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt,
> + entries, consumed_xmm_halves, offset);
> +}
> +
> +static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
> {
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
> + unsigned long flags;
>
> if (!hv_vcpu)
> return;
>
> tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
>
> - kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
> + spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
> +
> + /*
> + * All entries should fit on the fifo leaving one free for 'flush all'
> + * entry in case another request comes in. In case there's not enough
> + * space, just put 'flush all' entry there.
> + */
> + if (count && entries && count < kfifo_avail(&tlb_flush_fifo->entries)) {
> + WARN_ON(kfifo_in(&tlb_flush_fifo->entries, entries, count) != count);
> + goto out_unlock;
> + }
> +
> + /*
> + * Note: full fifo always contains 'flush all' entry, no need to check the
> + * return value.
> + */
> + kfifo_in(&tlb_flush_fifo->entries, &entry, 1);
> +
> +out_unlock:
> + spin_unlock_irqrestore(&tlb_flush_fifo->write_lock, flags);
> }
>
> void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> {
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> + u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
> + int i, j, count;
> + gva_t gva;
>
> - kvm_vcpu_flush_tlb_guest(vcpu);
> -
> - if (!hv_vcpu)
> + if (!tdp_enabled || !hv_vcpu) {
> + kvm_vcpu_flush_tlb_guest(vcpu);
> return;
> + }
>
> tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
>
> + count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
Writers are protected by the fifo lock so there's only 1 writer vs 1 reader
on this kfifo (at least so far). It should be safe, but I'm not sure
whether there are unexpected cases, e.g. KVM flushing another vCPU's
kfifo while that vCPU is doing the same thing for itself.
> +
> + for (i = 0; i < count; i++) {
> + if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
> + goto out_flush_all;
> +
> + /*
> + * Lower 12 bits of 'address' encode the number of additional
> + * pages to flush.
> + */
> + gva = entries[i] & PAGE_MASK;
> + for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> + static_call(kvm_x86_flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
> +
> + ++vcpu->stat.tlb_flush;
> + }
> + return;
> +
> +out_flush_all:
> + kvm_vcpu_flush_tlb_guest(vcpu);
> kfifo_reset_out(&tlb_flush_fifo->entries);
> }
>
> @@ -1841,11 +1891,21 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> struct hv_tlb_flush_ex flush_ex;
> struct hv_tlb_flush flush;
> DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
> + /*
> + * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
> + * entries on the TLB flush fifo. The last entry, however, needs to be
> + * always left free for 'flush all' entry which gets placed when
> + * there is not enough space to put all the requested entries.
> + */
> + u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
> + u64 *tlb_flush_entries;
> u64 valid_bank_mask;
> u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
> struct kvm_vcpu *v;
> unsigned long i;
> bool all_cpus;
> + int consumed_xmm_halves = 0;
> + gpa_t data_offset;
>
> /*
> * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
> @@ -1861,10 +1921,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> flush.address_space = hc->ingpa;
> flush.flags = hc->outgpa;
> flush.processor_mask = sse128_lo(hc->xmm[0]);
> + consumed_xmm_halves = 1;
> } else {
> if (unlikely(kvm_read_guest(kvm, hc->ingpa,
> &flush, sizeof(flush))))
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + data_offset = sizeof(flush);
> }
>
> trace_kvm_hv_flush_tlb(flush.processor_mask,
> @@ -1888,10 +1950,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> flush_ex.flags = hc->outgpa;
> memcpy(&flush_ex.hv_vp_set,
> &hc->xmm[0], sizeof(hc->xmm[0]));
> + consumed_xmm_halves = 2;
> } else {
> if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> sizeof(flush_ex))))
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + data_offset = sizeof(flush_ex);
> }
>
> trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
> @@ -1907,25 +1971,37 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
>
> if (all_cpus)
> - goto do_flush;
> + goto read_flush_entries;
>
> if (!hc->var_cnt)
> goto ret_success;
>
> - if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
> - offsetof(struct hv_tlb_flush_ex,
> - hv_vp_set.bank_contents)))
> + if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, consumed_xmm_halves,
> + data_offset))
> + return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
> + consumed_xmm_halves += hc->var_cnt;
> + }
> +
> +read_flush_entries:
> + if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
> + hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
> + hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
> + tlb_flush_entries = NULL;
> + } else {
> + if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries,
> + consumed_xmm_halves, data_offset))
> return HV_STATUS_INVALID_HYPERCALL_INPUT;
> + tlb_flush_entries = __tlb_flush_entries;
> }
>
> -do_flush:
> /*
> * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
> * analyze it here, flush TLB regardless of the specified address space.
> */
> if (all_cpus) {
> kvm_for_each_vcpu(i, v, kvm)
> - hv_tlb_flush_enqueue(v);
> + hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
>
> kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
> } else {
> @@ -1935,7 +2011,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
> v = kvm_get_vcpu(kvm, i);
> if (!v)
> continue;
> - hv_tlb_flush_enqueue(v);
> + hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
> }
>
> kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
> --
> 2.35.3
>
Yuan Yao <[email protected]> writes:
> On Mon, Jun 06, 2022 at 10:36:22AM +0200, Vitaly Kuznetsov wrote:
...
>> + count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
>
> Writers are protected by the fifo lock so there's only 1 writer vs 1 reader
> on this kfifo (at least so far). It should be safe, but I'm not sure
> whether there are unexpected cases, e.g. KVM flushing another vCPU's
> kfifo while that vCPU is doing the same thing for itself.
>
TLB is always flushed by the vCPU itself; here we just queue the work for
it to do so. Over-flushing is possible of course (e.g. the vCPU just
flushed and didn't even enter the guest, but we're going to queue flush
work for it from other vCPUs), but that's nothing new even with the
current 'dumb' implementation which always flushes everything.
The main concern should be that we never under-flush, i.e. return to the
caller while TLB on some target vCPUs was not flushed *and* target vCPUs
are running the guest.
--
Vitaly
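The resulting locking discipline, as a sketch: producers (any vCPU
handling the hypercall) serialize on write_lock, while the single
consumer (the target vCPU itself) reads lock-free:

	/* Producers: may run on any vCPU, hence the spinlock. */
	kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1,
			    &tlb_flush_fifo->write_lock);

	/*
	 * Consumer: only the target vCPU on its way into the guest. kfifo
	 * is safe for one reader and one writer without locking, and the
	 * writers are already serialized above, so kfifo_out() needs no lock.
	 */
	count = kfifo_out(&tlb_flush_fifo->entries, entries,
			  KVM_HV_TLB_FLUSH_FIFO_SIZE);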
Maxim Levitsky <[email protected]> writes:
...
>>
>> void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>> {
>> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
>> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
>> + u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
>> + int i, j, count;
>> + gva_t gva;
>>
>> - kvm_vcpu_flush_tlb_guest(vcpu);
>> -
>> - if (!hv_vcpu)
>> + if (!tdp_enabled || !hv_vcpu) {
> I haven't noticed that in the review I did back then, but
> any reason why !tdp_enabled?
This follows the logic in kvm_vcpu_flush_tlb_guest():
if (!tdp_enabled) {
/*
* A TLB flush on behalf of the guest is equivalent to
* INVPCID(all), toggling CR4.PGE, etc., which requires
* a forced sync of the shadow page tables. Ensure all the
* roots are synced and the guest TLB in hardware is clean.
*/
kvm_mmu_sync_roots(vcpu);
kvm_mmu_sync_prev_roots(vcpu);
}
and as !tdp_enabled should be a rare debug or special case, I decided to
take the shortcut and not drag any of this logic into hyperv emulation
code.
--
Vitaly