2020-07-28 14:38:33

by Vitaly Kuznetsov

Subject: [PATCH 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory

This is a continuation of "[PATCH RFC 0/5] KVM: x86: KVM_MEM_ALLONES
memory" work:
https://lore.kernel.org/kvm/[email protected]/
and pairs with Julia's "x86/PCI: Use MMCONFIG by default for KVM guests":
https://lore.kernel.org/linux-pci/[email protected]/

PCIe config space can (depending on the configuration) be quite big but
is usually sparsely populated. A guest may scan it by accessing each
individual device's page; when the device is missing, the page is
supposed to have 'PCI hole' semantics: reads return 0xff and writes are
discarded.

When testing Linux kernel boot with a QEMU q35 VM and direct kernel boot,
I observed 8193 accesses to PCI hole memory. When such an exit is handled
in KVM without exiting to userspace, it takes roughly 0.000001 sec.
Handling the same exit in userspace is six times slower (0.000006 sec), so
the overall difference is about 0.04 sec. This may be significant for
'microvm' ideas.

Note, the same speed can already be achieved by using KVM_MEM_READONLY,
but doing so would require allocating real memory for all missing
devices: e.g. 8192 pages gives us 32 MB. This would have to be allocated
for each guest separately, and for 'microvm' use-cases this is likely
a no-go.

Introduce a special KVM_MEM_PCI_HOLE memory type: userspace doesn't need
to back it with real memory, all reads from it are handled inside KVM and
return 0xff. Writes still go to userspace, but these should be extremely
rare.

The original 'KVM_MEM_ALLONES' idea had additional optimizations: KVM
was mapping all 'PCI hole' pages to a single read-only page stuffed with
0xff. This is omitted from this submission as the benefits are unclear:
KVM would have to allocate SPTEs (either on demand or aggressively), and
this also consumes time/memory. We can always look at possible
optimizations later.

Vitaly Kuznetsov (3):
KVM: x86: move kvm_vcpu_gfn_to_memslot() out of try_async_pf()
KVM: x86: introduce KVM_MEM_PCI_HOLE memory
KVM: selftests: add KVM_MEM_PCI_HOLE test

Documentation/virt/kvm/api.rst | 19 ++-
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/mmu/mmu.c | 19 +--
arch/x86/kvm/mmu/paging_tmpl.h | 10 +-
arch/x86/kvm/x86.c | 10 +-
include/linux/kvm_host.h | 7 +-
include/uapi/linux/kvm.h | 3 +-
tools/testing/selftests/kvm/Makefile | 1 +
.../testing/selftests/kvm/include/kvm_util.h | 1 +
tools/testing/selftests/kvm/lib/kvm_util.c | 81 +++++++------
.../kvm/x86_64/memory_slot_pci_hole.c | 112 ++++++++++++++++++
virt/kvm/kvm_main.c | 39 ++++--
12 files changed, 243 insertions(+), 60 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/memory_slot_pci_hole.c

--
2.25.4


2020-07-28 14:38:36

by Vitaly Kuznetsov

Subject: [PATCH 1/3] KVM: x86: move kvm_vcpu_gfn_to_memslot() out of try_async_pf()

No functional change intended. Slot flags will need to be analyzed
prior to try_async_pf() when KVM_MEM_PCI_HOLE is implemented.

Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 14 ++++++++------
arch/x86/kvm/mmu/paging_tmpl.h | 7 +++++--
2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 77810ce66bdb..8597e8102636 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4041,11 +4041,10 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
}

-static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
- gpa_t cr2_or_gpa, kvm_pfn_t *pfn, bool write,
- bool *writable)
+static bool try_async_pf(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+ bool prefault, gfn_t gfn, gpa_t cr2_or_gpa,
+ kvm_pfn_t *pfn, bool write, bool *writable)
{
- struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
bool async;

/* Don't expose private memslots to L2. */
@@ -4081,7 +4080,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
bool exec = error_code & PFERR_FETCH_MASK;
bool lpage_disallowed = exec && is_nx_huge_page_enabled();
bool map_writable;
-
+ struct kvm_memory_slot *slot;
gfn_t gfn = gpa >> PAGE_SHIFT;
unsigned long mmu_seq;
kvm_pfn_t pfn;
@@ -4103,7 +4102,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
mmu_seq = vcpu->kvm->mmu_notifier_seq;
smp_rmb();

- if (try_async_pf(vcpu, prefault, gfn, gpa, &pfn, write, &map_writable))
+ slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+
+ if (try_async_pf(vcpu, slot, prefault, gfn, gpa, &pfn, write,
+ &map_writable))
return RET_PF_RETRY;

if (handle_abnormal_pfn(vcpu, is_tdp ? 0 : gpa, gfn, pfn, ACC_ALL, &r))
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0172a949f6a7..5c6a895f67c3 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -779,6 +779,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
int write_fault = error_code & PFERR_WRITE_MASK;
int user_fault = error_code & PFERR_USER_MASK;
struct guest_walker walker;
+ struct kvm_memory_slot *slot;
int r;
kvm_pfn_t pfn;
unsigned long mmu_seq;
@@ -833,8 +834,10 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
mmu_seq = vcpu->kvm->mmu_notifier_seq;
smp_rmb();

- if (try_async_pf(vcpu, prefault, walker.gfn, addr, &pfn, write_fault,
- &map_writable))
+ slot = kvm_vcpu_gfn_to_memslot(vcpu, walker.gfn);
+
+ if (try_async_pf(vcpu, slot, prefault, walker.gfn, addr, &pfn,
+ write_fault, &map_writable))
return RET_PF_RETRY;

if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
--
2.25.4

2020-07-28 14:39:49

by Vitaly Kuznetsov

Subject: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

PCIe config space can (depending on the configuration) be quite big but
is usually sparsely populated. A guest may scan it by accessing each
individual device's page; when the device is missing, the page is
supposed to have 'PCI hole' semantics: reads return 0xff and writes are
discarded. Compared to the already existing KVM_MEM_READONLY, the VMM
doesn't need to allocate real memory and stuff it with 0xff.

Suggested-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
Documentation/virt/kvm/api.rst | 19 +++++++++++-----
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/mmu/mmu.c | 5 ++++-
arch/x86/kvm/mmu/paging_tmpl.h | 3 +++
arch/x86/kvm/x86.c | 10 ++++++---
include/linux/kvm_host.h | 7 +++++-
include/uapi/linux/kvm.h | 3 ++-
virt/kvm/kvm_main.c | 39 +++++++++++++++++++++++++++------
8 files changed, 68 insertions(+), 19 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 644e5326aa50..fbbf533a331b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1241,6 +1241,7 @@ yet and must be cleared on entry.
/* for kvm_memory_region::flags */
#define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
#define KVM_MEM_READONLY (1UL << 1)
+ #define KVM_MEM_PCI_HOLE (1UL << 2)

This ioctl allows the user to create, modify or delete a guest physical
memory slot. Bits 0-15 of "slot" specify the slot id and this value
@@ -1268,12 +1269,18 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
be identical. This allows large pages in the guest to be backed by large
pages in the host.

-The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and
-KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of
-writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to
-use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it,
-to make a new slot read-only. In this case, writes to this memory will be
-posted to userspace as KVM_EXIT_MMIO exits.
+The flags field supports the following flags: KVM_MEM_LOG_DIRTY_PAGES,
+KVM_MEM_READONLY, KVM_MEM_READONLY:
+- KVM_MEM_LOG_DIRTY_PAGES can be set to instruct KVM to keep track of writes to
+ memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to use it.
+- KVM_MEM_READONLY can be set, if KVM_CAP_READONLY_MEM capability allows it,
+ to make a new slot read-only. In this case, writes to this memory will be
+ posted to userspace as KVM_EXIT_MMIO exits.
+- KVM_MEM_PCI_HOLE can be set, if KVM_CAP_PCI_HOLE_MEM capability allows it,
+ to create a new virtual read-only slot which will always return '0xff' when
+ guest reads from it. 'userspace_addr' has to be set to NULL. This flag is
+ mutually exclusive with KVM_MEM_LOG_DIRTY_PAGES/KVM_MEM_READONLY. All writes
+ to this memory will be posted to userspace as KVM_EXIT_MMIO exits.

When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of
the memory region are automatically reflected into the guest. For example, an
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 17c5a038f42d..cf80a26d74f5 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -48,6 +48,7 @@
#define __KVM_HAVE_XSAVE
#define __KVM_HAVE_XCRS
#define __KVM_HAVE_READONLY_MEM
+#define __KVM_HAVE_PCI_HOLE_MEM

/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8597e8102636..c2e3a1deafdd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3253,7 +3253,7 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
return PG_LEVEL_4K;

slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, true);
- if (!slot)
+ if (!slot || (slot->flags & KVM_MEM_PCI_HOLE))
return PG_LEVEL_4K;

max_level = min(max_level, max_page_level);
@@ -4104,6 +4104,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,

slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);

+ if (!write && slot && (slot->flags & KVM_MEM_PCI_HOLE))
+ return RET_PF_EMULATE;
+
if (try_async_pf(vcpu, slot, prefault, gfn, gpa, &pfn, write,
&map_writable))
return RET_PF_RETRY;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5c6a895f67c3..27abd69e69f6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -836,6 +836,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,

slot = kvm_vcpu_gfn_to_memslot(vcpu, walker.gfn);

+ if (!write_fault && slot && (slot->flags & KVM_MEM_PCI_HOLE))
+ return RET_PF_EMULATE;
+
if (try_async_pf(vcpu, slot, prefault, walker.gfn, addr, &pfn,
write_fault, &map_writable))
return RET_PF_RETRY;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95ef62922869..dc312b8bfa05 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3515,6 +3515,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_EXCEPTION_PAYLOAD:
case KVM_CAP_SET_GUEST_DEBUG:
case KVM_CAP_LAST_CPU:
+ case KVM_CAP_PCI_HOLE_MEM:
r = 1;
break;
case KVM_CAP_SYNC_REGS:
@@ -10115,9 +10116,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
ugfn = slot->userspace_addr >> PAGE_SHIFT;
/*
* If the gfn and userspace address are not aligned wrt each
- * other, disable large page support for this slot.
+ * other, disable large page support for this slot. Also,
+ * disable large page support for KVM_MEM_PCI_HOLE slots.
*/
- if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
+ if (slot->flags & KVM_MEM_PCI_HOLE || ((slot->base_gfn ^ ugfn) &
+ (KVM_PAGES_PER_HPAGE(level) - 1))) {
unsigned long j;

for (j = 0; j < lpages; ++j)
@@ -10179,7 +10182,8 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
* Nothing to do for RO slots or CREATE/MOVE/DELETE of a slot.
* See comments below.
*/
- if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY))
+ if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY) ||
+ (new->flags & KVM_MEM_PCI_HOLE))
return;

/*
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 989afcbe642f..63c2d93ef172 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1081,7 +1081,12 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
static inline unsigned long
__gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
{
- return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
+ if (likely(!(slot->flags & KVM_MEM_PCI_HOLE))) {
+ return slot->userspace_addr +
+ (gfn - slot->base_gfn) * PAGE_SIZE;
+ } else {
+ BUG();
+ }
}

static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2c73dcfb3dbb..59d631cbb71d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -109,6 +109,7 @@ struct kvm_userspace_memory_region {
*/
#define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
#define KVM_MEM_READONLY (1UL << 1)
+#define KVM_MEM_PCI_HOLE (1UL << 2)

/* for KVM_IRQ_LINE */
struct kvm_irq_level {
@@ -1034,7 +1035,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_ASYNC_PF_INT 183
#define KVM_CAP_LAST_CPU 184
#define KVM_CAP_SMALLER_MAXPHYADDR 185
-
+#define KVM_CAP_PCI_HOLE_MEM 186

#ifdef KVM_CAP_IRQ_ROUTING

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2c2c0254c2d8..3f69ae711021 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1107,6 +1107,10 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m
valid_flags |= KVM_MEM_READONLY;
#endif

+#ifdef __KVM_HAVE_PCI_HOLE_MEM
+ valid_flags |= KVM_MEM_PCI_HOLE;
+#endif
+
if (mem->flags & ~valid_flags)
return -EINVAL;

@@ -1284,11 +1288,26 @@ int __kvm_set_memory_region(struct kvm *kvm,
return -EINVAL;
if (mem->guest_phys_addr & (PAGE_SIZE - 1))
return -EINVAL;
- /* We can read the guest memory with __xxx_user() later on. */
- if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
- !access_ok((void __user *)(unsigned long)mem->userspace_addr,
- mem->memory_size))
+
+ /*
+ * KVM_MEM_PCI_HOLE is mutually exclusive with KVM_MEM_READONLY/
+ * KVM_MEM_LOG_DIRTY_PAGES.
+ */
+ if ((mem->flags & KVM_MEM_PCI_HOLE) &&
+ (mem->flags & (KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES)))
return -EINVAL;
+
+ if (!(mem->flags & KVM_MEM_PCI_HOLE)) {
+ /* We can read the guest memory with __xxx_user() later on. */
+ if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
+ !access_ok((void __user *)(unsigned long)mem->userspace_addr,
+ mem->memory_size))
+ return -EINVAL;
+ } else {
+ if (mem->userspace_addr)
+ return -EINVAL;
+ }
+
if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
return -EINVAL;
if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
@@ -1328,7 +1347,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
} else { /* Modify an existing slot. */
if ((new.userspace_addr != old.userspace_addr) ||
(new.npages != old.npages) ||
- ((new.flags ^ old.flags) & KVM_MEM_READONLY))
+ ((new.flags ^ old.flags) & KVM_MEM_READONLY) ||
+ ((new.flags ^ old.flags) & KVM_MEM_PCI_HOLE))
return -EINVAL;

if (new.base_gfn != old.base_gfn)
@@ -1715,13 +1735,13 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)

static bool memslot_is_readonly(struct kvm_memory_slot *slot)
{
- return slot->flags & KVM_MEM_READONLY;
+ return slot->flags & (KVM_MEM_READONLY | KVM_MEM_PCI_HOLE);
}

static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn,
gfn_t *nr_pages, bool write)
{
- if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
+ if (!slot || (slot->flags & (KVM_MEMSLOT_INVALID | KVM_MEM_PCI_HOLE)))
return KVM_HVA_ERR_BAD;

if (memslot_is_readonly(slot) && write)
@@ -2318,6 +2338,11 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
int r;
unsigned long addr;

+ if (unlikely(slot && (slot->flags & KVM_MEM_PCI_HOLE))) {
+ memset(data, 0xff, len);
+ return 0;
+ }
+
addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
if (kvm_is_error_hva(addr))
return -EFAULT;
--
2.25.4

2020-07-28 14:42:05

by Vitaly Kuznetsov

Subject: [PATCH 3/3] KVM: selftests: add KVM_MEM_PCI_HOLE test

Test the newly introduced KVM_MEM_PCI_HOLE memslots:
- Reads from all pages return '0xff'
- Writes to all pages cause KVM_EXIT_MMIO

Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../testing/selftests/kvm/include/kvm_util.h | 1 +
tools/testing/selftests/kvm/lib/kvm_util.c | 81 +++++++------
.../kvm/x86_64/memory_slot_pci_hole.c | 112 ++++++++++++++++++
4 files changed, 162 insertions(+), 33 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/memory_slot_pci_hole.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4a166588d99f..a6fe303fbf6a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -41,6 +41,7 @@ LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c
TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
+TEST_GEN_PROGS_x86_64 += x86_64/memory_slot_pci_hole
TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 919e161dd289..8e7bec7bd287 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -59,6 +59,7 @@ enum vm_mem_backing_src_type {
VM_MEM_SRC_ANONYMOUS,
VM_MEM_SRC_ANONYMOUS_THP,
VM_MEM_SRC_ANONYMOUS_HUGETLB,
+ VM_MEM_SRC_NONE,
};

int kvm_check_cap(long cap);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 74776ee228f2..46bb28ea34ec 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -453,8 +453,11 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
"rc: %i errno: %i", ret, errno);

sparsebit_free(&region->unused_phy_pages);
- ret = munmap(region->mmap_start, region->mmap_size);
- TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno);
+ if (region->mmap_start) {
+ ret = munmap(region->mmap_start, region->mmap_size);
+ TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret,
+ errno);
+ }

free(region);
}
@@ -643,34 +646,42 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
alignment = 1;
#endif

- if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
- alignment = max(huge_page_size, alignment);
-
- /* Add enough memory to align up if necessary */
- if (alignment > 1)
- region->mmap_size += alignment;
-
- region->mmap_start = mmap(NULL, region->mmap_size,
- PROT_READ | PROT_WRITE,
- MAP_PRIVATE | MAP_ANONYMOUS
- | (src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB ? MAP_HUGETLB : 0),
- -1, 0);
- TEST_ASSERT(region->mmap_start != MAP_FAILED,
- "test_malloc failed, mmap_start: %p errno: %i",
- region->mmap_start, errno);
-
- /* Align host address */
- region->host_mem = align(region->mmap_start, alignment);
-
- /* As needed perform madvise */
- if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
- ret = madvise(region->host_mem, npages * vm->page_size,
- src_type == VM_MEM_SRC_ANONYMOUS ? MADV_NOHUGEPAGE : MADV_HUGEPAGE);
- TEST_ASSERT(ret == 0, "madvise failed,\n"
- " addr: %p\n"
- " length: 0x%lx\n"
- " src_type: %x",
- region->host_mem, npages * vm->page_size, src_type);
+ if (src_type != VM_MEM_SRC_NONE) {
+ if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
+ alignment = max(huge_page_size, alignment);
+
+ /* Add enough memory to align up if necessary */
+ if (alignment > 1)
+ region->mmap_size += alignment;
+
+ region->mmap_start = mmap(NULL, region->mmap_size,
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS
+ | (src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB ?
+ MAP_HUGETLB : 0), -1, 0);
+ TEST_ASSERT(region->mmap_start != MAP_FAILED,
+ "test_malloc failed, mmap_start: %p errno: %i",
+ region->mmap_start, errno);
+
+ /* Align host address */
+ region->host_mem = align(region->mmap_start, alignment);
+
+ /* As needed perform madvise */
+ if (src_type == VM_MEM_SRC_ANONYMOUS ||
+ src_type == VM_MEM_SRC_ANONYMOUS_THP) {
+ ret = madvise(region->host_mem, npages * vm->page_size,
+ src_type == VM_MEM_SRC_ANONYMOUS ?
+ MADV_NOHUGEPAGE : MADV_HUGEPAGE);
+ TEST_ASSERT(ret == 0, "madvise failed,\n"
+ " addr: %p\n"
+ " length: 0x%lx\n"
+ " src_type: %x",
+ region->host_mem, npages * vm->page_size,
+ src_type);
+ }
+ } else {
+ region->mmap_start = NULL;
+ region->host_mem = NULL;
}

region->unused_phy_pages = sparsebit_alloc();
@@ -1076,9 +1087,13 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
list_for_each_entry(region, &vm->userspace_mem_regions, list) {
if ((gpa >= region->region.guest_phys_addr)
&& (gpa <= (region->region.guest_phys_addr
- + region->region.memory_size - 1)))
- return (void *) ((uintptr_t) region->host_mem
- + (gpa - region->region.guest_phys_addr));
+ + region->region.memory_size - 1))) {
+ if (region->host_mem)
+ return (void *) ((uintptr_t) region->host_mem
+ + (gpa - region->region.guest_phys_addr));
+ else
+ return NULL;
+ }
}

TEST_FAIL("No vm physical memory at 0x%lx", gpa);
diff --git a/tools/testing/selftests/kvm/x86_64/memory_slot_pci_hole.c b/tools/testing/selftests/kvm/x86_64/memory_slot_pci_hole.c
new file mode 100644
index 000000000000..f5fa80dfcba7
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/memory_slot_pci_hole.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <pthread.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/compiler.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <processor.h>
+
+#define VCPU_ID 0
+
+#define MEM_REGION_GPA 0xc0000000
+#define MEM_REGION_SIZE 0x4000
+#define MEM_REGION_SLOT 10
+
+static void guest_code(void)
+{
+ uint8_t val;
+
+ /* First byte in the first page */
+ val = READ_ONCE(*((uint8_t *)MEM_REGION_GPA));
+ GUEST_ASSERT(val == 0xff);
+
+ GUEST_SYNC(1);
+
+ /* Random byte in the second page */
+ val = READ_ONCE(*((uint8_t *)MEM_REGION_GPA + 5000));
+ GUEST_ASSERT(val == 0xff);
+
+ GUEST_SYNC(2);
+
+ /* Write to the first page */
+ WRITE_ONCE(*((uint64_t *)MEM_REGION_GPA + 1024/8), 0xdeafbeef);
+
+ GUEST_SYNC(3);
+
+ /* Write to the second page */
+ WRITE_ONCE(*((uint64_t *)MEM_REGION_GPA + 8000/8), 0xdeafbeef);
+
+ GUEST_SYNC(4);
+
+ GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+ struct kvm_vm *vm;
+ struct kvm_run *run;
+ struct ucall uc;
+ int stage, rv;
+
+ rv = kvm_check_cap(KVM_CAP_PCI_HOLE_MEM);
+ if (!rv) {
+ print_skip("KVM_CAP_PCI_HOLE_MEM not supported");
+ exit(KSFT_SKIP);
+ }
+
+ vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+ run = vcpu_state(vm, VCPU_ID);
+
+ vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_NONE,
+ MEM_REGION_GPA, MEM_REGION_SLOT,
+ MEM_REGION_SIZE / getpagesize(),
+ KVM_MEM_PCI_HOLE);
+
+ virt_map(vm, MEM_REGION_GPA, MEM_REGION_GPA,
+ MEM_REGION_SIZE / getpagesize(), 0);
+
+ for (stage = 1;; stage++) {
+ _vcpu_run(vm, VCPU_ID);
+
+ if (stage == 3 || stage == 5) {
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_MMIO,
+ "Write to PCI_HOLE page should cause KVM_EXIT_MMIO");
+ continue;
+ }
+
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+ "Stage %d: unexpected exit reason: %u (%s),\n",
+ stage, run->exit_reason,
+ exit_reason_str(run->exit_reason));
+
+ switch (get_ucall(vm, VCPU_ID, &uc)) {
+ case UCALL_ABORT:
+ TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
+ __FILE__, uc.args[1]);
+ /* NOT REACHED */
+ case UCALL_SYNC:
+ break;
+ case UCALL_DONE:
+ goto done;
+ default:
+ TEST_FAIL("Unknown ucall %lu", uc.cmd);
+ }
+ }
+
+done:
+ kvm_vm_free(vm);
+
+ return 0;
+}
--
2.25.4

2020-08-05 17:08:37

by Jim Mattson

Subject: Re: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

On Tue, Jul 28, 2020 at 7:38 AM Vitaly Kuznetsov <[email protected]> wrote:
>
> PCIe config space can (depending on the configuration) be quite big but
> usually is sparsely populated. Guest may scan it by accessing individual
> device's page which, when device is missing, is supposed to have 'pci
> hole' semantics: reads return '0xff' and writes get discarded. Compared
> to the already existing KVM_MEM_READONLY, VMM doesn't need to allocate
> real memory and stuff it with '0xff'.

Note that the bus error semantics described should apply to *any*
unbacked guest physical addresses, not just addresses in the PCI hole.
(Typically, this also applies to the standard local APIC page
(0xfee00xxx) when the local APIC is either disabled or in x2APIC mode,
which is an area that kvm has had trouble with in the past.)

2020-08-05 20:03:45

by Andrew Jones

Subject: Re: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

On Tue, Jul 28, 2020 at 04:37:40PM +0200, Vitaly Kuznetsov wrote:
> PCIe config space can (depending on the configuration) be quite big but
> usually is sparsely populated. Guest may scan it by accessing individual
> device's page which, when device is missing, is supposed to have 'pci
> hole' semantics: reads return '0xff' and writes get discarded. Compared
> to the already existing KVM_MEM_READONLY, VMM doesn't need to allocate
> real memory and stuff it with '0xff'.
>
> Suggested-by: Michael S. Tsirkin <[email protected]>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> Documentation/virt/kvm/api.rst | 19 +++++++++++-----
> arch/x86/include/uapi/asm/kvm.h | 1 +
> arch/x86/kvm/mmu/mmu.c | 5 ++++-
> arch/x86/kvm/mmu/paging_tmpl.h | 3 +++
> arch/x86/kvm/x86.c | 10 ++++++---
> include/linux/kvm_host.h | 7 +++++-
> include/uapi/linux/kvm.h | 3 ++-
> virt/kvm/kvm_main.c | 39 +++++++++++++++++++++++++++------
> 8 files changed, 68 insertions(+), 19 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 644e5326aa50..fbbf533a331b 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -1241,6 +1241,7 @@ yet and must be cleared on entry.
> /* for kvm_memory_region::flags */
> #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
> #define KVM_MEM_READONLY (1UL << 1)
> + #define KVM_MEM_PCI_HOLE (1UL << 2)
>
> This ioctl allows the user to create, modify or delete a guest physical
> memory slot. Bits 0-15 of "slot" specify the slot id and this value
> @@ -1268,12 +1269,18 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
> be identical. This allows large pages in the guest to be backed by large
> pages in the host.
>
> -The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and
> -KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of
> -writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to
> -use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it,
> -to make a new slot read-only. In this case, writes to this memory will be
> -posted to userspace as KVM_EXIT_MMIO exits.
> +The flags field supports the following flags: KVM_MEM_LOG_DIRTY_PAGES,
> +KVM_MEM_READONLY, KVM_MEM_READONLY:

The second KVM_MEM_READONLY should be KVM_MEM_PCI_HOLE. Or just drop the
list here, as they're listed below anyway

> +- KVM_MEM_LOG_DIRTY_PAGES can be set to instruct KVM to keep track of writes to
> + memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to use it.
> +- KVM_MEM_READONLY can be set, if KVM_CAP_READONLY_MEM capability allows it,
> + to make a new slot read-only. In this case, writes to this memory will be
> + posted to userspace as KVM_EXIT_MMIO exits.
> +- KVM_MEM_PCI_HOLE can be set, if KVM_CAP_PCI_HOLE_MEM capability allows it,
> + to create a new virtual read-only slot which will always return '0xff' when
> + guest reads from it. 'userspace_addr' has to be set to NULL. This flag is
> + mutually exclusive with KVM_MEM_LOG_DIRTY_PAGES/KVM_MEM_READONLY. All writes
> + to this memory will be posted to userspace as KVM_EXIT_MMIO exits.

I see 2/3's of this text is copy+pasted from above, but how about this

- KVM_MEM_LOG_DIRTY_PAGES: log writes. Use KVM_GET_DIRTY_LOG to retrieve
the log.
- KVM_MEM_READONLY: exit to userspace with KVM_EXIT_MMIO on writes. Only
available when KVM_CAP_READONLY_MEM is present.
- KVM_MEM_PCI_HOLE: always return 0xff on reads, exit to userspace with
KVM_EXIT_MMIO on writes. Only available when KVM_CAP_PCI_HOLE_MEM is
present. When setting the memory region 'userspace_addr' must be NULL.
This flag is mutually exclusive with KVM_MEM_LOG_DIRTY_PAGES and with
KVM_MEM_READONLY.

>
> When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of
> the memory region are automatically reflected into the guest. For example, an
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index 17c5a038f42d..cf80a26d74f5 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -48,6 +48,7 @@
> #define __KVM_HAVE_XSAVE
> #define __KVM_HAVE_XCRS
> #define __KVM_HAVE_READONLY_MEM
> +#define __KVM_HAVE_PCI_HOLE_MEM
>
> /* Architectural interrupt line count. */
> #define KVM_NR_INTERRUPTS 256
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 8597e8102636..c2e3a1deafdd 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3253,7 +3253,7 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
> return PG_LEVEL_4K;
>
> slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, true);
> - if (!slot)
> + if (!slot || (slot->flags & KVM_MEM_PCI_HOLE))
> return PG_LEVEL_4K;
>
> max_level = min(max_level, max_page_level);
> @@ -4104,6 +4104,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>
> slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
>
> + if (!write && slot && (slot->flags & KVM_MEM_PCI_HOLE))
> + return RET_PF_EMULATE;
> +
> if (try_async_pf(vcpu, slot, prefault, gfn, gpa, &pfn, write,
> &map_writable))
> return RET_PF_RETRY;
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 5c6a895f67c3..27abd69e69f6 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -836,6 +836,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
>
> slot = kvm_vcpu_gfn_to_memslot(vcpu, walker.gfn);
>
> + if (!write_fault && slot && (slot->flags & KVM_MEM_PCI_HOLE))
> + return RET_PF_EMULATE;
> +
> if (try_async_pf(vcpu, slot, prefault, walker.gfn, addr, &pfn,
> write_fault, &map_writable))
> return RET_PF_RETRY;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 95ef62922869..dc312b8bfa05 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3515,6 +3515,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> case KVM_CAP_EXCEPTION_PAYLOAD:
> case KVM_CAP_SET_GUEST_DEBUG:
> case KVM_CAP_LAST_CPU:
> + case KVM_CAP_PCI_HOLE_MEM:
> r = 1;
> break;
> case KVM_CAP_SYNC_REGS:
> @@ -10115,9 +10116,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
> ugfn = slot->userspace_addr >> PAGE_SHIFT;
> /*
> * If the gfn and userspace address are not aligned wrt each
> - * other, disable large page support for this slot.
> + * other, disable large page support for this slot. Also,
> + * disable large page support for KVM_MEM_PCI_HOLE slots.
> */
> - if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
> + if (slot->flags & KVM_MEM_PCI_HOLE || ((slot->base_gfn ^ ugfn) &

Please add () around the first expression

> + (KVM_PAGES_PER_HPAGE(level) - 1))) {
> unsigned long j;
>
> for (j = 0; j < lpages; ++j)
> @@ -10179,7 +10182,8 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
> * Nothing to do for RO slots or CREATE/MOVE/DELETE of a slot.
> * See comments below.
> */
> - if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY))
> + if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY) ||
> + (new->flags & KVM_MEM_PCI_HOLE))

How about

if ((change != KVM_MR_FLAGS_ONLY) ||
(new->flags & (KVM_MEM_READONLY|KVM_MEM_PCI_HOLE)))

> return;
>
> /*
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 989afcbe642f..63c2d93ef172 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1081,7 +1081,12 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
> static inline unsigned long
> __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
> {
> - return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
> + if (likely(!(slot->flags & KVM_MEM_PCI_HOLE))) {
> + return slot->userspace_addr +
> + (gfn - slot->base_gfn) * PAGE_SIZE;
> + } else {
> + BUG();

Debug code you forgot to remove? I see below you've modified
__gfn_to_hva_many() to return KVM_HVA_ERR_BAD already when
given a PCI hole slot. I think that's the only check we should add.

> + }
> }
>
> static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 2c73dcfb3dbb..59d631cbb71d 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -109,6 +109,7 @@ struct kvm_userspace_memory_region {
> */
> #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
> #define KVM_MEM_READONLY (1UL << 1)
> +#define KVM_MEM_PCI_HOLE (1UL << 2)
>
> /* for KVM_IRQ_LINE */
> struct kvm_irq_level {
> @@ -1034,7 +1035,7 @@ struct kvm_ppc_resize_hpt {
> #define KVM_CAP_ASYNC_PF_INT 183
> #define KVM_CAP_LAST_CPU 184
> #define KVM_CAP_SMALLER_MAXPHYADDR 185
> -
> +#define KVM_CAP_PCI_HOLE_MEM 186
>
> #ifdef KVM_CAP_IRQ_ROUTING
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 2c2c0254c2d8..3f69ae711021 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1107,6 +1107,10 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m
> valid_flags |= KVM_MEM_READONLY;
> #endif
>
> +#ifdef __KVM_HAVE_PCI_HOLE_MEM
> + valid_flags |= KVM_MEM_PCI_HOLE;
> +#endif
> +
> if (mem->flags & ~valid_flags)
> return -EINVAL;
>
> @@ -1284,11 +1288,26 @@ int __kvm_set_memory_region(struct kvm *kvm,
> return -EINVAL;
> if (mem->guest_phys_addr & (PAGE_SIZE - 1))
> return -EINVAL;
> - /* We can read the guest memory with __xxx_user() later on. */
> - if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
> - !access_ok((void __user *)(unsigned long)mem->userspace_addr,
> - mem->memory_size))
> +
> + /*
> + * KVM_MEM_PCI_HOLE is mutually exclusive with KVM_MEM_READONLY/
> + * KVM_MEM_LOG_DIRTY_PAGES.
> + */
> + if ((mem->flags & KVM_MEM_PCI_HOLE) &&
> + (mem->flags & (KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES)))
> return -EINVAL;
> +
> + if (!(mem->flags & KVM_MEM_PCI_HOLE)) {
> + /* We can read the guest memory with __xxx_user() later on. */
> + if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
> + !access_ok((void __user *)(unsigned long)mem->userspace_addr,
> + mem->memory_size))
> + return -EINVAL;
> + } else {
> + if (mem->userspace_addr)
> + return -EINVAL;
> + }
> +
> if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
> return -EINVAL;
> if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
> @@ -1328,7 +1347,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
> } else { /* Modify an existing slot. */
> if ((new.userspace_addr != old.userspace_addr) ||
> (new.npages != old.npages) ||
> - ((new.flags ^ old.flags) & KVM_MEM_READONLY))
> + ((new.flags ^ old.flags) & KVM_MEM_READONLY) ||
> + ((new.flags ^ old.flags) & KVM_MEM_PCI_HOLE))
> return -EINVAL;
>
> if (new.base_gfn != old.base_gfn)
> @@ -1715,13 +1735,13 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
>
> static bool memslot_is_readonly(struct kvm_memory_slot *slot)
> {
> - return slot->flags & KVM_MEM_READONLY;
> + return slot->flags & (KVM_MEM_READONLY | KVM_MEM_PCI_HOLE);
> }
>
> static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn,
> gfn_t *nr_pages, bool write)
> {
> - if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
> + if (!slot || (slot->flags & (KVM_MEMSLOT_INVALID | KVM_MEM_PCI_HOLE)))
> return KVM_HVA_ERR_BAD;
>
> if (memslot_is_readonly(slot) && write)
> @@ -2318,6 +2338,11 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
> int r;
> unsigned long addr;
>
> + if (unlikely(slot && (slot->flags & KVM_MEM_PCI_HOLE))) {
> + memset(data, 0xff, len);
> + return 0;
> + }
> +
> addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
> if (kvm_is_error_hva(addr))
> return -EFAULT;
> --
> 2.25.4
>

I didn't really review this patch, as it's touching lots of x86 mm
functions that I didn't want to delve into, but I took a quick look
since I was curious about the feature.

Thanks,
drew

2020-08-06 00:19:25

by Michael S. Tsirkin

Subject: Re: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

On Wed, Aug 05, 2020 at 10:05:40AM -0700, Jim Mattson wrote:
> On Tue, Jul 28, 2020 at 7:38 AM Vitaly Kuznetsov <[email protected]> wrote:
> >
> > PCIe config space can (depending on the configuration) be quite big but
> > usually is sparsely populated. Guest may scan it by accessing individual
> > device's page which, when device is missing, is supposed to have 'pci
> > hole' semantics: reads return '0xff' and writes get discarded. Compared
> > to the already existing KVM_MEM_READONLY, VMM doesn't need to allocate
> > real memory and stuff it with '0xff'.
>
> Note that the bus error semantics described should apply to *any*
> unbacked guest physical addresses, not just addresses in the PCI hole.
> (Typically, this also applies to the standard local APIC page
> (0xfee00xxx) when the local APIC is either disabled or in x2APIC mode,
> which is an area that kvm has had trouble with in the past.)

Well ATM from KVM's POV unbacked -> exit to userspace, right?
Not sure what you are suggesting here ...

--
MST

2020-08-06 00:24:47

by Michael S. Tsirkin

Subject: Re: [PATCH 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory

On Tue, Jul 28, 2020 at 04:37:38PM +0200, Vitaly Kuznetsov wrote:
> This is a continuation of "[PATCH RFC 0/5] KVM: x86: KVM_MEM_ALLONES
> memory" work:
> https://lore.kernel.org/kvm/[email protected]/
> and pairs with Julia's "x86/PCI: Use MMCONFIG by default for KVM guests":
> https://lore.kernel.org/linux-pci/[email protected]/
>
> PCIe config space can (depending on the configuration) be quite big but
> usually is sparsely populated. Guest may scan it by accessing individual
> device's page which, when device is missing, is supposed to have 'pci
> hole' semantics: reads return '0xff' and writes get discarded.
>
> When testing Linux kernel boot with QEMU q35 VM and direct kernel boot
> I observed 8193 accesses to PCI hole memory. When such exit is handled
> in KVM without exiting to userspace, it takes roughly 0.000001 sec.
> Handling the same exit in userspace is six times slower (0.000006 sec) so
> the overall difference is 0.04 sec. This may be significant for 'microvm'
> ideas.
>
> Note, the same speed can already be achieved by using KVM_MEM_READONLY
> but doing this would require allocating real memory for all missing
> devices and e.g. 8192 pages gives us 32mb. This will have to be allocated
> for each guest separately and for 'microvm' use-cases this is likely
> a no-go.
>
> Introduce special KVM_MEM_PCI_HOLE memory: userspace doesn't need to
> back it with real memory, all reads from it are handled inside KVM and
> return '0xff'. Writes still go to userspace but these should be extremely
> rare.
>
> The original 'KVM_MEM_ALLONES' idea had additional optimizations: KVM
> was mapping all 'PCI hole' pages to a single read-only page stuffed with
> 0xff. This is omitted in this submission as the benefits are unclear:
> KVM will have to allocate SPTEs (either on demand or aggressively) and
> this also consumes time/memory.

Curious about this: if we do it aggressively on the 1st fault,
how long does it take to allocate 256 huge page SPTEs?
And the amount of memory seems pretty small then, right?

> We can always take a look at possible
> optimizations later.
>
> Vitaly Kuznetsov (3):
> KVM: x86: move kvm_vcpu_gfn_to_memslot() out of try_async_pf()
> KVM: x86: introduce KVM_MEM_PCI_HOLE memory
> KVM: selftests: add KVM_MEM_PCI_HOLE test
>
> Documentation/virt/kvm/api.rst | 19 ++-
> arch/x86/include/uapi/asm/kvm.h | 1 +
> arch/x86/kvm/mmu/mmu.c | 19 +--
> arch/x86/kvm/mmu/paging_tmpl.h | 10 +-
> arch/x86/kvm/x86.c | 10 +-
> include/linux/kvm_host.h | 7 +-
> include/uapi/linux/kvm.h | 3 +-
> tools/testing/selftests/kvm/Makefile | 1 +
> .../testing/selftests/kvm/include/kvm_util.h | 1 +
> tools/testing/selftests/kvm/lib/kvm_util.c | 81 +++++++------
> .../kvm/x86_64/memory_slot_pci_hole.c | 112 ++++++++++++++++++
> virt/kvm/kvm_main.c | 39 ++++--
> 12 files changed, 243 insertions(+), 60 deletions(-)
> create mode 100644 tools/testing/selftests/kvm/x86_64/memory_slot_pci_hole.c
>
> --
> 2.25.4

2020-08-06 09:10:43

by Vitaly Kuznetsov

Subject: Re: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

Andrew Jones <[email protected]> writes:

> On Tue, Jul 28, 2020 at 04:37:40PM +0200, Vitaly Kuznetsov wrote:
>> PCIe config space can (depending on the configuration) be quite big but
>> usually is sparsely populated. Guest may scan it by accessing individual
>> device's page which, when device is missing, is supposed to have 'pci
>> hole' semantics: reads return '0xff' and writes get discarded. Compared
>> to the already existing KVM_MEM_READONLY, VMM doesn't need to allocate
>> real memory and stuff it with '0xff'.
>>
>> Suggested-by: Michael S. Tsirkin <[email protected]>
>> Signed-off-by: Vitaly Kuznetsov <[email protected]>
>> ---
>> Documentation/virt/kvm/api.rst | 19 +++++++++++-----
>> arch/x86/include/uapi/asm/kvm.h | 1 +
>> arch/x86/kvm/mmu/mmu.c | 5 ++++-
>> arch/x86/kvm/mmu/paging_tmpl.h | 3 +++
>> arch/x86/kvm/x86.c | 10 ++++++---
>> include/linux/kvm_host.h | 7 +++++-
>> include/uapi/linux/kvm.h | 3 ++-
>> virt/kvm/kvm_main.c | 39 +++++++++++++++++++++++++++------
>> 8 files changed, 68 insertions(+), 19 deletions(-)
>>
>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>> index 644e5326aa50..fbbf533a331b 100644
>> --- a/Documentation/virt/kvm/api.rst
>> +++ b/Documentation/virt/kvm/api.rst
>> @@ -1241,6 +1241,7 @@ yet and must be cleared on entry.
>> /* for kvm_memory_region::flags */
>> #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
>> #define KVM_MEM_READONLY (1UL << 1)
>> + #define KVM_MEM_PCI_HOLE (1UL << 2)
>>
>> This ioctl allows the user to create, modify or delete a guest physical
>> memory slot. Bits 0-15 of "slot" specify the slot id and this value
>> @@ -1268,12 +1269,18 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
>> be identical. This allows large pages in the guest to be backed by large
>> pages in the host.
>>
>> -The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and
>> -KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of
>> -writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to
>> -use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it,
>> -to make a new slot read-only. In this case, writes to this memory will be
>> -posted to userspace as KVM_EXIT_MMIO exits.
>> +The flags field supports the following flags: KVM_MEM_LOG_DIRTY_PAGES,
>> +KVM_MEM_READONLY, KVM_MEM_READONLY:
>
> The second KVM_MEM_READONLY should be KVM_MEM_PCI_HOLE. Or just drop the
> list here, as they're listed below anyway
>
>> +- KVM_MEM_LOG_DIRTY_PAGES can be set to instruct KVM to keep track of writes to
>> + memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to use it.
>> +- KVM_MEM_READONLY can be set, if KVM_CAP_READONLY_MEM capability allows it,
>> + to make a new slot read-only. In this case, writes to this memory will be
>> + posted to userspace as KVM_EXIT_MMIO exits.
>> +- KVM_MEM_PCI_HOLE can be set, if KVM_CAP_PCI_HOLE_MEM capability allows it,
>> + to create a new virtual read-only slot which will always return '0xff' when
>> + guest reads from it. 'userspace_addr' has to be set to NULL. This flag is
>> + mutually exclusive with KVM_MEM_LOG_DIRTY_PAGES/KVM_MEM_READONLY. All writes
>> + to this memory will be posted to userspace as KVM_EXIT_MMIO exits.
>
> I see 2/3's of this text is copy+pasted from above, but how about this
>
> - KVM_MEM_LOG_DIRTY_PAGES: log writes. Use KVM_GET_DIRTY_LOG to retrieve
> the log.
> - KVM_MEM_READONLY: exit to userspace with KVM_EXIT_MMIO on writes. Only
> available when KVM_CAP_READONLY_MEM is present.
> - KVM_MEM_PCI_HOLE: always return 0xff on reads, exit to userspace with
> KVM_EXIT_MMIO on writes. Only available when KVM_CAP_PCI_HOLE_MEM is
> present. When setting the memory region 'userspace_addr' must be NULL.
> This flag is mutually exclusive with KVM_MEM_LOG_DIRTY_PAGES and with
> KVM_MEM_READONLY.

Sounds better, thanks! Will add in v2.

>
>>
>> When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of
>> the memory region are automatically reflected into the guest. For example, an
>> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
>> index 17c5a038f42d..cf80a26d74f5 100644
>> --- a/arch/x86/include/uapi/asm/kvm.h
>> +++ b/arch/x86/include/uapi/asm/kvm.h
>> @@ -48,6 +48,7 @@
>> #define __KVM_HAVE_XSAVE
>> #define __KVM_HAVE_XCRS
>> #define __KVM_HAVE_READONLY_MEM
>> +#define __KVM_HAVE_PCI_HOLE_MEM
>>
>> /* Architectural interrupt line count. */
>> #define KVM_NR_INTERRUPTS 256
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index 8597e8102636..c2e3a1deafdd 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -3253,7 +3253,7 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
>> return PG_LEVEL_4K;
>>
>> slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, true);
>> - if (!slot)
>> + if (!slot || (slot->flags & KVM_MEM_PCI_HOLE))
>> return PG_LEVEL_4K;
>>
>> max_level = min(max_level, max_page_level);
>> @@ -4104,6 +4104,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>>
>> slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
>>
>> + if (!write && slot && (slot->flags & KVM_MEM_PCI_HOLE))
>> + return RET_PF_EMULATE;
>> +
>> if (try_async_pf(vcpu, slot, prefault, gfn, gpa, &pfn, write,
>> &map_writable))
>> return RET_PF_RETRY;
>> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
>> index 5c6a895f67c3..27abd69e69f6 100644
>> --- a/arch/x86/kvm/mmu/paging_tmpl.h
>> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
>> @@ -836,6 +836,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
>>
>> slot = kvm_vcpu_gfn_to_memslot(vcpu, walker.gfn);
>>
>> + if (!write_fault && slot && (slot->flags & KVM_MEM_PCI_HOLE))
>> + return RET_PF_EMULATE;
>> +
>> if (try_async_pf(vcpu, slot, prefault, walker.gfn, addr, &pfn,
>> write_fault, &map_writable))
>> return RET_PF_RETRY;
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 95ef62922869..dc312b8bfa05 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -3515,6 +3515,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>> case KVM_CAP_EXCEPTION_PAYLOAD:
>> case KVM_CAP_SET_GUEST_DEBUG:
>> case KVM_CAP_LAST_CPU:
>> + case KVM_CAP_PCI_HOLE_MEM:
>> r = 1;
>> break;
>> case KVM_CAP_SYNC_REGS:
>> @@ -10115,9 +10116,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
>> ugfn = slot->userspace_addr >> PAGE_SHIFT;
>> /*
>> * If the gfn and userspace address are not aligned wrt each
>> - * other, disable large page support for this slot.
>> + * other, disable large page support for this slot. Also,
>> + * disable large page support for KVM_MEM_PCI_HOLE slots.
>> */
>> - if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
>> + if (slot->flags & KVM_MEM_PCI_HOLE || ((slot->base_gfn ^ ugfn) &
>
> Please add () around the first expression
>

Ack

>> + (KVM_PAGES_PER_HPAGE(level) - 1))) {
>> unsigned long j;
>>
>> for (j = 0; j < lpages; ++j)
>> @@ -10179,7 +10182,8 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
>> * Nothing to do for RO slots or CREATE/MOVE/DELETE of a slot.
>> * See comments below.
>> */
>> - if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY))
>> + if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY) ||
>> + (new->flags & KVM_MEM_PCI_HOLE))
>
> How about
>
> if ((change != KVM_MR_FLAGS_ONLY) ||
> (new->flags & (KVM_MEM_READONLY|KVM_MEM_PCI_HOLE)))
>

Ack

>> return;
>>
>> /*
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index 989afcbe642f..63c2d93ef172 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -1081,7 +1081,12 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
>> static inline unsigned long
>> __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
>> {
>> - return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
>> + if (likely(!(slot->flags & KVM_MEM_PCI_HOLE))) {
>> + return slot->userspace_addr +
>> + (gfn - slot->base_gfn) * PAGE_SIZE;
>> + } else {
>> + BUG();
>
> Debug code you forgot to remove? I see below you've modified
> __gfn_to_hva_many() to return KVM_HVA_ERR_BAD already when
> given a PCI hole slot. I think that's the only check we should add.

No, this was intentional. We have at least two users of
__gfn_to_hva_memslot() today, and if we ever reach here with a
KVM_MEM_PCI_HOLE slot we're doomed anyway, but an immediate BUG() would
be much easier to debug than an invalid pointer access some time
later.

Anyway, I don't really feel strong and I'm fine with dropping the
check. Alternatively, I can suggest we add

BUG_ON(!slot->userspace_addr);

to the beginning of __gfn_to_hva_memslot() instead.

>
>> + }
>> }
>>
>> static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 2c73dcfb3dbb..59d631cbb71d 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -109,6 +109,7 @@ struct kvm_userspace_memory_region {
>> */
>> #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
>> #define KVM_MEM_READONLY (1UL << 1)
>> +#define KVM_MEM_PCI_HOLE (1UL << 2)
>>
>> /* for KVM_IRQ_LINE */
>> struct kvm_irq_level {
>> @@ -1034,7 +1035,7 @@ struct kvm_ppc_resize_hpt {
>> #define KVM_CAP_ASYNC_PF_INT 183
>> #define KVM_CAP_LAST_CPU 184
>> #define KVM_CAP_SMALLER_MAXPHYADDR 185
>> -
>> +#define KVM_CAP_PCI_HOLE_MEM 186
>>
>> #ifdef KVM_CAP_IRQ_ROUTING
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 2c2c0254c2d8..3f69ae711021 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -1107,6 +1107,10 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m
>> valid_flags |= KVM_MEM_READONLY;
>> #endif
>>
>> +#ifdef __KVM_HAVE_PCI_HOLE_MEM
>> + valid_flags |= KVM_MEM_PCI_HOLE;
>> +#endif
>> +
>> if (mem->flags & ~valid_flags)
>> return -EINVAL;
>>
>> @@ -1284,11 +1288,26 @@ int __kvm_set_memory_region(struct kvm *kvm,
>> return -EINVAL;
>> if (mem->guest_phys_addr & (PAGE_SIZE - 1))
>> return -EINVAL;
>> - /* We can read the guest memory with __xxx_user() later on. */
>> - if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
>> - !access_ok((void __user *)(unsigned long)mem->userspace_addr,
>> - mem->memory_size))
>> +
>> + /*
>> + * KVM_MEM_PCI_HOLE is mutually exclusive with KVM_MEM_READONLY/
>> + * KVM_MEM_LOG_DIRTY_PAGES.
>> + */
>> + if ((mem->flags & KVM_MEM_PCI_HOLE) &&
>> + (mem->flags & (KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES)))
>> return -EINVAL;
>> +
>> + if (!(mem->flags & KVM_MEM_PCI_HOLE)) {
>> + /* We can read the guest memory with __xxx_user() later on. */
>> + if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
>> + !access_ok((void __user *)(unsigned long)mem->userspace_addr,
>> + mem->memory_size))
>> + return -EINVAL;
>> + } else {
>> + if (mem->userspace_addr)
>> + return -EINVAL;
>> + }
>> +
>> if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
>> return -EINVAL;
>> if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
>> @@ -1328,7 +1347,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>> } else { /* Modify an existing slot. */
>> if ((new.userspace_addr != old.userspace_addr) ||
>> (new.npages != old.npages) ||
>> - ((new.flags ^ old.flags) & KVM_MEM_READONLY))
>> + ((new.flags ^ old.flags) & KVM_MEM_READONLY) ||
>> + ((new.flags ^ old.flags) & KVM_MEM_PCI_HOLE))
>> return -EINVAL;
>>
>> if (new.base_gfn != old.base_gfn)
>> @@ -1715,13 +1735,13 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
>>
>> static bool memslot_is_readonly(struct kvm_memory_slot *slot)
>> {
>> - return slot->flags & KVM_MEM_READONLY;
>> + return slot->flags & (KVM_MEM_READONLY | KVM_MEM_PCI_HOLE);
>> }
>>
>> static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn,
>> gfn_t *nr_pages, bool write)
>> {
>> - if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
>> + if (!slot || (slot->flags & (KVM_MEMSLOT_INVALID | KVM_MEM_PCI_HOLE)))
>> return KVM_HVA_ERR_BAD;
>>
>> if (memslot_is_readonly(slot) && write)
>> @@ -2318,6 +2338,11 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
>> int r;
>> unsigned long addr;
>>
>> + if (unlikely(slot && (slot->flags & KVM_MEM_PCI_HOLE))) {
>> + memset(data, 0xff, len);
>> + return 0;
>> + }
>> +
>> addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
>> if (kvm_is_error_hva(addr))
>> return -EFAULT;
>> --
>> 2.25.4
>>
>
> I didn't really review this patch, as it's touching lots of x86 mm
> functions that I didn't want to delve into, but I took a quick look
> since I was curious about the feature.

x86 part is really negligible, I think it would be very easy to expand
the scope to other arches if needed.

Thanks!

--
Vitaly

2020-08-06 09:19:53

by Vitaly Kuznetsov

Subject: Re: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

Jim Mattson <[email protected]> writes:

> On Tue, Jul 28, 2020 at 7:38 AM Vitaly Kuznetsov <[email protected]> wrote:
>>
>> PCIe config space can (depending on the configuration) be quite big but
>> usually is sparsely populated. Guest may scan it by accessing individual
>> device's page which, when device is missing, is supposed to have 'pci
>> hole' semantics: reads return '0xff' and writes get discarded. Compared
>> to the already existing KVM_MEM_READONLY, VMM doesn't need to allocate
>> real memory and stuff it with '0xff'.
>
> Note that the bus error semantics described should apply to *any*
> unbacked guest physical addresses, not just addresses in the PCI hole.
> (Typically, this also applies to the standard local APIC page
> (0xfee00xxx) when the local APIC is either disabled or in x2APIC mode,
> which is an area that kvm has had trouble with in the past.)

Yes, we can make KVM return 0xff on all read access to unbacked memory,
not only KVM_MEM_PCI_HOLE slots (and drop them completely). This,
however, takes the control from userspace: with KVM_MEM_PCI_HOLE
memslots we're saying 'accessing this unbacked memory is fine' and we
can still trap accesses to all other places. This should help in
detecting misbehaving guests.

--
Vitaly

2020-08-06 09:23:51

by Vitaly Kuznetsov

Subject: Re: [PATCH 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory

"Michael S. Tsirkin" <[email protected]> writes:

> On Tue, Jul 28, 2020 at 04:37:38PM +0200, Vitaly Kuznetsov wrote:
>> This is a continuation of "[PATCH RFC 0/5] KVM: x86: KVM_MEM_ALLONES
>> memory" work:
>> https://lore.kernel.org/kvm/[email protected]/
>> and pairs with Julia's "x86/PCI: Use MMCONFIG by default for KVM guests":
>> https://lore.kernel.org/linux-pci/[email protected]/
>>
>> PCIe config space can (depending on the configuration) be quite big but
>> usually is sparsely populated. Guest may scan it by accessing individual
>> device's page which, when device is missing, is supposed to have 'pci
>> hole' semantics: reads return '0xff' and writes get discarded.
>>
>> When testing Linux kernel boot with QEMU q35 VM and direct kernel boot
>> I observed 8193 accesses to PCI hole memory. When such exit is handled
>> in KVM without exiting to userspace, it takes roughly 0.000001 sec.
>> Handling the same exit in userspace is six times slower (0.000006 sec) so
>> the overall difference is 0.04 sec. This may be significant for 'microvm'
>> ideas.
>>
>> Note, the same speed can already be achieved by using KVM_MEM_READONLY
>> but doing this would require allocating real memory for all missing
>> devices and e.g. 8192 pages gives us 32mb. This will have to be allocated
>> for each guest separately and for 'microvm' use-cases this is likely
>> a no-go.
>>
>> Introduce special KVM_MEM_PCI_HOLE memory: userspace doesn't need to
>> back it with real memory, all reads from it are handled inside KVM and
>> return '0xff'. Writes still go to userspace but these should be extremely
>> rare.
>>
>> The original 'KVM_MEM_ALLONES' idea had additional optimizations: KVM
>> was mapping all 'PCI hole' pages to a single read-only page stuffed with
>> 0xff. This is omitted in this submission as the benefits are unclear:
>> KVM will have to allocate SPTEs (either on demand or aggressively) and
>> this also consumes time/memory.
>
> Curious about this: if we do it aggressively on the 1st fault,
> how long does it take to allocate 256 huge page SPTEs?
> And the amount of memory seems pretty small then, right?

Right, this could work but we'll need a 2M region (one per KVM host of
course) filled with 0xff-s instead of a single 4k page.

Generally, I'd like to reach an agreement on whether this feature (and
Julia's corresponding patch adding the PV feature bit) is worthy. In
case it is (meaning it gets merged in this simplest form), we can
suggest further improvements. It would also help if firmware (SeaBIOS,
OVMF) would start recognizing the PV feature bit too, this way we'll be
seeing even bigger improvement and this may or may not be a deal-breaker
when it comes to the 'aggressive PTE mapping' idea.

--
Vitaly

2020-08-06 09:55:50

by Michael S. Tsirkin

Subject: Re: [PATCH 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory

On Thu, Aug 06, 2020 at 11:19:55AM +0200, Vitaly Kuznetsov wrote:
> "Michael S. Tsirkin" <[email protected]> writes:
>
> > On Tue, Jul 28, 2020 at 04:37:38PM +0200, Vitaly Kuznetsov wrote:
> >> This is a continuation of "[PATCH RFC 0/5] KVM: x86: KVM_MEM_ALLONES
> >> memory" work:
> >> https://lore.kernel.org/kvm/[email protected]/
> >> and pairs with Julia's "x86/PCI: Use MMCONFIG by default for KVM guests":
> >> https://lore.kernel.org/linux-pci/[email protected]/
> >>
> >> PCIe config space can (depending on the configuration) be quite big but
> >> usually is sparsely populated. Guest may scan it by accessing individual
> >> device's page which, when device is missing, is supposed to have 'pci
> >> hole' semantics: reads return '0xff' and writes get discarded.
> >>
> >> When testing Linux kernel boot with QEMU q35 VM and direct kernel boot
> >> I observed 8193 accesses to PCI hole memory. When such exit is handled
> >> in KVM without exiting to userspace, it takes roughly 0.000001 sec.
> >> Handling the same exit in userspace is six times slower (0.000006 sec) so
> >> the overall difference is 0.04 sec. This may be significant for 'microvm'
> >> ideas.
> >>
> >> Note, the same speed can already be achieved by using KVM_MEM_READONLY
> >> but doing this would require allocating real memory for all missing
> >> devices and e.g. 8192 pages gives us 32mb. This will have to be allocated
> >> for each guest separately and for 'microvm' use-cases this is likely
> >> a no-go.
> >>
> >> Introduce special KVM_MEM_PCI_HOLE memory: userspace doesn't need to
> >> back it with real memory, all reads from it are handled inside KVM and
> >> return '0xff'. Writes still go to userspace but these should be extremely
> >> rare.
> >>
> >> The original 'KVM_MEM_ALLONES' idea had additional optimizations: KVM
> >> was mapping all 'PCI hole' pages to a single read-only page stuffed with
> >> 0xff. This is omitted in this submission as the benefits are unclear:
> >> KVM will have to allocate SPTEs (either on demand or aggressively) and
> >> this also consumes time/memory.
> >
> > Curious about this: if we do it aggressively on the 1st fault,
> > how long does it take to allocate 256 huge page SPTEs?
> > And the amount of memory seems pretty small then, right?
>
> Right, this could work but we'll need a 2M region (one per KVM host of
> course) filled with 0xff-s instead of a single 4k page.

Given it's global doesn't sound too bad.

>
> Generally, I'd like to reach an agreement on whether this feature (and
> Julia's corresponding patch adding the PV feature bit) is worthy. In
> case it is (meaning it gets merged in this simplest form), we can
> suggest further improvements. It would also help if firmware (SeaBIOS,
> OVMF) would start recognizing the PV feature bit too, this way we'll be
> seeing even bigger improvement and this may or may not be a deal-breaker
> when it comes to the 'aggressive PTE mapping' idea.

About the feature bit, I am not sure why it's really needed. A single
mmio access is cheaper than two io accesses anyway, right? So it makes
sense for a kvm guest whether host has this feature or not.
We need to be careful and limit to a specific QEMU implementation
to avoid tripping up bugs, but it seems more appropriate to
check it using pci host IDs.

> --
> Vitaly

2020-08-06 16:58:29

by Michael S. Tsirkin

Subject: Re: [PATCH 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory

On Thu, Aug 06, 2020 at 01:39:09PM +0200, Vitaly Kuznetsov wrote:
> "Michael S. Tsirkin" <[email protected]> writes:
>
> > About the feature bit, I am not sure why it's really needed. A single
> > mmio access is cheaper than two io accesses anyway, right? So it makes
> > sense for a kvm guest whether host has this feature or not.
> > We need to be careful and limit to a specific QEMU implementation
> > to avoid tripping up bugs, but it seems more appropriate to
> > check it using pci host IDs.
>
> Right, it's just that "running on KVM" is too coarse grained, we just
> need a way to somehow distinguish between "known/good" and
> "unknown/buggy" configurations.

Basically it's not KVM, it's QEMU that is known good. QEMU vendor id in
the pci host seems like a reasonable way to detect that. If someone
reuses QEMU ID - I guess they better behave just like QEMU :)

I also proposed only limiting this to register 0 (device id),
will make it very unlikely this can break accidentally ...

> --
> Vitaly

2020-08-06 17:22:56

by Vitaly Kuznetsov

Subject: Re: [PATCH 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory

"Michael S. Tsirkin" <[email protected]> writes:

> About the feature bit, I am not sure why it's really needed. A single
> mmio access is cheaper than two io accesses anyway, right? So it makes
> sense for a kvm guest whether host has this feature or not.
> We need to be careful and limit to a specific QEMU implementation
> to avoid tripping up bugs, but it seems more appropriate to
> check it using pci host IDs.

Right, it's just that "running on KVM" is too coarse grained, we just
need a way to somehow distinguish between "known/good" and
"unknown/buggy" configurations.

--
Vitaly

2020-08-06 17:37:20

by Jim Mattson

Subject: Re: [PATCH 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

On Wed, Aug 5, 2020 at 5:18 PM Michael S. Tsirkin <[email protected]> wrote:
>
> On Wed, Aug 05, 2020 at 10:05:40AM -0700, Jim Mattson wrote:
> > On Tue, Jul 28, 2020 at 7:38 AM Vitaly Kuznetsov <[email protected]> wrote:
> > >
> > > PCIe config space can (depending on the configuration) be quite big but
> > > usually is sparsely populated. Guest may scan it by accessing individual
> > > device's page which, when device is missing, is supposed to have 'pci
> > > hole' semantics: reads return '0xff' and writes get discarded. Compared
> > > to the already existing KVM_MEM_READONLY, VMM doesn't need to allocate
> > > real memory and stuff it with '0xff'.
> >
> > Note that the bus error semantics described should apply to *any*
> > unbacked guest physical addresses, not just addresses in the PCI hole.
> > (Typically, this also applies to the standard local APIC page
> > (0xfee00xxx) when the local APIC is either disabled or in x2APIC mode,
> > which is an area that kvm has had trouble with in the past.)
>
> Well ATM from KVM's POV unbacked -> exit to userspace, right?
> Not sure what you are suggesting here ...

Sometimes, maybe. That's not the way the emulation of most VMX
instructions works, should they access unbacked memory. Perhaps that's
just a whole slew of bugs to be fixed. :-)