From: "Maciej S. Szmigiero" <[email protected]>
The current memslot code uses a (reverse) gfn-ordered memslot array to
keep track of memslots.
This only allows a quick binary search by gfn; a quick lookup by hva is
not possible (the implementation has to do a linear scan of the whole
memslot array).
Because the memslot array that is currently in use cannot be modified,
every memslot management operation (create, delete, move, change
flags) has to make a copy of the whole array so it has a scratch copy
to work on.
Strictly speaking, however, it is only necessary to make a copy of the
memslot that is being modified; copying all the memslots currently
present is just a limitation of the array-based memslot implementation.
Two memslot sets, however, are still needed so the VM continues to run
on the currently active set while the requested operation is being
performed on the second, currently inactive one.
In order to have two memslot sets, but only one copy of the actual
memslots, it is necessary to split out the memslot data from the
memslot sets.
The memslots themselves should also be kept independent of each other
so they can be individually added or deleted.
These two memslot sets should normally point to the same set of memslots.
They can, however, be desynchronized when performing a memslot management
operation by replacing the memslot to be modified by its copy.
After the operation is complete, both memslot sets once again point to
the same, common set of memslot data.
This series implements the aforementioned idea.
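Roughly, a single memslot update then proceeds as sketched below (a
simplified illustration only, reusing helper names from the last patch
in this series; the real kvm_set_memslot() additionally handles locking,
SRCU synchronization, the INVALID marking needed for the delete / move
cases and error unwinding):

        /* Prepare the change on a scratch slot invisible to the guest */
        slotina = kzalloc(sizeof(*slotina), GFP_KERNEL_ACCOUNT);
        kvm_copy_memslot(slotina, slotact);
        /* ... apply the requested modification to slotina ... */

        /* Make the change visible: update the inactive set, then swap */
        kvm_replace_memslot(inactive, slotact, slotina);
        kvm_swap_active_memslots(kvm, as_id, &active, &inactive);

        /* Resync the now-inactive other set and free the old slot copy */
        kvm_replace_memslot(inactive, slotact, slotina);
        kfree(slotact);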
The new implementation uses two trees to perform quick lookups:
For tracking of gfns, an ordinary rbtree is used since memslots cannot
overlap in the guest address space and so this data structure is
sufficient for ensuring that lookups are done quickly.
For tracking of hvas, however, an interval tree is needed since hva
ranges can overlap between memslots.
ID to memslot mappings are kept in a hash table instead of using a
statically allocated "id_to_index" array.
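For reference, the key structures after the conversion look roughly as
follows (abridged from the "Keep memslots in tree-based structures
instead of array-based ones" patch later in this series):

struct kvm_memory_slot {
        /* two nodes each - one per memslot set the slot belongs to */
        struct hlist_node id_node[2];
        struct interval_tree_node hva_node[2];
        struct rb_node gfn_node[2];
        gfn_t base_gfn;
        unsigned long npages;
        ...
};

struct kvm_memslots {
        u64 generation;
        atomic_long_t last_used_slot;
        struct rb_root_cached hva_tree;
        struct rb_root gfn_tree;
        /* The mapping table from slot id to memslot. */
        DECLARE_HASHTABLE(id_hash, 7);
        int node_idx;
};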
The "last used slot" mini-caches (both per-slot set one and per-vCPU one),
that keep track of the last found-by-gfn memslot, are still present in the
new code.
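The per-slot-set cache lookup itself stays cheap; roughly (abridged
from the new __gfn_to_memslot_approx() in the last patch):

        slot = (struct kvm_memory_slot *)atomic_long_read(&slots->last_used_slot);
        slot = try_get_memslot(slot, gfn);
        if (slot)
                return slot;

        /* Cache miss - fall back to the gfn tree and update the cache */
        slot = search_memslots(slots, gfn, approx);
        if (slot) {
                atomic_long_set(&slots->last_used_slot, (unsigned long)slot);
                return slot;
        }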
There was also a desire to make the new structure operate on a "pay as
you go" basis, that is, the user only pays the price of the memslot
count that is actually used, not of the maximum count allowed.
The operation semantics were carefully matched to the original
implementation; the outside-visible behavior should not change.
Only the timing will be different.
Making lookup and memslot management operations O(log(n)) brings some
performance benefits (tested on a Xeon 8167M machine):
509 slots in use:
Test Before After Improvement
Map 0.0232s 0.0223s 4%
Unmap 0.0724s 0.0315s 56%
Unmap 2M 0.00155s 0.000869s 44%
Move active 0.000101s 0.0000792s 22%
Move inactive 0.000108s 0.0000847s 21%
Slot setup 0.0135s 0.00803s 41%
100 slots in use:
Test Before After Improvement
Map 0.0195s 0.0191s None
Unmap 0.0374s 0.0312s 17%
Unmap 2M 0.000470s 0.000447s 5%
Move active 0.0000956s 0.0000800s 16%
Move inactive 0.000101s 0.0000840s 17%
Slot setup 0.00260s 0.00174s 33%
50 slots in use:
Test Before After Improvement
Map 0.0192s 0.0190s None
Unmap 0.0339s 0.0311s 8%
Unmap 2M 0.000399s 0.000395s None
Move active 0.0000999s 0.0000854s 15%
Move inactive 0.0000992s 0.0000826s 17%
Slot setup 0.00141s 0.000990s 30%
30 slots in use:
Test Before After Improvement
Map 0.0192s 0.0190s None
Unmap 0.0325s 0.0310s 5%
Unmap 2M 0.000373s 0.000373s None
Move active 0.000100s 0.0000865s 14%
Move inactive 0.000106s 0.0000918s 13%
Slot setup 0.000989s 0.000775s 22%
10 slots in use:
Test Before After Improvement
Map 0.0192s 0.0186s 3%
Unmap 0.0313s 0.0310s None
Unmap 2M 0.000348s 0.000351s None
Move active 0.000110s 0.0000948s 14%
Move inactive 0.000111s 0.0000933s 16%
Slot setup 0.000342s 0.000283s 17%
32k slots in use:
Test Before After Improvement
Map (8194) 0.200s 0.0541s 73%
Unmap 3.88s 0.0351s 99%
Unmap 2M 3.88s 0.0348s 99%
Move active 0.00142s 0.0000786s 94%
Move inactive 0.00148s 0.0000880s 94%
Slot setup 16.1s 0.59s 96%
Since the map test can be done with up to 8194 slots, the result above
for this test was obtained by running it with that maximum number of
slots.
In both the old and the new memslot code cases, the measurements were
done against the new KVM selftest framework, with TDP MMU disabled.
On x86-64 the code was well tested: it passed KVM unit tests and KVM
selftests with KASAN on.
And, of course, booted various guests successfully (including nested
ones with TDP MMU enabled).
On other KVM platforms the code was compile-tested only.
Changes since v1:
* Drop the already merged HVA handler retpoline-friendliness patch,
* Split the scalable memslots patch into 8 smaller ones,
* Rebase onto the current kvm/queue,
* Make sure that the ranges in both of a memslot's hva_nodes are always
initialized,
* Remove kvm_mmu_calculate_default_mmu_pages() prototype, too,
when removing this function,
* Redo benchmarks, measure 32k memslots on the old implementation, too.
Changes since v2:
* Rebase onto the current kvm/queue, taking into account the now-merged
MMU notifiers rewrite.
This reduces the diffstat by ~50 lines.
Changes since v3:
* Rebase onto the current (Aug 12) kvm/queue, taking into account the
introduction of slots_arch_lock (and lazy rmaps allocation) and per-vCPU
"last used slot" mini-cache,
* Change n_memslots_pages to u64 to avoid overflowing it on 32-bit KVM,
* Skip the kvm_mmu_change_mmu_pages() call for memslot operations that
don't change the total page count anyway,
* Move n_memslots_pages recalc to kvm_arch_prepare_memory_region() so
a proper error code can be returned in case we spot an underflow,
* Integrate the open-coded s390 gfn_to_memslot_approx() into the main KVM
code by adding an "approx" parameter to the existing __gfn_to_memslot()
while renaming it to __gfn_to_memslot_approx() and introducing a
__gfn_to_memslot() wrapper so existing call sites won't be affected.
This is done because __gfn_to_memslot() now offers extra functionality
over search_memslots(), which wasn't the case at the time the previous
version of this series was posted,
* Split out the move of WARN that's triggered on invalid memslot index to
a separate patch,
* Rename the old slot variable in kvm_memslot_delete() to "oldslot",
add an additional comment explaining why we delete a memslot from the
hash table in kvm_memslot_move_backward() in the "KVM: Resolve memslot
ID via a hash table instead of via a static array" patch,
* Rename a patch from "Introduce memslots hva tree" to "Use interval tree
to do fast hva lookup in memslots",
* Document that the kvm_for_each_hva_range_memslot() range is inclusive,
* Rename kvm_for_each_hva_range_memslot to
kvm_for_each_memslot_in_hva_range, move this macro to kvm_main.c,
* Update the WARN in __kvm_handle_hva_range() and kvm_zap_gfn_range() to
also trigger on empty ranges,
* Use "bkt" instead of "ctr" for hash bucket index variables,
* Add a comment describing the idea behind the memslots dual set system
above struct kvm_memory_slot,
* Rename "memslots_all" to "__memslots" in struct kvm and add comments
there describing its two memslot members,
* Remove kvm_memslots_idx() helper and store the set index explicitly in
the particular memslots set instead,
* Open code kvm_init_memslots(),
* Rename swap_memslots() to kvm_swap_active_memslots() and make it
also swap the two provided set pointers themselves,
* Initialize "parent" outside of the for loop in kvm_memslot_gfn_insert(),
* Add kvm_memslot_gfn_erase() and kvm_memslot_gfn_replace() helpers,
* Fold kvm_init_memslot_hva_ranges() into kvm_copy_memslot(),
* Remove kvm_copy_replace_memslot() and introduce kvm_activate_memslot()
instead,
* Rename "slotsact" / "slotsina" into "active" / "inactive" in
kvm_set_memslot(), add a big comment what are the semantics of
"slotact" / "slotina" variables to this function,
* Move WARN about a missing old slot to kvm_set_memslot() and return -EIO
in this case,
* Set the KVM_MEMSLOT_INVALID flag on a temporary slot before (rather
than after) replacing it in the inactive set in kvm_set_memslot() as it
is a bit more intuitive to do it that way,
* Move the handling of an error returned by kvm_arch_prepare_memory_region()
in kvm_set_memslot() from the end of this function to directly below
the call to kvm_arch_prepare_memory_region(),
* Rework some of the comments in kvm_set_memslot(),
* Rename "oldslot" / "nslot" into "old" / "new" here and there in
"Keep memslots in tree-based structures instead of array-based ones"
patch,
* Add Sean's "Co-developed-by:" to the aforementioned patch since many of
its above changes are coming from his proposed patch (and he did a lot
of good work reviewing both this patch and the whole series),
* Introduce kvm_for_each_memslot_in_gfn_range() and use it in the last two
patches,
* Unfold long lines here and there,
* Retest the patch series.
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/mmu.c | 15 +-
arch/mips/kvm/Kconfig | 1 +
arch/mips/kvm/mips.c | 3 +-
arch/powerpc/kvm/Kconfig | 1 +
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 +-
arch/powerpc/kvm/book3s_hv.c | 3 +-
arch/powerpc/kvm/book3s_hv_nested.c | 4 +-
arch/powerpc/kvm/book3s_hv_uvmem.c | 14 +-
arch/powerpc/kvm/powerpc.c | 5 +-
arch/s390/kvm/Kconfig | 1 +
arch/s390/kvm/kvm-s390.c | 66 +--
arch/s390/kvm/kvm-s390.h | 14 +
arch/s390/kvm/pv.c | 4 +-
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/Kconfig | 1 +
arch/x86/kvm/debugfs.c | 6 +-
arch/x86/kvm/mmu/mmu.c | 35 +-
arch/x86/kvm/x86.c | 40 +-
include/linux/kvm_host.h | 224 +++++++---
virt/kvm/kvm_main.c | 651 +++++++++++++++-------------
21 files changed, 622 insertions(+), 473 deletions(-)
From: "Maciej S. Szmigiero" <[email protected]>
Do a quick lookup for possibly overlapping gfns when creating or moving
a memslot instead of performing a linear scan of the whole memslot set.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
virt/kvm/kvm_main.c | 36 +++++++++++++++++++++++++++---------
1 file changed, 27 insertions(+), 9 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 26aa3690b172..91304edb00e9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1679,6 +1679,30 @@ static int kvm_delete_memslot(struct kvm *kvm,
return kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE);
}
+static bool kvm_check_memslot_overlap(struct kvm_memslots *slots,
+ struct kvm_memory_slot *nslot)
+{
+ int idx = slots->node_idx;
+ gfn_t nend = nslot->base_gfn + nslot->npages;
+ struct rb_node *node;
+
+ kvm_for_each_memslot_in_gfn_range(node, slots, nslot->base_gfn, nend) {
+ struct kvm_memory_slot *cslot;
+ gfn_t cend;
+
+ cslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+ cend = cslot->base_gfn + cslot->npages;
+ if (cslot->id == nslot->id)
+ continue;
+
+ /* kvm_for_each_in_gfn_no_more() guarantees that cslot->base_gfn < nend */
+ if (cend > nslot->base_gfn)
+ return true;
+ }
+
+ return false;
+}
+
/*
* Allocate some memory and give it an address in the guest physical address
* space.
@@ -1764,16 +1788,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
}
if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
- int bkt;
-
/* Check for overlaps */
- kvm_for_each_memslot(tmp, bkt, __kvm_memslots(kvm, as_id)) {
- if (tmp->id == id)
- continue;
- if (!((new.base_gfn + new.npages <= tmp->base_gfn) ||
- (new.base_gfn >= tmp->base_gfn + tmp->npages)))
- return -EEXIST;
- }
+ if (kvm_check_memslot_overlap(__kvm_memslots(kvm, as_id),
+ &new))
+ return -EEXIST;
}
/* Allocate/free page dirty bitmap as needed */
From: "Maciej S. Szmigiero" <[email protected]>
There is no need to copy the whole memslot data after releasing
slots_arch_lock for a moment to install a temporary memslots copy in
kvm_set_memslot(), since this lock only protects the arch field of each
memslot.
Just resync this particular field after reacquiring slots_arch_lock.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
virt/kvm/kvm_main.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7000efff1425..272bc86a0e69 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1494,6 +1494,15 @@ static void kvm_copy_memslots(struct kvm_memslots *to,
memcpy(to, from, kvm_memslots_size(from->used_slots));
}
+static void kvm_copy_memslots_arch(struct kvm_memslots *to,
+ struct kvm_memslots *from)
+{
+ int i;
+
+ for (i = 0; i < from->used_slots; i++)
+ to->memslots[i].arch = from->memslots[i].arch;
+}
+
/*
* Note, at a minimum, the current number of used slots must be allocated, even
* when deleting a memslot, as we need a complete duplicate of the memslots for
@@ -1579,10 +1588,10 @@ static int kvm_set_memslot(struct kvm *kvm,
/*
* The arch-specific fields of the memslots could have changed
* between releasing the slots_arch_lock in
- * install_new_memslots and here, so get a fresh copy of the
- * slots.
+ * install_new_memslots and here, so get a fresh copy of these
+ * fields.
*/
- kvm_copy_memslots(slots, __kvm_memslots(kvm, as_id));
+ kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, as_id));
}
r = kvm_arch_prepare_memory_region(kvm, old, new, mem, change);
@@ -1599,8 +1608,6 @@ static int kvm_set_memslot(struct kvm *kvm,
out_slots:
if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
- slot = id_to_memslot(slots, old->id);
- slot->flags &= ~KVM_MEMSLOT_INVALID;
slots = install_new_memslots(kvm, as_id, slots);
} else {
mutex_unlock(&kvm->slots_arch_lock);
From: "Maciej S. Szmigiero" <[email protected]>
Introduce a memslots gfn upper bound operation and use it to optimize
kvm_zap_gfn_range().
This way, this handler can do a quick lookup for intersecting gfns and
won't have to do a linear scan of the whole memslot set.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 11 +++++--
include/linux/kvm_host.h | 69 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 78 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2a4ac5100ff2..fd068efbe462 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5637,18 +5637,25 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
int i;
bool flush = false;
+ if (WARN_ON_ONCE(gfn_end <= gfn_start))
+ return;
+
write_lock(&kvm->mmu_lock);
kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
if (kvm_memslots_have_rmaps(kvm)) {
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
- int bkt;
+ int idx;
+ struct rb_node *node;
slots = __kvm_memslots(kvm, i);
- kvm_for_each_memslot(memslot, bkt, slots) {
+ idx = slots->node_idx;
+
+ kvm_for_each_memslot_in_gfn_range(node, slots, gfn_start, gfn_end) {
gfn_t start, end;
+ memslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
start = max(gfn_start, memslot->base_gfn);
end = min(gfn_end, memslot->base_gfn + memslot->npages);
if (start >= end)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 71a9e13dab2d..c58f5a9e18f7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -833,6 +833,75 @@ struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
return NULL;
}
+static inline
+struct rb_node *kvm_memslots_gfn_upper_bound(struct kvm_memslots *slots, gfn_t gfn)
+{
+ int idx = slots->node_idx;
+ struct rb_node *node, *result = NULL;
+
+ for (node = slots->gfn_tree.rb_node; node; ) {
+ struct kvm_memory_slot *slot;
+
+ slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+ if (gfn < slot->base_gfn) {
+ result = node;
+ node = node->rb_left;
+ } else
+ node = node->rb_right;
+ }
+
+ return result;
+}
+
+static inline
+struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t start)
+{
+ struct rb_node *node;
+
+ /*
+ * Find the slot with the lowest gfn that can possibly intersect with
+ * the range, so we'll ideally have slot start <= range start
+ */
+ node = kvm_memslots_gfn_upper_bound(slots, start);
+ if (node) {
+ struct rb_node *pnode;
+
+ /*
+ * A NULL previous node means that the very first slot
+ * already has a higher start gfn.
+ * In this case slot start > range start.
+ */
+ pnode = rb_prev(node);
+ if (pnode)
+ node = pnode;
+ } else {
+ /* a NULL node below means no slots */
+ node = rb_last(&slots->gfn_tree);
+ }
+
+ return node;
+}
+
+static inline
+bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
+{
+ struct kvm_memory_slot *memslot;
+
+ memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+
+ /*
+ * If this slot starts beyond or at the end of the range so does
+ * every next one
+ */
+ return memslot->base_gfn >= end;
+}
+
+/* Iterate over each memslot *possibly* intersecting [start, end) range */
+#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end) \
+ for (node = kvm_for_each_in_gfn_first(slots, start); \
+ node && !kvm_for_each_in_gfn_no_more(slots, node, end); \
+ node = rb_next(node)) \
+
/*
* KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
* - create a new memory slot
From: "Maciej S. Szmigiero" <[email protected]>
The current memslot code uses a (reverse gfn-ordered) memslot array to
keep track of memslots.
Because the memslot array that is currently in use cannot be modified,
every memslot management operation (create, delete, move, change flags)
has to make a copy of the whole array so it has a scratch copy to work on.
Strictly speaking, however, it is only necessary to make a copy of the
memslot that is being modified; copying all the memslots currently
present is just a limitation of the array-based memslot implementation.
Two memslot sets, however, are still needed so the VM continues to run
on the currently active set while the requested operation is being
performed on the second, currently inactive one.
In order to have two memslot sets, but only one copy of the actual
memslots, it is necessary to split out the memslot data from the
memslot sets.
The memslots themselves should also be kept independent of each other
so they can be individually added or deleted.
These two memslot sets should normally point to the same set of
memslots. They can, however, be desynchronized when performing a
memslot management operation by replacing the memslot to be modified
by its copy. After the operation is complete, both memslot sets once
again point to the same, common set of memslot data.
This commit implements the aforementioned idea.
For tracking of gfns, an ordinary rbtree is used since memslots cannot
overlap in the guest address space and so this data structure is
sufficient for ensuring that lookups are done quickly.
The "last used slot" mini-caches (both per-slot set one and per-vCPU one),
that keep track of the last found-by-gfn memslot, are still present in the
new code.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
Co-developed-by: Sean Christopherson <[email protected]>
---
arch/arm64/kvm/mmu.c | 8 +-
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 +-
arch/powerpc/kvm/book3s_hv.c | 3 +-
arch/powerpc/kvm/book3s_hv_nested.c | 4 +-
arch/powerpc/kvm/book3s_hv_uvmem.c | 14 +-
arch/s390/kvm/kvm-s390.c | 24 +-
arch/s390/kvm/kvm-s390.h | 6 +-
arch/x86/kvm/debugfs.c | 6 +-
arch/x86/kvm/mmu/mmu.c | 4 +-
arch/x86/kvm/x86.c | 4 +-
include/linux/kvm_host.h | 140 +++---
virt/kvm/kvm_main.c | 659 +++++++++++++---------------
12 files changed, 425 insertions(+), 451 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ed74d23970a0..deea7de18e96 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -209,13 +209,13 @@ static void stage2_flush_vm(struct kvm *kvm)
{
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
- int idx;
+ int idx, bkt;
idx = srcu_read_lock(&kvm->srcu);
spin_lock(&kvm->mmu_lock);
slots = kvm_memslots(kvm);
- kvm_for_each_memslot(memslot, slots)
+ kvm_for_each_memslot(memslot, bkt, slots)
stage2_flush_memslot(kvm, memslot);
spin_unlock(&kvm->mmu_lock);
@@ -548,14 +548,14 @@ void stage2_unmap_vm(struct kvm *kvm)
{
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
- int idx;
+ int idx, bkt;
idx = srcu_read_lock(&kvm->srcu);
mmap_read_lock(current->mm);
spin_lock(&kvm->mmu_lock);
slots = kvm_memslots(kvm);
- kvm_for_each_memslot(memslot, slots)
+ kvm_for_each_memslot(memslot, bkt, slots)
stage2_unmap_memslot(kvm, memslot);
spin_unlock(&kvm->mmu_lock);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index c63e263312a4..213232914367 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -734,11 +734,11 @@ void kvmppc_rmap_reset(struct kvm *kvm)
{
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
- int srcu_idx;
+ int srcu_idx, bkt;
srcu_idx = srcu_read_lock(&kvm->srcu);
slots = kvm_memslots(kvm);
- kvm_for_each_memslot(memslot, slots) {
+ kvm_for_each_memslot(memslot, bkt, slots) {
/* Mutual exclusion with kvm_unmap_hva_range etc. */
spin_lock(&kvm->mmu_lock);
/*
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6d63c8e6d4f0..4ead97ae985e 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5791,11 +5791,12 @@ static int kvmhv_svm_off(struct kvm *kvm)
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
struct kvm_memory_slot *memslot;
struct kvm_memslots *slots = __kvm_memslots(kvm, i);
+ int bkt;
if (!slots)
continue;
- kvm_for_each_memslot(memslot, slots) {
+ kvm_for_each_memslot(memslot, bkt, slots) {
kvmppc_uvmem_drop_pages(memslot, kvm, true);
uv_unregister_mem_slot(kvm->arch.lpid, memslot->id);
}
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 8543ad538b0c..c95868719706 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -730,7 +730,7 @@ void kvmhv_release_all_nested(struct kvm *kvm)
struct kvm_nested_guest *gp;
struct kvm_nested_guest *freelist = NULL;
struct kvm_memory_slot *memslot;
- int srcu_idx;
+ int srcu_idx, bkt;
spin_lock(&kvm->mmu_lock);
for (i = 0; i <= kvm->arch.max_nested_lpid; i++) {
@@ -751,7 +751,7 @@ void kvmhv_release_all_nested(struct kvm *kvm)
}
srcu_idx = srcu_read_lock(&kvm->srcu);
- kvm_for_each_memslot(memslot, kvm_memslots(kvm))
+ kvm_for_each_memslot(memslot, bkt, kvm_memslots(kvm))
kvmhv_free_memslot_nest_rmap(memslot);
srcu_read_unlock(&kvm->srcu, srcu_idx);
}
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index a7061ee3b157..adc1c495d47c 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -459,7 +459,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot, *m;
int ret = H_SUCCESS;
- int srcu_idx;
+ int srcu_idx, bkt;
kvm->arch.secure_guest = KVMPPC_SECURE_INIT_START;
@@ -478,7 +478,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
/* register the memslot */
slots = kvm_memslots(kvm);
- kvm_for_each_memslot(memslot, slots) {
+ kvm_for_each_memslot(memslot, bkt, slots) {
ret = __kvmppc_uvmem_memslot_create(kvm, memslot);
if (ret)
break;
@@ -486,7 +486,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
if (ret) {
slots = kvm_memslots(kvm);
- kvm_for_each_memslot(m, slots) {
+ kvm_for_each_memslot(m, bkt, slots) {
if (m == memslot)
break;
__kvmppc_uvmem_memslot_delete(kvm, memslot);
@@ -647,7 +647,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
{
- int srcu_idx;
+ int srcu_idx, bkt;
struct kvm_memory_slot *memslot;
/*
@@ -662,7 +662,7 @@ unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
srcu_idx = srcu_read_lock(&kvm->srcu);
- kvm_for_each_memslot(memslot, kvm_memslots(kvm))
+ kvm_for_each_memslot(memslot, bkt, kvm_memslots(kvm))
kvmppc_uvmem_drop_pages(memslot, kvm, false);
srcu_read_unlock(&kvm->srcu, srcu_idx);
@@ -821,7 +821,7 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
{
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
- int srcu_idx;
+ int srcu_idx, bkt;
long ret = H_SUCCESS;
if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START))
@@ -830,7 +830,7 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
/* migrate any unmoved normal pfn to device pfns*/
srcu_idx = srcu_read_lock(&kvm->srcu);
slots = kvm_memslots(kvm);
- kvm_for_each_memslot(memslot, slots) {
+ kvm_for_each_memslot(memslot, bkt, slots) {
ret = kvmppc_uv_migrate_mem_slot(kvm, memslot);
if (ret) {
/*
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 9c3a85793ba4..811761a187dd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1035,13 +1035,13 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
struct kvm_memory_slot *ms;
struct kvm_memslots *slots;
unsigned long ram_pages = 0;
- int slotnr;
+ int bkt;
/* migration mode already enabled */
if (kvm->arch.migration_mode)
return 0;
slots = kvm_memslots(kvm);
- if (!slots || !slots->used_slots)
+ if (!slots || kvm_memslots_empty(slots))
return -EINVAL;
if (!kvm->arch.use_cmma) {
@@ -1049,8 +1049,7 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
return 0;
}
/* mark all the pages in active slots as dirty */
- for (slotnr = 0; slotnr < slots->used_slots; slotnr++) {
- ms = slots->memslots + slotnr;
+ kvm_for_each_memslot(ms, bkt, slots) {
if (!ms->dirty_bitmap)
return -EINVAL;
/*
@@ -1968,22 +1967,21 @@ static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
unsigned long cur_gfn)
{
struct kvm_memory_slot *ms = __gfn_to_memslot_approx(slots, cur_gfn, true);
- int slotidx = ms - slots->memslots;
unsigned long ofs = cur_gfn - ms->base_gfn;
+ struct rb_node *mnode = &ms->gfn_node[slots->node_idx];
if (ms->base_gfn + ms->npages <= cur_gfn) {
- slotidx--;
+ mnode = rb_next(mnode);
/* If we are above the highest slot, wrap around */
- if (slotidx < 0)
- slotidx = slots->used_slots - 1;
+ if (!mnode)
+ mnode = rb_first(&slots->gfn_tree);
- ms = slots->memslots + slotidx;
+ ms = container_of(mnode, struct kvm_memory_slot, gfn_node[slots->node_idx]);
ofs = 0;
}
ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, ofs);
- while ((slotidx > 0) && (ofs >= ms->npages)) {
- slotidx--;
- ms = slots->memslots + slotidx;
+ while (ofs >= ms->npages && (mnode = rb_next(mnode))) {
+ ms = container_of(mnode, struct kvm_memory_slot, gfn_node[slots->node_idx]);
ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, 0);
}
return ms->base_gfn + ofs;
@@ -1996,7 +1994,7 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
struct kvm_memslots *slots = kvm_memslots(kvm);
struct kvm_memory_slot *ms;
- if (unlikely(!slots->used_slots))
+ if (unlikely(kvm_memslots_empty(slots)))
return 0;
cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 5787b12aff7e..71a92d208b4a 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -211,12 +211,14 @@ static inline int kvm_s390_user_cpu_state_ctrl(struct kvm *kvm)
/* get the end gfn of the last (highest gfn) memslot */
static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots)
{
+ struct rb_node *node;
struct kvm_memory_slot *ms;
- if (WARN_ON(!slots->used_slots))
+ if (WARN_ON(kvm_memslots_empty(slots)))
return 0;
- ms = slots->memslots;
+ node = rb_last(&slots->gfn_tree);
+ ms = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
return ms->base_gfn + ms->npages;
}
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index 4fa519caaef7..555f98f4a396 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -108,9 +108,10 @@ static int kvm_mmu_rmaps_stat_show(struct seq_file *m, void *v)
write_lock(&kvm->mmu_lock);
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ int bkt;
+
slots = __kvm_memslots(kvm, i);
- for (j = 0; j < slots->used_slots; j++) {
- slot = &slots->memslots[j];
+ kvm_for_each_memslot(slot, bkt, slots)
for (k = 0; k < KVM_NR_PAGE_SIZES; k++) {
rmap = slot->arch.rmap[k];
lpage_size = kvm_mmu_slot_lpages(slot, k + 1);
@@ -122,7 +123,6 @@ static int kvm_mmu_rmaps_stat_show(struct seq_file *m, void *v)
cur[index]++;
}
}
- }
}
write_unlock(&kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 708ffc3bf1e0..2a4ac5100ff2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5643,8 +5643,10 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
if (kvm_memslots_have_rmaps(kvm)) {
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ int bkt;
+
slots = __kvm_memslots(kvm, i);
- kvm_for_each_memslot(memslot, slots) {
+ kvm_for_each_memslot(memslot, bkt, slots) {
gfn_t start, end;
start = max(gfn_start, memslot->base_gfn);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f39bf3c3a054..d9fd85400663 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11384,8 +11384,10 @@ int alloc_all_memslots_rmaps(struct kvm *kvm)
}
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ int bkt;
+
slots = __kvm_memslots(kvm, i);
- kvm_for_each_memslot(slot, slots) {
+ kvm_for_each_memslot(slot, bkt, slots) {
r = memslot_rmap_alloc(slot, slot->npages);
if (r) {
mutex_unlock(&kvm->slots_arch_lock);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 27ece967ab48..71a9e13dab2d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -31,6 +31,7 @@
#include <linux/notifier.h>
#include <linux/hashtable.h>
#include <linux/interval_tree.h>
+#include <linux/rbtree.h>
#include <asm/signal.h>
#include <linux/kvm.h>
@@ -358,11 +359,13 @@ struct kvm_vcpu {
struct kvm_dirty_ring dirty_ring;
/*
- * The index of the most recently used memslot by this vCPU. It's ok
- * if this becomes stale due to memslot changes since we always check
- * it is a valid slot.
+ * The most recently used memslot by this vCPU and the slots generation
+ * for which it is valid.
+ * No wraparound protection is needed since generations won't overflow in
+ * thousands of years, even assuming 1M memslot operations per second.
*/
- int last_used_slot;
+ struct kvm_memory_slot *last_used_slot;
+ u64 last_used_slot_gen;
};
/* must be called with irqs disabled */
@@ -427,9 +430,26 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
*/
#define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
+/*
+ * Since at idle each memslot belongs to two memslot sets it has to contain
+ * two embedded nodes for each data structure that it forms a part of.
+ *
+ * Two memslot sets (one active and one inactive) are necessary so the VM
+ * continues to run on one memslot set while the other is being modified.
+ *
+ * These two memslot sets normally point to the same set of memslots.
+ * They can, however, be desynchronized when performing a memslot management
+ * operation by replacing the memslot to be modified by its copy.
+ * After the operation is complete, both memslot sets once again point to
+ * the same, common set of memslot data.
+ *
+ * The memslots themselves are independent of each other so they can be
+ * individually added or deleted.
+ */
struct kvm_memory_slot {
- struct hlist_node id_node;
- struct interval_tree_node hva_node;
+ struct hlist_node id_node[2];
+ struct interval_tree_node hva_node[2];
+ struct rb_node gfn_node[2];
gfn_t base_gfn;
unsigned long npages;
unsigned long *dirty_bitmap;
@@ -524,19 +544,14 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
}
#endif
-/*
- * Note:
- * memslots are not sorted by id anymore, please use id_to_memslot()
- * to get the memslot by its id.
- */
struct kvm_memslots {
u64 generation;
+ atomic_long_t last_used_slot;
struct rb_root_cached hva_tree;
- /* The mapping table from slot id to the index in memslots[]. */
+ struct rb_root gfn_tree;
+ /* The mapping table from slot id to memslot. */
DECLARE_HASHTABLE(id_hash, 7);
- atomic_t last_used_slot;
- int used_slots;
- struct kvm_memory_slot memslots[];
+ int node_idx;
};
struct kvm {
@@ -557,6 +572,9 @@ struct kvm {
*/
struct mutex slots_arch_lock;
struct mm_struct *mm; /* userspace tied to this vm */
+ /* The two memslot sets - active and inactive (per address space) */
+ struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
+ /* The current active memslot set for each address space */
struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
@@ -731,12 +749,6 @@ static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
return vcpu->vcpu_idx;
}
-#define kvm_for_each_memslot(memslot, slots) \
- for (memslot = &slots->memslots[0]; \
- memslot < slots->memslots + slots->used_slots; memslot++) \
- if (WARN_ON_ONCE(!memslot->npages)) { \
- } else
-
void kvm_vcpu_destroy(struct kvm_vcpu *vcpu);
void vcpu_load(struct kvm_vcpu *vcpu);
@@ -797,12 +809,23 @@ static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
return __kvm_memslots(vcpu->kvm, as_id);
}
+static inline bool kvm_memslots_empty(struct kvm_memslots *slots)
+{
+ return RB_EMPTY_ROOT(&slots->gfn_tree);
+}
+
+#define kvm_for_each_memslot(memslot, bkt, slots) \
+ hash_for_each(slots->id_hash, bkt, memslot, id_node[slots->node_idx]) \
+ if (WARN_ON_ONCE(!memslot->npages)) { \
+ } else
+
static inline
struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
{
struct kvm_memory_slot *slot;
+ int idx = slots->node_idx;
- hash_for_each_possible(slots->id_hash, slot, id_node, id) {
+ hash_for_each_possible(slots->id_hash, slot, id_node[idx], id) {
if (slot->id == id)
return slot;
}
@@ -1206,25 +1229,15 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id);
bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args);
/*
- * Returns a pointer to the memslot at slot_index if it contains gfn.
+ * Returns a pointer to the memslot if it contains gfn.
* Otherwise returns NULL.
*/
static inline struct kvm_memory_slot *
-try_get_memslot(struct kvm_memslots *slots, int slot_index, gfn_t gfn)
+try_get_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
{
- struct kvm_memory_slot *slot;
-
- if (slot_index < 0 || slot_index >= slots->used_slots)
+ if (!slot)
return NULL;
- /*
- * slot_index can come from vcpu->last_used_slot which is not kept
- * in sync with userspace-controllable memslot deletion. So use nospec
- * to prevent the CPU from speculating past the end of memslots[].
- */
- slot_index = array_index_nospec(slot_index, slots->used_slots);
- slot = &slots->memslots[slot_index];
-
if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
return slot;
else
@@ -1232,50 +1245,31 @@ try_get_memslot(struct kvm_memslots *slots, int slot_index, gfn_t gfn)
}
/*
- * Returns a pointer to the memslot that contains gfn and records the index of
- * the slot in index. Otherwise returns NULL.
+ * Returns a pointer to the memslot that contains gfn. Otherwise returns NULL.
*
* With "approx" set returns the memslot also when the address falls
* in a hole. In that case one of the memslots bordering the hole is
* returned.
- *
- * IMPORTANT: Slots are sorted from highest GFN to lowest GFN!
*/
static inline struct kvm_memory_slot *
-search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index, bool approx)
+search_memslots(struct kvm_memslots *slots, gfn_t gfn, bool approx)
{
- int start = 0, end = slots->used_slots;
- struct kvm_memory_slot *memslots = slots->memslots;
struct kvm_memory_slot *slot;
-
- if (unlikely(!slots->used_slots))
- return NULL;
-
- while (start < end) {
- int slot = start + (end - start) / 2;
-
- if (gfn >= memslots[slot].base_gfn)
- end = slot;
- else
- start = slot + 1;
- }
-
- if (approx && start >= slots->used_slots) {
- *index = slots->used_slots - 1;
- return &memslots[slots->used_slots - 1];
- }
-
- slot = try_get_memslot(slots, start, gfn);
- if (slot) {
- *index = start;
- return slot;
- }
- if (approx) {
- *index = start;
- return &memslots[start];
+ struct rb_node *node;
+ int idx = slots->node_idx;
+
+ slot = NULL;
+ for (node = slots->gfn_tree.rb_node; node; ) {
+ slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+ if (gfn >= slot->base_gfn) {
+ if (gfn < slot->base_gfn + slot->npages)
+ return slot;
+ node = node->rb_right;
+ } else
+ node = node->rb_left;
}
- return NULL;
+ return approx ? slot : NULL;
}
/*
@@ -1287,15 +1281,15 @@ static inline struct kvm_memory_slot *
__gfn_to_memslot_approx(struct kvm_memslots *slots, gfn_t gfn, bool approx)
{
struct kvm_memory_slot *slot;
- int slot_index = atomic_read(&slots->last_used_slot);
- slot = try_get_memslot(slots, slot_index, gfn);
+ slot = (struct kvm_memory_slot *)atomic_long_read(&slots->last_used_slot);
+ slot = try_get_memslot(slot, gfn);
if (slot)
return slot;
- slot = search_memslots(slots, gfn, &slot_index, approx);
+ slot = search_memslots(slots, gfn, approx);
if (slot) {
- atomic_set(&slots->last_used_slot, slot_index);
+ atomic_long_set(&slots->last_used_slot, (unsigned long)slot);
return slot;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1be05acab3ed..26aa3690b172 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -415,7 +415,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
vcpu->preempted = false;
vcpu->ready = false;
preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
- vcpu->last_used_slot = 0;
+ vcpu->last_used_slot = NULL;
}
void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
@@ -514,7 +514,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
range->start, range->end - 1) {
unsigned long hva_start, hva_end;
- slot = container_of(node, struct kvm_memory_slot, hva_node);
+ slot = container_of(node, struct kvm_memory_slot, hva_node[slots->node_idx]);
hva_start = max(range->start, slot->userspace_addr);
hva_end = min(range->end, slot->userspace_addr +
(slot->npages << PAGE_SHIFT));
@@ -848,20 +848,6 @@ static void kvm_destroy_pm_notifier(struct kvm *kvm)
}
#endif /* CONFIG_HAVE_KVM_PM_NOTIFIER */
-static struct kvm_memslots *kvm_alloc_memslots(void)
-{
- struct kvm_memslots *slots;
-
- slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
- if (!slots)
- return NULL;
-
- slots->hva_tree = RB_ROOT_CACHED;
- hash_init(slots->id_hash);
-
- return slots;
-}
-
static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
{
if (!memslot->dirty_bitmap)
@@ -871,27 +857,33 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
memslot->dirty_bitmap = NULL;
}
+/* This does not remove the slot from struct kvm_memslots data structures */
static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
{
kvm_destroy_dirty_bitmap(slot);
kvm_arch_free_memslot(kvm, slot);
- slot->flags = 0;
- slot->npages = 0;
+ kfree(slot);
}
static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
{
+ struct hlist_node *idnode;
struct kvm_memory_slot *memslot;
+ int bkt;
- if (!slots)
+ /*
+ * The same memslot objects live in both active and inactive sets,
+ * arbitrarily free using index '1' so the second invocation of this
+ * function isn't operating over a structure with dangling pointers
+ * (even though this function isn't actually touching them).
+ */
+ if (!slots->node_idx)
return;
- kvm_for_each_memslot(memslot, slots)
+ hash_for_each_safe(slots->id_hash, bkt, idnode, memslot, id_node[1])
kvm_free_memslot(kvm, memslot);
-
- kvfree(slots);
}
static umode_t kvm_stats_debugfs_mode(const struct _kvm_stats_desc *pdesc)
@@ -1030,8 +1022,9 @@ int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm)
static struct kvm *kvm_create_vm(unsigned long type)
{
struct kvm *kvm = kvm_arch_alloc_vm();
+ struct kvm_memslots *slots;
int r = -ENOMEM;
- int i;
+ int i, j;
if (!kvm)
return ERR_PTR(-ENOMEM);
@@ -1058,13 +1051,20 @@ static struct kvm *kvm_create_vm(unsigned long type)
refcount_set(&kvm->users_count, 1);
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
- struct kvm_memslots *slots = kvm_alloc_memslots();
+ for (j = 0; j < 2; j++) {
+ slots = &kvm->__memslots[i][j];
- if (!slots)
- goto out_err_no_arch_destroy_vm;
- /* Generations must be different for each address space. */
- slots->generation = i;
- rcu_assign_pointer(kvm->memslots[i], slots);
+ atomic_long_set(&slots->last_used_slot, (unsigned long)NULL);
+ slots->hva_tree = RB_ROOT_CACHED;
+ slots->gfn_tree = RB_ROOT;
+ hash_init(slots->id_hash);
+ slots->node_idx = j;
+
+ /* Generations must be different for each address space. */
+ slots->generation = i;
+ }
+
+ rcu_assign_pointer(kvm->memslots[i], &kvm->__memslots[i][0]);
}
for (i = 0; i < KVM_NR_BUSES; i++) {
@@ -1118,8 +1118,6 @@ static struct kvm *kvm_create_vm(unsigned long type)
WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count));
for (i = 0; i < KVM_NR_BUSES; i++)
kfree(kvm_get_bus(kvm, i));
- for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
- kvm_free_memslots(kvm, __kvm_memslots(kvm, i));
cleanup_srcu_struct(&kvm->irq_srcu);
out_err_no_irq_srcu:
cleanup_srcu_struct(&kvm->srcu);
@@ -1184,8 +1182,10 @@ static void kvm_destroy_vm(struct kvm *kvm)
#endif
kvm_arch_destroy_vm(kvm);
kvm_destroy_devices(kvm);
- for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
- kvm_free_memslots(kvm, __kvm_memslots(kvm, i));
+ for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ kvm_free_memslots(kvm, &kvm->__memslots[i][0]);
+ kvm_free_memslots(kvm, &kvm->__memslots[i][1]);
+ }
cleanup_srcu_struct(&kvm->irq_srcu);
cleanup_srcu_struct(&kvm->srcu);
kvm_arch_free_vm(kvm);
@@ -1255,217 +1255,6 @@ static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
return 0;
}
-/*
- * Delete a memslot by decrementing the number of used slots and shifting all
- * other entries in the array forward one spot.
- * @memslot is a detached dummy struct with just .id and .as_id filled.
- */
-static inline void kvm_memslot_delete(struct kvm_memslots *slots,
- struct kvm_memory_slot *memslot)
-{
- struct kvm_memory_slot *mslots = slots->memslots;
- struct kvm_memory_slot *oldslot = id_to_memslot(slots, memslot->id);
- int i;
-
- if (WARN_ON(!oldslot))
- return;
-
- slots->used_slots--;
-
- if (atomic_read(&slots->last_used_slot) >= slots->used_slots)
- atomic_set(&slots->last_used_slot, 0);
-
- for (i = oldslot - mslots; i < slots->used_slots; i++) {
- interval_tree_remove(&mslots[i].hva_node, &slots->hva_tree);
- hash_del(&mslots[i].id_node);
-
- mslots[i] = mslots[i + 1];
- interval_tree_insert(&mslots[i].hva_node, &slots->hva_tree);
- hash_add(slots->id_hash, &mslots[i].id_node, mslots[i].id);
- }
- interval_tree_remove(&mslots[i].hva_node, &slots->hva_tree);
- hash_del(&mslots[i].id_node);
- mslots[i] = *memslot;
-}
-
-/*
- * "Insert" a new memslot by incrementing the number of used slots. Returns
- * the new slot's initial index into the memslots array.
- */
-static inline int kvm_memslot_insert_back(struct kvm_memslots *slots)
-{
- return slots->used_slots++;
-}
-
-/*
- * Move a changed memslot backwards in the array by shifting existing slots
- * with a higher GFN toward the front of the array. Note, the changed memslot
- * itself is not preserved in the array, i.e. not swapped at this time, only
- * its new index into the array is tracked. Returns the changed memslot's
- * current index into the memslots array.
- * The memslot at the returned index will not be in @slots->hva_tree or
- * @slots->id_hash by then.
- * @memslot is a detached struct with desired final data of the changed slot.
- */
-static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
- struct kvm_memory_slot *memslot)
-{
- struct kvm_memory_slot *mslots = slots->memslots;
- struct kvm_memory_slot *mmemslot = id_to_memslot(slots, memslot->id);
- int i;
-
- if (!mmemslot || !slots->used_slots)
- return -1;
-
- /*
- * The loop below will (possibly) overwrite the target memslot with
- * data of the next memslot, or a similar loop in
- * kvm_memslot_move_forward() will overwrite it with data of the
- * previous memslot.
- * Then update_memslots() will unconditionally overwrite and re-add
- * it to the hash table.
- * That's why the memslot has to be first removed from the hash table
- * here.
- */
- interval_tree_remove(&mmemslot->hva_node, &slots->hva_tree);
- hash_del(&mmemslot->id_node);
-
- /*
- * Move the target memslot backward in the array by shifting existing
- * memslots with a higher GFN (than the target memslot) towards the
- * front of the array.
- */
- for (i = mmemslot - mslots; i < slots->used_slots - 1; i++) {
- if (memslot->base_gfn > mslots[i + 1].base_gfn)
- break;
-
- WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
-
- /* Shift the next memslot forward one and update its index. */
- interval_tree_remove(&mslots[i + 1].hva_node, &slots->hva_tree);
- hash_del(&mslots[i + 1].id_node);
-
- mslots[i] = mslots[i + 1];
- interval_tree_insert(&mslots[i].hva_node, &slots->hva_tree);
- hash_add(slots->id_hash, &mslots[i].id_node, mslots[i].id);
- }
- return i;
-}
-
-/*
- * Move a changed memslot forwards in the array by shifting existing slots with
- * a lower GFN toward the back of the array. Note, the changed memslot itself
- * is not preserved in the array, i.e. not swapped at this time, only its new
- * index into the array is tracked. Returns the changed memslot's final index
- * into the memslots array.
- * The memslot at the returned index will not be in @slots->hva_tree or
- * @slots->id_hash by then.
- * @memslot is a detached struct with desired final data of the new or
- * changed slot.
- * Assumes that the memslot at @start index is not in @slots->hva_tree or
- * @slots->id_hash.
- */
-static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
- struct kvm_memory_slot *memslot,
- int start)
-{
- struct kvm_memory_slot *mslots = slots->memslots;
- int i;
-
- for (i = start; i > 0; i--) {
- if (memslot->base_gfn < mslots[i - 1].base_gfn)
- break;
-
- WARN_ON_ONCE(memslot->base_gfn == mslots[i - 1].base_gfn);
-
- /* Shift the next memslot back one and update its index. */
- interval_tree_remove(&mslots[i - 1].hva_node, &slots->hva_tree);
- hash_del(&mslots[i - 1].id_node);
-
- mslots[i] = mslots[i - 1];
- interval_tree_insert(&mslots[i].hva_node, &slots->hva_tree);
- hash_add(slots->id_hash, &mslots[i].id_node, mslots[i].id);
- }
- return i;
-}
-
-/*
- * Re-sort memslots based on their GFN to account for an added, deleted, or
- * moved memslot. Sorting memslots by GFN allows using a binary search during
- * memslot lookup.
- *
- * IMPORTANT: Slots are sorted from highest GFN to lowest GFN! I.e. the entry
- * at memslots[0] has the highest GFN.
- *
- * The sorting algorithm takes advantage of having initially sorted memslots
- * and knowing the position of the changed memslot. Sorting is also optimized
- * by not swapping the updated memslot and instead only shifting other memslots
- * and tracking the new index for the update memslot. Only once its final
- * index is known is the updated memslot copied into its position in the array.
- *
- * - When deleting a memslot, the deleted memslot simply needs to be moved to
- * the end of the array.
- *
- * - When creating a memslot, the algorithm "inserts" the new memslot at the
- * end of the array and then it forward to its correct location.
- *
- * - When moving a memslot, the algorithm first moves the updated memslot
- * backward to handle the scenario where the memslot's GFN was changed to a
- * lower value. update_memslots() then falls through and runs the same flow
- * as creating a memslot to move the memslot forward to handle the scenario
- * where its GFN was changed to a higher value.
- *
- * Note, slots are sorted from highest->lowest instead of lowest->highest for
- * historical reasons. Originally, invalid memslots where denoted by having
- * GFN=0, thus sorting from highest->lowest naturally sorted invalid memslots
- * to the end of the array. The current algorithm uses dedicated logic to
- * delete a memslot and thus does not rely on invalid memslots having GFN=0.
- *
- * The other historical motiviation for highest->lowest was to improve the
- * performance of memslot lookup. KVM originally used a linear search starting
- * at memslots[0]. On x86, the largest memslot usually has one of the highest,
- * if not *the* highest, GFN, as the bulk of the guest's RAM is located in a
- * single memslot above the 4gb boundary. As the largest memslot is also the
- * most likely to be referenced, sorting it to the front of the array was
- * advantageous. The current binary search starts from the middle of the array
- * and uses an LRU pointer to improve performance for all memslots and GFNs.
- *
- * @memslot is a detached struct, not a part of the current or new memslot
- * array.
- */
-static void update_memslots(struct kvm_memslots *slots,
- struct kvm_memory_slot *memslot,
- enum kvm_mr_change change)
-{
- int i;
-
- if (change == KVM_MR_DELETE) {
- kvm_memslot_delete(slots, memslot);
- } else {
- if (change == KVM_MR_CREATE)
- i = kvm_memslot_insert_back(slots);
- else
- i = kvm_memslot_move_backward(slots, memslot);
- i = kvm_memslot_move_forward(slots, memslot, i);
-
- if (WARN_ON_ONCE(i < 0))
- return;
-
- /*
- * Copy the memslot to its new position in memslots and update
- * its index accordingly.
- */
- slots->memslots[i] = *memslot;
- slots->memslots[i].hva_node.start = memslot->userspace_addr;
- slots->memslots[i].hva_node.last = memslot->userspace_addr +
- (memslot->npages << PAGE_SHIFT) - 1;
- interval_tree_insert(&slots->memslots[i].hva_node,
- &slots->hva_tree);
- hash_add(slots->id_hash, &slots->memslots[i].id_node,
- memslot->id);
- }
-}
-
static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
{
u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
@@ -1480,11 +1269,12 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m
return 0;
}
-static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
- int as_id, struct kvm_memslots *slots)
+static void kvm_swap_active_memslots(struct kvm *kvm, int as_id,
+ struct kvm_memslots **active,
+ struct kvm_memslots **inactive)
{
- struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
- u64 gen = old_memslots->generation;
+ struct kvm_memslots *slots = *inactive;
+ u64 gen = (*active)->generation;
WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
@@ -1536,61 +1326,139 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
slots->generation = gen;
- return old_memslots;
+ swap(*active, *inactive);
}
-static size_t kvm_memslots_size(int slots)
+static void kvm_memslot_gfn_insert(struct kvm_memslots *slots,
+ struct kvm_memory_slot *slot)
{
- return sizeof(struct kvm_memslots) +
- (sizeof(struct kvm_memory_slot) * slots);
+ struct rb_root *gfn_tree = &slots->gfn_tree;
+ struct rb_node **node, *parent;
+ int idx = slots->node_idx;
+
+ parent = NULL;
+ for (node = &gfn_tree->rb_node; *node; ) {
+ struct kvm_memory_slot *tmp;
+
+ tmp = container_of(*node, struct kvm_memory_slot, gfn_node[idx]);
+ parent = *node;
+ if (slot->base_gfn < tmp->base_gfn)
+ node = &(*node)->rb_left;
+ else if (slot->base_gfn > tmp->base_gfn)
+ node = &(*node)->rb_right;
+ else
+ BUG();
+ }
+
+ rb_link_node(&slot->gfn_node[idx], parent, node);
+ rb_insert_color(&slot->gfn_node[idx], gfn_tree);
}
-static void kvm_copy_memslots(struct kvm_memslots *to,
- struct kvm_memslots *from)
+static void kvm_memslot_gfn_erase(struct kvm_memslots *slots,
+ struct kvm_memory_slot *slot)
{
- memcpy(to, from, kvm_memslots_size(from->used_slots));
+ rb_erase(&slot->gfn_node[slots->node_idx], &slots->gfn_tree);
}
-static void kvm_copy_memslots_arch(struct kvm_memslots *to,
- struct kvm_memslots *from)
+static void kvm_memslot_gfn_replace(struct kvm_memslots *slots,
+ struct kvm_memory_slot *old,
+ struct kvm_memory_slot *new)
{
- int i;
+ int idx = slots->node_idx;
+
+ WARN_ON_ONCE(old->base_gfn != new->base_gfn);
- for (i = 0; i < from->used_slots; i++)
- to->memslots[i].arch = from->memslots[i].arch;
+ rb_replace_node(&old->gfn_node[idx], &new->gfn_node[idx],
+ &slots->gfn_tree);
}
-/*
- * Note, at a minimum, the current number of used slots must be allocated, even
- * when deleting a memslot, as we need a complete duplicate of the memslots for
- * use when invalidating a memslot prior to deleting/moving the memslot.
- */
-static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
- enum kvm_mr_change change)
+static void kvm_copy_memslot(struct kvm_memory_slot *dest,
+ const struct kvm_memory_slot *src)
{
- struct kvm_memslots *slots;
- size_t new_size;
- struct kvm_memory_slot *memslot;
+ dest->base_gfn = src->base_gfn;
+ dest->npages = src->npages;
+ dest->dirty_bitmap = src->dirty_bitmap;
+ dest->arch = src->arch;
+ dest->userspace_addr = src->userspace_addr;
+ dest->flags = src->flags;
+ dest->id = src->id;
+ dest->as_id = src->as_id;
- if (change == KVM_MR_CREATE)
- new_size = kvm_memslots_size(old->used_slots + 1);
- else
- new_size = kvm_memslots_size(old->used_slots);
+ dest->hva_node[0].start = dest->hva_node[1].start =
+ dest->userspace_addr;
+ dest->hva_node[0].last = dest->hva_node[1].last =
+ dest->userspace_addr + (dest->npages << PAGE_SHIFT) - 1;
+}
- slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
- if (unlikely(!slots))
- return NULL;
+/*
+ * Replace @old with @new in @slots.
+ *
+ * With NULL @old this simply adds @new to @slots.
+ * With NULL @new this simply removes @old from @slots.
+ *
+ * If @new is non-NULL its hva_node[slots_idx] range has to be set
+ * appropriately.
+ */
+static void kvm_replace_memslot(struct kvm_memslots *slots,
+ struct kvm_memory_slot *old,
+ struct kvm_memory_slot *new)
+{
+ int idx = slots->node_idx;
+
+ if (old) {
+ hash_del(&old->id_node[idx]);
+ interval_tree_remove(&old->hva_node[idx], &slots->hva_tree);
+ atomic_long_cmpxchg(&slots->last_used_slot,
+ (unsigned long)old, (unsigned long)new);
+ if (!new) {
+ kvm_memslot_gfn_erase(slots, old);
+ return;
+ }
+ }
- kvm_copy_memslots(slots, old);
+ WARN_ON(PAGE_SHIFT > 0 &&
+ new->hva_node[idx].start >= new->hva_node[idx].last);
+ hash_add(slots->id_hash, &new->id_node[idx], new->id);
+ interval_tree_insert(&new->hva_node[idx], &slots->hva_tree);
- slots->hva_tree = RB_ROOT_CACHED;
- hash_init(slots->id_hash);
- kvm_for_each_memslot(memslot, slots) {
- interval_tree_insert(&memslot->hva_node, &slots->hva_tree);
- hash_add(slots->id_hash, &memslot->id_node, memslot->id);
+ /* Shame there is no O(1) interval_tree_replace()... */
+ if (old && old->base_gfn == new->base_gfn) {
+ kvm_memslot_gfn_replace(slots, old, new);
+ } else {
+ if (old)
+ kvm_memslot_gfn_erase(slots, old);
+ kvm_memslot_gfn_insert(slots, new);
}
+}
- return slots;
+/*
+ * Replace @old with @new in @active set, first activating the @inactive
+ * set so @active will no longer be active and can be modified.
+ * Then free @old and return with pointers in @active and @inactive swapped
+ * (since the actual active <-> inactive sets have been swapped).
+ *
+ * With NULL @old this simply adds @new to @active (while swapping the sets).
+ * With NULL @new this simply removes @old from @active and frees it
+ * (while also swapping the sets).
+ */
+static void kvm_activate_memslot(struct kvm *kvm, int as_id,
+ struct kvm_memslots **active,
+ struct kvm_memslots **inactive,
+ struct kvm_memory_slot *old,
+ struct kvm_memory_slot *new)
+{
+ /*
+ * Swap the active <-> inactive memslots.
+ * Note, this also swaps the active and inactive pointers themselves
+ * and releases slots_arch_lock.
+ */
+ kvm_swap_active_memslots(kvm, as_id, active, inactive);
+
+ /* Propagate the new memslot to the now inactive memslots. */
+ kvm_replace_memslot(*inactive, old, new);
+
+ /* And free the old slot (if there was one). */
+ kfree(old);
}
static int kvm_set_memslot(struct kvm *kvm,
@@ -1599,16 +1467,47 @@ static int kvm_set_memslot(struct kvm *kvm,
struct kvm_memory_slot *new, int as_id,
enum kvm_mr_change change)
{
- struct kvm_memory_slot *slot;
- struct kvm_memslots *slots;
+ struct kvm_memslots *active = __kvm_memslots(kvm, as_id);
+ int node_idx_inactive = active->node_idx == 0 ? 1 : 0;
+ struct kvm_memslots *inactive = &kvm->__memslots[as_id][node_idx_inactive];
+ /*
+ * "slotina" (from "slot inactive") is a slot that is never in the
+ * active memslot set.
+ * This slot may be a part of the inactive memslot set or it might be detached.
+ *
+ * Conversely, an "slotact" (from "slot active") is a slot that is
+ * in the active memslot set.
+ * This slot might be a part of the inactive memslot set, too.
+ *
+ * The above terms only apply to a particular variable if it is going
+ * to see further accesses later during this function execution
+ * (that is, an invariant may no longer be true if the particular variable
+ * won't be accessed anymore).
+ */
+ struct kvm_memory_slot *slotina, *slotact;
int r;
+ if (change != KVM_MR_CREATE) {
+ slotact = id_to_memslot(active, old->id);
+ if (WARN_ON_ONCE(!slotact))
+ return -EIO;
+ }
+
+ /*
+ * Modifications are done on a temporary, unreachable slot.
+ * The changes are then (eventually) propagated to both the
+ * active and inactive slots.
+ */
+ slotina = kzalloc(sizeof(*slotina), GFP_KERNEL_ACCOUNT);
+ if (!slotina)
+ return -ENOMEM;
+
/*
- * Released in install_new_memslots.
+ * Released in kvm_swap_active_memslots.
*
* Must be held from before the current memslots are copied until
* after the new memslots are installed with rcu_assign_pointer,
- * then released before the synchronize srcu in install_new_memslots.
+ * then released before the synchronize srcu in kvm_swap_active_memslots.
*
* When modifying memslots outside of the slots_lock, must be held
* before reading the pointer to the current memslots until after all
@@ -1619,68 +1518,145 @@ static int kvm_set_memslot(struct kvm *kvm,
*/
mutex_lock(&kvm->slots_arch_lock);
- slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
- if (!slots) {
- mutex_unlock(&kvm->slots_arch_lock);
- return -ENOMEM;
- }
-
if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
/*
- * Note, the INVALID flag needs to be in the appropriate entry
- * in the freshly allocated memslots, not in @old or @new.
+ * Mark the current slot INVALID.
+ * This must be done on the temporary slot to avoid
+ * modifying the current slot in the active tree.
*/
- slot = id_to_memslot(slots, old->id);
- slot->flags |= KVM_MEMSLOT_INVALID;
+ kvm_copy_memslot(slotina, slotact);
+ slotina->flags |= KVM_MEMSLOT_INVALID;
+ kvm_replace_memslot(inactive, slotact, slotina);
+
+ /*
+ * Activate the slot that is now marked INVALID, but don't
+ * propagate the slot to the now inactive slots. The slot is
+ * either going to be deleted or recreated as a new slot.
+ */
+ kvm_swap_active_memslots(kvm, as_id, &active, &inactive);
/*
- * We can re-use the memory from the old memslots.
- * It will be overwritten with a copy of the new memslots
- * after reacquiring the slots_arch_lock below.
+ * The temporary and current slot have swapped roles,
+ * slotina is now in the active set and slotact is not,
+ * so swap the variables appropriately, too.
*/
- slots = install_new_memslots(kvm, as_id, slots);
+ swap(slotina, slotact);
- /* From this point no new shadow pages pointing to a deleted,
+ /*
+ * From this point no new shadow pages pointing to a deleted,
* or moved, memslot will be created.
*
* validation of sp->gfn happens in:
* - gfn_to_hva (kvm_read_guest, gfn_to_pfn)
* - kvm_is_visible_gfn (mmu_check_root)
*/
- kvm_arch_flush_shadow_memslot(kvm, slot);
+ kvm_arch_flush_shadow_memslot(kvm, slotact);
- /* Released in install_new_memslots. */
+ /* Was released by kvm_swap_active_memslots, reacquire. */
mutex_lock(&kvm->slots_arch_lock);
+ }
+ if (change != KVM_MR_CREATE) {
/*
- * The arch-specific fields of the memslots could have changed
- * between releasing the slots_arch_lock in
- * install_new_memslots and here, so get a fresh copy of these
- * fields.
+ * The arch-specific fields of the memslot could have changed
+ * between reading them and taking slots_arch_lock in one of two
+ * places above.
+ * That includes @old and @new, which were read in __kvm_set_memory_region().
*/
- kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, as_id));
+ old->arch = new->arch = slotina->arch = slotact->arch;
}
r = kvm_arch_prepare_memory_region(kvm, old, new, mem, change);
- if (r)
- goto out_slots;
+ if (r) {
+ if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+ /*
+ * Revert the above INVALID change.
+ * No modifications required since the original slot
+ * was preserved in the inactive slots.
+ * This also frees the temporary slot and releases slots_arch_lock.
+ */
+ kvm_activate_memslot(kvm, as_id, &active, &inactive, slotact, slotina);
+ } else {
+ mutex_unlock(&kvm->slots_arch_lock);
+ kfree(slotina);
+ }
+ return r;
+ }
- update_memslots(slots, new, change);
- slots = install_new_memslots(kvm, as_id, slots);
+ if (change == KVM_MR_MOVE) {
+ /*
+ * The memslot's gfn is changing, remove it from the inactive
+ * tree, it will be re-added with its updated gfn. Because its
+ * range is changing, an in-place replace is not possible.
+ */
+ kvm_memslot_gfn_erase(inactive, slotina);
- kvm_arch_commit_memory_region(kvm, mem, old, new, change);
+ slotina->base_gfn = new->base_gfn;
+ slotina->flags = new->flags;
+ slotina->dirty_bitmap = new->dirty_bitmap;
+ /* kvm_arch_prepare_memory_region() might have modified arch */
+ slotina->arch = new->arch;
- kvfree(slots);
- return 0;
+ /* Re-add to the gfn tree with the updated gfn */
+ kvm_memslot_gfn_insert(inactive, slotina);
-out_slots:
- if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
- slots = install_new_memslots(kvm, as_id, slots);
+ /* Replace the current INVALID slot with the updated memslot. */
+ kvm_activate_memslot(kvm, as_id, &active, &inactive, slotact, slotina);
+ } else if (change == KVM_MR_FLAGS_ONLY) {
+ /*
+ * Similar to the MOVE case, but the slot doesn't need to be
+ * zapped as an intermediate step. Instead, the old memslot is
+ * simply replaced with a new, updated copy in both memslot sets.
+ *
+ * Since the memslot gfn is unchanged, kvm_copy_memslot() and
+ * kvm_replace_memslot() can be used: the latter switches the node
+ * in the gfn tree via kvm_memslot_gfn_replace() instead of removing
+ * the old and inserting the new as two separate operations.
+ * Replacement is a single O(1) operation versus two O(log(n))
+ * operations for remove+insert.
+ */
+ kvm_copy_memslot(slotina, slotact);
+ slotina->flags = new->flags;
+ slotina->dirty_bitmap = new->dirty_bitmap;
+ /* kvm_arch_prepare_memory_region() might have modified arch */
+ slotina->arch = new->arch;
+ kvm_replace_memslot(inactive, slotact, slotina);
+
+ kvm_activate_memslot(kvm, as_id, &active, &inactive, slotact, slotina);
+ } else if (change == KVM_MR_CREATE) {
+ /*
+ * Add the new memslot to the inactive set as a copy of the
+ * new memslot data provided by userspace.
+ */
+ kvm_copy_memslot(slotina, new);
+ kvm_replace_memslot(inactive, NULL, slotina);
+
+ kvm_activate_memslot(kvm, as_id, &active, &inactive, NULL, slotina);
+ } else if (change == KVM_MR_DELETE) {
+ /*
+ * Remove the old memslot (in the inactive memslots)
+ * by passing NULL as the new slot.
+ */
+ kvm_replace_memslot(inactive, slotina, NULL);
+ kvm_activate_memslot(kvm, as_id, &active, &inactive, slotact, NULL);
} else {
- mutex_unlock(&kvm->slots_arch_lock);
+ BUG();
}
- kvfree(slots);
- return r;
+
+ /*
+ * No need to refresh new->arch, since this code runs without
+ * slots_arch_lock anyway (it was released by the kvm_activate_memslot()
+ * call in one of the branches above).
+ */
+ kvm_arch_commit_memory_region(kvm, mem, old, new, change);
+
+ /*
+ * Free the memslot and its metadata.
+ * Note, slotact and slotina hold the same metadata, but slotact
+ * was freed by kvm_activate_memslot(). It's slotina's turn now.
+ */
+ if (change == KVM_MR_DELETE)
+ kvm_free_memslot(kvm, slotina);
+
+ return 0;
}
static int kvm_delete_memslot(struct kvm *kvm,
@@ -1688,7 +1664,6 @@ static int kvm_delete_memslot(struct kvm *kvm,
struct kvm_memory_slot *old, int as_id)
{
struct kvm_memory_slot new;
- int r;
if (!old->npages)
return -EINVAL;
@@ -1701,12 +1676,7 @@ static int kvm_delete_memslot(struct kvm *kvm,
*/
new.as_id = as_id;
- r = kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE);
- if (r)
- return r;
-
- kvm_free_memslot(kvm, old);
- return 0;
+ return kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE);
}
/*
@@ -1749,12 +1719,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
return -EINVAL;
- /*
- * Make a full copy of the old memslot, the pointer will become stale
- * when the memslots are re-sorted by update_memslots(), and the old
- * memslot needs to be referenced after calling update_memslots(), e.g.
- * to free its resources and for arch specific behavior.
- */
tmp = id_to_memslot(__kvm_memslots(kvm, as_id), id);
if (tmp) {
old = *tmp;
@@ -1800,8 +1764,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
}
if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
+ int bkt;
+
/* Check for overlaps */
- kvm_for_each_memslot(tmp, __kvm_memslots(kvm, as_id)) {
+ kvm_for_each_memslot(tmp, bkt, __kvm_memslots(kvm, as_id)) {
if (tmp->id == id)
continue;
if (!((new.base_gfn + new.npages <= tmp->base_gfn) ||
@@ -2138,21 +2104,30 @@ EXPORT_SYMBOL_GPL(gfn_to_memslot);
struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
{
struct kvm_memslots *slots = kvm_vcpu_memslots(vcpu);
+ u64 gen = slots->generation;
struct kvm_memory_slot *slot;
- int slot_index;
- slot = try_get_memslot(slots, vcpu->last_used_slot, gfn);
+ /*
+ * This also protects against using a memslot from a different address space,
+ * since different address spaces have different generation numbers.
+ */
+ if (unlikely(gen != vcpu->last_used_slot_gen)) {
+ vcpu->last_used_slot = NULL;
+ vcpu->last_used_slot_gen = gen;
+ }
+
+ slot = try_get_memslot(vcpu->last_used_slot, gfn);
if (slot)
return slot;
/*
* Fall back to searching all memslots. We purposely use
* search_memslots() instead of __gfn_to_memslot() to avoid
- * thrashing the VM-wide last_used_index in kvm_memslots.
+ * thrashing the VM-wide last_used_slot in kvm_memslots.
*/
- slot = search_memslots(slots, gfn, &slot_index, false);
+ slot = search_memslots(slots, gfn, false);
if (slot) {
- vcpu->last_used_slot = slot_index;
+ vcpu->last_used_slot = slot;
return slot;
}
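To make the slotina/slotact bookkeeping above easier to follow, here is a
minimal, self-contained sketch of the two-set swap idea (toy types and names,
not the kernel's actual structures; it also omits the SRCU grace period the
real code waits for before freeing):

#include <stdlib.h>

struct slot { unsigned long base_gfn; unsigned int flags; };

struct slot_set { struct slot *by_id[64]; };	/* toy "memslot set": id -> slot */

struct vm {
	struct slot_set sets[2];
	int active;				/* index of the currently active set */
};

/* Toy analogue of a FLAGS_ONLY update: prepare a detached copy ("slotina"),
 * install it into the inactive set, swap the sets so readers see the copy,
 * propagate it to the now-inactive set, then free the old slot ("slotact"). */
static void toy_set_flags(struct vm *vm, int id, unsigned int flags)
{
	struct slot_set *act = &vm->sets[vm->active];
	struct slot_set *ina = &vm->sets[vm->active ^ 1];
	struct slot *old = act->by_id[id];
	struct slot *new = malloc(sizeof(*new));

	if (!new)
		return;

	*new = *old;			/* work on a detached copy */
	new->flags = flags;
	ina->by_id[id] = new;		/* modify only the inactive set */
	vm->active ^= 1;		/* "swap": the copy becomes visible */
	act->by_id[id] = new;		/* propagate to the now-inactive set */
	free(old);			/* real code waits for SRCU readers first */
}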
From: "Maciej S. Szmigiero" <[email protected]>
Since kvm_memslot_move_forward() can theoretically return a negative
memslot index even when kvm_memslot_move_backward() returned a positive one
(and so did not WARN) let's just move the warning to the common code.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
virt/kvm/kvm_main.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 03ef42d2e421..7000efff1425 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1293,8 +1293,7 @@ static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
struct kvm_memory_slot *mslots = slots->memslots;
int i;
- if (WARN_ON_ONCE(slots->id_to_index[memslot->id] == -1) ||
- WARN_ON_ONCE(!slots->used_slots))
+ if (slots->id_to_index[memslot->id] == -1 || !slots->used_slots)
return -1;
/*
@@ -1398,6 +1397,9 @@ static void update_memslots(struct kvm_memslots *slots,
i = kvm_memslot_move_backward(slots, memslot);
i = kvm_memslot_move_forward(slots, memslot, i);
+ if (WARN_ON_ONCE(i < 0))
+ return;
+
/*
* Copy the memslot to its new position in memslots and update
* its index accordingly.
From: "Maciej S. Szmigiero" <[email protected]>
s390 arch has gfn_to_memslot_approx() which is almost identical to
search_memslots(), differing only in that in case the gfn falls in a hole
one of the memslots bordering the hole is returned.
Add this lookup mode as an option to search_memslots() so we don't have two
almost identical functions for looking up a memslot by its gfn.
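To make the "approx" semantics concrete before the diff below, here is a
small, self-contained sketch over a toy array (hypothetical names and
simplified types, not the kernel's implementation): an exact lookup returns
NULL when the gfn falls in a hole, while the approx variant returns one of
the memslots bordering the hole.

#include <stddef.h>

struct toy_slot { unsigned long base_gfn, npages; };

/* slots[] is sorted from highest base_gfn to lowest, like KVM's array here. */
static struct toy_slot *toy_search(struct toy_slot *slots, int used,
				   unsigned long gfn, int approx)
{
	int start = 0, end = used;

	if (!used)
		return NULL;

	while (start < end) {
		int mid = start + (end - start) / 2;

		if (gfn >= slots[mid].base_gfn)
			end = mid;
		else
			start = mid + 1;
	}

	if (start >= used)
		return approx ? &slots[used - 1] : NULL;

	if (gfn >= slots[start].base_gfn &&
	    gfn < slots[start].base_gfn + slots[start].npages)
		return &slots[start];			/* exact hit */

	return approx ? &slots[start] : NULL;		/* gfn is in a hole */
}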
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
arch/s390/kvm/kvm-s390.c | 39 ++-------------------------------------
include/linux/kvm_host.h | 25 ++++++++++++++++++++++---
virt/kvm/kvm_main.c | 2 +-
3 files changed, 25 insertions(+), 41 deletions(-)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 4bed65dbad5e..6b3f05086fae 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1941,41 +1941,6 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args)
/* for consistency */
#define KVM_S390_CMMA_SIZE_MAX ((u32)KVM_S390_SKEYS_MAX)
-/*
- * Similar to gfn_to_memslot, but returns the index of a memslot also when the
- * address falls in a hole. In that case the index of one of the memslots
- * bordering the hole is returned.
- */
-static int gfn_to_memslot_approx(struct kvm_memslots *slots, gfn_t gfn)
-{
- int start = 0, end = slots->used_slots;
- int slot = atomic_read(&slots->last_used_slot);
- struct kvm_memory_slot *memslots = slots->memslots;
-
- if (gfn >= memslots[slot].base_gfn &&
- gfn < memslots[slot].base_gfn + memslots[slot].npages)
- return slot;
-
- while (start < end) {
- slot = start + (end - start) / 2;
-
- if (gfn >= memslots[slot].base_gfn)
- end = slot;
- else
- start = slot + 1;
- }
-
- if (start >= slots->used_slots)
- return slots->used_slots - 1;
-
- if (gfn >= memslots[start].base_gfn &&
- gfn < memslots[start].base_gfn + memslots[start].npages) {
- atomic_set(&slots->last_used_slot, start);
- }
-
- return start;
-}
-
static int kvm_s390_peek_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
u8 *res, unsigned long bufsize)
{
@@ -2002,8 +1967,8 @@ static int kvm_s390_peek_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
unsigned long cur_gfn)
{
- int slotidx = gfn_to_memslot_approx(slots, cur_gfn);
- struct kvm_memory_slot *ms = slots->memslots + slotidx;
+ struct kvm_memory_slot *ms = __gfn_to_memslot_approx(slots, cur_gfn, true);
+ int slotidx = ms - slots->memslots;
unsigned long ofs = cur_gfn - ms->base_gfn;
if (ms->base_gfn + ms->npages <= cur_gfn) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2d62581b400e..6d0bbd6c8554 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1232,10 +1232,14 @@ try_get_memslot(struct kvm_memslots *slots, int slot_index, gfn_t gfn)
* Returns a pointer to the memslot that contains gfn and records the index of
* the slot in index. Otherwise returns NULL.
*
+ * With "approx" set returns the memslot also when the address falls
+ * in a hole. In that case one of the memslots bordering the hole is
+ * returned.
+ *
* IMPORTANT: Slots are sorted from highest GFN to lowest GFN!
*/
static inline struct kvm_memory_slot *
-search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index)
+search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index, bool approx)
{
int start = 0, end = slots->used_slots;
struct kvm_memory_slot *memslots = slots->memslots;
@@ -1253,11 +1257,20 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index)
start = slot + 1;
}
+ if (approx && start >= slots->used_slots) {
+ *index = slots->used_slots - 1;
+ return &memslots[slots->used_slots - 1];
+ }
+
slot = try_get_memslot(slots, start, gfn);
if (slot) {
*index = start;
return slot;
}
+ if (approx) {
+ *index = start;
+ return &memslots[start];
+ }
return NULL;
}
@@ -1268,7 +1281,7 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index)
* itself isn't here as an inline because that would bloat other code too much.
*/
static inline struct kvm_memory_slot *
-__gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
+__gfn_to_memslot_approx(struct kvm_memslots *slots, gfn_t gfn, bool approx)
{
struct kvm_memory_slot *slot;
int slot_index = atomic_read(&slots->last_used_slot);
@@ -1277,7 +1290,7 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
if (slot)
return slot;
- slot = search_memslots(slots, gfn, &slot_index);
+ slot = search_memslots(slots, gfn, &slot_index, approx);
if (slot) {
atomic_set(&slots->last_used_slot, slot_index);
return slot;
@@ -1286,6 +1299,12 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
return NULL;
}
+static inline struct kvm_memory_slot *
+__gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
+{
+ return __gfn_to_memslot_approx(slots, gfn, false);
+}
+
static inline unsigned long
__gfn_to_hva_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
{
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 207306f7c559..03ef42d2e421 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2074,7 +2074,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
* search_memslots() instead of __gfn_to_memslot() to avoid
* thrashing the VM-wide last_used_index in kvm_memslots.
*/
- slot = search_memslots(slots, gfn, &slot_index);
+ slot = search_memslots(slots, gfn, &slot_index, false);
if (slot) {
vcpu->last_used_slot = slot_index;
return slot;
From: "Maciej S. Szmigiero" <[email protected]>
This allows us to return a proper error code in case we spot an underflow.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
arch/x86/kvm/x86.c | 43 +++++++++++++++++++++++++------------------
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2ab0de7483ef..f39bf3c3a054 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11490,9 +11490,23 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
const struct kvm_userspace_memory_region *mem,
enum kvm_mr_change change)
{
- if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
- return kvm_alloc_memslot_metadata(kvm, new,
- mem->memory_size >> PAGE_SHIFT);
+ if (change == KVM_MR_CREATE || change == KVM_MR_MOVE) {
+ int ret;
+
+ ret = kvm_alloc_memslot_metadata(kvm, new,
+ mem->memory_size >> PAGE_SHIFT);
+ if (ret)
+ return ret;
+
+ if (change == KVM_MR_CREATE)
+ kvm->arch.n_memslots_pages += new->npages;
+ } else if (change == KVM_MR_DELETE) {
+ if (WARN_ON(kvm->arch.n_memslots_pages < old->npages))
+ return -EIO;
+
+ kvm->arch.n_memslots_pages -= old->npages;
+ }
+
return 0;
}
@@ -11589,22 +11603,15 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
const struct kvm_memory_slot *new,
enum kvm_mr_change change)
{
- if (change == KVM_MR_CREATE || change == KVM_MR_DELETE) {
- if (change == KVM_MR_CREATE)
- kvm->arch.n_memslots_pages += new->npages;
- else {
- WARN_ON(kvm->arch.n_memslots_pages < old->npages);
- kvm->arch.n_memslots_pages -= old->npages;
- }
-
- if (!kvm->arch.n_requested_mmu_pages) {
- unsigned long nr_mmu_pages;
+ /* Only CREATE or DELETE affects n_memslots_pages */
+ if ((change == KVM_MR_CREATE || change == KVM_MR_DELETE) &&
+ !kvm->arch.n_requested_mmu_pages) {
+ unsigned long nr_mmu_pages;
- nr_mmu_pages = kvm->arch.n_memslots_pages *
- KVM_PERMILLE_MMU_PAGES / 1000;
- nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
- kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
- }
+ nr_mmu_pages = kvm->arch.n_memslots_pages *
+ KVM_PERMILLE_MMU_PAGES / 1000;
+ nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
+ kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
}
kvm_mmu_slot_apply_flags(kvm, old, new, change);
From: "Maciej S. Szmigiero" <[email protected]>
And use it where s390 code would just access the memslot with the highest
gfn directly.
No functional change intended.
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
arch/s390/kvm/kvm-s390.c | 2 +-
arch/s390/kvm/kvm-s390.h | 12 ++++++++++++
arch/s390/kvm/pv.c | 4 +---
3 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 6b3f05086fae..9c3a85793ba4 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2006,7 +2006,7 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
if (!ms)
return 0;
next_gfn = kvm_s390_next_dirty_cmma(slots, cur_gfn + 1);
- mem_end = slots->memslots[0].base_gfn + slots->memslots[0].npages;
+ mem_end = kvm_s390_get_gfn_end(slots);
while (args->count < bufsize) {
hva = gfn_to_hva(kvm, cur_gfn);
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 9fad25109b0d..5787b12aff7e 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -208,6 +208,18 @@ static inline int kvm_s390_user_cpu_state_ctrl(struct kvm *kvm)
return kvm->arch.user_cpu_state_ctrl != 0;
}
+/* get the end gfn of the last (highest gfn) memslot */
+static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots)
+{
+ struct kvm_memory_slot *ms;
+
+ if (WARN_ON(!slots->used_slots))
+ return 0;
+
+ ms = slots->memslots;
+ return ms->base_gfn + ms->npages;
+}
+
/* implemented in pv.c */
int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc);
int kvm_s390_pv_create_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc);
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index c8841f476e91..e51cccfded25 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -117,7 +117,6 @@ static int kvm_s390_pv_alloc_vm(struct kvm *kvm)
unsigned long base = uv_info.guest_base_stor_len;
unsigned long virt = uv_info.guest_virt_var_stor_len;
unsigned long npages = 0, vlen = 0;
- struct kvm_memory_slot *memslot;
kvm->arch.pv.stor_var = NULL;
kvm->arch.pv.stor_base = __get_free_pages(GFP_KERNEL_ACCOUNT, get_order(base));
@@ -131,8 +130,7 @@ static int kvm_s390_pv_alloc_vm(struct kvm *kvm)
* Slots are sorted by GFN
*/
mutex_lock(&kvm->slots_lock);
- memslot = kvm_memslots(kvm)->memslots;
- npages = memslot->base_gfn + memslot->npages;
+ npages = kvm_s390_get_gfn_end(kvm_memslots(kvm));
mutex_unlock(&kvm->slots_lock);
kvm->arch.pv.guest_len = npages * PAGE_SIZE;
From: "Maciej S. Szmigiero" <[email protected]>
Memslot ID to memslot mappings are currently kept as indices in the
static id_to_index array.
The size of this array depends on the maximum allowed memslot count
(regardless of the number of memslots actually in use).
This has become especially problematic recently, when the memslot count
cap was removed, so the maximum count is now a full 32k memslots - the
maximum allowed by the current KVM API.
Keeping these IDs in a hash table (instead of an array) avoids this
problem.
Resolving a memslot ID to the actual memslot (instead of its index) will
also enable transitioning away from an array-based implementation of the
whole memslots structure in a later commit.
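As a rough back-of-the-envelope illustration (a standalone sketch with
assumed sizes, not the kernel's actual types): with ~32k possible slot IDs,
a static array of shorts costs about 64 KiB per memslot set even for a VM
with only a handful of slots, while a 7-bit hash table head array is 128
pointers, plus one embedded hlist_node per memslot actually in use.

#include <stdio.h>

#define MAX_SLOT_IDS	32768	/* "full 32k memslots" */
#define ID_HASH_BITS	7	/* DECLARE_HASHTABLE(id_hash, 7) */

int main(void)
{
	/* Static mapping: one short index per possible slot ID. */
	size_t array_bytes = MAX_SLOT_IDS * sizeof(short);

	/* Hash table heads: one pointer-sized bucket per 2^ID_HASH_BITS. */
	size_t hash_head_bytes = (1u << ID_HASH_BITS) * sizeof(void *);

	printf("id_to_index array:    %zu bytes per memslot set\n", array_bytes);
	printf("id_hash bucket heads: %zu bytes per memslot set\n", hash_head_bytes);
	return 0;
}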
Signed-off-by: Maciej S. Szmigiero <[email protected]>
---
include/linux/kvm_host.h | 16 +++++------
virt/kvm/kvm_main.c | 61 +++++++++++++++++++++++++++++++---------
2 files changed, 55 insertions(+), 22 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6d0bbd6c8554..d1279d599f2a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -29,6 +29,7 @@
#include <linux/refcount.h>
#include <linux/nospec.h>
#include <linux/notifier.h>
+#include <linux/hashtable.h>
#include <asm/signal.h>
#include <linux/kvm.h>
@@ -426,6 +427,7 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
#define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
struct kvm_memory_slot {
+ struct hlist_node id_node;
gfn_t base_gfn;
unsigned long npages;
unsigned long *dirty_bitmap;
@@ -528,7 +530,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
struct kvm_memslots {
u64 generation;
/* The mapping table from slot id to the index in memslots[]. */
- short id_to_index[KVM_MEM_SLOTS_NUM];
+ DECLARE_HASHTABLE(id_hash, 7);
atomic_t last_used_slot;
int used_slots;
struct kvm_memory_slot memslots[];
@@ -795,16 +797,14 @@ static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
static inline
struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
{
- int index = slots->id_to_index[id];
struct kvm_memory_slot *slot;
- if (index < 0)
- return NULL;
-
- slot = &slots->memslots[index];
+ hash_for_each_possible(slots->id_hash, slot, id_node, id) {
+ if (slot->id == id)
+ return slot;
+ }
- WARN_ON(slot->id != id);
- return slot;
+ return NULL;
}
/*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 272bc86a0e69..143dc95f496e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -839,15 +839,13 @@ static void kvm_destroy_pm_notifier(struct kvm *kvm)
static struct kvm_memslots *kvm_alloc_memslots(void)
{
- int i;
struct kvm_memslots *slots;
slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
if (!slots)
return NULL;
- for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
- slots->id_to_index[i] = -1;
+ hash_init(slots->id_hash);
return slots;
}
@@ -1248,14 +1246,16 @@ static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
/*
* Delete a memslot by decrementing the number of used slots and shifting all
* other entries in the array forward one spot.
+ * @memslot is a detached dummy struct with just .id and .as_id filled.
*/
static inline void kvm_memslot_delete(struct kvm_memslots *slots,
struct kvm_memory_slot *memslot)
{
struct kvm_memory_slot *mslots = slots->memslots;
+ struct kvm_memory_slot *oldslot = id_to_memslot(slots, memslot->id);
int i;
- if (WARN_ON(slots->id_to_index[memslot->id] == -1))
+ if (WARN_ON(!oldslot))
return;
slots->used_slots--;
@@ -1263,12 +1263,13 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
if (atomic_read(&slots->last_used_slot) >= slots->used_slots)
atomic_set(&slots->last_used_slot, 0);
- for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
+ for (i = oldslot - mslots; i < slots->used_slots; i++) {
+ hash_del(&mslots[i].id_node);
mslots[i] = mslots[i + 1];
- slots->id_to_index[mslots[i].id] = i;
+ hash_add(slots->id_hash, &mslots[i].id_node, mslots[i].id);
}
+ hash_del(&mslots[i].id_node);
mslots[i] = *memslot;
- slots->id_to_index[memslot->id] = -1;
}
/*
@@ -1286,30 +1287,46 @@ static inline int kvm_memslot_insert_back(struct kvm_memslots *slots)
* itself is not preserved in the array, i.e. not swapped at this time, only
* its new index into the array is tracked. Returns the changed memslot's
* current index into the memslots array.
+ * The memslot at the returned index will not be in @slots->id_hash by then.
+ * @memslot is a detached struct with desired final data of the changed slot.
*/
static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
struct kvm_memory_slot *memslot)
{
struct kvm_memory_slot *mslots = slots->memslots;
+ struct kvm_memory_slot *mmemslot = id_to_memslot(slots, memslot->id);
int i;
- if (slots->id_to_index[memslot->id] == -1 || !slots->used_slots)
+ if (!mmemslot || !slots->used_slots)
return -1;
+ /*
+ * The loop below will (possibly) overwrite the target memslot with
+ * data of the next memslot, or a similar loop in
+ * kvm_memslot_move_forward() will overwrite it with data of the
+ * previous memslot.
+ * Then update_memslots() will unconditionally overwrite and re-add
+ * it to the hash table.
+ * That's why the memslot has to be first removed from the hash table
+ * here.
+ */
+ hash_del(&mmemslot->id_node);
+
/*
* Move the target memslot backward in the array by shifting existing
* memslots with a higher GFN (than the target memslot) towards the
* front of the array.
*/
- for (i = slots->id_to_index[memslot->id]; i < slots->used_slots - 1; i++) {
+ for (i = mmemslot - mslots; i < slots->used_slots - 1; i++) {
if (memslot->base_gfn > mslots[i + 1].base_gfn)
break;
WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
/* Shift the next memslot forward one and update its index. */
+ hash_del(&mslots[i + 1].id_node);
mslots[i] = mslots[i + 1];
- slots->id_to_index[mslots[i].id] = i;
+ hash_add(slots->id_hash, &mslots[i].id_node, mslots[i].id);
}
return i;
}
@@ -1320,6 +1337,10 @@ static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
* is not preserved in the array, i.e. not swapped at this time, only its new
* index into the array is tracked. Returns the changed memslot's final index
* into the memslots array.
+ * The memslot at the returned index will not be in @slots->id_hash by then.
+ * @memslot is a detached struct with desired final data of the new or
+ * changed slot.
+ * Assumes that the memslot at @start index is not in @slots->id_hash.
*/
static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
struct kvm_memory_slot *memslot,
@@ -1335,8 +1356,9 @@ static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
WARN_ON_ONCE(memslot->base_gfn == mslots[i - 1].base_gfn);
/* Shift the next memslot back one and update its index. */
+ hash_del(&mslots[i - 1].id_node);
mslots[i] = mslots[i - 1];
- slots->id_to_index[mslots[i].id] = i;
+ hash_add(slots->id_hash, &mslots[i].id_node, mslots[i].id);
}
return i;
}
@@ -1381,6 +1403,9 @@ static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
* most likely to be referenced, sorting it to the front of the array was
* advantageous. The current binary search starts from the middle of the array
* and uses an LRU pointer to improve performance for all memslots and GFNs.
+ *
+ * @memslot is a detached struct, not a part of the current or new memslot
+ * array.
*/
static void update_memslots(struct kvm_memslots *slots,
struct kvm_memory_slot *memslot,
@@ -1405,7 +1430,8 @@ static void update_memslots(struct kvm_memslots *slots,
* its index accordingly.
*/
slots->memslots[i] = *memslot;
- slots->id_to_index[memslot->id] = i;
+ hash_add(slots->id_hash, &slots->memslots[i].id_node,
+ memslot->id);
}
}
@@ -1513,6 +1539,7 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
{
struct kvm_memslots *slots;
size_t new_size;
+ struct kvm_memory_slot *memslot;
if (change == KVM_MR_CREATE)
new_size = kvm_memslots_size(old->used_slots + 1);
@@ -1520,8 +1547,14 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
new_size = kvm_memslots_size(old->used_slots);
slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
- if (likely(slots))
- kvm_copy_memslots(slots, old);
+ if (unlikely(!slots))
+ return NULL;
+
+ kvm_copy_memslots(slots, old);
+
+ hash_init(slots->id_hash);
+ kvm_for_each_memslot(memslot, slots)
+ hash_add(slots->id_hash, &memslot->id_node, memslot->id);
return slots;
}
On Fri, 13 Aug 2021 21:33:23 +0200
"Maciej S. Szmigiero" <[email protected]> wrote:
> From: "Maciej S. Szmigiero" <[email protected]>
>
> And use it where s390 code would just access the memslot with the highest
> gfn directly.
>
> No functional change intended.
>
> Signed-off-by: Maciej S. Szmigiero <[email protected]>
Reviewed-by: Claudio Imbrenda <[email protected]>
On Fri, 13 Aug 2021 21:33:19 +0200
"Maciej S. Szmigiero" <[email protected]> wrote:
> From: "Maciej S. Szmigiero" <[email protected]>
>
> Since kvm_memslot_move_forward() can theoretically return a negative
> memslot index even when kvm_memslot_move_backward() returned a positive one
> (and so did not WARN) let's just move the warning to the common code.
>
> Signed-off-by: Maciej S. Szmigiero <[email protected]>
Reviewed-by: Claudio Imbrenda <[email protected]>
On 13.08.21 21:33, Maciej S. Szmigiero wrote:
> From: "Maciej S. Szmigiero" <[email protected]>
>
> Since kvm_memslot_move_forward() can theoretically return a negative
> memslot index even when kvm_memslot_move_backward() returned a positive one
> (and so did not WARN) let's just move the warning to the common code.
>
> Signed-off-by: Maciej S. Szmigiero <[email protected]>
Note that WARN_ON_* is frowned upon, because it can result in crashes
with panic_on_warn enabled, which is what some distributions do enable.
We tend to work around that by using pr_warn()/pr_warn_once(), avoiding
eventually crashing the system when there is a way to continue.
--
Thanks,
David / dhildenb
On Fri, 13 Aug 2021 21:33:18 +0200
"Maciej S. Szmigiero" <[email protected]> wrote:
> From: "Maciej S. Szmigiero" <[email protected]>
>
> s390 arch has gfn_to_memslot_approx() which is almost identical to
> search_memslots(), differing only in that in case the gfn falls in a hole
> one of the memslots bordering the hole is returned.
>
> Add this lookup mode as an option to search_memslots() so we don't have two
> almost identical functions for looking up a memslot by its gfn.
>
> Signed-off-by: Maciej S. Szmigiero <[email protected]>
Reviewed-by: Claudio Imbrenda <[email protected]>
On 18.08.2021 16:35, David Hildenbrand wrote:
> On 13.08.21 21:33, Maciej S. Szmigiero wrote:
>> From: "Maciej S. Szmigiero" <[email protected]>
>>
>> Since kvm_memslot_move_forward() can theoretically return a negative
>> memslot index even when kvm_memslot_move_backward() returned a positive one
>> (and so did not WARN) let's just move the warning to the common code.
>>
>> Signed-off-by: Maciej S. Szmigiero <[email protected]>
>
>
> Note that WARN_ON_* is frowned upon, because it can result in crashes with panic_on_warn enabled, which is what some distributions do enable.
>
> We tend to work around that by using pr_warn()/pr_warn_once(), avoiding eventually crashing the system when there is a way to continue.
>
This patch uses WARN_ON_ONCE because:
1) It was used in the old code and the patch merely moves the check
from kvm_memslot_move_backward() to its caller,
2) This chunk of code is wholly replaced by patch 11 from this series
anyway ("Keep memslots in tree-based structures instead of array-based ones").
Thanks,
Maciej
On 18.08.21 23:43, Maciej S. Szmigiero wrote:
> On 18.08.2021 16:35, David Hildenbrand wrote:
>> On 13.08.21 21:33, Maciej S. Szmigiero wrote:
>>> From: "Maciej S. Szmigiero" <[email protected]>
>>>
>>> Since kvm_memslot_move_forward() can theoretically return a negative
>>> memslot index even when kvm_memslot_move_backward() returned a positive one
>>> (and so did not WARN) let's just move the warning to the common code.
>>>
>>> Signed-off-by: Maciej S. Szmigiero <[email protected]>
>>
>>
>> Note that WARN_ON_* is frowned upon, because it can result in crashes with panic_on_warn enabled, which is what some distributions do enable.
>>
>> We tend to work around that by using pr_warn()/pr_warn_once(), avoiding eventually crashing the system when there is a way to continue.
>>
>
> This patch uses WARN_ON_ONCE because:
> 1) It was used in the old code and the patch merely moves the check
> from kvm_memslot_move_backward() to its caller,
>
> 2) This chunk of code is wholly replaced by patch 11 from this series
> anyway ("Keep memslots in tree-based structures instead of array-based ones").
Okay, that makes sense then, thanks!
--
Thanks,
David / dhildenb