2014-11-14 11:12:35

by Paolo Bonzini

Subject: [PATCH 0/3] KVM: simplification to the memslots code

Hi Igor and Takuya,

here are a few small patches that simplify __kvm_set_memory_region
and associated code. Can you please review them?

Thanks,

Paolo

Paolo Bonzini (3):
kvm: memslots: track id_to_index changes during the insertion sort
kvm: commonize allocation of the new memory slots
kvm: simplify update_memslots invocation

virt/kvm/kvm_main.c | 87 ++++++++++++++++++++++-------------------------------
1 file changed, 36 insertions(+), 51 deletions(-)

--
1.8.3.1


2014-11-14 11:12:38

by Paolo Bonzini

Subject: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort

This completes the optimization from the previous patch, by
removing the KVM_MEM_SLOTS_NUM-iteration loop from insert_memslot.

Signed-off-by: Paolo Bonzini <[email protected]>
---
virt/kvm/kvm_main.c | 39 +++++++++++++++++++--------------------
1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c0c2202e6c4f..c8ff99cc0ccb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -677,31 +677,30 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
static void insert_memslot(struct kvm_memslots *slots,
struct kvm_memory_slot *new)
{
- int i = slots->id_to_index[new->id];
- struct kvm_memory_slot *old = id_to_memslot(slots, new->id);
+ int id = new->id;
+ int i = slots->id_to_index[id];
struct kvm_memory_slot *mslots = slots->memslots;

- if (new->npages == old->npages) {
- *old = *new;
- return;
- }
-
- while (1) {
- if (i < (KVM_MEM_SLOTS_NUM - 1) &&
- new->npages < mslots[i + 1].npages) {
- mslots[i] = mslots[i + 1];
- i++;
- } else if (i > 0 && new->npages > mslots[i - 1].npages) {
- mslots[i] = mslots[i - 1];
- i--;
- } else {
- mslots[i] = *new;
- break;
+ WARN_ON(mslots[i].id != id);
+ if (new->npages != mslots[i].npages) {
+ while (1) {
+ if (i < (KVM_MEM_SLOTS_NUM - 1) &&
+ new->npages < mslots[i + 1].npages) {
+ mslots[i] = mslots[i + 1];
+ slots->id_to_index[mslots[i].id] = i;
+ i++;
+ } else if (i > 0 &&
+ new->npages > mslots[i - 1].npages) {
+ mslots[i] = mslots[i - 1];
+ slots->id_to_index[mslots[i].id] = i;
+ i--;
+ } else
+ break;
}
}

- for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
- slots->id_to_index[slots->memslots[i].id] = i;
+ mslots[i] = *new;
+ slots->id_to_index[mslots[i].id] = i;
}

static void update_memslots(struct kvm_memslots *slots,
--
1.8.3.1

2014-11-14 11:12:42

by Paolo Bonzini

Subject: [PATCH 3/3] kvm: simplify update_memslots invocation

The update_memslots invocation is only needed in one case. Make
the code clearer by moving it to __kvm_set_memory_region, and
removing the wrapper around insert_memslot.

Signed-off-by: Paolo Bonzini <[email protected]>
---
virt/kvm/kvm_main.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7bfc842b96d7..a5ed3a9f7522 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -674,8 +674,8 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
* takes advantage of having initially sorted array and
* known changed memslot position.
*/
-static void insert_memslot(struct kvm_memslots *slots,
- struct kvm_memory_slot *new)
+static void update_memslots(struct kvm_memslots *slots,
+ struct kvm_memory_slot *new)
{
int id = new->id;
int i = slots->id_to_index[id];
@@ -703,14 +703,6 @@ static void insert_memslot(struct kvm_memslots *slots,
slots->id_to_index[mslots[i].id] = i;
}

-static void update_memslots(struct kvm_memslots *slots,
- struct kvm_memory_slot *new)
-{
- if (new) {
- insert_memslot(slots, new);
- }
-}
-
static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
{
u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
@@ -726,7 +718,7 @@ static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
}

static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
- struct kvm_memslots *slots, struct kvm_memory_slot *new)
+ struct kvm_memslots *slots)
{
struct kvm_memslots *old_memslots = kvm->memslots;

@@ -737,7 +729,6 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
WARN_ON(old_memslots->generation & 1);
slots->generation = old_memslots->generation + 1;

- update_memslots(slots, new);
rcu_assign_pointer(kvm->memslots, slots);
synchronize_srcu_expedited(&kvm->srcu);

@@ -874,7 +865,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
slot = id_to_memslot(slots, mem->slot);
slot->flags |= KVM_MEMSLOT_INVALID;

- old_memslots = install_new_memslots(kvm, slots, NULL);
+ old_memslots = install_new_memslots(kvm, slots);

/* slot was deleted or moved, clear iommu mapping */
kvm_iommu_unmap_pages(kvm, &old);
@@ -905,7 +896,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
memset(&new.arch, 0, sizeof(new.arch));
}

- old_memslots = install_new_memslots(kvm, slots, &new);
+ update_memslots(slots, &new);
+ old_memslots = install_new_memslots(kvm, slots);

kvm_arch_commit_memory_region(kvm, mem, &old, change);

--
1.8.3.1

2014-11-14 11:13:09

by Paolo Bonzini

Subject: [PATCH 2/3] kvm: commonize allocation of the new memory slots

The two kmemdup invocations can be unified. I find that the new
placement of the comment makes it easier to see what happens.

Signed-off-by: Paolo Bonzini <[email protected]>
---
virt/kvm/kvm_main.c | 28 +++++++++++-----------------
1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c8ff99cc0ccb..7bfc842b96d7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -865,11 +865,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
goto out_free;
}

+ slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
+ GFP_KERNEL);
+ if (!slots)
+ goto out_free;
+
if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
- slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
- GFP_KERNEL);
- if (!slots)
- goto out_free;
slot = id_to_memslot(slots, mem->slot);
slot->flags |= KVM_MEMSLOT_INVALID;

@@ -885,6 +886,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
* - kvm_is_visible_gfn (mmu_check_roots)
*/
kvm_arch_flush_shadow_memslot(kvm, slot);
+
+ /*
+ * We can re-use the old_memslots from above, the only difference
+ * from the currently installed memslots is the invalid flag. This
+ * will get overwritten by update_memslots anyway.
+ */
slots = old_memslots;
}

@@ -892,19 +899,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
if (r)
goto out_slots;

- r = -ENOMEM;
- /*
- * We can re-use the old_memslots from above, the only difference
- * from the currently installed memslots is the invalid flag. This
- * will get overwritten by update_memslots anyway.
- */
- if (!slots) {
- slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
- GFP_KERNEL);
- if (!slots)
- goto out_free;
- }
-
/* actual memory is freed via old in kvm_free_physmem_slot below */
if (change == KVM_MR_DELETE) {
new.dirty_bitmap = NULL;
--
1.8.3.1

2014-11-14 13:35:18

by Igor Mammedov

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort

On Fri, 14 Nov 2014 12:12:00 +0100
Paolo Bonzini <[email protected]> wrote:

> This completes the optimization from the previous patch, by
> removing the KVM_MEM_SLOTS_NUM-iteration loop from insert_memslot.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> virt/kvm/kvm_main.c | 39 +++++++++++++++++++--------------------
> 1 file changed, 19 insertions(+), 20 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c0c2202e6c4f..c8ff99cc0ccb 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -677,31 +677,30 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> static void insert_memslot(struct kvm_memslots *slots,
> struct kvm_memory_slot *new)
> {
> - int i = slots->id_to_index[new->id];
> - struct kvm_memory_slot *old = id_to_memslot(slots, new->id);
> + int id = new->id;
> + int i = slots->id_to_index[id];
> struct kvm_memory_slot *mslots = slots->memslots;
>
> - if (new->npages == old->npages) {
> - *old = *new;
> - return;
> - }
> -
> - while (1) {
> - if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> - new->npages < mslots[i + 1].npages) {
> - mslots[i] = mslots[i + 1];
> - i++;
> - } else if (i > 0 && new->npages > mslots[i - 1].npages) {
> - mslots[i] = mslots[i - 1];
> - i--;
> - } else {
> - mslots[i] = *new;
> - break;
> + WARN_ON(mslots[i].id != id);
> + if (new->npages != mslots[i].npages) {
> + while (1) {
> + if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> + new->npages < mslots[i + 1].npages) {
> + mslots[i] = mslots[i + 1];
> + slots->id_to_index[mslots[i].id] = i;
> + i++;
> + } else if (i > 0 &&
> + new->npages > mslots[i - 1].npages) {
> + mslots[i] = mslots[i - 1];
> + slots->id_to_index[mslots[i].id] = i;
> + i--;
> + } else
> + break;
> }
> }
>
> - for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
> - slots->id_to_index[slots->memslots[i].id] = i;
> + mslots[i] = *new;
> + slots->id_to_index[mslots[i].id] = i;
> }
>
> static void update_memslots(struct kvm_memslots *slots,

Reviewed-by: Igor Mammedov <[email protected]>

2014-11-14 13:35:43

by Radim Krčmář

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort

2014-11-14 12:12+0100, Paolo Bonzini:
> This completes the optimization from the previous patch, by
> removing the KVM_MEM_SLOTS_NUM-iteration loop from insert_memslot.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> virt/kvm/kvm_main.c | 39 +++++++++++++++++++--------------------
> 1 file changed, 19 insertions(+), 20 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c0c2202e6c4f..c8ff99cc0ccb 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -677,31 +677,30 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> static void insert_memslot(struct kvm_memslots *slots,
> struct kvm_memory_slot *new)
> {
> - int i = slots->id_to_index[new->id];
> - struct kvm_memory_slot *old = id_to_memslot(slots, new->id);
> + int id = new->id;
> + int i = slots->id_to_index[id];
> struct kvm_memory_slot *mslots = slots->memslots;
>
> - if (new->npages == old->npages) {
> - *old = *new;
> - return;
> - }
> -
> - while (1) {
> - if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> - new->npages < mslots[i + 1].npages) {
> - mslots[i] = mslots[i + 1];
> - i++;
> - } else if (i > 0 && new->npages > mslots[i - 1].npages) {
> - mslots[i] = mslots[i - 1];
> - i--;
> - } else {
> - mslots[i] = *new;
> - break;
> + WARN_ON(mslots[i].id != id);
> + if (new->npages != mslots[i].npages) {
> + while (1) {
> + if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> + new->npages < mslots[i + 1].npages) {
(^^^^ whitespace error)
> + mslots[i] = mslots[i + 1];
> + slots->id_to_index[mslots[i].id] = i;
> + i++;
> + } else if (i > 0 &&
> + new->npages > mslots[i - 1].npages) {
> + mslots[i] = mslots[i - 1];
> + slots->id_to_index[mslots[i].id] = i;
> + i--;
> + } else
> + break;

We are replacing in a sorted array, so the direction of our
traversal doesn't change (and we could lose one tab level here):

if (new->npages < mslots[i].npages) {
while (i < (KVM_MEM_SLOTS_NUM - 1) &&
new->npages < mslots[i + 1].npages) {
mslots[i] = mslots[i + 1];
slots->id_to_index[mslots[i].id] = i;
i++;
}
} else if (new->npages > mslots[i].npages) {
while (i > 0 &&
new->npages > mslots[i - 1].npages) {
mslots[i] = mslots[i - 1];
slots->id_to_index[mslots[i].id] = i;
i--;
}
}

(I guess you don't want me to abstract these two loops further :)

If the probability of slots with same npages was high, we could also
move just the last one from each group, but I think that the current
algorithm is already faster than we need.

(We'll have to change it into an interval tree, or something, if the
number of slots rises anyway.)

2014-11-14 14:17:35

by Igor Mammedov

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort

On Fri, 14 Nov 2014 14:35:00 +0100
Radim Krčmář <[email protected]> wrote:

> 2014-11-14 12:12+0100, Paolo Bonzini:
> > This completes the optimization from the previous patch, by
> > removing the KVM_MEM_SLOTS_NUM-iteration loop from insert_memslot.
> >
> > Signed-off-by: Paolo Bonzini <[email protected]>
> > ---
> > virt/kvm/kvm_main.c | 39 +++++++++++++++++++--------------------
> > 1 file changed, 19 insertions(+), 20 deletions(-)
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index c0c2202e6c4f..c8ff99cc0ccb 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -677,31 +677,30 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> > static void insert_memslot(struct kvm_memslots *slots,
> > struct kvm_memory_slot *new)
> > {
> > - int i = slots->id_to_index[new->id];
> > - struct kvm_memory_slot *old = id_to_memslot(slots, new->id);
> > + int id = new->id;
> > + int i = slots->id_to_index[id];
> > struct kvm_memory_slot *mslots = slots->memslots;
> >
> > - if (new->npages == old->npages) {
> > - *old = *new;
> > - return;
> > - }
> > -
> > - while (1) {
> > - if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> > - new->npages < mslots[i + 1].npages) {
> > - mslots[i] = mslots[i + 1];
> > - i++;
> > - } else if (i > 0 && new->npages > mslots[i - 1].npages) {
> > - mslots[i] = mslots[i - 1];
> > - i--;
> > - } else {
> > - mslots[i] = *new;
> > - break;
> > + WARN_ON(mslots[i].id != id);
> > + if (new->npages != mslots[i].npages) {
> > + while (1) {
> > + if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> > + new->npages < mslots[i + 1].npages) {
> (^^^^ whitespace error)
> > + mslots[i] = mslots[i + 1];
> > + slots->id_to_index[mslots[i].id] = i;
> > + i++;
> > + } else if (i > 0 &&
> > + new->npages > mslots[i - 1].npages) {
> > + mslots[i] = mslots[i - 1];
> > + slots->id_to_index[mslots[i].id] = i;
> > + i--;
> > + } else
> > + break;
>
> We are replacing in a sorted array, so the direction of our
> traversal doesn't change (and we could lose one tab level here):
>
> if (new->npages < mslots[i].npages) {
> while (i < (KVM_MEM_SLOTS_NUM - 1) &&
> new->npages < mslots[i + 1].npages) {
> mslots[i] = mslots[i + 1];
> slots->id_to_index[mslots[i].id] = i;
> i++;
> }
> } else if (new->npages > mslots[i].npages) {
> while (i > 0 &&
> new->npages > mslots[i - 1].npages) {
> mslots[i] = mslots[i - 1];
> slots->id_to_index[mslots[i].id] = i;
> i--;
> }
> }
>
> (I guess you don't want me to abstract these two loops further :)
>
> If the probability of slots with same npages was high, we could also
> move just the last one from each group, but I think that the current
> algorithm is already faster than we need.
>
> (We'll have to change it into an interval tree, or something, if the
> number of slots rises anyway.)
Only if it rises to a huge amount. I've played with the proposed 512
memslots and it takes ~10000 cycles, which is 5% of the current heapsort
overhead. Taking into account that it's a slow path anyway, it's
unlikely that this case would ever need to be sped up further.

2014-11-14 14:21:51

by Igor Mammedov

Subject: Re: [PATCH 2/3] kvm: commonize allocation of the new memory slots

On Fri, 14 Nov 2014 12:12:01 +0100
Paolo Bonzini <[email protected]> wrote:

> The two kmemdup invocations can be unified. I find that the new
> placement of the comment makes it easier to see what happens.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> virt/kvm/kvm_main.c | 28 +++++++++++-----------------
> 1 file changed, 11 insertions(+), 17 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c8ff99cc0ccb..7bfc842b96d7 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -865,11 +865,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
> goto out_free;
> }
>
> + slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> + GFP_KERNEL);
> + if (!slots)
> + goto out_free;
> +
> if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
> - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> - GFP_KERNEL);
> - if (!slots)
> - goto out_free;
> slot = id_to_memslot(slots, mem->slot);
> slot->flags |= KVM_MEMSLOT_INVALID;
>
> @@ -885,6 +886,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
> * - kvm_is_visible_gfn (mmu_check_roots)
> */
> kvm_arch_flush_shadow_memslot(kvm, slot);
> +
> + /*
> + * We can re-use the old_memslots from above, the only difference
> + * from the currently installed memslots is the invalid flag. This
> + * will get overwritten by update_memslots anyway.
> + */
> slots = old_memslots;
> }
>
> @@ -892,19 +899,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
> if (r)
> goto out_slots;
>
> - r = -ENOMEM;
> - /*
> - * We can re-use the old_memslots from above, the only difference
> - * from the currently installed memslots is the invalid flag. This
> - * will get overwritten by update_memslots anyway.
> - */
> - if (!slots) {
> - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> - GFP_KERNEL);
> - if (!slots)
> - goto out_free;
> - }
> -
> /* actual memory is freed via old in kvm_free_physmem_slot below */
> if (change == KVM_MR_DELETE) {
> new.dirty_bitmap = NULL;

Reviewed-by: Igor Mammedov <[email protected]>

2014-11-14 14:24:35

by Igor Mammedov

Subject: Re: [PATCH 3/3] kvm: simplify update_memslots invocation

On Fri, 14 Nov 2014 12:12:02 +0100
Paolo Bonzini <[email protected]> wrote:

> The update_memslots invocation is only needed in one case. Make
> the code clearer by moving it to __kvm_set_memory_region, and
> removing the wrapper around insert_memslot.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> virt/kvm/kvm_main.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7bfc842b96d7..a5ed3a9f7522 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -674,8 +674,8 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> * takes advantage of having initially sorted array and
> * known changed memslot position.
> */
> -static void insert_memslot(struct kvm_memslots *slots,
> - struct kvm_memory_slot *new)
> +static void update_memslots(struct kvm_memslots *slots,
> + struct kvm_memory_slot *new)
> {
> int id = new->id;
> int i = slots->id_to_index[id];
> @@ -703,14 +703,6 @@ static void insert_memslot(struct kvm_memslots *slots,
> slots->id_to_index[mslots[i].id] = i;
> }
>
> -static void update_memslots(struct kvm_memslots *slots,
> - struct kvm_memory_slot *new)
> -{
> - if (new) {
> - insert_memslot(slots, new);
> - }
> -}
> -
> static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
> {
> u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
> @@ -726,7 +718,7 @@ static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
> }
>
> static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
> - struct kvm_memslots *slots, struct kvm_memory_slot *new)
> + struct kvm_memslots *slots)
> {
> struct kvm_memslots *old_memslots = kvm->memslots;
>
> @@ -737,7 +729,6 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
> WARN_ON(old_memslots->generation & 1);
> slots->generation = old_memslots->generation + 1;
> - update_memslots(slots, new);
> rcu_assign_pointer(kvm->memslots, slots);
> synchronize_srcu_expedited(&kvm->srcu);
>
> @@ -874,7 +865,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> slot = id_to_memslot(slots, mem->slot);
> slot->flags |= KVM_MEMSLOT_INVALID;
>
> - old_memslots = install_new_memslots(kvm, slots, NULL);
> + old_memslots = install_new_memslots(kvm, slots);
>
> /* slot was deleted or moved, clear iommu mapping */
> kvm_iommu_unmap_pages(kvm, &old);
> @@ -905,7 +896,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
> memset(&new.arch, 0, sizeof(new.arch));
> }
>
> - old_memslots = install_new_memslots(kvm, slots, &new);
> + update_memslots(slots, &new);
> + old_memslots = install_new_memslots(kvm, slots);
>
> kvm_arch_commit_memory_region(kvm, mem, &old, change);
>

Reviewed-by: Igor Mammedov <[email protected]>

2014-11-14 14:30:06

by Paolo Bonzini

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort



On 14/11/2014 14:35, Radim Krčmář wrote:
> We are replacing in a sorted array, so the direction of our
> traversal doesn't change (and we could lose one tab level here):
>
> if (new->npages < mslots[i].npages) {
> while (i < (KVM_MEM_SLOTS_NUM - 1) &&
> new->npages < mslots[i + 1].npages) {
> mslots[i] = mslots[i + 1];
> slots->id_to_index[mslots[i].id] = i;
> i++;
> }
> } else if (new->npages > mslots[i].npages) {
> while (i > 0 &&
> new->npages > mslots[i - 1].npages) {
> mslots[i] = mslots[i - 1];
> slots->id_to_index[mslots[i].id] = i;
> i--;
> }
> }
>
> (I guess you don't want me to abstract these two loops further :)

Right. You do not need the "else if" as long as you keep the outer "if
(new->npages != mslots[i].npages)".

> (We'll have to change it into an interval tree, or something, if the
> number of slots rises anyway.)

I don't think that's needed, actually. gfn_to_page and gfn_to_memslot
are very rarely in the profiles with EPT.

Paolo

2014-11-14 14:41:42

by Radim Krčmář

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort

2014-11-14 15:17+0100, Igor Mammedov:
> > (We'll have to change it into an interval tree, or something, if the
> > number of slots rises anyway.)
> Only if it rises to a huge amount. I've played with the proposed 512
> memslots and it takes ~10000 cycles, which is 5% of the current heapsort
> overhead. Taking into account that it's a slow path anyway, it's
> unlikely that this case would ever need to be sped up further.

Yes, your improvement is great and would work even for higher amounts.

I meant that our lookup is currently pretty sad -- O(N) that is
presumably optimized by looking at the largest regions first.

Maybe we would benefit from O(log N) lookup even with 128 memslots.

2014-11-14 14:44:19

by Paolo Bonzini

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort



On 14/11/2014 15:41, Radim Krčmář wrote:
> Yes, your improvement is great and would work even for higher amounts.
>
> I meant that our lookup is currently pretty sad -- O(N) that is
> presumably optimized by looking at the largest regions first.

Yes, that's the optimization.

> Maybe we would benefit from O(log N) lookup even with 128 memslots.

Maybe, but the common case so far is about 10, and all but two of them
are only used at boot time. :)

Perhaps we could add a one-item MRU cache, that could help lookups a
bit. That's what QEMU does too, by the way. It used to sort the list
in MRU-to-LRU order, but that wasn't too thread-friendly so it was
switched to biggest-to-smallest (with inspiration taken from KVM).

Paolo

2014-11-14 14:44:30

by Radim Krčmář

Subject: Re: [PATCH 1/3] kvm: memslots: track id_to_index changes during the insertion sort

2014-11-14 15:29+0100, Paolo Bonzini:
> On 14/11/2014 14:35, Radim Krčmář wrote:
> > We are replacing in a sorted array, so the direction of our
> > traversal doesn't change (and we could lose one tab level here):
> >
> > if (new->npages < mslots[i].npages) {
> > while (i < (KVM_MEM_SLOTS_NUM - 1) &&
> > new->npages < mslots[i + 1].npages) {
> > mslots[i] = mslots[i + 1];
> > slots->id_to_index[mslots[i].id] = i;
> > i++;
> > }
> > } else if (new->npages > mslots[i].npages) {
> > while (i > 0 &&
> > new->npages > mslots[i - 1].npages) {
> > mslots[i] = mslots[i - 1];
> > slots->id_to_index[mslots[i].id] = i;
> > i--;
> > }
> > }
> >
> > (I guess you don't want me to abstract these two loops further :)
>
> Right. You do not need the "else if" as long as you keep the outer "if
> (new->npages != mslots[i].npages)".

True, I'm just an indentation hater.

> > (We'll have to change it into an interval tree, or something, if the
> > number of slots rises anyway.)
>
> I don't think that's needed, actually. gfn_to_page and gfn_to_memslot
> are very rarely in the profiles with EPT.

Ah, that replaces my reply to Igor, thanks.

2014-11-17 01:49:33

by Takuya Yoshikawa

Subject: Re: [PATCH 2/3] kvm: commonize allocation of the new memory slots

On 2014/11/14 20:12, Paolo Bonzini wrote:
> The two kmemdup invocations can be unified. I find that the new
> placement of the comment makes it easier to see what happens.

A lot easier to follow the logic.

Reviewed-by: Takuya Yoshikawa <[email protected]>

>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> virt/kvm/kvm_main.c | 28 +++++++++++-----------------
> 1 file changed, 11 insertions(+), 17 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c8ff99cc0ccb..7bfc842b96d7 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -865,11 +865,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
> goto out_free;
> }
>
> + slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> + GFP_KERNEL);
> + if (!slots)
> + goto out_free;
> +
> if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
> - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> - GFP_KERNEL);
> - if (!slots)
> - goto out_free;
> slot = id_to_memslot(slots, mem->slot);
> slot->flags |= KVM_MEMSLOT_INVALID;
>
> @@ -885,6 +886,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
> * - kvm_is_visible_gfn (mmu_check_roots)
> */
> kvm_arch_flush_shadow_memslot(kvm, slot);
> +
> + /*
> + * We can re-use the old_memslots from above, the only difference
> + * from the currently installed memslots is the invalid flag. This
> + * will get overwritten by update_memslots anyway.
> + */
> slots = old_memslots;
> }
>
> @@ -892,19 +899,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
> if (r)
> goto out_slots;
>
> - r = -ENOMEM;
> - /*
> - * We can re-use the old_memslots from above, the only difference
> - * from the currently installed memslots is the invalid flag. This
> - * will get overwritten by update_memslots anyway.
> - */
> - if (!slots) {
> - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> - GFP_KERNEL);
> - if (!slots)
> - goto out_free;
> - }
> -
> /* actual memory is freed via old in kvm_free_physmem_slot below */
> if (change == KVM_MR_DELETE) {
> new.dirty_bitmap = NULL;
>

2014-11-17 01:55:59

by Takuya Yoshikawa

Subject: Re: [PATCH 0/3] KVM: simplification to the memslots code

On 2014/11/14 20:11, Paolo Bonzini wrote:
> Hi Igor and Takuya,
>
> here are a few small patches that simplify __kvm_set_memory_region
> and associated code. Can you please review them?

Ah, already queued. Sorry for being late to respond.

Takuya

>
> Thanks,
>
> Paolo
>
> Paolo Bonzini (3):
> kvm: memslots: track id_to_index changes during the insertion sort
> kvm: commonize allocation of the new memory slots
> kvm: simplify update_memslots invocation
>
> virt/kvm/kvm_main.c | 87 ++++++++++++++++++++++-------------------------------
> 1 file changed, 36 insertions(+), 51 deletions(-)
>

2014-11-17 09:24:07

by Paolo Bonzini

Subject: Re: [PATCH 0/3] KVM: simplification to the memslots code



On 17/11/2014 02:56, Takuya Yoshikawa wrote:
>> > here are a few small patches that simplify __kvm_set_memory_region
>> > and associated code. Can you please review them?
> Ah, already queued. Sorry for being late to respond.

While they are not in kvm/next, there's time to add Reviewed-by's and
all that. kvm/queue basically means "I want Fengguang to compile-test
them, some testing done on x86_64".

Paolo

2014-11-17 09:30:38

by Takuya Yoshikawa

Subject: Re: [PATCH 0/3] KVM: simplification to the memslots code

On 2014/11/17 18:23, Paolo Bonzini wrote:
>
>
> On 17/11/2014 02:56, Takuya Yoshikawa wrote:
>>>> here are a few small patches that simplify __kvm_set_memory_region
>>>> and associated code. Can you please review them?
>> Ah, already queued. Sorry for being late to respond.
>
> While they are not in kvm/next, there's time to add Reviewed-by's and
> all that. kvm/queue basically means "I want Fengguang to compile-test
> them, some testing done on x86_64".
>
> Paolo
>

OK.

I reviewed patches 2/3 and 3/3, and saw no problems, only some
improvements, there.

Takuya