2020-03-20 20:57:30

by Sean Christopherson

Subject: [PATCH 0/7] KVM: Fix memslot use-after-free bug

Fix a bug introduced by dynamic memslot allocation where the LRU slot can
become invalid and lead to an out-of-bounds/use-after-free scenario.

The patch is different from what Qian has already tested, but I was able
to reproduce the bad behavior by enhancing the set memory region selftest,
i.e. I'm relatively confident the bug is fixed.

Patches 2-6 are a variety of selftest cleanup, with the aforementioned
set memory region enhancement coming in patch 7.

Note, I couldn't get the selftest to fail outright or with KASAN, but was
able to hit a WARN_ON due to an invalid slot 100% of the time (without the
fix, obviously).

Regarding s390, I played around a bit with merging gfn_to_memslot_approx()
into search_memslots(). Code wise it's trivial since they're basically
identical, but doing so increases the code footprint of search_memslots()
on x86 by 30 bytes, so I ended up abandoning the effort.

Sean Christopherson (7):
KVM: Fix out of range accesses to memslots
KVM: selftests: Fix cosmetic copy-paste error in vm_mem_region_move()
KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
KVM: selftests: Add helpers to consolidate open coded list operations
KVM: selftests: Add util to delete memory region
KVM: selftests: Expose the primary memslot number to tests
KVM: selftests: Add "delete" testcase to set_memory_region_test

arch/s390/kvm/kvm-s390.c | 3 +
include/linux/kvm_host.h | 3 +
.../testing/selftests/kvm/include/kvm_util.h | 3 +
tools/testing/selftests/kvm/lib/kvm_util.c | 139 ++++++++++--------
.../kvm/x86_64/set_memory_region_test.c | 122 +++++++++++++--
virt/kvm/kvm_main.c | 3 +
6 files changed, 201 insertions(+), 72 deletions(-)

--
2.24.1


2020-03-20 20:57:35

by Sean Christopherson

Subject: [PATCH 1/7] KVM: Fix out of range accesses to memslots

Reset the LRU slot if it becomes invalid when deleting a memslot to fix
an out-of-bounds/use-after-free access when searching through memslots.

Explicitly check for there being no used slots in search_memslots(), and
in the caller of s390's approximation variant.

Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots")
Reported-by: Qian Cai <[email protected]>
Cc: Peter Xu <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/s390/kvm/kvm-s390.c | 3 +++
include/linux/kvm_host.h | 3 +++
virt/kvm/kvm_main.c | 3 +++
3 files changed, 9 insertions(+)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 807ed6d722dd..cb15fdda1fee 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
struct kvm_memslots *slots = kvm_memslots(kvm);
struct kvm_memory_slot *ms;

+ if (unlikely(!slots->used_slots))
+ return 0;
+
cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
ms = gfn_to_memslot(kvm, cur_gfn);
args->count = 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 35bc52e187a2..b19dee4ed7d9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
int slot = atomic_read(&slots->lru_slot);
struct kvm_memory_slot *memslots = slots->memslots;

+ if (unlikely(!slots->used_slots))
+ return NULL;
+
if (gfn >= memslots[slot].base_gfn &&
gfn < memslots[slot].base_gfn + memslots[slot].npages)
return &memslots[slot];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 28eae681859f..f744bc603c53 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,

slots->used_slots--;

+ if (atomic_read(&slots->lru_slot) >= slots->used_slots)
+ atomic_set(&slots->lru_slot, 0);
+
for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
mslots[i] = mslots[i + 1];
slots->id_to_index[mslots[i].id] = i;
--
2.24.1

2020-03-20 20:57:37

by Sean Christopherson

Subject: [PATCH 3/7] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()

The sole caller of vm_vcpu_rm() already has the vcpu pointer, take it
directly instead of doing an extra lookup.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8a3523d4434f..9a783c20dd26 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
*
* Input Args:
* vm - Virtual Machine
- * vcpuid - VCPU ID
+ * vcpu - VCPU to remove
*
* Output Args: None
*
@@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
*
* Within the VM specified by vm, removes the VCPU given by vcpuid.
*/
-static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
+static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
{
- struct vcpu *vcpu = vcpu_find(vm, vcpuid);
int ret;

ret = munmap(vcpu->state, sizeof(*vcpu->state));
@@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
int ret;

while (vmp->vcpu_head)
- vm_vcpu_rm(vmp, vmp->vcpu_head->id);
+ vm_vcpu_rm(vmp, vmp->vcpu_head);

ret = close(vmp->fd);
TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
--
2.24.1

2020-03-20 22:17:56

by Peter Xu

Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots

On Fri, Mar 20, 2020 at 01:55:40PM -0700, Sean Christopherson wrote:
> Reset the LRU slot if it becomes invalid when deleting a memslot to fix
> an out-of-bounds/use-after-free access when searching through memslots.
>
> Explicitly check for there being no used slots in search_memslots(), and
> in the caller of s390's approximation variant.
>
> Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots")
> Reported-by: Qian Cai <[email protected]>
> Cc: Peter Xu <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> arch/s390/kvm/kvm-s390.c | 3 +++
> include/linux/kvm_host.h | 3 +++
> virt/kvm/kvm_main.c | 3 +++
> 3 files changed, 9 insertions(+)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 807ed6d722dd..cb15fdda1fee 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
> struct kvm_memslots *slots = kvm_memslots(kvm);
> struct kvm_memory_slot *ms;
>
> + if (unlikely(!slots->used_slots))
> + return 0;
> +
> cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
> ms = gfn_to_memslot(kvm, cur_gfn);
> args->count = 0;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 35bc52e187a2..b19dee4ed7d9 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
> int slot = atomic_read(&slots->lru_slot);
> struct kvm_memory_slot *memslots = slots->memslots;
>
> + if (unlikely(!slots->used_slots))
> + return NULL;
> +
> if (gfn >= memslots[slot].base_gfn &&
> gfn < memslots[slot].base_gfn + memslots[slot].npages)
> return &memslots[slot];
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 28eae681859f..f744bc603c53 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
>
> slots->used_slots--;
>
> + if (atomic_read(&slots->lru_slot) >= slots->used_slots)
> + atomic_set(&slots->lru_slot, 0);

Nit: could we drop the atomic ops? The "slots" is still only used in
the current thread before the rcu assignment, so iirc it's fine (and
RCU assignment should have a mem barrier when needed anyway).

I thought resetting lru_slot to zero would be good enough when
duplicating the slots... however if we want to do finer grained reset,
maybe even better to reset also those invalidated ones (since we know
this information)?

if (slots->lru_slot >= slots->id_to_index[memslot->id])
slots->lru_slot = 0;

Thanks,

> +
> for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
> mslots[i] = mslots[i + 1];
> slots->id_to_index[mslots[i].id] = i;
> --
> 2.24.1
>

--
Peter Xu

2020-03-20 22:41:29

by Sean Christopherson

Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots

On Fri, Mar 20, 2020 at 06:17:08PM -0400, Peter Xu wrote:
> On Fri, Mar 20, 2020 at 01:55:40PM -0700, Sean Christopherson wrote:
> > Reset the LRU slot if it becomes invalid when deleting a memslot to fix
> > an out-of-bounds/use-after-free access when searching through memslots.
> >
> > Explicitly check for there being no used slots in search_memslots(), and
> > in the caller of s390's approximation variant.
> >
> > Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots")
> > Reported-by: Qian Cai <[email protected]>
> > Cc: Peter Xu <[email protected]>
> > Signed-off-by: Sean Christopherson <[email protected]>
> > ---
> > arch/s390/kvm/kvm-s390.c | 3 +++
> > include/linux/kvm_host.h | 3 +++
> > virt/kvm/kvm_main.c | 3 +++
> > 3 files changed, 9 insertions(+)
> >
> > diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> > index 807ed6d722dd..cb15fdda1fee 100644
> > --- a/arch/s390/kvm/kvm-s390.c
> > +++ b/arch/s390/kvm/kvm-s390.c
> > @@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
> > struct kvm_memslots *slots = kvm_memslots(kvm);
> > struct kvm_memory_slot *ms;
> >
> > + if (unlikely(!slots->used_slots))
> > + return 0;
> > +
> > cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
> > ms = gfn_to_memslot(kvm, cur_gfn);
> > args->count = 0;
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 35bc52e187a2..b19dee4ed7d9 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
> > int slot = atomic_read(&slots->lru_slot);
> > struct kvm_memory_slot *memslots = slots->memslots;
> >
> > + if (unlikely(!slots->used_slots))
> > + return NULL;
> > +
> > if (gfn >= memslots[slot].base_gfn &&
> > gfn < memslots[slot].base_gfn + memslots[slot].npages)
> > return &memslots[slot];
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 28eae681859f..f744bc603c53 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
> >
> > slots->used_slots--;
> >
> > + if (atomic_read(&slots->lru_slot) >= slots->used_slots)
> > + atomic_set(&slots->lru_slot, 0);
>
> Nit: could we drop the atomic ops? The "slots" is still only used in
> the current thread before the rcu assignment, so iirc it's fine (and
> RCU assignment should have a mem barrier when needed anyway).

No, atomic_t wraps an int inside a struct to prevent direct usage, e.g.

virt/kvm/kvm_main.c: In function ‘kvm_memslot_delete’:
virt/kvm/kvm_main.c:885:22: error: invalid operands to binary >= (have ‘atomic_t {aka struct <anonymous>}’ and ‘int’)
if (slots->lru_slot >= slots->used_slots)
^
virt/kvm/kvm_main.c:886:19: error: incompatible types when assigning to type ‘atomic_t {aka struct <anonymous>}’ from type ‘int’
slots->lru_slot = 0;


But yeah, the compiler barrier associated with atomic_read() isn't
necessary.

> I thought resetting lru_slot to zero would be good enough when
> duplicating the slots... however if we want to do finer grained reset,
> maybe even better to reset also those invalidated ones (since we know
> this information)?
>
> if (slots->lru_slot >= slots->id_to_index[memslot->id])
> slots->lru_slot = 0;

I'd prefer to go with the more obviously correct version. This code will
rarely trigger, e.g. larger slots have lower indices and are more likely to
be the LRU (by virtue of being the biggest), and dynamic memslot deletion
is usually for relatively small ranges that are being remapped by the guest.

> Thanks,
>
> > +
> > for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
> > mslots[i] = mslots[i + 1];
> > slots->id_to_index[mslots[i].id] = i;
> > --
> > 2.24.1
> >
>
> --
> Peter Xu
>

2020-03-20 22:58:53

by Peter Xu

Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots

On Fri, Mar 20, 2020 at 03:40:41PM -0700, Sean Christopherson wrote:
> On Fri, Mar 20, 2020 at 06:17:08PM -0400, Peter Xu wrote:
> > On Fri, Mar 20, 2020 at 01:55:40PM -0700, Sean Christopherson wrote:
> > > Reset the LRU slot if it becomes invalid when deleting a memslot to fix
> > > an out-of-bounds/use-after-free access when searching through memslots.
> > >
> > > Explicitly check for there being no used slots in search_memslots(), and
> > > in the caller of s390's approximation variant.
> > >
> > > Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots")
> > > Reported-by: Qian Cai <[email protected]>
> > > Cc: Peter Xu <[email protected]>
> > > Signed-off-by: Sean Christopherson <[email protected]>
> > > ---
> > > arch/s390/kvm/kvm-s390.c | 3 +++
> > > include/linux/kvm_host.h | 3 +++
> > > virt/kvm/kvm_main.c | 3 +++
> > > 3 files changed, 9 insertions(+)
> > >
> > > diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> > > index 807ed6d722dd..cb15fdda1fee 100644
> > > --- a/arch/s390/kvm/kvm-s390.c
> > > +++ b/arch/s390/kvm/kvm-s390.c
> > > @@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
> > > struct kvm_memslots *slots = kvm_memslots(kvm);
> > > struct kvm_memory_slot *ms;
> > >
> > > + if (unlikely(!slots->used_slots))
> > > + return 0;
> > > +
> > > cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
> > > ms = gfn_to_memslot(kvm, cur_gfn);
> > > args->count = 0;
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index 35bc52e187a2..b19dee4ed7d9 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
> > > int slot = atomic_read(&slots->lru_slot);
> > > struct kvm_memory_slot *memslots = slots->memslots;
> > >
> > > + if (unlikely(!slots->used_slots))
> > > + return NULL;
> > > +
> > > if (gfn >= memslots[slot].base_gfn &&
> > > gfn < memslots[slot].base_gfn + memslots[slot].npages)
> > > return &memslots[slot];
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 28eae681859f..f744bc603c53 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
> > >
> > > slots->used_slots--;
> > >
> > > + if (atomic_read(&slots->lru_slot) >= slots->used_slots)
> > > + atomic_set(&slots->lru_slot, 0);
> >
> > Nit: could we drop the atomic ops? The "slots" is still only used in
> > the current thread before the rcu assignment, so iirc it's fine (and
> > RCU assignment should have a mem barrier when needed anyway).
>
> No, atomic_t wraps an int inside a struct to prevent direct usage, e.g.
>
> virt/kvm/kvm_main.c: In function ‘kvm_memslot_delete’:
> virt/kvm/kvm_main.c:885:22: error: invalid operands to binary >= (have ‘atomic_t {aka struct <anonymous>}’ and ‘int’)
> if (slots->lru_slot >= slots->used_slots)
> ^
> virt/kvm/kvm_main.c:886:19: error: incompatible types when assigning to type ‘atomic_t {aka struct <anonymous>}’ from type ‘int’
> slots->lru_slot = 0;
>
>
> But yeah, the compiler barrier associated with atomic_read() isn't
> necessary.

Oops, sorry. I was obviously thinking about QEMU's atomic helpers.

>
> > I thought resetting lru_slot to zero would be good enough when
> > duplicating the slots... however if we want to do finer grained reset,
> > maybe even better to reset also those invalidated ones (since we know
> > this information)?
> >
> > if (slots->lru_slot >= slots->id_to_index[memslot->id])
> > slots->lru_slot = 0;
>
> I'd prefer to go with the more obviously correct version. This code will
> rarely trigger, e.g. larger slots have lower indices and are more likely to
> be the LRU (by virtue of being the biggest), and dynamic memslot deletion
> is usually for relatively small ranges that are being remapped by the guest.

IMHO the more obvious correct version could be unconditionally setting
lru_slot to something invalid (e.g. -1) in kvm_dup_memslots(), then in
the two search_memslots() skip the cache if lru_slot is invalid.
That's IMHO easier to understand than the "!slots->used_slots" checks.
But I've no strong opinion.

Thanks,

--
Peter Xu

2020-03-20 23:08:32

by Sean Christopherson

Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots

On Fri, Mar 20, 2020 at 06:58:02PM -0400, Peter Xu wrote:
> On Fri, Mar 20, 2020 at 03:40:41PM -0700, Sean Christopherson wrote:
> > > I thought resetting lru_slot to zero would be good enough when
> > > duplicating the slots... however if we want to do finer grained reset,
> > > maybe even better to reset also those invalidated ones (since we know
> > > this information)?
> > >
> > > if (slots->lru_slot >= slots->id_to_index[memslot->id])
> > > slots->lru_slot = 0;
> >
> > I'd prefer to go with the more obviously correct version. This code will
> > rarely trigger, e.g. larger slots have lower indices and are more likely to
> > be the LRU (by virtue of being the biggest), and dynamic memslot deletion
> > is usually for relatively small ranges that are being remapped by the guest.
>
> IMHO the more obvious correct version could be unconditionally setting
> lru_slot to something invalid (e.g. -1) in kvm_dup_memslots(), then in
> the two search_memslots() skip the cache if lru_slot is invalid.
> That's IMHO easier to understand than the "!slots->used_slots" checks.
> But I've no strong opinion.

We'd still need the !slots->used_slots check. I considered adding an
explicit check on @slot for the cache lookup, e.g.

static inline struct kvm_memory_slot *
search_memslots(struct kvm_memslots *slots, gfn_t gfn)
{
int start = 0, end = slots->used_slots;
int slot = atomic_read(&slots->lru_slot);
struct kvm_memory_slot *memslots = slots->memslots;

if (likely(slot < slots->used_slots) &&
gfn >= memslots[slot].base_gfn &&
gfn < memslots[slot].base_gfn + memslots[slot].npages)
return &memslots[slot];


But then the back half of the function still ends up with an out-of-bounds
bug. E.g. if used_slots == 0, then start==0, and memslots[start].base_gfn
accesses garbage.

while (start < end) {
slot = start + (end - start) / 2;

if (gfn >= memslots[slot].base_gfn)
end = slot;
else
start = slot + 1;
}

if (gfn >= memslots[start].base_gfn &&
gfn < memslots[start].base_gfn + memslots[start].npages) {
atomic_set(&slots->lru_slot, start);
return &memslots[start];
}

return NULL;
}

I also didn't want to invalidate the lru_slot any more than is necessary,
not that I'd expect it to make a noticeable difference even in a
pathological scenario, but it seemed prudent.

2020-03-24 07:14:52

by Christian Borntraeger

Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots



On 20.03.20 21:55, Sean Christopherson wrote:
> Reset the LRU slot if it becomes invalid when deleting a memslot to fix
> an out-of-bounds/use-after-free access when searching through memslots.
>
> Explicitly check for there being no used slots in search_memslots(), and
> in the caller of s390's approximation variant.
>
> Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots")
> Reported-by: Qian Cai <[email protected]>
> Cc: Peter Xu <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> arch/s390/kvm/kvm-s390.c | 3 +++
> include/linux/kvm_host.h | 3 +++
> virt/kvm/kvm_main.c | 3 +++
> 3 files changed, 9 insertions(+)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 807ed6d722dd..cb15fdda1fee 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,

Adding Claudio, but
> struct kvm_memslots *slots = kvm_memslots(kvm);
> struct kvm_memory_slot *ms;
>
> + if (unlikely(!slots->used_slots))
> + return 0;
> +

this looks sane and like the right fix.

Acked-by: Christian Borntraeger <[email protected]>

> cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
> ms = gfn_to_memslot(kvm, cur_gfn);
> args->count = 0;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 35bc52e187a2..b19dee4ed7d9 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
> int slot = atomic_read(&slots->lru_slot);
> struct kvm_memory_slot *memslots = slots->memslots;
>
> + if (unlikely(!slots->used_slots))
> + return NULL;
> +
> if (gfn >= memslots[slot].base_gfn &&
> gfn < memslots[slot].base_gfn + memslots[slot].npages)
> return &memslots[slot];
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 28eae681859f..f744bc603c53 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
>
> slots->used_slots--;
>
> + if (atomic_read(&slots->lru_slot) >= slots->used_slots)
> + atomic_set(&slots->lru_slot, 0);
> +
> for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
> mslots[i] = mslots[i + 1];
> slots->id_to_index[mslots[i].id] = i;
>

2020-03-24 11:31:40

by Paolo Bonzini

Subject: Re: [PATCH 0/7] KVM: Fix memslot use-after-free bug

On 20/03/20 21:55, Sean Christopherson wrote:
> Fix a bug introduced by dynamic memslot allocation where the LRU slot can
> become invalid and lead to an out-of-bounds/use-after-free scenario.
>
> The patch is different from what Qian has already tested, but I was able
> to reproduce the bad behavior by enhancing the set memory region selftest,
> i.e. I'm relatively confident the bug is fixed.
>
> Patches 2-6 are a variety of selftest cleanup, with the aforementioned
> set memory region enhancement coming in patch 7.
>
> Note, I couldn't get the selftest to fail outright or with KASAN, but was
> able to hit a WARN_ON due to an invalid slot 100% of the time (without the
> fix, obviously).
>
> Regarding s390, I played around a bit with merging gfn_to_memslot_approx()
> into search_memslots(). Code wise it's trivial since they're basically
> identical, but doing so increases the code footprint of search_memslots()
> on x86 by 30 bytes, so I ended up abandoning the effort.
>
> Sean Christopherson (7):
> KVM: Fix out of range accesses to memslots
> KVM: selftests: Fix cosmetic copy-paste error in vm_mem_region_move()
> KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
> KVM: selftests: Add helpers to consolidate open coded list operations
> KVM: selftests: Add util to delete memory region
> KVM: selftests: Expose the primary memslot number to tests
> KVM: selftests: Add "delete" testcase to set_memory_region_test
>
> arch/s390/kvm/kvm-s390.c | 3 +
> include/linux/kvm_host.h | 3 +
> .../testing/selftests/kvm/include/kvm_util.h | 3 +
> tools/testing/selftests/kvm/lib/kvm_util.c | 139 ++++++++++--------
> .../kvm/x86_64/set_memory_region_test.c | 122 +++++++++++++--
> virt/kvm/kvm_main.c | 3 +
> 6 files changed, 201 insertions(+), 72 deletions(-)
>

Queued patches 1-3, thanks.

Paolo

2020-03-24 11:48:19

by Claudio Imbrenda

Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots

On Tue, 24 Mar 2020 08:12:59 +0100
Christian Borntraeger <[email protected]> wrote:

> On 20.03.20 21:55, Sean Christopherson wrote:
> > Reset the LRU slot if it becomes invalid when deleting a memslot to
> > fix an out-of-bounds/use-after-free access when searching through
> > memslots.
> >
> > Explicitly check for there being no used slots in
> > search_memslots(), and in the caller of s390's approximation
> > variant.
> >
> > Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on
> > number of used slots") Reported-by: Qian Cai <[email protected]>
> > Cc: Peter Xu <[email protected]>
> > Signed-off-by: Sean Christopherson <[email protected]>
> > ---
> > arch/s390/kvm/kvm-s390.c | 3 +++
> > include/linux/kvm_host.h | 3 +++
> > virt/kvm/kvm_main.c | 3 +++
> > 3 files changed, 9 insertions(+)
> >
> > diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> > index 807ed6d722dd..cb15fdda1fee 100644
> > --- a/arch/s390/kvm/kvm-s390.c
> > +++ b/arch/s390/kvm/kvm-s390.c
> > @@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm,
> > struct kvm_s390_cmma_log *args,
>
> Adding Claudio, but
> > struct kvm_memslots *slots = kvm_memslots(kvm);
> > struct kvm_memory_slot *ms;
> >
> > + if (unlikely(!slots->used_slots))
> > + return 0;
> > +

this should never happen, as this function is only called during
migration, and if we don't have any memory slots, then we will not try
to migrate them.

But this is something that is triggered by userspace, so we need to
protect the kernel from rogue or broken userspace.

Reviewed-by: Claudio Imbrenda <[email protected]>

> this looks sane and like the right fix.
>
> Acked-by: Christian Borntraeger <[email protected]>
>
> > cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
> > ms = gfn_to_memslot(kvm, cur_gfn);
> > args->count = 0;
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 35bc52e187a2..b19dee4ed7d9 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots,
> > gfn_t gfn) int slot = atomic_read(&slots->lru_slot);
> > struct kvm_memory_slot *memslots = slots->memslots;
> >
> > + if (unlikely(!slots->used_slots))
> > + return NULL;
> > +
> > if (gfn >= memslots[slot].base_gfn &&
> > gfn < memslots[slot].base_gfn + memslots[slot].npages)
> > return &memslots[slot];
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 28eae681859f..f744bc603c53 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct
> > kvm_memslots *slots,
> > slots->used_slots--;
> >
> > + if (atomic_read(&slots->lru_slot) >= slots->used_slots)
> > + atomic_set(&slots->lru_slot, 0);
> > +
> > for (i = slots->id_to_index[memslot->id]; i <
> > slots->used_slots; i++) { mslots[i] = mslots[i + 1];
> > slots->id_to_index[mslots[i].id] = i;
> >