2015-04-14 06:10:00

by Wanpeng Li

Subject: [PATCH 1/2] kvm: mmu: fix check of transparent huge page backing

PageTransCompound() can't guarantee that a page is a transparent huge page,
since it returns true for both transparent huge pages and hugetlbfs pages.

This patch fixes it by also checking that the page is not a hugetlbfs page.
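
As an illustration, a minimal sketch of the distinction (pfn_is_thp_backed
is a hypothetical helper invented for this note, not part of the patch; it
assumes a valid, non-reserved pfn):

static bool pfn_is_thp_backed(unsigned long pfn)
{
	struct page *page = pfn_to_page(pfn);

	/*
	 * PageTransCompound() is true for any compound page, i.e. for
	 * both THP and hugetlbfs pages.  To test for THP only, first
	 * rule out hugetlbfs, then check for a transparent huge page,
	 * mirroring the check this patch introduces.
	 */
	return !PageHuge(page) && PageTransHuge(page);
}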

Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/mmu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 146f295..2a0d77e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4487,7 +4487,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 */
 		if (sp->role.direct &&
 			!kvm_is_reserved_pfn(pfn) &&
-			PageTransCompound(pfn_to_page(pfn))) {
+			!PageHuge(pfn_to_page(pfn)) &&
+			PageTransHuge(pfn_to_page(pfn))) {
 			drop_spte(kvm, sptep);
 			sptep = rmap_get_first(*rmapp, &iter);
 			need_tlb_flush = 1;
--
1.9.1


2015-04-14 06:10:13

by Wanpeng Li

Subject: [PATCH 2/2] kvm: mmu: don't do memslot overflow check

As Andres pointed out:

| I don't understand the value of this check here. Are we looking for a
| broken memslot? Shouldn't this be a BUG_ON? Is this the place to care
| about these things? npages is capped to KVM_MEM_MAX_NR_PAGES, i.e.
| 2^31. A 64 bit overflow would be caused by a gigantic gfn_start which
| would be trouble in many other ways.

This patch drops the memslot overflow check to simplify the code.
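
As a sanity check on the arithmetic, a stand-alone userspace sketch (not
kernel code; the base_gfn value below is an arbitrary extreme example):

#include <stdint.h>
#include <stdio.h>

/* same cap as include/linux/kvm_host.h */
#define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)

int main(void)
{
	/* x86 guest frame numbers fit in ~52 bits, far below 64 */
	uint64_t base_gfn = (1ULL << 52) - 1;
	uint64_t gfn_end = base_gfn + KVM_MEM_MAX_NR_PAGES - 1;

	/* for any valid slot, the end gfn cannot wrap a 64-bit gfn_t */
	printf("base_gfn=%#llx gfn_end=%#llx wrapped=%d\n",
	       (unsigned long long)base_gfn,
	       (unsigned long long)gfn_end,
	       gfn_end < base_gfn);
	return 0;
}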

Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/mmu.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
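
For reference, the helper whose argument is now computed inline, roughly as
defined in arch/x86/include/asm/kvm_host.h at the time (shown for context,
not part of the diff):

static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
{
	/* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0. */
	return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
		(base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
}

At PT_PAGE_TABLE_LEVEL the shift is 0, so last_index works out to
npages - 1, the last valid entry of memslot->arch.rmap[0].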

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2a0d77e..9265fda 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4505,19 +4505,12 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	bool flush = false;
 	unsigned long *rmapp;
 	unsigned long last_index, index;
-	gfn_t gfn_start, gfn_end;

 	spin_lock(&kvm->mmu_lock);

-	gfn_start = memslot->base_gfn;
-	gfn_end = memslot->base_gfn + memslot->npages - 1;
-
-	if (gfn_start >= gfn_end)
-		goto out;
-
 	rmapp = memslot->arch.rmap[0];
-	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
-			PT_PAGE_TABLE_LEVEL);
+	last_index = gfn_to_index(memslot->base_gfn + memslot->npages - 1,
+			memslot->base_gfn, PT_PAGE_TABLE_LEVEL);

 	for (index = 0; index <= last_index; ++index, ++rmapp) {
 		if (*rmapp)
@@ -4535,7 +4528,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	if (flush)
 		kvm_flush_remote_tlbs(kvm);

-out:
 	spin_unlock(&kvm->mmu_lock);
 }

--
1.9.1

2015-04-14 17:14:46

by Andres Lagar-Cavilla

Subject: Re: [PATCH 2/2] kvm: mmu: don't do memslot overflow check

On Mon, Apr 13, 2015 at 10:51 PM, Wanpeng Li <[email protected]> wrote:
> As Andres pointed out:
>
> | I don't understand the value of this check here. Are we looking for a
> | broken memslot? Shouldn't this be a BUG_ON? Is this the place to care
> | about these things? npages is capped to KVM_MEM_MAX_NR_PAGES, i.e.
> | 2^31. A 64 bit overflow would be caused by a gigantic gfn_start which
> | would be trouble in many other ways.
>
> This patch drops the memslot overflow check to simplify the code.
>
> Signed-off-by: Wanpeng Li <[email protected]>
Reviewed-by: Andres Lagar-Cavilla <[email protected]>

Thanks
> ---
> arch/x86/kvm/mmu.c | 12 ++----------
> 1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2a0d77e..9265fda 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4505,19 +4505,12 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  	bool flush = false;
>  	unsigned long *rmapp;
>  	unsigned long last_index, index;
> -	gfn_t gfn_start, gfn_end;
>
>  	spin_lock(&kvm->mmu_lock);
>
> -	gfn_start = memslot->base_gfn;
> -	gfn_end = memslot->base_gfn + memslot->npages - 1;
> -
> -	if (gfn_start >= gfn_end)
> -		goto out;
> -
>  	rmapp = memslot->arch.rmap[0];
> -	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
> -			PT_PAGE_TABLE_LEVEL);
> +	last_index = gfn_to_index(memslot->base_gfn + memslot->npages - 1,
> +			memslot->base_gfn, PT_PAGE_TABLE_LEVEL);
>
>  	for (index = 0; index <= last_index; ++index, ++rmapp) {
>  		if (*rmapp)
> @@ -4535,7 +4528,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  	if (flush)
>  		kvm_flush_remote_tlbs(kvm);
>
> -out:
>  	spin_unlock(&kvm->mmu_lock);
>  }
>
> --
> 1.9.1
>



--
Andres Lagar-Cavilla | Google Kernel Team | [email protected]