2018-08-13 09:42:19

by Punit Agrawal

Subject: [PATCH v2 0/2] KVM: Fix refaulting due to page table update

Hi,

Here are a couple of patches to fix an issue that arises when multiple
vcpus fault on the same page table entry[0]. The issue was reported by
a user testing PUD hugepage support[1], but it also exists for PMD and
PTE updates, though with lower probability.
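
Roughly, the failure mode looks like this (an illustrative sketch of
the race, not an exact trace):

  vcpu0: stage 2 fault on IPA X
         -> entry not present; install the block mapping
  vcpu1: stage 2 fault on the same IPA X (raced with vcpu0)
         -> entry present; break-before-make: clear the entry, flush
            the TLBs, then write back the identical value
  vcpu0: its translation has vanished -> refaults on IPA X and
         repeats the sequence, delaying forward progress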

In this version -

* the fix has been split into separate patches for the PMD hugepage
and PTE updates
* the PMD fix has been refactored
* Fixes tags applied and stable cc'd

Thanks,
Punit

[0] https://lkml.org/lkml/2018/8/10/256
[1] https://lkml.org/lkml/2018/7/16/482

Punit Agrawal (2):
KVM: arm/arm64: Skip updating PMD entry if no change
KVM: arm/arm64: Skip updating PTE entry if no change

virt/kvm/arm/mmu.c | 45 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 34 insertions(+), 11 deletions(-)

--
2.18.0



2018-08-13 09:42:27

by Punit Agrawal

Subject: [PATCH v2 1/2] KVM: arm/arm64: Skip updating PMD entry if no change

Contention on updating a PMD entry by a large number of vcpus can lead
to duplicate work when handling stage 2 page faults. As the page table
update follows the break-before-make requirement of the architecture,
it can lead to repeated refaults due to clearing the entry and
flushing the TLBs.

This problem is more likely when -

* there are a large number of vcpus
* the mapping is a large block mapping

such as when using PMD hugepages (512MB) with 64k pages.
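
(For reference, the 512MB figure is just the level 2 block size with a
64k granule: a 64k page has a 16-bit offset and holds 8192 = 2^13
table entries, so a PMD entry maps 2^(16+13) = 2^29 bytes = 512MB.)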

Fix this by skipping the page table update if there is no change in
the entry being updated.

Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages")
Change-Id: Ib417957c842ef67a6f4b786f68df62048d202c24
Signed-off-by: Punit Agrawal <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Suzuki Poulose <[email protected]>
Cc: [email protected]
---
virt/kvm/arm/mmu.c | 40 +++++++++++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 1d90d79706bd..2ab977edc63c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1015,19 +1015,36 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
pmd = stage2_get_pmd(kvm, cache, addr);
VM_BUG_ON(!pmd);

- /*
- * Mapping in huge pages should only happen through a fault. If a
- * page is merged into a transparent huge page, the individual
- * subpages of that huge page should be unmapped through MMU
- * notifiers before we get here.
- *
- * Merging of CompoundPages is not supported; they should instead
- * be split first, unmapped, merged, and mapped back in on-demand.
- */
- VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
-
old_pmd = *pmd;
+
if (pmd_present(old_pmd)) {
+ /*
+ * Mapping in huge pages should only happen through a
+ * fault. If a page is merged into a transparent huge
+ * page, the individual subpages of that huge page
+ * should be unmapped through MMU notifiers before we
+ * get here.
+ *
+ * Merging of CompoundPages is not supported; they
+ * should instead be split first, unmapped, merged,
+ * and mapped back in on-demand.
+ */
+ VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
+
+ /*
+ * Multiple vcpus faulting on the same PMD entry can
+ * lead to them sequentially updating the PMD with the
+ * same value. Following the break-before-make
+ * (pmd_clear() followed by tlb_flush()) process can
+ * hinder forward progress due to refaults generated
+ * on missing translations.
+ *
+ * Skip updating the page table if the entry is
+ * unchanged.
+ */
+ if (pmd_val(old_pmd) == pmd_val(*new_pmd))
+ goto out;
+
pmd_clear(pmd);
kvm_tlb_flush_vmid_ipa(kvm, addr);
} else {
@@ -1035,6 +1052,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
}

kvm_set_pmd(pmd, *new_pmd);
+out:
return 0;
}

--
2.18.0


2018-08-13 09:43:28

by Punit Agrawal

Subject: [PATCH v2 2/2] KVM: arm/arm64: Skip updating PTE entry if no change

When there is contention on faulting in a particular page table entry
at stage 2, the break-before-make requirement of the architecture can
lead to additional refaulting due to TLB invalidation.

Avoid this by skipping a page table update if the new value of the PTE
matches the previous value.

Fixes: d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
Change-Id: I28e17daf394a4821b13c2cf8726bf72bf30434f9
Signed-off-by: Punit Agrawal <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Suzuki Poulose <[email protected]>
Cc: [email protected]
---
virt/kvm/arm/mmu.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 2ab977edc63c..d0a9dccc3793 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1120,6 +1120,10 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
/* Create 2nd stage page table mapping - Level 3 */
old_pte = *pte;
if (pte_present(old_pte)) {
+ /* Skip page table update if there is no change */
+ if (pte_val(old_pte) == pte_val(*new_pte))
+ goto out;
+
kvm_set_pte(pte, __pte(0));
kvm_tlb_flush_vmid_ipa(kvm, addr);
} else {
@@ -1127,6 +1131,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
}

kvm_set_pte(pte, *new_pte);
+out:
return 0;
}

--
2.18.0


2018-08-13 09:46:50

by Suzuki K Poulose

Subject: Re: [PATCH v2 1/2] KVM: arm/arm64: Skip updating PMD entry if no change

On 08/13/2018 10:40 AM, Punit Agrawal wrote:
> Contention on updating a PMD entry by a large number of vcpus can lead
> to duplicate work when handling stage 2 page faults. As the page table
> update follows the break-before-make requirement of the architecture,
> it can lead to repeated refaults due to clearing the entry and
> flushing the TLBs.
>
> This problem is more likely when -
>
> * there are a large number of vcpus
> * the mapping is a large block mapping
>
> such as when using PMD hugepages (512MB) with 64k pages.
>
> Fix this by skipping the page table update if there is no change in
> the entry being updated.
>
> Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages")
> Change-Id: Ib417957c842ef67a6f4b786f68df62048d202c24
> Signed-off-by: Punit Agrawal <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Suzuki Poulose <[email protected]>
> Cc: [email protected]
> ---
> virt/kvm/arm/mmu.c | 40 +++++++++++++++++++++++++++++-----------
> 1 file changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 1d90d79706bd..2ab977edc63c 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1015,19 +1015,36 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
> pmd = stage2_get_pmd(kvm, cache, addr);
> VM_BUG_ON(!pmd);
>
> - /*
> - * Mapping in huge pages should only happen through a fault. If a
> - * page is merged into a transparent huge page, the individual
> - * subpages of that huge page should be unmapped through MMU
> - * notifiers before we get here.
> - *
> - * Merging of CompoundPages is not supported; they should instead
> - * be split first, unmapped, merged, and mapped back in on-demand.
> - */
> - VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
> -
> old_pmd = *pmd;
> +
> if (pmd_present(old_pmd)) {
> + /*
> + * Mapping in huge pages should only happen through a
> + * fault. If a page is merged into a transparent huge
> + * page, the individual subpages of that huge page
> + * should be unmapped through MMU notifiers before we
> + * get here.
> + *
> + * Merging of CompoundPages is not supported; they
> + * should instead be split first, unmapped, merged,
> + * and mapped back in on-demand.
> + */
> + VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
> +
> + /*
> + * Multiple vcpus faulting on the same PMD entry can
> + * lead to them sequentially updating the PMD with the
> + * same value. Following the break-before-make
> + * (pmd_clear() followed by tlb_flush()) process can
> + * hinder forward progress due to refaults generated
> + * on missing translations.
> + *
> + * Skip updating the page table if the entry is
> + * unchanged.
> + */
> + if (pmd_val(old_pmd) == pmd_val(*new_pmd))
> + goto out;

Minor nit: you could just as well return here, as there are no other
users of the label and there are no clean-up actions.
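
Something along these lines (sketch):

	if (pmd_val(old_pmd) == pmd_val(*new_pmd))
		return 0;

which would also let the 'out' label go away.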

Either way,

Reviewed-by: Suzuki K Poulose <[email protected]>


> +
> pmd_clear(pmd);
> kvm_tlb_flush_vmid_ipa(kvm, addr);
> } else {
> @@ -1035,6 +1052,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
> }
>
> kvm_set_pmd(pmd, *new_pmd);
> +out:
> return 0;
> }
>
>


2018-08-13 09:49:26

by Suzuki K Poulose

Subject: Re: [PATCH v2 2/2] KVM: arm/arm64: Skip updating PTE entry if no change

On 08/13/2018 10:40 AM, Punit Agrawal wrote:
> When there is contention on faulting in a particular page table entry
> at stage 2, the break-before-make requirement of the architecture can
> lead to additional refaulting due to TLB invalidation.
>
> Avoid this by skipping a page table update if the new value of the PTE
> matches the previous value.
>
> Fixes: d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
> Change-Id: I28e17daf394a4821b13c2cf8726bf72bf30434f9
> Signed-off-by: Punit Agrawal <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Suzuki Poulose <[email protected]>
> Cc: [email protected]
> ---
> virt/kvm/arm/mmu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 2ab977edc63c..d0a9dccc3793 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1120,6 +1120,10 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> /* Create 2nd stage page table mapping - Level 3 */
> old_pte = *pte;
> if (pte_present(old_pte)) {
> + /* Skip page table update if there is no change */
> + if (pte_val(old_pte) == pte_val(*new_pte))
> + goto out;
> +
> kvm_set_pte(pte, __pte(0));
> kvm_tlb_flush_vmid_ipa(kvm, addr);
> } else {
> @@ -1127,6 +1131,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> }
>
> kvm_set_pte(pte, *new_pte);
> +out:

The same comment as on the previous patch applies here. Either way,

Reviewed-by: Suzuki K Poulose <[email protected]>

2018-08-13 10:13:32

by Punit Agrawal

Subject: Re: [PATCH v2 1/2] KVM: arm/arm64: Skip updating PMD entry if no change

Suzuki K Poulose <[email protected]> writes:

> On 08/13/2018 10:40 AM, Punit Agrawal wrote:
>> Contention on updating a PMD entry by a large number of vcpus can lead
>> to duplicate work when handling stage 2 page faults. As the page table
>> update follows the break-before-make requirement of the architecture,
>> it can lead to repeated refaults due to clearing the entry and
>> flushing the TLBs.
>>
>> This problem is more likely when -
>>
>> * there are a large number of vcpus
>> * the mapping is a large block mapping
>>
>> such as when using PMD hugepages (512MB) with 64k pages.
>>
>> Fix this by skipping the page table update if there is no change in
>> the entry being updated.
>>
>> Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages")
>> Change-Id: Ib417957c842ef67a6f4b786f68df62048d202c24
>> Signed-off-by: Punit Agrawal <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Christoffer Dall <[email protected]>
>> Cc: Suzuki Poulose <[email protected]>
>> Cc: [email protected]
>> ---
>> virt/kvm/arm/mmu.c | 40 +++++++++++++++++++++++++++++-----------
>> 1 file changed, 29 insertions(+), 11 deletions(-)
>>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 1d90d79706bd..2ab977edc63c 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1015,19 +1015,36 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>> pmd = stage2_get_pmd(kvm, cache, addr);
>> VM_BUG_ON(!pmd);
>> - /*
>> - * Mapping in huge pages should only happen through a fault. If a
>> - * page is merged into a transparent huge page, the individual
>> - * subpages of that huge page should be unmapped through MMU
>> - * notifiers before we get here.
>> - *
>> - * Merging of CompoundPages is not supported; they should instead
>> - * be split first, unmapped, merged, and mapped back in on-demand.
>> - */
>> - VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
>> -
>> old_pmd = *pmd;
>> +
>> if (pmd_present(old_pmd)) {
>> + /*
>> + * Mapping in huge pages should only happen through a
>> + * fault. If a page is merged into a transparent huge
>> + * page, the individual subpages of that huge page
>> + * should be unmapped through MMU notifiers before we
>> + * get here.
>> + *
>> + * Merging of CompoundPages is not supported; they
>> + * should instead be split first, unmapped, merged,
>> + * and mapped back in on-demand.
>> + */
>> + VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
>> +
>> + /*
>> + * Multiple vcpus faulting on the same PMD entry can
>> + * lead to them sequentially updating the PMD with the
>> + * same value. Following the break-before-make
>> + * (pmd_clear() followed by tlb_flush()) process can
>> + * hinder forward progress due to refaults generated
>> + * on missing translations.
>> + *
>> + * Skip updating the page table if the entry is
>> + * unchanged.
>> + */
>> + if (pmd_val(old_pmd) == pmd_val(*new_pmd))
>> + goto out;
>
> minor nit: You could as well return here, as there are no other users
> for the label and there are no clean up actions.

Ok - I'll do a quick respin for the maintainers to pick up if they are
happy with the other aspects of the patch.

>
> Either way,
>
> Reviewed-by: Suzuki K Poulose <[email protected]>

Thanks Suzuki.

>
>
>> +
>> pmd_clear(pmd);
>> kvm_tlb_flush_vmid_ipa(kvm, addr);
>> } else {
>> @@ -1035,6 +1052,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>> }
>> kvm_set_pmd(pmd, *new_pmd);
>> +out:
>> return 0;
>> }
>>
>>

2018-08-13 10:15:08

by Marc Zyngier

Subject: Re: [PATCH v2 1/2] KVM: arm/arm64: Skip updating PMD entry if no change

Hi Punit,

On 13/08/18 10:40, Punit Agrawal wrote:
> Contention on updating a PMD entry by a large number of vcpus can lead
> to duplicate work when handling stage 2 page faults. As the page table
> update follows the break-before-make requirement of the architecture,
> it can lead to repeated refaults due to clearing the entry and
> flushing the TLBs.
>
> This problem is more likely when -
>
> * there are a large number of vcpus
> * the mapping is a large block mapping
>
> such as when using PMD hugepages (512MB) with 64k pages.
>
> Fix this by skipping the page table update if there is no change in
> the entry being updated.
>
> Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages")
> Change-Id: Ib417957c842ef67a6f4b786f68df62048d202c24
> Signed-off-by: Punit Agrawal <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Suzuki Poulose <[email protected]>
> Cc: [email protected]
> ---
> virt/kvm/arm/mmu.c | 40 +++++++++++++++++++++++++++++-----------
> 1 file changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 1d90d79706bd..2ab977edc63c 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1015,19 +1015,36 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
> pmd = stage2_get_pmd(kvm, cache, addr);
> VM_BUG_ON(!pmd);
>
> - /*
> - * Mapping in huge pages should only happen through a fault. If a
> - * page is merged into a transparent huge page, the individual
> - * subpages of that huge page should be unmapped through MMU
> - * notifiers before we get here.
> - *
> - * Merging of CompoundPages is not supported; they should instead
> - * be split first, unmapped, merged, and mapped back in on-demand.
> - */
> - VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
> -
> old_pmd = *pmd;
> +
> if (pmd_present(old_pmd)) {
> + /*
> + * Mapping in huge pages should only happen through a
> + * fault. If a page is merged into a transparent huge
> + * page, the individual subpages of that huge page
> + * should be unmapped through MMU notifiers before we
> + * get here.
> + *
> + * Merging of CompoundPages is not supported; they
> + * should instead be split first, unmapped, merged,
> + * and mapped back in on-demand.
> + */
> + VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
> +
> + /*
> + * Multiple vcpus faulting on the same PMD entry can
> + * lead to them sequentially updating the PMD with the
> + * same value. Following the break-before-make
> + * (pmd_clear() followed by tlb_flush()) process can
> + * hinder forward progress due to refaults generated
> + * on missing translations.
> + *
> + * Skip updating the page table if the entry is
> + * unchanged.
> + */
> + if (pmd_val(old_pmd) == pmd_val(*new_pmd))
> + goto out;

I think the order of these two checks should be reversed: the first one
is clearly a subset of the second one, so it'd make sense to have the
global comparison before the more specific one. Not that it matters
much in practice, but I just find it easier to reason about.
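
i.e., roughly (with the two comment blocks moved to match):

	if (pmd_val(old_pmd) == pmd_val(*new_pmd))
		goto out;

	VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));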

> +
> pmd_clear(pmd);
> kvm_tlb_flush_vmid_ipa(kvm, addr);
> } else {
> @@ -1035,6 +1052,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
> }
>
> kvm_set_pmd(pmd, *new_pmd);
> +out:
> return 0;
> }
>
>

Thanks,

M.
--
Jazz is not dead. It just smells funny...

2018-08-13 10:35:15

by Punit Agrawal

Subject: Re: [PATCH v2 1/2] KVM: arm/arm64: Skip updating PMD entry if no change

Marc Zyngier <[email protected]> writes:

> Hi Punit,
>
> On 13/08/18 10:40, Punit Agrawal wrote:

[...]

>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 1d90d79706bd..2ab977edc63c 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1015,19 +1015,36 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>> pmd = stage2_get_pmd(kvm, cache, addr);
>> VM_BUG_ON(!pmd);
>>
>> - /*
>> - * Mapping in huge pages should only happen through a fault. If a
>> - * page is merged into a transparent huge page, the individual
>> - * subpages of that huge page should be unmapped through MMU
>> - * notifiers before we get here.
>> - *
>> - * Merging of CompoundPages is not supported; they should instead
>> - * be split first, unmapped, merged, and mapped back in on-demand.
>> - */
>> - VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
>> -
>> old_pmd = *pmd;
>> +
>> if (pmd_present(old_pmd)) {
>> + /*
>> + * Mapping in huge pages should only happen through a
>> + * fault. If a page is merged into a transparent huge
>> + * page, the individual subpages of that huge page
>> + * should be unmapped through MMU notifiers before we
>> + * get here.
>> + *
>> + * Merging of CompoundPages is not supported; they
>> + * should instead be split first, unmapped, merged,
>> + * and mapped back in on-demand.
>> + */
>> + VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
>> +
>> + /*
>> + * Multiple vcpus faulting on the same PMD entry can
>> + * lead to them sequentially updating the PMD with the
>> + * same value. Following the break-before-make
>> + * (pmd_clear() followed by tlb_flush()) process can
>> + * hinder forward progress due to refaults generated
>> + * on missing translations.
>> + *
>> + * Skip updating the page table if the entry is
>> + * unchanged.
>> + */
>> + if (pmd_val(old_pmd) == pmd_val(*new_pmd))
>> + goto out;
>
> I think the order of these two checks should be reversed: the first one
> is clearly a subset of the second one, so it'd make sense to have the
> global comparison before having the more specific one. Not that it
> matter much in practice, but I just find it easier to reason about.

Makes sense. I've reordered the checks for the next version.

Thanks,
Punit

>
>> +
>> pmd_clear(pmd);
>> kvm_tlb_flush_vmid_ipa(kvm, addr);
>> } else {
>> @@ -1035,6 +1052,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>> }
>>
>> kvm_set_pmd(pmd, *new_pmd);
>> +out:
>> return 0;
>> }
>>
>>
>
> Thanks,
>
> M.