When using manual dirty-page protection, it is not necessary to
write-protect nested page tables down to the 4K level; instead, KVM
can write-protect only hugepages, so that they are split lazily on
write, and delay 4K-granularity write protection until
KVM_CLEAR_DIRTY_LOG. This was overlooked in the TDP MMU, so do it
there as well.
Fixes: a6a0b05da9f37 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Cc: Ben Gardon <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index efb41f31e80a..0d92a269c5fa 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5538,7 +5538,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
 				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
 	if (is_tdp_mmu_enabled(kvm))
-		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_4K);
+		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
 	write_unlock(&kvm->mmu_lock);
 
 	/*
--
2.26.2
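For context on the mechanism the commit message refers to: once userspace enables KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2, harvesting the dirty log with KVM_GET_DIRTY_LOG no longer write-protects pages; write protection, and therefore the splitting of hugepages into 4K mappings, is deferred until KVM_CLEAR_DIRTY_LOG. The last argument of kvm_tdp_mmu_wrprot_slot() is the minimum SPTE level to write-protect, so passing start_level instead of hard-coding PG_LEVEL_4K lets the TDP MMU honor the hugepage-only pass. Below is a minimal, hypothetical userspace sketch of the ioctl sequence (single memslot, no error handling); the helper name and memslot layout are illustrative and not part of the patch:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical helper: vm_fd is a KVM VM file descriptor, memslot 0
 * spans npages pages with KVM_MEM_LOG_DIRTY_PAGES set, and bitmap
 * holds one bit per page.  Error handling is omitted.
 */
static void harvest_dirty_log(int vm_fd, __u64 npages, void *bitmap)
{
	/* Opt in (normally done once at VM setup; shown inline for
	 * brevity).  KVM_DIRTY_LOG_INITIALLY_SET is what allows the
	 * initial write protection to stop at hugepage level, which is
	 * the path this patch fixes for the TDP MMU.
	 */
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
			   KVM_DIRTY_LOG_INITIALLY_SET,
	};
	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

	/* Read the dirty bitmap; pages are NOT write-protected here. */
	struct kvm_dirty_log log = {
		.slot = 0,
		.dirty_bitmap = bitmap,
	};
	ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

	/* ... copy out the dirty pages ... */

	/* Only as the bitmap is cleared is write protection applied at
	 * 4K granularity, splitting hugepages lazily.
	 */
	struct kvm_clear_dirty_log clear = {
		.slot = 0,
		.num_pages = npages,
		.first_page = 0,
		.dirty_bitmap = bitmap,
	};
	ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
}

In this mode kvm_mmu_slot_remove_write_access() can be called with start_level above PG_LEVEL_4K, which the TDP MMU path was ignoring before this one-line fix.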
Hi Paolo,

I was just about to fix this issue, and found that you have already done it ;-)
Please feel free to add:
Reviewed-by: Keqian Zhu <[email protected]>
Thanks,
Keqian
On 2021/4/2 20:17, Paolo Bonzini wrote:
> [...]
On 2021/4/7 7:38, Sean Christopherson wrote:
> On Tue, Apr 06, 2021, Keqian Zhu wrote:
>> Hi Paolo,
>>
>> I was just about to fix this issue, and found that you have already done it ;-)
>
> Ha, and meanwhile I'm having a serious case of deja vu[1]. It even received a
> variant of the magic "Queued, thanks"[2]. Doesn't appear in either of the 5.12
> pull requests though, must have gotten lost along the way.
Good job. We should pick them up :)
>
> [1] https://lkml.kernel.org/r/[email protected]
> [2] https://lkml.kernel.org/r/[email protected]
>
>> [...]