From: Ashish Kalra <[email protected]>

With SNP/guest_memfd, private/encrypted memory should not be mappable,
and MMU notifications for HVA-mapped memory will only be relevant to
unencrypted guest memory. Therefore, the rationale behind issuing a
wbinvd_on_all_cpus() in sev_guest_memory_reclaimed() does not apply to
SNP guests, and the flush can be skipped.

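For reference, a rough sketch of how reclaim of HVA-mapped guest memory
reaches this vendor hook. This mirrors the kvm_arch_guest_memory_reclaimed()
plumbing that was originally added for SEV cache coherency, but it is an
illustration only; the exact call sites in virt/kvm/kvm_main.c and
arch/x86/kvm/x86.c vary between kernel versions:

  /*
   * Sketch (not literal upstream code): the HVA-based MMU notifier
   * fires when host memory backing the guest is reclaimed, and the
   * x86 arch hook forwards it to the vendor implementation, i.e.
   * sev_guest_memory_reclaimed() for SEV VMs.
   */
  static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
                                                     const struct mmu_notifier_range *range)
  {
          struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

          /* ... unmap the affected HVA range from the guest ... */

          kvm_arch_guest_memory_reclaimed(kvm);
          return 0;
  }

  void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
  {
          /* dispatches to the vendor (SVM/SEV) callback, if present */
          static_call_cond(kvm_x86_guest_memory_reclaimed)(kvm);
  }
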
Signed-off-by: Ashish Kalra <[email protected]>
[mdr: Add some clarifications in commit]
Signed-off-by: Michael Roth <[email protected]>
---
 arch/x86/kvm/svm/sev.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 31f6f4786503..3e8de7cb3c89 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2975,7 +2975,14 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
 {
-	if (!sev_guest(kvm))
+	/*
+	 * With SNP+gmem, private/encrypted memory should be
+	 * unreachable via the hva-based mmu notifiers. Additionally,
+	 * for shared->private translations, H/W coherency will ensure
+	 * that the first guest access to the page clears out any
+	 * existing dirty copies of that cacheline.
+	 */
+	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
 	wbinvd_on_all_cpus();
--
2.25.1
On 3/29/24 23:58, Michael Roth wrote:
> From: Ashish Kalra <[email protected]>
>
> With SNP/guest_memfd, private/encrypted memory should not be mappable,
> and MMU notifications for HVA-mapped memory will only be relevant to
> unencrypted guest memory. Therefore, the rationale behind issuing a
> wbinvd_on_all_cpus() in sev_guest_memory_reclaimed() does not apply to
> SNP guests, and the flush can be skipped.
>
> Signed-off-by: Ashish Kalra <[email protected]>
> [mdr: Add some clarifications in commit]
> Signed-off-by: Michael Roth <[email protected]>
Reviewed-by: Paolo Bonzini <[email protected]>