Subject: Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
To: Ashish Kalra <ashish.kalra@amd.com>, bp@suse.de
Cc: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, joro@8bytes.org,
    thomas.lendacky@amd.com, x86@kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, srutherford@google.com, seanjc@google.com,
    venu.busireddy@oracle.com, brijesh.singh@amd.com
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Tue, 20 Apr 2021 11:39:15 +0200

On 15/04/21 17:57, Ashish Kalra wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
> 
> Invoke a hypercall when a memory region is changed from encrypted ->
> decrypted and vice versa. Hypervisor needs to know the page encryption
> status during the guest migration.

Boris, can you ack this patch?

Paolo

> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/include/asm/paravirt.h       | 10 +++++
>  arch/x86/include/asm/paravirt_types.h |  2 +
>  arch/x86/kernel/paravirt.c            |  1 +
>  arch/x86/mm/mem_encrypt.c             | 57 ++++++++++++++++++++++++++-
>  arch/x86/mm/pat/set_memory.c          |  7 ++++
>  5 files changed, 76 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 4abf110e2243..efaa3e628967 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -84,6 +84,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>  	PVOP_VCALL1(mmu.exit_mmap, mm);
>  }
>  
> +static inline void page_encryption_changed(unsigned long vaddr, int npages,
> +					bool enc)
> +{
> +	PVOP_VCALL3(mmu.page_encryption_changed, vaddr, npages, enc);
> +}
> +
>  #ifdef CONFIG_PARAVIRT_XXL
>  static inline void load_sp0(unsigned long sp0)
>  {
> @@ -799,6 +805,10 @@ static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
>  static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>  {
>  }
> +
> +static inline void page_encryption_changed(unsigned long vaddr, int npages, bool enc)
> +{
> +}
>  #endif
>  #endif /* __ASSEMBLY__ */
>  #endif /* _ASM_X86_PARAVIRT_H */
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index de87087d3bde..69ef9c207b38 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -195,6 +195,8 @@ struct pv_mmu_ops {
>  
>  	/* Hook for intercepting the destruction of an mm_struct. */
>  	void (*exit_mmap)(struct mm_struct *mm);
> +	void (*page_encryption_changed)(unsigned long vaddr, int npages,
> +					bool enc);
>  
>  #ifdef CONFIG_PARAVIRT_XXL
>  	struct paravirt_callee_save read_cr2;
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index c60222ab8ab9..9f206e192f6b 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -335,6 +335,7 @@ struct paravirt_patch_template pv_ops = {
>  			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
>  
>  	.mmu.exit_mmap = paravirt_nop,
> +	.mmu.page_encryption_changed = paravirt_nop,
>  
>  #ifdef CONFIG_PARAVIRT_XXL
>  	.mmu.read_cr2 = __PV_IS_CALLEE_SAVE(native_read_cr2),
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index ae78cef79980..fae9ccbd0da7 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -19,6 +19,7 @@
>  #include
>  #include
>  #include
> +#include
>  
>  #include
>  #include
> @@ -29,6 +30,7 @@
>  #include
>  #include
>  #include
> +#include
>  
>  #include "mm_internal.h"
>  
> @@ -229,6 +231,47 @@ void __init sev_setup_arch(void)
>  	swiotlb_adjust_size(size);
>  }
>  
> +static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
> +					bool enc)
> +{
> +	unsigned long sz = npages << PAGE_SHIFT;
> +	unsigned long vaddr_end, vaddr_next;
> +
> +	vaddr_end = vaddr + sz;
> +
> +	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> +		int psize, pmask, level;
> +		unsigned long pfn;
> +		pte_t *kpte;
> +
> +		kpte = lookup_address(vaddr, &level);
> +		if (!kpte || pte_none(*kpte))
> +			return;
> +
> +		switch (level) {
> +		case PG_LEVEL_4K:
> +			pfn = pte_pfn(*kpte);
> +			break;
> +		case PG_LEVEL_2M:
> +			pfn = pmd_pfn(*(pmd_t *)kpte);
> +			break;
> +		case PG_LEVEL_1G:
> +			pfn = pud_pfn(*(pud_t *)kpte);
> +			break;
> +		default:
> +			return;
> +		}
> +
> +		psize = page_level_size(level);
> +		pmask = page_level_mask(level);
> +
> +		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
> +				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
> +
> +		vaddr_next = (vaddr & pmask) + psize;
> +	}
> +}
> +
>  static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>  {
>  	pgprot_t old_prot, new_prot;
> @@ -286,12 +329,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>  static int __init early_set_memory_enc_dec(unsigned long vaddr,
>  					   unsigned long size, bool enc)
>  {
> -	unsigned long vaddr_end, vaddr_next;
> +	unsigned long vaddr_end, vaddr_next, start;
>  	unsigned long psize, pmask;
>  	int split_page_size_mask;
>  	int level, ret;
>  	pte_t *kpte;
>  
> +	start = vaddr;
>  	vaddr_next = vaddr;
>  	vaddr_end = vaddr + size;
>  
> @@ -346,6 +390,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
>  
>  	ret = 0;
>  
> +	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
> +					enc);
>  out:
>  	__flush_tlb_all();
>  	return ret;
> @@ -481,6 +527,15 @@ void __init mem_encrypt_init(void)
>  	if (sev_active() && !sev_es_active())
>  		static_branch_enable(&sev_enable_key);
>  
> +#ifdef CONFIG_PARAVIRT
> +	/*
> +	 * With SEV, we need to make a hypercall when page encryption state is
> +	 * changed.
> +	 */
> +	if (sev_active())
> +		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
> +#endif
> +
>  	print_mem_encrypt_feature_info();
>  }
>  
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 16f878c26667..3576b583ac65 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -27,6 +27,7 @@
>  #include
>  #include
>  #include
> +#include
>  
>  #include "../mm_internal.h"
>  
> @@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>  	 */
>  	cpa_flush(&cpa, 0);
>  
> +	/* Notify hypervisor that a given memory range is mapped encrypted
> +	 * or decrypted. The hypervisor will use this information during the
> +	 * VM migration.
> +	 */
> +	page_encryption_changed(addr, numpages, enc);
> +
>  	return ret;
>  }
>  
> 
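
For reference, the guest-side flow the patch wires up is: a caller of
set_memory_decrypted() or set_memory_encrypted() reaches
__set_memory_enc_dec(), which now invokes the page_encryption_changed()
paravirt hook after the C-bit change and flush; on SEV guests that hook is
set_memory_enc_dec_hypercall(), which issues KVM_HC_PAGE_ENC_STATUS for each
mapped PFN range so the hypervisor knows which guest pages are shared when
migrating. The sketch below illustrates such a caller;
alloc_shared_buffer() is a hypothetical helper used only for illustration
and is not part of this patch.

/*
 * Illustration only (not part of the patch): allocate pages and share
 * them with the host.  With this series applied, the C-bit flip done by
 * set_memory_decrypted() also notifies the hypervisor of the new page
 * encryption status via the KVM_HC_PAGE_ENC_STATUS hypercall.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

static void *alloc_shared_buffer(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
	unsigned long vaddr;

	if (!page)
		return NULL;

	vaddr = (unsigned long)page_address(page);

	/*
	 * __set_memory_enc_dec() clears the C-bit in the page tables,
	 * flushes, and then calls page_encryption_changed(vaddr, npages,
	 * false), which on SEV guests reports the affected PFN ranges to
	 * the hypervisor via KVM_HC_PAGE_ENC_STATUS.
	 */
	if (set_memory_decrypted(vaddr, 1 << order)) {
		__free_pages(page, order);
		return NULL;
	}

	return (void *)vaddr;
}

Existing users of set_memory_decrypted() get the notification through the
same path, which is the point of placing the hook in __set_memory_enc_dec()
rather than in individual callers.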