From: Peter Gonda
Date: Fri, 24 Jun 2022 08:33:21 -0600
Subject: Re: [PATCH Part2 v6 26/49] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_UPDATE command
To: Ashish Kalra
Cc: "the arch/x86 maintainers", LKML, kvm list, linux-coco@lists.linux.dev,
    linux-mm@kvack.org, Linux Crypto Mailing List, Thomas Gleixner,
    Ingo Molnar, Joerg Roedel, "Lendacky, Thomas", "H. Peter Anvin",
    Ard Biesheuvel, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
    Jim Mattson, Andy Lutomirski, Dave Hansen, Sergio Lopez, Peter Zijlstra,
    Srinivas Pandruvada, David Rientjes, Dov Murik, Tobin Feldman-Fitzthum,
    Borislav Petkov, Michael Roth, Vlastimil Babka,
Shutemov" , Andi Kleen , Tony Luck , Marc Orr , Sathyanarayanan Kuppuswamy , Alper Gun , "Dr. David Alan Gilbert" , jarkko@kernel.org Content-Type: text/plain; charset="UTF-8" X-Spam-Status: No, score=-17.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org On Mon, Jun 20, 2022 at 5:08 PM Ashish Kalra wrote: > > From: Brijesh Singh > > The KVM_SEV_SNP_LAUNCH_UPDATE command can be used to insert data into the > guest's memory. The data is encrypted with the cryptographic context > created with the KVM_SEV_SNP_LAUNCH_START. > > In addition to the inserting data, it can insert a two special pages > into the guests memory: the secrets page and the CPUID page. > > While terminating the guest, reclaim the guest pages added in the RMP > table. If the reclaim fails, then the page is no longer safe to be > released back to the system and leak them. > > For more information see the SEV-SNP specification. > > Signed-off-by: Brijesh Singh > --- > .../virt/kvm/x86/amd-memory-encryption.rst | 29 +++ > arch/x86/kvm/svm/sev.c | 187 ++++++++++++++++++ > include/uapi/linux/kvm.h | 19 ++ > 3 files changed, 235 insertions(+) > > diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst > index 878711f2dca6..62abd5c1f72b 100644 > --- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst > +++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst > @@ -486,6 +486,35 @@ Returns: 0 on success, -negative on error > > See the SEV-SNP specification for further detail on the launch input. > > +20. KVM_SNP_LAUNCH_UPDATE > +------------------------- > + > +The KVM_SNP_LAUNCH_UPDATE is used for encrypting a memory region. It also > +calculates a measurement of the memory contents. The measurement is a signature > +of the memory contents that can be sent to the guest owner as an attestation > +that the memory was encrypted correctly by the firmware. > + > +Parameters (in): struct kvm_snp_launch_update > + > +Returns: 0 on success, -negative on error > + > +:: > + > + struct kvm_sev_snp_launch_update { > + __u64 start_gfn; /* Guest page number to start from. */ > + __u64 uaddr; /* userspace address need to be encrypted */ > + __u32 len; /* length of memory region */ > + __u8 imi_page; /* 1 if memory is part of the IMI */ > + __u8 page_type; /* page type */ > + __u8 vmpl3_perms; /* VMPL3 permission mask */ > + __u8 vmpl2_perms; /* VMPL2 permission mask */ > + __u8 vmpl1_perms; /* VMPL1 permission mask */ > + }; > + > +See the SEV-SNP spec for further details on how to build the VMPL permission > +mask and page type. 
> +
> +
> References
> ==========
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 41b83aa6b5f4..b5f0707d7ed6 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -18,6 +18,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -233,6 +234,49 @@ static void sev_decommission(unsigned int handle)
>          sev_guest_decommission(&decommission, NULL);
>  }
>
> +static inline void snp_leak_pages(u64 pfn, enum pg_level level)
> +{
> +        unsigned int npages = page_level_size(level) >> PAGE_SHIFT;
> +
> +        WARN(1, "psc failed pfn 0x%llx pages %d (leaking)\n", pfn, npages);
> +
> +        while (npages) {
> +                memory_failure(pfn, 0);
> +                dump_rmpentry(pfn);
> +                npages--;
> +                pfn++;
> +        }
> +}

Should this be deduplicated with the snp_leak_pages() in "crypto: ccp:
Handle the legacy TMR allocation when SNP is enabled"?

> +
> +static int snp_page_reclaim(u64 pfn)
> +{
> +        struct sev_data_snp_page_reclaim data = {0};
> +        int err, rc;
> +
> +        data.paddr = __sme_set(pfn << PAGE_SHIFT);
> +        rc = snp_guest_page_reclaim(&data, &err);
> +        if (rc) {
> +                /*
> +                 * If the reclaim failed, then the page is no longer safe
> +                 * to use.
> +                 */
> +                snp_leak_pages(pfn, PG_LEVEL_4K);
> +        }
> +
> +        return rc;
> +}
> +
> +static int host_rmp_make_shared(u64 pfn, enum pg_level level, bool leak)
> +{
> +        int rc;
> +
> +        rc = rmp_make_shared(pfn, level);
> +        if (rc && leak)
> +                snp_leak_pages(pfn, level);
> +
> +        return rc;
> +}
> +
>  static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
>  {
>          struct sev_data_deactivate deactivate;
> @@ -1902,6 +1946,123 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
>          return rc;
>  }
>
> +static bool is_hva_registered(struct kvm *kvm, hva_t hva, size_t len)
> +{
> +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +        struct list_head *head = &sev->regions_list;
> +        struct enc_region *i;
> +
> +        lockdep_assert_held(&kvm->lock);
> +
> +        list_for_each_entry(i, head, list) {
> +                u64 start = i->uaddr;
> +                u64 end = start + i->size;
> +
> +                if (start <= hva && end >= (hva + len))
> +                        return true;
> +        }

Given that userspace could load sev->regions_list with any number of
arbitrarily sized regions, should we add a cond_resched() here like in
sev_vm_destroy()?
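
Something like this is what I have in mind (sketch only, mirroring the
loop above):

        list_for_each_entry(i, head, list) {
                u64 start = i->uaddr;
                u64 end = start + i->size;

                if (start <= hva && end >= (hva + len))
                        return true;

                /*
                 * Yield between iterations: the list length and region
                 * sizes are userspace-controlled, as in sev_vm_destroy().
                 */
                cond_resched();
        }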
>
> +        return false;
> +}
> +
> +static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +        struct sev_data_snp_launch_update data = {0};
> +        struct kvm_sev_snp_launch_update params;
> +        unsigned long npages, pfn, n = 0;
> +        int *error = &argp->error;
> +        struct page **inpages;
> +        int ret, i, level;
> +        u64 gfn;
> +
> +        if (!sev_snp_guest(kvm))
> +                return -ENOTTY;
> +
> +        if (!sev->snp_context)
> +                return -EINVAL;
> +
> +        if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
> +                return -EFAULT;
> +
> +        /* Verify that the specified address range is registered. */
> +        if (!is_hva_registered(kvm, params.uaddr, params.len))
> +                return -EINVAL;
> +
> +        /*
> +         * The userspace memory is already locked, so technically we don't
> +         * need to lock it again. A later part of the function needs the
> +         * pfn, so call sev_pin_memory() to get the list of pages to
> +         * iterate through.
> +         */
> +        inpages = sev_pin_memory(kvm, params.uaddr, params.len, &npages, 1);
> +        if (!inpages)
> +                return -ENOMEM;
> +
> +        /*
> +         * Verify that all the pages are marked shared in the RMP table
> +         * before going further. This is to avoid cases where userspace
> +         * may try updating the same page twice.
> +         */
> +        for (i = 0; i < npages; i++) {
> +                if (snp_lookup_rmpentry(page_to_pfn(inpages[i]), &level) != 0) {
> +                        sev_unpin_memory(kvm, inpages, npages);
> +                        return -EFAULT;
> +                }
> +        }
> +
> +        gfn = params.start_gfn;
> +        level = PG_LEVEL_4K;
> +        data.gctx_paddr = __psp_pa(sev->snp_context);
> +
> +        for (i = 0; i < npages; i++) {
> +                pfn = page_to_pfn(inpages[i]);
> +
> +                ret = rmp_make_private(pfn, gfn << PAGE_SHIFT, level, sev_get_asid(kvm), true);
> +                if (ret) {
> +                        ret = -EFAULT;
> +                        goto e_unpin;
> +                }
> +
> +                n++;
> +                data.address = __sme_page_pa(inpages[i]);
> +                data.page_size = X86_TO_RMP_PG_LEVEL(level);
> +                data.page_type = params.page_type;
> +                data.vmpl3_perms = params.vmpl3_perms;
> +                data.vmpl2_perms = params.vmpl2_perms;
> +                data.vmpl1_perms = params.vmpl1_perms;
> +                ret = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE, &data, error);
> +                if (ret) {
> +                        /*
> +                         * If the command failed, then we need to reclaim
> +                         * the page.
> +                         */
> +                        snp_page_reclaim(pfn);
> +                        goto e_unpin;
> +                }
> +
> +                gfn++;
> +        }
> +
> +e_unpin:
> +        /* Content of memory is updated, mark pages dirty */
> +        for (i = 0; i < n; i++) {

Since |n| is not only a loop variable but actually carries the number of
private pages over to e_unpin, can we use a more descriptive name? How
about something like 'nprivate_pages'? (A sketch follows the quoted ioctl
hunk below.)

> +                set_page_dirty_lock(inpages[i]);
> +                mark_page_accessed(inpages[i]);
> +
> +                /*
> +                 * If it's an error, then update the RMP entry to change
> +                 * page ownership to the hypervisor.
> +                 */
> +                if (ret)
> +                        host_rmp_make_shared(pfn, level, true);
> +        }
> +
> +        /* Unlock the user pages */
> +        sev_unpin_memory(kvm, inpages, npages);
> +
> +        return ret;
> +}
> +
>  int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>  {
>          struct kvm_sev_cmd sev_cmd;
> @@ -1995,6 +2156,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>          case KVM_SEV_SNP_LAUNCH_START:
>                  r = snp_launch_start(kvm, &sev_cmd);
>                  break;
> +        case KVM_SEV_SNP_LAUNCH_UPDATE:
> +                r = snp_launch_update(kvm, &sev_cmd);
> +                break;
>          default:
>                  r = -EINVAL;
>                  goto out;
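
To make the rename concrete, a sketch with the unwind factored out.
Everything except host_rmp_make_shared() is invented for illustration;
note it also computes the pfn per page rather than reusing the
loop-carried value:

        /*
         * Illustrative only: unwind the first 'nprivate_pages' entries,
         * returning ownership to the host on failure.
         */
        static void snp_unwind_private_pages(struct page **inpages,
                                             unsigned long nprivate_pages,
                                             enum pg_level level, bool failed)
        {
                unsigned long i;

                for (i = 0; i < nprivate_pages; i++) {
                        set_page_dirty_lock(inpages[i]);
                        mark_page_accessed(inpages[i]);

                        /* Flip the page back to shared in the RMP table. */
                        if (failed)
                                host_rmp_make_shared(page_to_pfn(inpages[i]),
                                                     level, true);
                }
        }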
> @@ -2113,6 +2277,29 @@ find_enc_region(struct kvm *kvm, struct kvm_enc_region *range)
>  static void __unregister_enc_region_locked(struct kvm *kvm,
>                                             struct enc_region *region)
>  {
> +        unsigned long i, pfn;
> +        int level;
> +
> +        /*
> +         * The guest memory pages are assigned in the RMP table. Unassign
> +         * them before releasing the memory.
> +         */
> +        if (sev_snp_guest(kvm)) {
> +                for (i = 0; i < region->npages; i++) {
> +                        pfn = page_to_pfn(region->pages[i]);
> +
> +                        if (!snp_lookup_rmpentry(pfn, &level))
> +                                continue;
> +
> +                        cond_resched();
> +
> +                        if (level > PG_LEVEL_4K)
> +                                pfn &= ~(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) - 1);
> +
> +                        host_rmp_make_shared(pfn, level, true);
> +                }
> +        }
> +
>          sev_unpin_memory(kvm, region->pages, region->npages);
>          list_del(&region->list);
>          kfree(region);
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 0cb119d66ae5..9b36b07414ea 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1813,6 +1813,7 @@ enum sev_cmd_id {
>          /* SNP specific commands */
>          KVM_SEV_SNP_INIT,
>          KVM_SEV_SNP_LAUNCH_START,
> +        KVM_SEV_SNP_LAUNCH_UPDATE,
>
>          KVM_SEV_NR_MAX,
>  };
> @@ -1929,6 +1930,24 @@ struct kvm_sev_snp_launch_start {
>          __u8 pad[6];
>  };
>
> +#define KVM_SEV_SNP_PAGE_TYPE_NORMAL            0x1
> +#define KVM_SEV_SNP_PAGE_TYPE_VMSA              0x2
> +#define KVM_SEV_SNP_PAGE_TYPE_ZERO              0x3
> +#define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED        0x4
> +#define KVM_SEV_SNP_PAGE_TYPE_SECRETS           0x5
> +#define KVM_SEV_SNP_PAGE_TYPE_CPUID             0x6
> +
> +struct kvm_sev_snp_launch_update {
> +        __u64 start_gfn;
> +        __u64 uaddr;
> +        __u32 len;
> +        __u8 imi_page;
> +        __u8 page_type;
> +        __u8 vmpl3_perms;
> +        __u8 vmpl2_perms;
> +        __u8 vmpl1_perms;
> +};
> +
>  #define KVM_DEV_ASSIGN_ENABLE_IOMMU     (1 << 0)
>  #define KVM_DEV_ASSIGN_PCI_2_3          (1 << 1)
>  #define KVM_DEV_ASSIGN_MASK_INTX        (1 << 2)
> --
> 2.25.1
>
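
Aside for userspace authors: the doc text above defers to the SEV-SNP spec
for the VMPL permission masks. A hypothetical sketch of building them; the
macro names are invented here, and the bit positions (read, write,
user-execute, supervisor-execute in bits 0-3) are my reading of the SNP
ABI, so double-check against the spec:

        #include <linux/kvm.h>

        /* Hypothetical names; bit layout per the SEV-SNP ABI's VMPL
         * permission mask, not defined by this patch. */
        #define SNP_VMPL_PERM_READ       (1u << 0)
        #define SNP_VMPL_PERM_WRITE      (1u << 1)
        #define SNP_VMPL_PERM_EXEC_USER  (1u << 2)
        #define SNP_VMPL_PERM_EXEC_SUPER (1u << 3)

        static void set_vmpl_perms(struct kvm_sev_snp_launch_update *update)
        {
                /* Example policy: VMPL1 full access, VMPL2/3 read-only. */
                update->vmpl1_perms = SNP_VMPL_PERM_READ |
                                      SNP_VMPL_PERM_WRITE |
                                      SNP_VMPL_PERM_EXEC_USER |
                                      SNP_VMPL_PERM_EXEC_SUPER;
                update->vmpl2_perms = SNP_VMPL_PERM_READ;
                update->vmpl3_perms = SNP_VMPL_PERM_READ;
        }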