Date: Tue, 28 Sep 2021 11:17:07 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Brijesh Singh <brijesh.singh@amd.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org,
	linux-crypto@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
	Joerg Roedel, Tom Lendacky, "H. Peter Anvin", Ard Biesheuvel,
	Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Andy Lutomirski, Dave Hansen, Sergio Lopez,
	Peter Gonda, Peter Zijlstra, Srinivas Pandruvada, David Rientjes,
	Dov Murik, Tobin Feldman-Fitzthum, Borislav Petkov, Michael Roth,
	Vlastimil Babka, "Kirill A . Shutemov", Andi Kleen,
	tony.luck@intel.com, marcorr@google.com,
	sathyanarayanan.kuppuswamy@linux.intel.com
Subject: Re: [PATCH Part2 v5 38/45] KVM: SVM: Add support to handle Page State Change VMGEXIT
References: <20210820155918.7518-1-brijesh.singh@amd.com>
	<20210820155918.7518-39-brijesh.singh@amd.com>
In-Reply-To: <20210820155918.7518-39-brijesh.singh@amd.com>

* Brijesh Singh (brijesh.singh@amd.com) wrote:
> SEV-SNP VMs can ask the hypervisor to change the page state in the RMP
> table to be private or shared using the Page State Change NAE event
> as defined in the GHCB specification version 2.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/sev-common.h |  7 +++
>  arch/x86/kvm/svm/sev.c            | 82 +++++++++++++++++++++++++++++--
>  2 files changed, 84 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 4980f77aa1d5..5ee30bb2cdb8 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -126,6 +126,13 @@ enum psc_op {
>  /* SNP Page State Change NAE event */
>  #define VMGEXIT_PSC_MAX_ENTRY		253
>  
> +/* The page state change hdr structure in not valid */
> +#define PSC_INVALID_HDR			1
> +/* The hdr.cur_entry or hdr.end_entry is not valid */
> +#define PSC_INVALID_ENTRY		2
> +/* Page state change encountered undefined error */
> +#define PSC_UNDEF_ERR			3
> +
>  struct psc_hdr {
>  	u16 cur_entry;
>  	u16 end_entry;
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 6d9483ec91ab..0de85ed63e9b 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2731,6 +2731,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm, u64 *exit_code)
>  	case SVM_VMGEXIT_AP_JUMP_TABLE:
>  	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
>  	case SVM_VMGEXIT_HV_FEATURES:
> +	case SVM_VMGEXIT_PSC:
>  		break;
>  	default:
>  		goto vmgexit_err;
> @@ -3004,13 +3005,13 @@ static int __snp_handle_page_state_change(struct kvm_vcpu *vcpu, enum psc_op op,
>  	 */
>  	rc = snp_check_and_build_npt(vcpu, gpa, level);
>  	if (rc)
> -		return -EINVAL;
> +		return PSC_UNDEF_ERR;
>  
>  	if (op == SNP_PAGE_STATE_PRIVATE) {
>  		hva_t hva;
>  
>  		if (snp_gpa_to_hva(kvm, gpa, &hva))
> -			return -EINVAL;
> +			return PSC_UNDEF_ERR;
>  
>  		/*
>  		 * Verify that the hva range is registered. This enforcement is
> @@ -3022,7 +3023,7 @@ static int __snp_handle_page_state_change(struct kvm_vcpu *vcpu, enum psc_op op,
>  		rc = is_hva_registered(kvm, hva, page_level_size(level));
>  		mutex_unlock(&kvm->lock);
>  		if (!rc)
> -			return -EINVAL;
> +			return PSC_UNDEF_ERR;
>  
>  		/*
>  		 * Mark the userspace range unmerable before adding the pages
> @@ -3032,7 +3033,7 @@ static int __snp_handle_page_state_change(struct kvm_vcpu *vcpu, enum psc_op op,
>  		rc = snp_mark_unmergable(kvm, hva, page_level_size(level));
>  		mmap_write_unlock(kvm->mm);
>  		if (rc)
> -			return -EINVAL;
> +			return PSC_UNDEF_ERR;
>  	}
>  
>  	write_lock(&kvm->mmu_lock);
> @@ -3062,8 +3063,11 @@ static int __snp_handle_page_state_change(struct kvm_vcpu *vcpu, enum psc_op op,
>  	case SNP_PAGE_STATE_PRIVATE:
>  		rc = rmp_make_private(pfn, gpa, level, sev->asid, false);
>  		break;
> +	case SNP_PAGE_STATE_PSMASH:
> +	case SNP_PAGE_STATE_UNSMASH:
> +		/* TODO: Add support to handle it */
>  	default:
> -		rc = -EINVAL;
> +		rc = PSC_INVALID_ENTRY;
>  		break;
>  	}
>  
> @@ -3081,6 +3085,65 @@ static int __snp_handle_page_state_change(struct kvm_vcpu *vcpu, enum psc_op op,
>  	return 0;
>  }
>  
> +static inline unsigned long map_to_psc_vmgexit_code(int rc)
> +{
> +	switch (rc) {
> +	case PSC_INVALID_HDR:
> +		return ((1ul << 32) | 1);
> +	case PSC_INVALID_ENTRY:
> +		return ((1ul << 32) | 2);
> +	case RMPUPDATE_FAIL_OVERLAP:
> +		return ((3ul << 32) | 2);
> +	default: return (4ul << 32);
> +	}

Are these the values defined in 56421 section 4.1.6?
If so, that says:

    SW_EXITINFO2[63:32] == 0x00000100
        The hypervisor encountered some other error situation and was
        not able to complete the request identified by
        page_state_change_header.cur_entry. It is left to the guest to
        decide how to proceed in this situation.

so it looks like the default should be 0x100 rather than 4?
(It's a shame they're all magical constants; it would be nice if the
standard gave them names.)

Dave

> +}
> +
> +static unsigned long snp_handle_page_state_change(struct vcpu_svm *svm)
> +{
> +	struct kvm_vcpu *vcpu = &svm->vcpu;
> +	int level, op, rc = PSC_UNDEF_ERR;
> +	struct snp_psc_desc *info;
> +	struct psc_entry *entry;
> +	u16 cur, end;
> +	gpa_t gpa;
> +
> +	if (!sev_snp_guest(vcpu->kvm))
> +		return PSC_INVALID_HDR;
> +
> +	if (!setup_vmgexit_scratch(svm, true, sizeof(*info))) {
> +		pr_err("vmgexit: scratch area is not setup.\n");
> +		return PSC_INVALID_HDR;
> +	}
> +
> +	info = (struct snp_psc_desc *)svm->ghcb_sa;
> +	cur = info->hdr.cur_entry;
> +	end = info->hdr.end_entry;
> +
> +	if (cur >= VMGEXIT_PSC_MAX_ENTRY ||
> +	    end >= VMGEXIT_PSC_MAX_ENTRY || cur > end)
> +		return PSC_INVALID_ENTRY;
> +
> +	for (; cur <= end; cur++) {
> +		entry = &info->entries[cur];
> +		gpa = gfn_to_gpa(entry->gfn);
> +		level = RMP_TO_X86_PG_LEVEL(entry->pagesize);
> +		op = entry->operation;
> +
> +		if (!IS_ALIGNED(gpa, page_level_size(level))) {
> +			rc = PSC_INVALID_ENTRY;
> +			goto out;
> +		}
> +
> +		rc = __snp_handle_page_state_change(vcpu, op, gpa, level);
> +		if (rc)
> +			goto out;
> +	}
> +
> +out:
> +	info->hdr.cur_entry = cur;
> +	return rc ? map_to_psc_vmgexit_code(rc) : 0;
> +}
> +
>  static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
>  {
>  	struct vmcb_control_area *control = &svm->vmcb->control;
> @@ -3315,6 +3378,15 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
>  		ret = 1;
>  		break;
>  	}
> +	case SVM_VMGEXIT_PSC: {
> +		unsigned long rc;
> +
> +		ret = 1;
> +
> +		rc = snp_handle_page_state_change(svm);
> +		svm_set_ghcb_sw_exit_info_2(vcpu, rc);
> +		break;
> +	}
>  	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
>  		vcpu_unimpl(vcpu,
>  			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
> -- 
> 2.17.1
> 
> 

-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
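
For illustration only, a minimal sketch of what the naming suggestion above
could look like: the magic encodings pulled out into named macros and the
default case mapped to the 0x00000100 "other error" value quoted from
section 4.1.6. The macro names (PSC_EXITINFO2, PSC_EXITINFO2_*) are
hypothetical and come from neither the patch nor the GHCB specification;
the numeric values are the ones already used in map_to_psc_vmgexit_code()
in the patch, apart from the default, and the sketch assumes the PSC_*
return codes and RMPUPDATE_FAIL_OVERLAP defined elsewhere in the series.

/*
 * Sketch only -- hypothetical names, not from the patch or the GHCB spec.
 * The upper/lower 32-bit split mirrors the ((N << 32) | M) values used in
 * map_to_psc_vmgexit_code() above.
 */
#define PSC_EXITINFO2(err, detail) \
	(((unsigned long)(err) << 32) | (detail))

#define PSC_EXITINFO2_INVALID_HDR	PSC_EXITINFO2(1, 1)
#define PSC_EXITINFO2_INVALID_ENTRY	PSC_EXITINFO2(1, 2)
#define PSC_EXITINFO2_RMP_OVERLAP	PSC_EXITINFO2(3, 2)
/* "some other error": SW_EXITINFO2[63:32] == 0x00000100 per section 4.1.6 */
#define PSC_EXITINFO2_GENERIC_ERR	PSC_EXITINFO2(0x100, 0)

static inline unsigned long map_to_psc_vmgexit_code(int rc)
{
	switch (rc) {
	case PSC_INVALID_HDR:
		return PSC_EXITINFO2_INVALID_HDR;
	case PSC_INVALID_ENTRY:
		return PSC_EXITINFO2_INVALID_ENTRY;
	case RMPUPDATE_FAIL_OVERLAP:
		return PSC_EXITINFO2_RMP_OVERLAP;
	default:
		return PSC_EXITINFO2_GENERIC_ERR;
	}
}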