Date: Tue, 14 Sep 2021 21:33:52 +0000
From: Sean Christopherson
To: Peter Gonda
Cc: kvm@vger.kernel.org, Marc Orr, Paolo Bonzini, Brijesh Singh,
	stable@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] KVM: SEV: Acquire vcpu mutex when updating VMSA
References: <20210914200639.3305617-1-pgonda@google.com>
In-Reply-To: <20210914200639.3305617-1-pgonda@google.com>

On Tue, Sep 14, 2021, Peter Gonda wrote:
> Adds mutex guard to the VMSA updating code. Also adds a check to skip a
> vCPU if it has already been LAUNCH_UPDATE_VMSA'd which should allow
> userspace to retry this ioctl until all the vCPUs can be successfully
> LAUNCH_UPDATE_VMSA'd. Because this operation cannot be undone we cannot
> unwind if one vCPU fails.
>
> Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
>
> Signed-off-by: Peter Gonda
> Cc: Marc Orr
> Cc: Paolo Bonzini
> Cc: Sean Christopherson
> Cc: Brijesh Singh
> Cc: kvm@vger.kernel.org
> Cc: stable@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  arch/x86/kvm/svm/sev.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 75e0b21ad07c..9a2ebd0328ca 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -598,22 +598,29 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
>  static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  {
>  	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> -	struct sev_data_launch_update_vmsa vmsa;
> +	struct sev_data_launch_update_vmsa vmsa = {0};
>  	struct kvm_vcpu *vcpu;
>  	int i, ret;
>
>  	if (!sev_es_guest(kvm))
>  		return -ENOTTY;
>
> -	vmsa.reserved = 0;
> -

Zeroing all of 'vmsa' is an unrelated change and belongs in a separate patch.
I would even go so far as to say it's unnecessary; every field of the struct
is explicitly written before it's consumed.

>  	kvm_for_each_vcpu(i, vcpu, kvm) {
>  		struct vcpu_svm *svm = to_svm(vcpu);
>
> +		ret = mutex_lock_killable(&vcpu->mutex);
> +		if (ret)
> +			goto out_unlock;

Rather than multiple unlock labels, move the guts of the loop to a wrapper.
As discussed off list, this really should be a vCPU-scoped ioctl, but that
ship has sadly sailed :-( We can at least imitate that by making the
VM-scoped ioctl nothing but a wrapper.

> +
> +		/* Skip to the next vCPU if this one has already be updated. */

s/be/been

Uber nit, there may not be a next vCPU.  It'd be slightly more accurate to
say something like "Do nothing if this vCPU has already been updated".

> +		ret = sev_es_sync_vmsa(svm);
> +		if (svm->vcpu.arch.guest_state_protected)
> +			goto unlock;

This belongs in a separate patch, too.  It also introduces a bug (arguably
two) in that it adds a duplicate call to sev_es_sync_vmsa().  The second bug
is that if sev_es_sync_vmsa() fails _and_ the vCPU is already protected, this
will cause that failure to be squashed.

In the end, I think the least gross implementation will look something like
this, implemented over two patches (one for the lock, one for the protected
check).
static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
				    int *error)
{
	struct sev_data_launch_update_vmsa vmsa;
	struct vcpu_svm *svm = to_svm(vcpu);
	int ret;

	/*
	 * Do nothing if this vCPU has already been updated.  This is allowed
	 * to let userspace retry LAUNCH_UPDATE_VMSA if the command fails on a
	 * later vCPU.
	 */
	if (svm->vcpu.arch.guest_state_protected)
		return 0;

	/* Perform some pre-encryption checks against the VMSA */
	ret = sev_es_sync_vmsa(svm);
	if (ret)
		return ret;

	/*
	 * The LAUNCH_UPDATE_VMSA command will perform in-place encryption of
	 * the VMSA memory content (i.e it will write the same memory region
	 * with the guest's key), so invalidate it first.
	 */
	clflush_cache_range(svm->vmsa, PAGE_SIZE);

	vmsa.reserved = 0;
	vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
	vmsa.address = __sme_pa(svm->vmsa);
	vmsa.len = PAGE_SIZE;
	return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
}

static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
	struct kvm_vcpu *vcpu;
	int i, ret;

	if (!sev_es_guest(kvm))
		return -ENOTTY;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		ret = mutex_lock_killable(&vcpu->mutex);
		if (ret)
			return ret;

		ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);

		mutex_unlock(&vcpu->mutex);
		if (ret)
			return ret;
	}

	return 0;
}
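For readers following along, the retry flow the commit message relies on --
userspace re-issuing LAUNCH_UPDATE_VMSA until every vCPU's VMSA has been
encrypted -- would look roughly like the sketch below.  This is a minimal,
hypothetical illustration, not part of the patch: the vm_fd/sev_fd setup and
the retry policy are assumptions; only the KVM_MEMORY_ENCRYPT_OP ioctl,
struct kvm_sev_cmd, and the KVM_SEV_LAUNCH_UPDATE_VMSA command id come from
the real KVM UAPI.

/*
 * Hypothetical userspace sketch: retry the VM-scoped LAUNCH_UPDATE_VMSA
 * command until it succeeds for all vCPUs.  Retrying is only safe once the
 * kernel skips vCPUs whose state is already protected; vm_fd (the VM) and
 * sev_fd (/dev/sev) are assumed to be opened and initialized elsewhere.
 */
#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int sev_es_update_all_vmsas(int vm_fd, int sev_fd)
{
	struct kvm_sev_cmd cmd;
	int ret, tries;

	for (tries = 0; tries < 8; tries++) {
		memset(&cmd, 0, sizeof(cmd));
		cmd.id = KVM_SEV_LAUNCH_UPDATE_VMSA;
		cmd.sev_fd = sev_fd;

		/* VM-scoped ioctl; the kernel walks every vCPU itself. */
		ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
		if (!ret)
			return 0;	/* every VMSA measured and encrypted */

		/*
		 * On failure, cmd.error holds the PSP status code.  Because
		 * the kernel does nothing for already-protected vCPUs, a
		 * retry picks up with the vCPUs that were not yet updated
		 * instead of corrupting the ones that were.
		 */
	}
	return ret;
}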