From: Peter Gonda
Date: Tue, 14 Sep 2021 16:39:38 -0600
Subject: Re: [PATCH] KVM: SEV: Acquire vcpu mutex when updating VMSA
To: Sean Christopherson
Cc: kvm list, Marc Orr, Paolo Bonzini, Brijesh Singh, stable@vger.kernel.org, linux-kernel@vger.kernel.org

On Tue, Sep 14, 2021 at 3:34 PM Sean Christopherson wrote:
>
> On Tue, Sep 14, 2021, Peter Gonda wrote:
> > Adds mutex guard to the VMSA updating code.
> > Also adds a check to skip a vCPU if it has already been
> > LAUNCH_UPDATE_VMSA'd, which should allow userspace to retry this ioctl
> > until all the vCPUs can be successfully LAUNCH_UPDATE_VMSA'd. Because
> > this operation cannot be undone, we cannot unwind if one vCPU fails.
> >
> > Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
> >
> > Signed-off-by: Peter Gonda
> > Cc: Marc Orr
> > Cc: Paolo Bonzini
> > Cc: Sean Christopherson
> > Cc: Brijesh Singh
> > Cc: kvm@vger.kernel.org
> > Cc: stable@vger.kernel.org
> > Cc: linux-kernel@vger.kernel.org
> > ---
> >  arch/x86/kvm/svm/sev.c | 24 +++++++++++++++++++-----
> >  1 file changed, 19 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 75e0b21ad07c..9a2ebd0328ca 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -598,22 +598,29 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
> >  static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> >  {
> >         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > -       struct sev_data_launch_update_vmsa vmsa;
> > +       struct sev_data_launch_update_vmsa vmsa = {0};
> >         struct kvm_vcpu *vcpu;
> >         int i, ret;
> >
> >         if (!sev_es_guest(kvm))
> >                 return -ENOTTY;
> >
> > -       vmsa.reserved = 0;
> > -
>
> Zeroing all of 'vmsa' is an unrelated change and belongs in a separate patch. I
> would even go so far as to say it's unnecessary, every field of the struct is
> explicitly written before it's consumed.

I'll remove this.

> >         kvm_for_each_vcpu(i, vcpu, kvm) {
> >                 struct vcpu_svm *svm = to_svm(vcpu);
> >
> > +               ret = mutex_lock_killable(&vcpu->mutex);
> > +               if (ret)
> > +                       goto out_unlock;
>
> Rather than multiple unlock labels, move the guts of the loop to a wrapper.
> As discussed off list, this really should be a vCPU-scoped ioctl, but that ship
> has sadly sailed :-(  We can at least imitate that by making the VM-scoped ioctl
> nothing but a wrapper.
>
> > +
> > +               /* Skip to the next vCPU if this one has already be updated. */
>
> s/be/been
>
> Uber nit, there may not be a next vCPU. It'd be slightly more accurate to
> say something like "Do nothing if this vCPU has already been updated".
>
> > +               ret = sev_es_sync_vmsa(svm);
> > +               if (svm->vcpu.arch.guest_state_protected)
> > +                       goto unlock;
>
> This belongs in a separate patch, too. It also introduces a bug (arguably two)
> in that it adds a duplicate call to sev_es_sync_vmsa(). The second bug is that
> if sev_es_sync_vmsa() fails _and_ the vCPU is already protected, this will cause
> that failure to be squashed.

I'll move the skipping logic to a separate patch.

> In the end, I think the least gross implementation will look something like this,
> implemented over two patches (one for the lock, one for the protected check).
>
> static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
>                                     int *error)
> {
>         struct sev_data_launch_update_vmsa vmsa;
>         struct vcpu_svm *svm = to_svm(vcpu);
>         int ret;
>
>         /*
>          * Do nothing if this vCPU has already been updated. This is allowed
>          * to let userspace retry LAUNCH_UPDATE_VMSA if the command fails on a
>          * later vCPU.
>          */
>         if (svm->vcpu.arch.guest_state_protected)
>                 return 0;
>
>         /* Perform some pre-encryption checks against the VMSA */
>         ret = sev_es_sync_vmsa(svm);
>         if (ret)
>                 return ret;
>
>         /*
>          * The LAUNCH_UPDATE_VMSA command will perform in-place
>          * encryption of the VMSA memory content (i.e it will write
>          * the same memory region with the guest's key), so invalidate
>          * it first.
>          */
>         clflush_cache_range(svm->vmsa, PAGE_SIZE);
>
>         vmsa.reserved = 0;
>         vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
>         vmsa.address = __sme_pa(svm->vmsa);
>         vmsa.len = PAGE_SIZE;
>         return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
> }
>
> static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> {
>         struct kvm_vcpu *vcpu;
>         int i, ret;
>
>         if (!sev_es_guest(kvm))
>                 return -ENOTTY;
>
>         kvm_for_each_vcpu(i, vcpu, kvm) {
>                 ret = mutex_lock_killable(&vcpu->mutex);
>                 if (ret)
>                         return ret;
>
>                 ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
>
>                 mutex_unlock(&vcpu->mutex);
>                 if (ret)
>                         return ret;
>         }
>
>         return 0;
> }

That looks reasonable to me. I didn't know whether changes headed for LTS
should be kept smaller, so I avoided doing this refactor. From
https://www.kernel.org/doc/html/v4.11/process/stable-kernel-rules.html#stable-kernel-rules
it seems fewer than 100 lines is the ideal. I guess this could also be
considered a "theoretical race condition" anyway, so maybe it isn't for
LTS at all. Thoughts?
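FWIW, the userspace side of the retry that the skip behavior enables could
look something like the sketch below. It is untested, the helper name and
the vm_fd/sev_fd/max_tries plumbing are made up for illustration, and it
assumes the "skip already-updated vCPUs" change lands; it just reissues the
VM-scoped KVM_MEMORY_ENCRYPT_OP ioctl until the command succeeds for every
vCPU.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hypothetical VMM helper: issue LAUNCH_UPDATE_VMSA for the whole VM and
 * retry on failure.  vm_fd is the KVM VM fd, sev_fd is an open /dev/sev fd.
 */
static int sev_launch_update_vmsa_retry(int vm_fd, int sev_fd, int max_tries)
{
        struct kvm_sev_cmd cmd;
        int i, ret = -1;

        for (i = 0; i < max_tries; i++) {
                memset(&cmd, 0, sizeof(cmd));
                cmd.id = KVM_SEV_LAUNCH_UPDATE_VMSA;
                cmd.sev_fd = sev_fd;

                ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
                if (!ret)
                        return 0;

                /* cmd.error carries the SEV firmware error code, if any. */
                fprintf(stderr,
                        "LAUNCH_UPDATE_VMSA attempt %d failed: errno=%d fw_error=%u\n",
                        i + 1, errno, cmd.error);
        }

        return ret;
}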