From: Aaron Lewis
Date: Wed, 15 May 2019 09:44:43 -0700
Subject: Re: [PATCH v2] kvm: nVMX: Set nested_run_pending in vmx_set_nested_state after checks complete
To: Sean Christopherson
Cc: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Peter Shier
References: <1557317799-39866-1-git-send-email-pbonzini@redhat.com> <20190508142023.GA13834@linux.intel.com> <20190508181339.GD19656@linux.intel.com>

On Wed, May 8, 2019 at 2:13 PM Aaron Lewis wrote:
>
> From: Sean Christopherson
> Date: Wed, May 8, 2019 at 11:13 AM
> To: Aaron Lewis
> Cc: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Peter Shier
>
>
> > On Wed, May 08, 2019 at 10:53:12AM -0700, Aaron Lewis wrote:
> > > nested_run_pending is also checked in
> > > nested_vmx_check_vmentry_postreqs
> > > (https://elixir.bootlin.com/linux/v5.1/source/arch/x86/kvm/vmx/nested.c#L2709),
> > > so I think the setting needs to be moved to just prior to that call,
> > > with Paolo's rollback, along with another rollback for when the prereqs
> > > and postreqs checks fail. I put a patch together below:
> >
> > Gah, I missed that usage (also, it's now nested_vmx_check_guest_state()).
> >
> > Side topic, I think the VM_ENTRY_LOAD_BNDCFGS check should be gated by
> > nested_run_pending, a la the EFER check.
> >
> > > ------------------------------------
> > >
> > > nested_run_pending=1 implies we have successfully entered guest mode.
> > > Move setting from external state in vmx_set_nested_state() until after
> > > all other checks are complete.
> > >
> > > Signed-off-by: Aaron Lewis
> > > Reviewed-by: Peter Shier
> > > ---
> > >  arch/x86/kvm/vmx/nested.c | 14 +++++++++-----
> > >  1 file changed, 9 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > > index 6401eb7ef19c..cf1f810223d2 100644
> > > --- a/arch/x86/kvm/vmx/nested.c
> > > +++ b/arch/x86/kvm/vmx/nested.c
> > > @@ -5460,9 +5460,6 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
> > >  	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
> > >  		return 0;
> > >
> > > -	vmx->nested.nested_run_pending =
> > > -		!!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING);
> >
> > Alternatively, it might be better to leave nested_run_pending where it
> > is and instead add a label to handle clearing the flag on error.  IIUC,
> > the real issue is that nested_run_pending is left set after a failed
> > vmx_set_nested_state(), not that it shouldn't be set in the shadow
> > VMCS handling.
> >
> > Patch attached, though it's completely untested.  The KVM selftests are
> > broken for me right now, grrr.
> > > > > - > > > if (nested_cpu_has_shadow_vmcs(vmcs12) && > > > vmcs12->vmcs_link_pointer != -1ull) { > > > struct vmcs12 *shadow_vmcs12 = get_shadow_vmcs12(vcpu); > > > @@ -5480,14 +5477,21 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu, > > > return -EINVAL; > > > } > > > > > > + vmx->nested.nested_run_pending = > > > + !!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING); > > > + > > > if (nested_vmx_check_vmentry_prereqs(vcpu, vmcs12) || > > > - nested_vmx_check_vmentry_postreqs(vcpu, vmcs12, &exit_qual)) > > > + nested_vmx_check_vmentry_postreqs(vcpu, vmcs12, &exit_qual)) { > > > + vmx->nested.nested_run_pending = 0; > > > return -EINVAL; > > > + } > > > > > > vmx->nested.dirty_vmcs12 = true; > > > ret = nested_vmx_enter_non_root_mode(vcpu, false); > > > - if (ret) > > > + if (ret) { > > > + vmx->nested.nested_run_pending = 0; > > > return -EINVAL; > > > + } > > > > > > return 0; > > > } > > Here is an update based on your patch. I ran these changes against > the test vmx_set_nested_state_test, and it run successfully. > > That's correct that we are only concerned about restoring the state of > nested_run_pending, so it's fine to set it where we do as long as we > back the state change out before returning if we get an error. > > --------------------------------------------- > > nested_run_pending=1 implies we have successfully entered guest mode. > Move setting from external state in vmx_set_nested_state() until after > all other checks are complete. 
>
> Signed-off-by: Aaron Lewis
> Tested-by: Aaron Lewis
> Reviewed-by: Peter Shier
> ---
>  arch/x86/kvm/vmx/nested.c | 20 ++++++++++++++------
>  1 file changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 6401eb7ef19c..fe5814df5149 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -5468,28 +5468,36 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
>  		struct vmcs12 *shadow_vmcs12 = get_shadow_vmcs12(vcpu);
>
>  		if (kvm_state->size < sizeof(kvm_state) + 2 * sizeof(*vmcs12))
> -			return -EINVAL;
> +			goto error_guest_mode_einval;
>
>  		if (copy_from_user(shadow_vmcs12,
>  				   user_kvm_nested_state->data + VMCS12_SIZE,
> -				   sizeof(*vmcs12)))
> -			return -EFAULT;
> +				   sizeof(*vmcs12))) {
> +			ret = -EFAULT;
> +			goto error_guest_mode;
> +		}
>
>  		if (shadow_vmcs12->hdr.revision_id != VMCS12_REVISION ||
>  		    !shadow_vmcs12->hdr.shadow_vmcs)
> -			return -EINVAL;
> +			goto error_guest_mode_einval;
>  	}
>
>  	if (nested_vmx_check_vmentry_prereqs(vcpu, vmcs12) ||
>  	    nested_vmx_check_vmentry_postreqs(vcpu, vmcs12, &exit_qual))
> -		return -EINVAL;
> +		goto error_guest_mode_einval;
>
>  	vmx->nested.dirty_vmcs12 = true;
>  	ret = nested_vmx_enter_non_root_mode(vcpu, false);
>  	if (ret)
> -		return -EINVAL;
> +		goto error_guest_mode_einval;
>
>  	return 0;
> +
> +error_guest_mode_einval:
> +	ret = -EINVAL;
> +error_guest_mode:
> +	vmx->nested.nested_run_pending = 0;
> +	return ret;
>  }
>
>  void nested_vmx_vcpu_setup(void)

Sean, I updated this patch based on the one you sent out.  Does it look good now?
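[Editor's note] For readers skimming the thread, the final patch settles on the kernel's common goto-error-label idiom: mutate state up front, then funnel every failure path through labels that undo the mutation before returning. Below is a minimal standalone sketch of that pattern. It is NOT the real KVM code — fake_vmx, set_nested_state, and the copy_ok/checks_ok toggles are hypothetical stand-ins for copy_from_user() and the vmentry prereq/postreq checks.

```c
#include <assert.h>

#define EINVAL 22
#define EFAULT 14

struct fake_vmx {
	int nested_run_pending;
};

/* Hypothetical stand-ins for copy_from_user() and the vmentry checks. */
static int copy_ok = 1;
static int checks_ok = 1;

static int set_nested_state(struct fake_vmx *vmx, int run_pending_flag)
{
	int ret;

	/* Set the flag before validation, as the patch does... */
	vmx->nested_run_pending = !!run_pending_flag;

	if (!copy_ok) {
		ret = -EFAULT;
		goto error_guest_mode;
	}

	if (!checks_ok)
		goto error_guest_mode_einval;

	return 0;

error_guest_mode_einval:
	ret = -EINVAL;
	/* fall through */
error_guest_mode:
	/*
	 * ...and roll it back on every error path, so a failed call
	 * leaves no stale nested_run_pending behind.
	 */
	vmx->nested_run_pending = 0;
	return ret;
}
```

Compared with clearing the flag at each early return (as in the first patch above), the two stacked labels give a single rollback site, so a future error path added to the function cannot forget the cleanup.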