From: Vitaly Kuznetsov
To: Paolo Bonzini
Cc: Aaron Lewis, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH] KVM: nVMX: reorganize initial steps of vmx_set_nested_state
In-Reply-To: <1560959396-13969-1-git-send-email-pbonzini@redhat.com>
References: <1560959396-13969-1-git-send-email-pbonzini@redhat.com>
Date: Thu, 20 Jun 2019 14:18:39 +0200
Message-ID: <87zhmcfo0w.fsf@vitty.brq.redhat.com>

Paolo Bonzini writes:

> Commit 332d079735f5 ("KVM: nVMX: KVM_SET_NESTED_STATE - Tear down old EVMCS
> state before setting new state", 2019-05-02) broke evmcs_test because the
> eVMCS setup must be performed even if there is no VMXON region defined,
> as long as the eVMCS bit is set in the assist page.
>
> While the simplest possible fix would be to add a check on
> kvm_state->flags & KVM_STATE_NESTED_EVMCS in the initial "if" that
> covers kvm_state->hdr.vmx.vmxon_pa == -1ull, that is quite ugly.
>
> Instead, this patch moves checks earlier in the function and
> conditionalizes them on kvm_state->hdr.vmx.vmxon_pa, so that
> vmx_set_nested_state always goes through vmx_leave_nested
> and nested_enable_evmcs.
>
> Fixes: 332d079735f5

checkpatch.pl will likely complain here, asking for the full description, e.g.

Fixes: 332d079735f5 ("KVM: nVMX: KVM_SET_NESTED_STATE - Tear down old EVMCS state before setting new state")

There's also something wrong with the patch: it fails to apply, because of
(not only?) whitespace issues, or maybe I'm just applying it to the wrong
tree...

> Cc: Aaron Lewis
> Cc: Vitaly Kuznetsov
> Signed-off-by: Paolo Bonzini

Enlightened VMCS migration is just a 'theoretical' feature at the moment: we
don't know whether it actually works, but it's good that we have a selftest
for it, so we know when it definitely doesn't :-)

Reviewed-by: Vitaly Kuznetsov

> ---
>  arch/x86/kvm/vmx/nested.c                   | 26 ++++++++++--------
>  .../kvm/x86_64/vmx_set_nested_state_test.c  | 32 ++++++++++++++--------
>  2 files changed, 35 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index fb6d1f7b43f3..5f9c1a200201 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -5343,9 +5343,6 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
>  	if (kvm_state->format != KVM_STATE_NESTED_FORMAT_VMX)
>  		return -EINVAL;
>
> -	if (!nested_vmx_allowed(vcpu))
> -		return kvm_state->hdr.vmx.vmxon_pa == -1ull ? 0 : -EINVAL;
> -
>  	if (kvm_state->hdr.vmx.vmxon_pa == -1ull) {
>  		if (kvm_state->hdr.vmx.smm.flags)
>  			return -EINVAL;
> @@ -5353,12 +5350,15 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
>  		if (kvm_state->hdr.vmx.vmcs12_pa != -1ull)
>  			return -EINVAL;
>
> -		vmx_leave_nested(vcpu);
> -		return 0;
> -	}
> +		if (kvm_state->flags & ~KVM_STATE_NESTED_EVMCS)
> +			return -EINVAL;
> +	} else {
> +		if (!nested_vmx_allowed(vcpu))
> +			return -EINVAL;
>
> -	if (!page_address_valid(vcpu, kvm_state->hdr.vmx.vmxon_pa))
> -		return -EINVAL;
> +		if (!page_address_valid(vcpu, kvm_state->hdr.vmx.vmxon_pa))
> +			return -EINVAL;
> +	}
>
>  	if ((kvm_state->hdr.vmx.smm.flags & KVM_STATE_NESTED_SMM_GUEST_MODE) &&
>  	    (kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
> @@ -5381,11 +5381,15 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
>  		return -EINVAL;
>
>  	vmx_leave_nested(vcpu);
> -	if (kvm_state->hdr.vmx.vmxon_pa == -1ull)
> -		return 0;
> +	if (kvm_state->flags & KVM_STATE_NESTED_EVMCS) {
> +		if (!nested_vmx_allowed(vcpu))
> +			return -EINVAL;
>
> -	if (kvm_state->flags & KVM_STATE_NESTED_EVMCS)
>  		nested_enable_evmcs(vcpu, NULL);
> +	}
> +
> +	if (kvm_state->hdr.vmx.vmxon_pa == -1ull)
> +		return 0;
>
>  	vmx->nested.vmxon_ptr = kvm_state->hdr.vmx.vmxon_pa;
>  	ret = enter_vmx_operation(vcpu);
> diff --git a/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c b/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
> index 0648fe6df5a8..e64ca20b315a 100644
> --- a/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
> @@ -123,36 +123,44 @@ void test_vmx_nested_state(struct kvm_vm *vm)
>  	/*
>  	 * We cannot virtualize anything if the guest does not have VMX
>  	 * enabled. We expect KVM_SET_NESTED_STATE to return 0 if vmxon_pa
> -	 * is set to -1ull.
> +	 * is set to -1ull, but the flags must be zero.
>  	 */
>  	set_default_vmx_state(state, state_sz);
>  	state->hdr.vmx.vmxon_pa = -1ull;
> +	test_nested_state_expect_einval(vm, state);
> +
> +	state->hdr.vmx.vmcs12_pa = -1ull;
> +	state->flags = KVM_STATE_NESTED_EVMCS;
> +	test_nested_state_expect_einval(vm, state);
> +
> +	state->flags = 0;
>  	test_nested_state(vm, state);
>
>  	/* Enable VMX in the guest CPUID. */
>  	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
>
> -	/* It is invalid to have vmxon_pa == -1ull and SMM flags non-zero. */
> +	/*
> +	 * Setting vmxon_pa == -1ull and vmcs_pa == -1ull exits early without
> +	 * setting the nested state but flags other than eVMCS must be clear.
> +	 */
>  	set_default_vmx_state(state, state_sz);
>  	state->hdr.vmx.vmxon_pa = -1ull;
> +	state->hdr.vmx.vmcs12_pa = -1ull;
> +	test_nested_state_expect_einval(vm, state);
> +
> +	state->flags = KVM_STATE_NESTED_EVMCS;
> +	test_nested_state(vm, state);
> +
> +	/* It is invalid to have vmxon_pa == -1ull and SMM flags non-zero. */
>  	state->hdr.vmx.smm.flags = 1;
>  	test_nested_state_expect_einval(vm, state);
>
>  	/* It is invalid to have vmxon_pa == -1ull and vmcs_pa != -1ull. */
>  	set_default_vmx_state(state, state_sz);
>  	state->hdr.vmx.vmxon_pa = -1ull;
> -	state->hdr.vmx.vmcs12_pa = 0;
> +	state->flags = 0;
>  	test_nested_state_expect_einval(vm, state);
>
> -	/*
> -	 * Setting vmxon_pa == -1ull and vmcs_pa == -1ull exits early without
> -	 * setting the nested state.
> -	 */
> -	set_default_vmx_state(state, state_sz);
> -	state->hdr.vmx.vmxon_pa = -1ull;
> -	state->hdr.vmx.vmcs12_pa = -1ull;
> -	test_nested_state(vm, state);
> -
>  	/* It is invalid to have vmxon_pa set to a non-page aligned address. */
>  	set_default_vmx_state(state, state_sz);
>  	state->hdr.vmx.vmxon_pa = 1;

-- 
Vitaly