Subject: Re: [PATCH 5/6] KVM: nSVM: Restore nested control upon leaving SMM
From: Maxim Levitsky
To: Vitaly Kuznetsov, kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Cathy Avery,
    Emanuele Giuseppe Esposito, Tom Lendacky, Michael Roth,
    linux-kernel@vger.kernel.org
Date: Wed, 07 Jul 2021 13:35:33 +0300
In-Reply-To: <20210628104425.391276-6-vkuznets@redhat.com>
References: <20210628104425.391276-1-vkuznets@redhat.com>
            <20210628104425.391276-6-vkuznets@redhat.com>

On Mon, 2021-06-28 at 12:44 +0200, Vitaly Kuznetsov wrote:
> In case nested state was saved/restored while in SMM,
> nested_load_control_from_vmcb12(), which sets svm->nested.ctl, was
> never called, and the first nested_vmcb_check_controls() (either from
> nested_svm_vmrun() or from svm_set_nested_state() if the save/restore
> cycle is repeated) is doomed to fail.

I don't like the commit description.
I propose something like this:

    If the VM was migrated while in SMM, no nested state was
    saved/restored, and therefore svm_leave_smm() has to load both the
    save and the control area of the vmcb12. The save area is already
    loaded from the HSAVE area, so now load the control area as well,
    from the vmcb12.

However, if you prefer, feel free to leave the commit message as is. My
point is that while in SMM, SVM is fully disabled, so not only is
svm->nested.ctl not set, but no nested state is loaded/stored at all.

This also makes svm_leave_smm() even more fragile with respect to
errors, as I said on the previous patch: its return value is ignored,
and we are loading here the guest's vmcb01, which can change under our
feet, so lots of fun can happen (the enter_svm_guest_mode() result
isn't really checked; see the sketch after the quoted patch below).

Best regards,
	Maxim Levitsky

> 
> Signed-off-by: Vitaly Kuznetsov
> ---
>  arch/x86/kvm/svm/nested.c | 4 ++--
>  arch/x86/kvm/svm/svm.c    | 7 ++++++-
>  arch/x86/kvm/svm/svm.h    | 2 ++
>  3 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index a1dec2c40181..6549e40155fa 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -304,8 +304,8 @@ static bool nested_vmcb_valid_sregs(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>  
> -static void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
> -					    struct vmcb_control_area *control)
> +void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
> +				     struct vmcb_control_area *control)
>  {
>  	copy_vmcb_control_area(&svm->nested.ctl, control);
>  
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index fbf1b352a9bb..525b07873927 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4344,6 +4344,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  		u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
>  		u64 guest = GET_SMSTATE(u64, smstate, 0x7ed8);
>  		u64 vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
> +		struct vmcb *vmcb12;
>  
>  		if (guest) {
>  			if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
> @@ -4359,7 +4360,11 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  			if (svm_allocate_nested(svm))
>  				return 1;
>  
> -			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, map.hva);
> +			vmcb12 = map.hva;
> +
> +			nested_load_control_from_vmcb12(svm, &vmcb12->control);
> +
> +			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
>  			kvm_vcpu_unmap(vcpu, &map, true);
>  
>  			/*
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index ff2dac2b23b6..13f2d465ca36 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -481,6 +481,8 @@ int nested_svm_check_permissions(struct kvm_vcpu *vcpu);
>  int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
>  			       bool has_error_code, u32 error_code);
>  int nested_svm_exit_special(struct vcpu_svm *svm);
> +void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
> +				     struct vmcb_control_area *control);
>  void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
>  void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
>  void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
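To illustrate the error-handling concern above: a minimal, purely
hypothetical sketch (not part of this patch) of what reacting to an
enter_svm_guest_mode() failure inside svm_leave_smm() could look like.
It only uses names already visible in the hunk above, plus the standard
kvm_make_request() helper:

	vmcb12 = map.hva;

	nested_load_control_from_vmcb12(svm, &vmcb12->control);

	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
	kvm_vcpu_unmap(vcpu, &map, true);

	/*
	 * Hypothetical hardening: if entering guest mode failed, the
	 * vCPU state is only partially loaded and cannot be trusted,
	 * so force a shutdown instead of letting the guest run on.
	 */
	if (ret)
		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);

Whether a shutdown or propagating an error up to the emulator is the
right reaction is a design choice; the point is only that the failure
should not be silently ignored.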