From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Maxim Levitsky, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
	linux-kernel@vger.kernel.org, Emanuele Giuseppe Esposito
Subject: [PATCH v3 2/8] nSVM: introduce svm->nested.save to cache save area fields
Date: Mon, 11 Oct 2021 10:36:56 -0400
Message-Id: <20211011143702.1786568-3-eesposit@redhat.com>
In-Reply-To: <20211011143702.1786568-1-eesposit@redhat.com>
References: <20211011143702.1786568-1-eesposit@redhat.com>

This is useful in the next patch, to avoid having temporary copies of the
vmcb12 registers and passing them around manually. Right now, instead of
blindly copying everything, we just copy EFER, CR0, CR3, CR4, DR6 and DR7.
If more fields need to be added later, it will be more obvious that they
must be added both in struct vmcb_save_area_cached and in
nested_copy_vmcb_save_to_cache().
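As an aside, the TOC/TOU (time-of-check/time-of-use) race that this caching avoids can be sketched in a minimal, self-contained C example. The names below (struct save_area, copy_save_to_cache(), cached_sregs_valid(), enter_with_cached()) and the toy validity rule are hypothetical stand-ins, not KVM code: the point is only that both validation and use read a private snapshot, so a guest rewriting its vmcb12 between check and use cannot bypass the checks.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for the guest-writable vmcb12 save area. */
struct save_area {
	uint64_t efer, cr0, cr3, cr4, dr6, dr7;
};

/* Host-private cache; the guest has no way to modify it. */
struct nested_state {
	struct save_area save;	/* snapshot taken once, before validation */
};

/* Analogous in spirit to nested_copy_vmcb_save_to_cache(): copy only the
 * fields that will be validated, in one pass, before any checks run. */
static void copy_save_to_cache(struct nested_state *n,
			       const struct save_area *guest)
{
	n->save.efer = guest->efer;
	n->save.cr0  = guest->cr0;
	n->save.cr3  = guest->cr3;
	n->save.cr4  = guest->cr4;
	n->save.dr6  = guest->dr6;
	n->save.dr7  = guest->dr7;
}

/* Toy consistency check, run on the *cached* copy (hypothetical rule:
 * EFER bit 10 set requires CR0 bit 31 set). */
static int cached_sregs_valid(const struct nested_state *n)
{
	if ((n->save.efer & (1ULL << 10)) && !(n->save.cr0 & (1ULL << 31)))
		return 0;
	return 1;
}

/* Safe consumer: reads only the snapshot, so a concurrent guest write to
 * the shared area between check and use has no effect. */
static uint64_t enter_with_cached(const struct nested_state *n)
{
	return n->save.cr3;
}
```

If the check and the later use both dereferenced the guest-shared area directly, the guest could make the fields pass validation and then flip them before use; with the snapshot, flipping the shared copy after copy_save_to_cache() is simply ignored.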
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 18 ++++++++++++++++++
 arch/x86/kvm/svm/svm.c    |  1 +
 arch/x86/kvm/svm/svm.h    | 16 ++++++++++++++++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d2fe65e2a7a4..c4959da8aec0 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -313,6 +313,22 @@ void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 	svm->nested.ctl.iopm_base_pa  &= ~0x0fffULL;
 }
 
+void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
+				    struct vmcb_save_area *save)
+{
+	/*
+	 * Copy only fields that are validated, as we need them
+	 * to avoid TOC/TOU races.
+	 */
+	svm->nested.save.efer = save->efer;
+	svm->nested.save.cr0 = save->cr0;
+	svm->nested.save.cr3 = save->cr3;
+	svm->nested.save.cr4 = save->cr4;
+
+	svm->nested.save.dr6 = save->dr6;
+	svm->nested.save.dr7 = save->dr7;
+}
+
 /*
  * Synchronize fields that are written by the processor, so that
  * they can be copied back into the vmcb12.
@@ -647,6 +663,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 		return -EINVAL;
 
 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
 
 	if (!nested_vmcb_valid_sregs(vcpu, &vmcb12->save) ||
 	    !nested_vmcb_check_controls(vcpu, &svm->nested.ctl)) {
@@ -1385,6 +1402,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 
 	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, save);
 	nested_load_control_from_vmcb12(svm, ctl);
+	nested_copy_vmcb_save_to_cache(svm, save);
 
 	svm_switch_vmcb(svm, &svm->nested.vmcb02);
 	nested_vmcb02_prepare_control(svm);

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 69639f9624f5..bf171f5f6158 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4386,6 +4386,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	vmcb12 = map.hva;
 
 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
 
 	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
 	kvm_vcpu_unmap(vcpu, &map, true);

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index bd0fe94c2920..f0195bc263e9 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -103,6 +103,19 @@ struct kvm_vmcb_info {
 	uint64_t asid_generation;
 };
 
+/*
+ * This struct is not kept up-to-date, and it is only valid within
+ * svm_set_nested_state and nested_svm_vmrun.
+ */
+struct vmcb_save_area_cached {
+	u64 efer;
+	u64 cr4;
+	u64 cr3;
+	u64 cr0;
+	u64 dr7;
+	u64 dr6;
+};
+
 struct svm_nested_state {
 	struct kvm_vmcb_info vmcb02;
 	u64 hsave_msr;
@@ -119,6 +132,7 @@ struct svm_nested_state {
 
 	/* cache for control fields of the guest */
 	struct vmcb_control_area ctl;
+	struct vmcb_save_area_cached save;
 
 	bool initialized;
 };
@@ -484,6 +498,8 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 int nested_svm_exit_special(struct vcpu_svm *svm);
 void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 				     struct vmcb_control_area *control);
+void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
+				    struct vmcb_save_area *save);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
 void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
-- 
2.27.0