From: Emanuele Giuseppe Esposito
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Maxim Levitsky, Sean Christopherson, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Emanuele Giuseppe Esposito
Subject: [PATCH v4 2/7] nSVM: introduce svm->nested.save to cache save area fields
Date: Wed, 3 Nov 2021 07:52:25 -0400
Message-Id: <20211103115230.720154-3-eesposit@redhat.com>
In-Reply-To: <20211103115230.720154-1-eesposit@redhat.com>
References: <20211103115230.720154-1-eesposit@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is useful in the next patch, to avoid having temporary copies of
vmcb12 registers and passing them around manually.

Right now, instead of blindly copying everything, we just copy EFER,
CR0, CR3, CR4, DR6 and DR7. If more fields need to be added, it will be
obvious that they must be added both to struct vmcb_save_area_cached
and to nested_copy_vmcb_save_to_cache().

_nested_copy_vmcb_save_to_cache() takes a vmcb_save_area_cached
parameter, which is useful when we want to save the state to a local
variable instead of to the svm internals.

Note that in svm_set_nested_state() we want to cache the L2 save state
only if we are in normal, non-guest mode, because otherwise it is not
touched.
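To make the TOC/TOU concern concrete: the vmcb12 save area lives in guest memory, so a racing vCPU could rewrite a field between the host's validity check and its later use. Snapshotting into a host-private cache first, then checking and consuming only the snapshot, closes that window. Below is a minimal userspace sketch of the idea; it is not kernel code, and the struct, field subset, and validity check are simplified stand-ins:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the guest-writable save area and the host-private cache
 * (the latter mirrors the role of struct vmcb_save_area_cached). */
struct save_area {
	uint64_t efer, cr0, cr3, cr4, dr6, dr7;
};

/* Snapshot only the fields we later validate and consume
 * (the nested_copy_vmcb_save_to_cache() step). */
static void copy_save_to_cache(struct save_area *to,
			       const struct save_area *from)
{
	to->efer = from->efer;
	to->cr0  = from->cr0;
	to->cr3  = from->cr3;
	to->cr4  = from->cr4;
	to->dr6  = from->dr6;
	to->dr7  = from->dr7;
}

/* Toy validity check operating on the cached copy only;
 * a simplified stand-in for nested_vmcb_valid_sregs(). */
static int cache_is_valid(const struct save_area *c)
{
	return (c->cr0 & 1) != 0;	/* e.g. require CR0.PE set */
}
```

Because every later consumer reads the cache rather than the guest-writable original, a concurrent guest write after the snapshot cannot invalidate the already-performed checks.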
Signed-off-by: Emanuele Giuseppe Esposito
---
 arch/x86/kvm/svm/nested.c | 27 ++++++++++++++++++++++++++-
 arch/x86/kvm/svm/svm.c    |  1 +
 arch/x86/kvm/svm/svm.h    | 16 ++++++++++++++++
 3 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 9470933c77cd..b974b0edd9b5 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -313,6 +313,28 @@ void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 	svm->nested.ctl.iopm_base_pa &= ~0x0fffULL;
 }
 
+static void _nested_copy_vmcb_save_to_cache(struct vmcb_save_area_cached *to,
+					    struct vmcb_save_area *from)
+{
+	/*
+	 * Copy only fields that are validated, as we need them
+	 * to avoid TOC/TOU races.
+	 */
+	to->efer = from->efer;
+	to->cr0 = from->cr0;
+	to->cr3 = from->cr3;
+	to->cr4 = from->cr4;
+
+	to->dr6 = from->dr6;
+	to->dr7 = from->dr7;
+}
+
+void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
+				    struct vmcb_save_area *save)
+{
+	_nested_copy_vmcb_save_to_cache(&svm->nested.save, save);
+}
+
 /*
  * Synchronize fields that are written by the processor, so that
  * they can be copied back into the vmcb12.
@@ -649,6 +671,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 		return -EINVAL;
 
 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
 
 	if (!nested_vmcb_valid_sregs(vcpu, &vmcb12->save) ||
 	    !nested_vmcb_check_controls(vcpu, &svm->nested.ctl)) {
@@ -1370,8 +1393,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 
 	if (is_guest_mode(vcpu))
 		svm_leave_nested(svm);
-	else
+	else {
 		svm->nested.vmcb02.ptr->save = svm->vmcb01.ptr->save;
+		nested_copy_vmcb_save_to_cache(svm, &svm->nested.vmcb02.ptr->save);
+	}
 
 	svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 989685098b3e..6565a3efabd1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4378,6 +4378,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	vmcb12 = map.hva;
 
 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
 	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
 
 unmap_save:
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5d30db599e10..09621f4891f8 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -103,6 +103,19 @@ struct kvm_vmcb_info {
 	uint64_t asid_generation;
 };
 
+/*
+ * This struct is not kept up-to-date, and it is only valid within
+ * svm_set_nested_state and nested_svm_vmrun.
+ */
+struct vmcb_save_area_cached {
+	u64 efer;
+	u64 cr4;
+	u64 cr3;
+	u64 cr0;
+	u64 dr7;
+	u64 dr6;
+};
+
 struct svm_nested_state {
 	struct kvm_vmcb_info vmcb02;
 	u64 hsave_msr;
@@ -119,6 +132,7 @@ struct svm_nested_state {
 
 	/* cache for control fields of the guest */
 	struct vmcb_control_area ctl;
+	struct vmcb_save_area_cached save;
 
 	bool initialized;
 };
@@ -485,6 +499,8 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 int nested_svm_exit_special(struct vcpu_svm *svm);
 void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 				     struct vmcb_control_area *control);
+void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
+				    struct vmcb_save_area *save);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
 void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
-- 
2.27.0