From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v19 079/130] KVM: TDX: vcpu_run: save/restore host state (host kernel gs)
Date: Mon, 26 Feb 2024 00:26:21 -0800
Message-Id: <4a766983346b2c01e943348af3c5ca6691e272f9.1708933498.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata <isaku.yamahata@intel.com>

On entering/exiting a TDX vCPU, the CPU state that is preserved or
clobbered differs from the VMX case. Add TDX hooks to save/restore
host/guest CPU state. Save/restore the kernel GS base MSR.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/main.c    | 30 +++++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 42 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |  4 ++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 4 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d72651ce99ac..8275a242ce07 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -158,6 +158,32 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vmx_vcpu_reset(vcpu, init_event);
 }
 
+static void vt_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * All host state is saved/restored across SEAMCALL/SEAMRET, and the
+	 * guest state of a TD is obviously off limits. Deferring MSRs and DRs
+	 * is pointless because the TDX module needs to load *something* so as
+	 * not to expose guest state.
+	 */
+	if (is_td_vcpu(vcpu)) {
+		tdx_prepare_switch_to_guest(vcpu);
+		return;
+	}
+
+	vmx_prepare_switch_to_guest(vcpu);
+}
+
+static void vt_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_vcpu_put(vcpu);
+		return;
+	}
+
+	vmx_vcpu_put(vcpu);
+}
+
 static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -326,9 +352,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_free = vt_vcpu_free,
 	.vcpu_reset = vt_vcpu_reset,
 
-	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
+	.prepare_switch_to_guest = vt_prepare_switch_to_guest,
 	.vcpu_load = vmx_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
+	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
 	.get_msr_feature = vmx_get_msr_feature,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index fdf9196cb592..9616b1aab6ce 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include 
+#include 
 #include 
@@ -423,6 +424,7 @@ u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
 
 	WARN_ON_ONCE(vcpu->arch.cpuid_entries);
 	WARN_ON_ONCE(vcpu->arch.cpuid_nent);
@@ -446,9 +448,47 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	if ((kvm_tdx->xfam & XFEATURE_MASK_XTILE) == XFEATURE_MASK_XTILE)
 		vcpu->arch.xfd_no_write_intercept = true;
 
+	tdx->host_state_need_save = true;
+	tdx->host_state_need_restore = false;
+
 	return 0;
 }
 
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (!tdx->host_state_need_save)
+		return;
+
+	if (likely(is_64bit_mm(current->mm)))
+		tdx->msr_host_kernel_gs_base = current->thread.gsbase;
+	else
+		tdx->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
+
+	tdx->host_state_need_save = false;
+}
+
+static void tdx_prepare_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	tdx->host_state_need_save = true;
+	if (!tdx->host_state_need_restore)
+		return;
+
+	++vcpu->stat.host_state_reload;
+
+	wrmsrl(MSR_KERNEL_GS_BASE, tdx->msr_host_kernel_gs_base);
+	tdx->host_state_need_restore = false;
+}
+
+void tdx_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	vmx_vcpu_pi_put(vcpu);
+	tdx_prepare_switch_to_host(vcpu);
+}
+
 void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -569,6 +609,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(tdx);
 
+	tdx->host_state_need_restore = true;
+
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
 	trace_kvm_exit(vcpu, KVM_ISA_VMX);
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 81d301fbe638..e96c416e73bf 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -69,6 +69,10 @@ struct vcpu_tdx {
 
 	bool initialized;
 
+	bool host_state_need_save;
+	bool host_state_need_restore;
+	u64 msr_host_kernel_gs_base;
+
 	/*
 	 * Dummy to make pmu_intel not corrupt memory.
 	 * TODO: Support PMU for TDX. Future work.
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 3e29a6fe28ef..9fd997c79c33 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -151,6 +151,8 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -186,6 +188,8 @@ static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
+static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; }
 
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1