From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, Sean Christopherson
Subject: [PATCH v12 073/106] KVM: VMX: Modify NMI and INTR handlers to take intr_info as function argument
Date: Mon, 27 Feb 2023 00:23:12 -0800
Message-Id:
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Sean Christopherson

TDX uses a different ABI to report information about VM exits.  Pass
intr_info to the NMI and INTR handlers instead of pulling it from
vcpu_vmx, in preparation for sharing the bulk of the handlers with TDX.
When a guest TD exits to the VMM, RAX holds the status and exit reason,
RCX holds the exit qualification, and so on, rather than that information
being reported in VMCS fields, because the VMM doesn't have access to the
VMCS.
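As a purely illustrative sketch (not part of this patch), the intent is
that a future TDX exit path can feed the same helpers the intr_info it
recovers from the saved guest registers.  The tdx_handle_exit_irqoff(),
tdexit_exit_reason() and tdexit_intr_info() names below are assumptions
for that future code, not functions introduced here:

/*
 * Hypothetical TDX-side dispatch, for illustration only.  Assumes the
 * exit information the TDX module returns in guest GPRs at TD exit
 * (RAX = status/exit reason, RCX = exit qualification, ...) has been
 * saved and is exposed via tdexit_exit_reason()/tdexit_intr_info().
 */
static void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
{
	union vmx_exit_reason exit_reason = tdexit_exit_reason(vcpu);

	if (exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
		/* intr_info comes from saved guest registers, not the VMCS */
		handle_external_interrupt_irqoff(vcpu, tdexit_intr_info(vcpu));
	else if (exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
		handle_exception_nmi_irqoff(vcpu, tdexit_intr_info(vcpu));
}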
The eventual code will be:

VMX:
  - get exit_reason, intr_info, exit_qualification, etc. from the VMCS
  - call the NMI/INTR handlers (common code)

TDX:
  - get exit_reason, intr_info, exit_qualification, etc. from guest registers
  - call the NMI/INTR handlers (common code)

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/vmx.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f68ee2f5586b..5081679eba4a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6861,28 +6861,27 @@ static void handle_nm_fault_irqoff(struct kvm_vcpu *vcpu)
 		rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
 }
 
-static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
+static void handle_exception_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
 {
 	const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
-	u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
 
 	/* if exit due to PF check for async PF */
 	if (is_page_fault(intr_info))
-		vmx->vcpu.arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
+		vcpu->arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
 	/* if exit due to NM, handle before interrupts are enabled */
 	else if (is_nm_fault(intr_info))
-		handle_nm_fault_irqoff(&vmx->vcpu);
+		handle_nm_fault_irqoff(vcpu);
 	/* Handle machine checks before interrupts are enabled */
 	else if (is_machine_check(intr_info))
 		kvm_machine_check();
 	/* We need to handle NMIs before interrupts are enabled */
 	else if (is_nmi(intr_info))
-		handle_interrupt_nmi_irqoff(&vmx->vcpu, nmi_entry);
+		handle_interrupt_nmi_irqoff(vcpu, nmi_entry);
 }
 
-static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
+static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
+					     u32 intr_info)
 {
-	u32 intr_info = vmx_get_intr_info(vcpu);
 	unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
 	gate_desc *desc = (gate_desc *)host_idt_base + vector;
 
@@ -6902,9 +6901,9 @@ void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 		return;
 
 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
-		handle_external_interrupt_irqoff(vcpu);
+		handle_external_interrupt_irqoff(vcpu, vmx_get_intr_info(vcpu));
 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
-		handle_exception_nmi_irqoff(vmx);
+		handle_exception_nmi_irqoff(vcpu, vmx_get_intr_info(vcpu));
 }
 
 /*
-- 
2.25.1