From: Kuppuswamy Sathyanarayanan
To: Peter Zijlstra, Andy Lutomirski, Dave Hansen
Cc: Andi Kleen, Kirill Shutemov, Kuppuswamy Sathyanarayanan, Dan Williams, Raj Ashok,
	Sean Christopherson, linux-kernel@vger.kernel.org, Kuppuswamy Sathyanarayanan
Subject: [RFC v1 08/26] x86/tdx: Add MSR support for TDX guest
Date: Fri, 5 Feb 2021 15:38:25 -0800
Message-Id:
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Kirill A. Shutemov"

Operations on context-switched MSRs can be run natively. The rest of the
MSRs should be handled through TDVMCALLs. TDVMCALL[Instruction.RDMSR] and
TDVMCALL[Instruction.WRMSR] provide the MSR operations. Details on RDMSR
and WRMSR can be found in the Guest-Host-Communication Interface (GHCI)
for Intel Trust Domain Extensions (Intel TDX) specification, sec 3.10 and
3.11.

Also, since the CSTAR MSR is not used for the SYSCALL instruction on
Intel CPUs, ignore accesses to it. Ignoring the accesses keeps callers
compatible: there is no need to wrap them in !is_tdx_guest().

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Signed-off-by: Kuppuswamy Sathyanarayanan
---
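Note (illustration only, not part of the diff below): the #VE handler
returns an MSR value to the guest in the EDX:EAX register pair, and the
write path rebuilds the 64-bit value from the two 32-bit halves before
passing it to the VMM. A minimal user-space sketch of that split and
merge, using a made-up value, looks like this:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t val = 0x1122334455667788ULL;	/* pretend MSR value */

	/* MSR read #VE: hand the value back in EDX:EAX */
	uint32_t ax = val & UINT32_MAX;		/* regs->ax = val & UINT_MAX */
	uint32_t dx = val >> 32;		/* regs->dx = val >> 32      */

	/* MSR write #VE: rebuild the 64-bit value passed to the VMM in R13 */
	uint64_t r13 = (uint64_t)dx << 32 | ax;	/* (u64)high << 32 | low */

	printf("ax=%#x dx=%#x merged=%#llx\n", ax, dx,
	       (unsigned long long)r13);
	return r13 == val ? 0 : 1;
}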
 arch/x86/kernel/tdx.c | 94 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 93 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index bbefe639a2ed..5d961263601e 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -94,6 +94,84 @@ static __cpuidle void tdx_safe_halt(void)
 	BUG_ON(ret || r10);
 }
 
+static bool tdx_is_context_switched_msr(unsigned int msr)
+{
+	/* XXX: Update the list of context-switched MSRs */
+
+	switch (msr) {
+	case MSR_EFER:
+	case MSR_IA32_CR_PAT:
+	case MSR_FS_BASE:
+	case MSR_GS_BASE:
+	case MSR_KERNEL_GS_BASE:
+	case MSR_IA32_SYSENTER_CS:
+	case MSR_IA32_SYSENTER_EIP:
+	case MSR_IA32_SYSENTER_ESP:
+	case MSR_STAR:
+	case MSR_LSTAR:
+	case MSR_SYSCALL_MASK:
+	case MSR_IA32_XSS:
+	case MSR_TSC_AUX:
+	case MSR_IA32_BNDCFGS:
+		return true;
+	}
+	return false;
+}
+
+static u64 tdx_read_msr_safe(unsigned int msr, int *err)
+{
+	register long r10 asm("r10") = TDVMCALL_STANDARD;
+	register long r11 asm("r11") = EXIT_REASON_MSR_READ;
+	register long r12 asm("r12") = msr;
+	register long rcx asm("rcx");
+	long ret;
+
+	WARN_ON_ONCE(tdx_is_context_switched_msr(msr));
+
+	if (msr == MSR_CSTAR)
+		return 0;
+
+	/* Allow to pass R10, R11 and R12 down to the VMM */
+	rcx = BIT(10) | BIT(11) | BIT(12);
+
+	asm volatile(TDCALL
+		     : "=a"(ret), "=r"(r10), "=r"(r11), "=r"(r12)
+		     : "a"(TDVMCALL), "r"(rcx), "r"(r10), "r"(r11), "r"(r12)
+		     : );
+
+	/* XXX: Better error handling needed? */
+	*err = (ret || r10) ? -EIO : 0;
+
+	return r11;
+}
+
+static int tdx_write_msr_safe(unsigned int msr, unsigned int low,
+			      unsigned int high)
+{
+	register long r10 asm("r10") = TDVMCALL_STANDARD;
+	register long r11 asm("r11") = EXIT_REASON_MSR_WRITE;
+	register long r12 asm("r12") = msr;
+	register long r13 asm("r13") = (u64)high << 32 | low;
+	register long rcx asm("rcx");
+	long ret;
+
+	WARN_ON_ONCE(tdx_is_context_switched_msr(msr));
+
+	if (msr == MSR_CSTAR)
+		return 0;
+
+	/* Allow to pass R10, R11, R12 and R13 down to the VMM */
+	rcx = BIT(10) | BIT(11) | BIT(12) | BIT(13);
+
+	asm volatile(TDCALL
+		     : "=a"(ret), "=r"(r10), "=r"(r11), "=r"(r12), "=r"(r13)
+		     : "a"(TDVMCALL), "r"(rcx), "r"(r10), "r"(r11), "r"(r12),
+		       "r"(r13)
+		     : );
+
+	return ret || r10 ? -EIO : 0;
+}
+
 void __init tdx_early_init(void)
 {
 	if (!cpuid_has_tdx_guest())
@@ -132,17 +210,31 @@ unsigned long tdx_get_ve_info(struct ve_info *ve)
 int tdx_handle_virtualization_exception(struct pt_regs *regs,
 		struct ve_info *ve)
 {
+	unsigned long val;
+	int ret = 0;
+
 	switch (ve->exit_reason) {
 	case EXIT_REASON_HLT:
 		tdx_halt();
 		break;
+	case EXIT_REASON_MSR_READ:
+		val = tdx_read_msr_safe(regs->cx, (unsigned int *)&ret);
+		if (!ret) {
+			regs->ax = val & UINT_MAX;
+			regs->dx = val >> 32;
+		}
+		break;
+	case EXIT_REASON_MSR_WRITE:
+		ret = tdx_write_msr_safe(regs->cx, regs->ax, regs->dx);
+		break;
 	default:
 		pr_warn("Unexpected #VE: %d\n", ve->exit_reason);
 		return -EFAULT;
 	}
 
 	/* After successful #VE handling, move the IP */
-	regs->ip += ve->instr_len;
+	if (!ret)
+		regs->ip += ve->instr_len;
 
 	return ret;
 }
-- 
2.25.1