From: Sean Christopherson
Date: Fri, 21 Jul 2023 13:18:50 -0700
Subject: [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into KVM VMX
Message-ID: <20230721201859.2307736-11-seanjc@google.com>
In-Reply-To: <20230721201859.2307736-1-seanjc@google.com>
References: <20230721201859.2307736-1-seanjc@google.com>
Reply-To: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Andrew Cooper, Kai Huang, Chao Gao
Now that VMX is disabled in emergencies via the virt callbacks, move the
VMXOFF helpers into KVM, the only remaining user.

No functional change intended.

Reviewed-by: Kai Huang
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/virtext.h | 42 ----------------------------------
 arch/x86/kvm/vmx/vmx.c         | 29 ++++++++++++++++++++---
 2 files changed, 26 insertions(+), 45 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index b1171a5ad452..a27801f2bc71 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -19,48 +19,6 @@
 #include
 #include

-/*
- * VMX functions:
- */
-/**
- * cpu_vmxoff() - Disable VMX on the current CPU
- *
- * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
- *
- * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
- * atomically track post-VMXON state, e.g. this may be called in NMI context.
- * Eat all faults as all other faults on VMXOFF faults are mode related, i.e.
- * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
- * magically in RM, VM86, compat mode, or at CPL>0.
- */
-static inline int cpu_vmxoff(void)
-{
-	asm_volatile_goto("1: vmxoff\n\t"
-			  _ASM_EXTABLE(1b, %l[fault])
-			  ::: "cc", "memory" : fault);
-
-	cr4_clear_bits(X86_CR4_VMXE);
-	return 0;
-
-fault:
-	cr4_clear_bits(X86_CR4_VMXE);
-	return -EIO;
-}
-
-static inline int cpu_vmx_enabled(void)
-{
-	return __read_cr4() & X86_CR4_VMXE;
-}
-
-/** Disable VMX if it is enabled on the current CPU
- */
-static inline void __cpu_emergency_vmxoff(void)
-{
-	if (cpu_vmx_enabled())
-		cpu_vmxoff();
-}
-
-
 /*
  * SVM functions:
  */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 71571cd9adbb..6f4fcd82fa6e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -47,7 +47,6 @@
 #include
 #include
 #include
-#include
 #include

 #include "capabilities.h"
@@ -744,6 +743,29 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
 	return ret;
 }

+/*
+ * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
+ *
+ * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
+ * atomically track post-VMXON state, e.g. this may be called in NMI context.
+ * Eat all faults as all other faults on VMXOFF faults are mode related, i.e.
+ * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
+ * magically in RM, VM86, compat mode, or at CPL>0.
+ */
+static int kvm_cpu_vmxoff(void)
+{
+	asm_volatile_goto("1: vmxoff\n\t"
+			  _ASM_EXTABLE(1b, %l[fault])
+			  ::: "cc", "memory" : fault);
+
+	cr4_clear_bits(X86_CR4_VMXE);
+	return 0;
+
+fault:
+	cr4_clear_bits(X86_CR4_VMXE);
+	return -EIO;
+}
+
 static void vmx_emergency_disable(void)
 {
 	int cpu = raw_smp_processor_id();
@@ -753,7 +775,8 @@ static void vmx_emergency_disable(void)
 			    loaded_vmcss_on_cpu_link)
 		vmcs_clear(v->vmcs);

-	__cpu_emergency_vmxoff();
+	if (__read_cr4() & X86_CR4_VMXE)
+		kvm_cpu_vmxoff();
 }

 static void __loaded_vmcs_clear(void *arg)
@@ -2818,7 +2841,7 @@ static void vmx_hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();

-	if (cpu_vmxoff())
+	if (kvm_cpu_vmxoff())
 		kvm_spurious_fault();

 	hv_reset_evmcs();
-- 
2.41.0.487.g6d72f3e995-goog