From: "David P. Reed" <dpreed@deepplum.com>
To: Sean Christopherson
Cc: "David P. Reed", Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, X86 ML,
Peter Anvin" , Allison Randal , Enrico Weigelt , Greg Kroah-Hartman , Kate Stewart , "Peter Zijlstra (Intel)" , Randy Dunlap , Martin Molnar , Andy Lutomirski , Alexandre Chartre , Jann Horn , Dave Hansen , LKML Subject: [PATCH v3 2/3] Fix undefined operation fault that can hang a cpu on crash or panic Date: Sat, 4 Jul 2020 16:38:08 -0400 Message-Id: <20200704203809.76391-3-dpreed@deepplum.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200704203809.76391-1-dpreed@deepplum.com> References: <20200629214956.GA12962@linux.intel.com> <20200704203809.76391-1-dpreed@deepplum.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Classification-ID: 64370b40-6b65-46c7-a817-521193c95a46-3-1 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Fix: Mask undefined operation fault during emergency VMXOFF that must be attempted to force cpu exit from VMX root operation. Explanation: When a cpu may be in VMX root operation (only possible when CR4.VMXE is set), crash or panic reboot tries to exit VMX root operation using VMXOFF. This is necessary, because any INIT will be masked while cpu is in VMX root operation, but that state cannot be reliably discerned by the state of the cpu. VMXOFF faults if the cpu is not actually in VMX root operation, signalling undefined operation. Discovered while debugging an out-of-tree x-visor with a race. Can happen due to certain kinds of bugs in KVM. Fixes: 208067 Reported-by: David P. Reed Suggested-by: Thomas Gleixner Suggested-by: Sean Christopherson Suggested-by: Andy Lutomirski Signed-off-by: David P. Reed --- arch/x86/include/asm/virtext.h | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h index 0ede8d04535a..0e0900eacb9c 100644 --- a/arch/x86/include/asm/virtext.h +++ b/arch/x86/include/asm/virtext.h @@ -30,11 +30,11 @@ static inline int cpu_has_vmx(void) } -/* Disable VMX on the current CPU +/* Exit VMX root mode and isable VMX on the current CPU. * * vmxoff causes a undefined-opcode exception if vmxon was not run - * on the CPU previously. Only call this function if you know VMX - * is enabled. + * on the CPU previously. Only call this function if you know cpu + * is in VMX root mode. */ static inline void cpu_vmxoff(void) { @@ -47,14 +47,22 @@ static inline int cpu_vmx_enabled(void) return __read_cr4() & X86_CR4_VMXE; } -/* Disable VMX if it is enabled on the current CPU +/* Safely exit VMX root mode and disable VMX if VMX enabled + * on the current CPU. Handle undefined-opcode fault + * that can occur if cpu is not in VMX root mode, due + * to a race. * * You shouldn't call this if cpu_has_vmx() returns 0. */ static inline void __cpu_emergency_vmxoff(void) { - if (cpu_vmx_enabled()) - cpu_vmxoff(); + if (!cpu_vmx_enabled()) + return; + asm volatile ("1:vmxoff\n\t" + "2:\n\t" + _ASM_EXTABLE(1b, 2b) + ::: "cc", "memory"); + cr4_clear_bits(X86_CR4_VMXE); } /* Disable VMX if it is supported and enabled on the current CPU -- 2.26.2