Message-Id: <20200402124205.242674296@linutronix.de>
User-Agent: quilt/0.65
Date: Thu, 02 Apr 2020 14:32:59 +0200
From: Thomas Gleixner
To: LKML
Crudup" , "Peter Zijlstra (Intel)" , Paolo Bonzini , Jessica Yu , Fenghua Yu , Xiaoyao Li , Nadav Amit , Thomas Hellstrom , Sean Christopherson , Tony Luck , Steven Rostedt Subject: [patch 1/2] x86,module: Detect VMX modules and disable Split-Lock-Detect References: <20200402123258.895628824@linutronix.de> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Peter Zijlstra It turns out that with Split-Lock-Detect enabled (default) any VMX hypervisor needs at least a little modification in order to not blindly inject the #AC into the guest without the guest being ready for it. Since there is no telling which module implements a hypervisor, scan the module text and look for the VMLAUNCH instruction. If found, the module is assumed to be a hypervisor of some sort and SLD is disabled. Hypervisors, which have been modified and are known to work correctly, can add: MODULE_INFO(sld_safe, "Y"); to explicitly tell the module loader they're good. NOTE: it is unfortunate that struct load_info is not available to the arch module code, this means CONFIG_CPU_SUP_INTEL gunk is needed in generic code. NOTE: while we can 'trivially' fix KVM, we're still stuck with stuff like VMware and VirtualBox doing their own thing. Reported-by: "Kenneth R. Crudup" Signed-off-by: Peter Zijlstra (Intel) Cc: Paolo Bonzini Cc: Jessica Yu Cc: Fenghua Yu Cc: Xiaoyao Li Cc: Nadav Amit Cc: Thomas Hellstrom Cc: Sean Christopherson Cc: Tony Luck Cc: Steven Rostedt --- arch/x86/include/asm/cpu.h | 2 ++ arch/x86/kernel/cpu/intel.c | 38 +++++++++++++++++++++++++++++++++++++- arch/x86/kernel/module.c | 6 ++++++ include/linux/module.h | 4 ++++ kernel/module.c | 5 +++++ 5 files changed, 54 insertions(+), 1 deletion(-) --- a/arch/x86/include/asm/cpu.h +++ b/arch/x86/include/asm/cpu.h @@ -44,6 +44,7 @@ unsigned int x86_stepping(unsigned int s extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c); extern void switch_to_sld(unsigned long tifn); extern bool handle_user_split_lock(struct pt_regs *regs, long error_code); +extern void split_lock_validate_module_text(struct module *me, void *text, void *text_end); #else static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {} static inline void switch_to_sld(unsigned long tifn) {} @@ -51,5 +52,6 @@ static inline bool handle_user_split_loc { return false; } +static inline void split_lock_validate_module_text(struct module *me, void *text, void *text_end) {} #endif #endif /* _ASM_X86_CPU_H */ --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include @@ -21,6 +22,7 @@ #include #include #include +#include #ifdef CONFIG_X86_64 #include @@ -1055,12 +1057,46 @@ static void sld_update_msr(bool on) { u64 test_ctrl_val = msr_test_ctrl_cache; - if (on) + if (on && (sld_state != sld_off)) test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT; wrmsrl(MSR_TEST_CTRL, test_ctrl_val); } +static void sld_remote_kill(void *arg) +{ + sld_update_msr(false); +} + +void split_lock_validate_module_text(struct module *me, void *text, void *text_end) +{ + u8 vmlaunch[] = { 0x0f, 0x01, 0xc2 }; + struct insn insn; + + if (sld_state == sld_off) + return; + + while (text < text_end) { + kernel_insn_init(&insn, text, 
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -44,6 +44,7 @@ unsigned int x86_stepping(unsigned int s
 extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
 extern void switch_to_sld(unsigned long tifn);
 extern bool handle_user_split_lock(struct pt_regs *regs, long error_code);
+extern void split_lock_validate_module_text(struct module *me, void *text, void *text_end);
 #else
 static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
 static inline void switch_to_sld(unsigned long tifn) {}
@@ -51,5 +52,6 @@ static inline bool handle_user_split_loc
 {
         return false;
 }
+static inline void split_lock_validate_module_text(struct module *me, void *text, void *text_end) {}
 #endif
 #endif /* _ASM_X86_CPU_H */
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -21,6 +22,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_X86_64
 #include
@@ -1055,12 +1057,46 @@ static void sld_update_msr(bool on)
 {
         u64 test_ctrl_val = msr_test_ctrl_cache;

-        if (on)
+        if (on && (sld_state != sld_off))
                 test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

         wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
 }

+static void sld_remote_kill(void *arg)
+{
+        sld_update_msr(false);
+}
+
+void split_lock_validate_module_text(struct module *me, void *text, void *text_end)
+{
+        u8 vmlaunch[] = { 0x0f, 0x01, 0xc2 };
+        struct insn insn;
+
+        if (sld_state == sld_off)
+                return;
+
+        while (text < text_end) {
+                kernel_insn_init(&insn, text, text_end - text);
+                insn_get_length(&insn);
+
+                if (WARN_ON_ONCE(!insn_complete(&insn)))
+                        break;
+
+                if (insn.length == sizeof(vmlaunch) && !memcmp(text, vmlaunch, sizeof(vmlaunch)))
+                        goto bad_module;
+
+                text += insn.length;
+        }
+
+        return;
+
+bad_module:
+        pr_warn("disabled due to VMLAUNCH in module: %s\n", me->name);
+        sld_state = sld_off;
+        on_each_cpu(sld_remote_kill, NULL, 1);
+}
+
 static void split_lock_init(void)
 {
         split_lock_verify_msr(sld_state != sld_off);
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include

 #if 0
 #define DEBUGP(fmt, ...)                                \
@@ -253,6 +254,11 @@ int module_finalize(const Elf_Ehdr *hdr,
                             tseg, tseg + text->sh_size);
         }

+        if (text && !me->sld_safe) {
+                void *tseg = (void *)text->sh_addr;
+                split_lock_validate_module_text(me, tseg, tseg + text->sh_size);
+        }
+
         if (para) {
                 void *pseg = (void *)para->sh_addr;
                 apply_paravirt(pseg, pseg + para->sh_size);
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -407,6 +407,10 @@ struct module {
         bool sig_ok;
 #endif

+#ifdef CONFIG_CPU_SUP_INTEL
+        bool sld_safe;
+#endif
+
         bool async_probe_requested;

         /* symbols that will be GPL-only in the near future. */
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3096,6 +3096,11 @@ static int check_modinfo(struct module *
                         "is unknown, you have been warned.\n", mod->name);
         }

+#ifdef CONFIG_CPU_SUP_INTEL
+        if (get_modinfo(info, "sld_safe"))
+                mod->sld_safe = true;
+#endif
+
         err = check_modinfo_livepatch(mod, info);
         if (err)
                 return err;
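
For background, and not part of the patch: the #AC that SLD raises (and
that an unmodified hypervisor would blindly forward to its guest) comes
from a LOCK-prefixed operation whose operand crosses a cache-line
boundary. A rough user-space sketch of such a split lock, assuming
64-byte cache lines; depending on the split_lock_detect= mode the kernel
roughly either warns about the task or kills it:

  #include <stdint.h>

  /* Align the buffer to a cache line; offset 62 makes the 4-byte
   * operand straddle the 64-byte boundary. */
  static char buf[128] __attribute__((aligned(64)));

  int main(void)
  {
          volatile uint32_t *p = (volatile uint32_t *)(buf + 62);

          /* LOCK-prefixed read-modify-write on a line-crossing operand;
           * with SLD enabled the CPU raises #AC here. */
          __sync_fetch_and_add(p, 1);
          return 0;
  }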