From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sasha Levin <sashal@kernel.org>
Subject: [PATCH 4.19 115/190] KVM: x86: Always use 32-bit SMRAM save state for 32-bit kernels
Date: Fri, 13 Sep 2019 14:06:10 +0100
Message-Id: <20190913130608.987637164@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20190913130559.669563815@linuxfoundation.org>
References: <20190913130559.669563815@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

[ Upstream commit b68f3cc7d978943fcf85148165b00594c38db776 ]

Invoking the 64-bit SMRAM save state code on a 32-bit kernel will crash
the guest, trigger a WARN, and/or lead to a buffer overrun in the host,
e.g. rsm_load_state_64() writes r8-r15 unconditionally, but enum kvm_reg
and thus x86_emulate_ctxt._regs only define r8-r15 for CONFIG_X86_64.

KVM allows userspace to report long mode support via CPUID, even though
the guest is all but guaranteed to crash if it actually tries to enable
long mode.  But a pure 32-bit guest that is ignorant of long mode will
happily plod along.

SMM complicates things, as 64-bit CPUs use a different SMRAM save state
area.  KVM handles this correctly for 64-bit kernels, e.g. it uses the
legacy save state map if userspace has hidden long mode from the guest,
but doesn't fare well when userspace reports long mode support on a
32-bit host kernel (32-bit KVM doesn't support 64-bit guests).

Since the alternative is to crash the guest, e.g. by not loading state
or by explicitly requesting shutdown, unconditionally use the legacy
SMRAM save state map for 32-bit KVM.  If a guest has managed to get far
enough to handle SMIs while running under a weird/buggy userspace
hypervisor, don't deliberately crash it; from KVM's perspective there
is no downside to letting it continue running.
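To make the overrun concrete, here is a minimal user-space model of the
failure (a sketch only; the MODEL_* names below are illustrative
stand-ins for the real enum kvm_reg and x86_emulate_ctxt, which they
greatly simplify): with the 64-bit registers compiled out, the register
array has eight slots, yet the 64-bit RSM path writes sixteen.

#include <stdio.h>

/* Stand-ins for enum kvm_reg and x86_emulate_ctxt._regs; when the
 * 64-bit half is compiled out, the array has only eight slots. */
enum kvm_reg_model {
	MODEL_RAX, MODEL_RCX, MODEL_RDX, MODEL_RBX,
	MODEL_RSP, MODEL_RBP, MODEL_RSI, MODEL_RDI,
#ifdef MODEL_X86_64		/* stand-in for CONFIG_X86_64 */
	MODEL_R8, MODEL_R9, MODEL_R10, MODEL_R11,
	MODEL_R12, MODEL_R13, MODEL_R14, MODEL_R15,
#endif
	MODEL_NR_REGS		/* 8 without MODEL_X86_64, 16 with it */
};

struct model_emulate_ctxt {
	unsigned long _regs[MODEL_NR_REGS];
};

int main(void)
{
	struct model_emulate_ctxt ctxt = { { 0 } };
	int i;

	/* rsm_load_state_64() effectively does this for r8-r15 too;
	 * without MODEL_X86_64, i >= 8 writes past the end of _regs,
	 * which is exactly the overrun the patch prevents. */
	for (i = 0; i < 16; i++)
		ctxt._regs[i] = i;

	printf("wrote 16 registers into a %zu-slot array\n",
	       sizeof(ctxt._regs) / sizeof(ctxt._regs[0]));
	return 0;
}

Built without -DMODEL_X86_64, the loop is an out-of-bounds write: the
user-space analogue of what the guarded emulator_has_longmode() below
now rules out at compile time.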
Fixes: 660a5d517aaab ("KVM: x86: save/load state on SMM switch")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/kvm/emulate.c | 10 ++++++++++
 arch/x86/kvm/x86.c     | 10 ++++++----
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 4a688ef9e4481..429728b35bca1 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2331,12 +2331,16 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt)
 
 static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
 {
+#ifdef CONFIG_X86_64
 	u32 eax, ebx, ecx, edx;
 
 	eax = 0x80000001;
 	ecx = 0;
 	ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, false);
 	return edx & bit(X86_FEATURE_LM);
+#else
+	return false;
+#endif
 }
 
 #define GET_SMSTATE(type, smbase, offset)	\
@@ -2381,6 +2385,7 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	return X86EMUL_CONTINUE;
 }
 
+#ifdef CONFIG_X86_64
 static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 {
 	struct desc_struct desc;
@@ -2399,6 +2404,7 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
 	return X86EMUL_CONTINUE;
 }
+#endif
 
 static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 				    u64 cr0, u64 cr3, u64 cr4)
@@ -2499,6 +2505,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
 }
 
+#ifdef CONFIG_X86_64
 static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 {
 	struct desc_struct desc;
@@ -2560,6 +2567,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 
 	return X86EMUL_CONTINUE;
 }
+#endif
 
 static int em_rsm(struct x86_emulate_ctxt *ctxt)
 {
@@ -2616,9 +2624,11 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 	if (ctxt->ops->pre_leave_smm(ctxt, smbase))
 		return X86EMUL_UNHANDLEABLE;
 
+#ifdef CONFIG_X86_64
 	if (emulator_has_longmode(ctxt))
 		ret = rsm_load_state_64(ctxt, smbase + 0x8000);
 	else
+#endif
 		ret = rsm_load_state_32(ctxt, smbase + 0x8000);
 
 	if (ret != X86EMUL_CONTINUE) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a846ed13ba53c..cbc39751f36bc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7227,9 +7227,9 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
 }
 
+#ifdef CONFIG_X86_64
 static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 {
-#ifdef CONFIG_X86_64
 	struct desc_ptr dt;
 	struct kvm_segment seg;
 	unsigned long val;
@@ -7279,10 +7279,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 
 	for (i = 0; i < 6; i++)
 		enter_smm_save_seg_64(vcpu, buf, i);
-#else
-	WARN_ON_ONCE(1);
-#endif
 }
+#endif
 
 static void enter_smm(struct kvm_vcpu *vcpu)
 {
@@ -7293,9 +7291,11 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	trace_kvm_enter_smm(vcpu->vcpu_id, vcpu->arch.smbase, true);
 
 	memset(buf, 0, 512);
+#ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
 		enter_smm_save_state_64(vcpu, buf);
 	else
+#endif
 		enter_smm_save_state_32(vcpu, buf);
 
 	/*
@@ -7353,8 +7353,10 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_set_segment(vcpu, &ds, VCPU_SREG_GS);
 	kvm_set_segment(vcpu, &ds, VCPU_SREG_SS);
 
+#ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
 		kvm_x86_ops->set_efer(vcpu, 0);
+#endif
 
 	kvm_update_cpuid(vcpu);
 	kvm_mmu_reset_context(vcpu);
--
2.20.1
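A note on the idiom the em_rsm() and enter_smm() hunks lean on: wrapping
only the if/else lines in #ifdef CONFIG_X86_64 works because, when the
guard is compiled out, the preprocessor deletes the conditional and
leaves the final call as the sole, unconditional statement. A standalone
sketch of the same pattern (illustrative names, not the KVM sources;
MODEL_X86_64 stands in for CONFIG_X86_64):

#include <stdio.h>

static void load_state_32(void) { puts("legacy SMRAM save state"); }

#ifdef MODEL_X86_64
static int has_longmode(void) { return 1; }
static void load_state_64(void) { puts("64-bit SMRAM save state"); }
#endif

int main(void)
{
#ifdef MODEL_X86_64
	if (has_longmode())
		load_state_64();
	else
#endif
		load_state_32();	/* sole statement on "32-bit" builds */
	return 0;
}

Compiled without -DMODEL_X86_64, the whole conditional disappears and
load_state_32() runs unconditionally; compiled with it, both paths
exist, just as 64-bit KVM keeps both save state maps.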