From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Marc Zyngier, Russell King, Tony Lindgren, "David A. Long"
Subject: [PATCH 4.14 095/109] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
Date: Tue, 16 Oct 2018 19:06:03 +0200
Message-Id: <20181016170530.142174682@linuxfoundation.org>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181016170524.530541524@linuxfoundation.org>
References: <20181016170524.530541524@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Marc Zyngier

Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.

In order to avoid aliasing attacks against the branch predictor, let's
invalidate the BTB on guest exit. This is made complicated by the fact
that we cannot take a branch before invalidating the BTB.

We only apply this to A12 and A17, which are the only two ARM cores on
which this is useful.

Signed-off-by: Marc Zyngier
Signed-off-by: Russell King
Boot-tested-by: Tony Lindgren
Reviewed-by: Tony Lindgren
Signed-off-by: David A. Long
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm/include/asm/kvm_asm.h |    2 -
 arch/arm/include/asm/kvm_mmu.h |   17 +++++++++
 arch/arm/kvm/hyp/hyp-entry.S   |   71 +++++++++++++++++++++++++++++++++++++++--
 3 files changed, 85 insertions(+), 5 deletions(-)

--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -246,7 +246,22 @@ static inline int kvm_read_guest_lock(st
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset          7 */
+	W(add)	sp, sp, #1	/* Undef          6 */
+	W(add)	sp, sp, #1	/* Syscall        5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort     3 */
+	W(add)	sp, sp, #1	/* HVC            2 */
+	W(add)	sp, sp, #1	/* IRQ            1 */
+	W(nop)			/* FIQ            0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector	label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -149,7 +209,14 @@ hyp_hvc:
 	bx	ip
 
 1:
-	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -159,7 +226,7 @@ hyp_hvc:
 THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
-	pop	{lr}
+	pop	{r2, lr}
 	eret
 
 guest_trap:
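
As a footnote for reviewers: the "decode-exception-without-a-branch" trick in the hyp-entry.S hunk can be modelled in plain C. This is only an illustrative sketch, not kernel code; it assumes an 8-byte-aligned SP on entry, and the names enter_vector and decode_exception are hypothetical, not kernel symbols. Each of the eight vector slots holds one W(add) sp, sp, #1 (the FIQ slot a nop), so falling through from slot N leaves exception number 7 - N in SP's bottom 3 bits; the vect_br eor/tst/eorne sequence then recovers it and restores SP's alignment on the matching probe.

```c
#include <stdint.h>

/*
 * Illustrative model of the BP-hardened vector page: entering at
 * vector slot `slot` (0 = reset ... 7 = FIQ) executes the remaining
 * "add sp, sp, #1" instructions; slot 7 is a nop. `sp` must be
 * 8-byte aligned on entry.
 */
uintptr_t enter_vector(uintptr_t sp, int slot)
{
	for (int i = slot; i < 7; i++)	/* slots slot..6 each add #1 */
		sp += 1;
	return sp;
}

/*
 * The vect_br macro, unrolled for the ARM (non-Thumb2) path: probe
 * each candidate value; when it matches, the eor has already cleared
 * the low bits, so SP leaves here re-aligned.
 */
int decode_exception(uintptr_t *sp)
{
	for (int val = 0; val <= 7; val++) {
		*sp ^= val;		/* ARM( eor   sp, sp, #\val ) */
		if ((*sp & 7) == 0)	/* ARM( tst   sp, #7 ) ... beq */
			return val;
		*sp ^= val;		/* ARM( eorne sp, sp, #\val ) */
	}
	return -1;			/* unreachable for a valid entry */
}
```

For example, entry through the HVC slot (slot 2) yields SP low bits of 5... no, of 7 - 2 = 5; probing values 0..4 leaves SP untouched, and the probe with 5 both identifies the exception and restores the original aligned SP, mirroring why the real code needs no branch before the BPIALL.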