From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Marc Zyngier, Russell King, Tony Lindgren, "David A. Long"
Subject: [PATCH 4.9 46/59] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
Date: Wed, 21 Nov 2018 20:07:01 +0100
Message-Id: <20181121183510.081806172@linuxfoundation.org>
In-Reply-To: <20181121183508.262873520@linuxfoundation.org>
References: <20181121183508.262873520@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Marc Zyngier

Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.

In order to avoid aliasing attacks against the branch predictor,
let's invalidate the BTB on guest exit. This is made complicated by
the fact that we cannot take a branch before invalidating the BTB.

We only apply this to A12 and A17, which are the only two ARM cores
on which this is useful.

Signed-off-by: Marc Zyngier
Signed-off-by: Russell King
Boot-tested-by: Tony Lindgren
Reviewed-by: Tony Lindgren
Signed-off-by: David A. Long
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm/include/asm/kvm_asm.h |    2 -
 arch/arm/include/asm/kvm_mmu.h |   17 +++++++++
 arch/arm/kvm/hyp/hyp-entry.S   |   71 +++++++++++++++++++++++++++++++++++++++--
 3 files changed, 85 insertions(+), 5 deletions(-)

--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -248,7 +248,22 @@ static inline int kvm_read_guest_lock(st
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global	__kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset          7 */
+	W(add)	sp, sp, #1	/* Undef          6 */
+	W(add)	sp, sp, #1	/* Syscall        5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort     3 */
+	W(add)	sp, sp, #1	/* HVC            2 */
+	W(add)	sp, sp, #1	/* IRQ            1 */
+	W(nop)			/* FIQ            0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -131,7 +191,14 @@ hyp_hvc:
 	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
 	beq	1f
 
-	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -141,7 +208,7 @@ hyp_hvc:
 THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
-	pop	{lr}
+	pop	{r2, lr}
 1:	eret

guest_trap: