2022-03-18 21:30:42

by James Morse

Subject: [stable:PATCH v4.19.235 00/22] arm64: Mitigate spectre style branch history side channels

Hello!

Here is the v4.19 backport, with both the k=8 typo and the SDEI
symbol-name issue fixed.

Again, it's the KVM templates patch that doesn't exist upstream. It is
necessary because the infrastructure in older kernels is very
different, and the dependencies for what was effectively a rewrite are
huge.


It's v4.14 and earlier that need to bring a timer errata workaround in
with this series. I'm still trying to test that.



Thanks,

James

Anshuman Khandual (1):
arm64: Add Cortex-X2 CPU part definition

James Morse (18):
arm64: entry.S: Add ventry overflow sanity checks
arm64: entry: Make the trampoline cleanup optional
arm64: entry: Free up another register on kpti's tramp_exit path
arm64: entry: Move the trampoline data page before the text page
arm64: entry: Allow tramp_alias to access symbols after the 4K
boundary
arm64: entry: Don't assume tramp_vectors is the start of the vectors
arm64: entry: Move trampoline macros out of ifdef'd section
arm64: entry: Make the kpti trampoline's kpti sequence optional
arm64: entry: Allow the trampoline text to occupy multiple pages
arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations
arm64: entry: Add vectors that have the bhb mitigation sequences
arm64: entry: Add macro for reading symbol addresses from the
trampoline
arm64: Add percpu vectors for EL1
arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of
Spectre-v2
KVM: arm64: Add templates for BHB mitigation sequences
arm64: Mitigate spectre style branch history side channels
KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and
migrated
arm64: Use the clearbhb instruction in mitigations

Joey Gouly (1):
arm64: add ID_AA64ISAR2_EL1 sys register

Rob Herring (1):
arm64: Add part number for Arm Cortex-A77

Suzuki K Poulose (1):
arm64: Add Neoverse-N2, Cortex-A710 CPU part definition

arch/arm/include/asm/kvm_host.h | 7 +
arch/arm64/Kconfig | 9 +
arch/arm64/include/asm/assembler.h | 34 +++
arch/arm64/include/asm/cpu.h | 1 +
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/cpufeature.h | 39 +++
arch/arm64/include/asm/cputype.h | 16 ++
arch/arm64/include/asm/fixmap.h | 6 +-
arch/arm64/include/asm/kvm_host.h | 5 +
arch/arm64/include/asm/kvm_mmu.h | 6 +-
arch/arm64/include/asm/mmu.h | 8 +-
arch/arm64/include/asm/sections.h | 5 +
arch/arm64/include/asm/sysreg.h | 5 +
arch/arm64/include/asm/vectors.h | 74 ++++++
arch/arm64/kernel/cpu_errata.c | 381 +++++++++++++++++++++++++++-
arch/arm64/kernel/cpufeature.c | 21 ++
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/kernel/entry.S | 215 ++++++++++++----
arch/arm64/kernel/vmlinux.lds.S | 2 +-
arch/arm64/kvm/hyp/hyp-entry.S | 64 +++++
arch/arm64/kvm/hyp/switch.c | 8 +-
arch/arm64/kvm/sys_regs.c | 2 +-
arch/arm64/mm/mmu.c | 12 +-
include/linux/arm-smccc.h | 7 +
virt/kvm/arm/psci.c | 12 +
25 files changed, 871 insertions(+), 72 deletions(-)
create mode 100644 arch/arm64/include/asm/vectors.h

--
2.30.2


2022-03-18 22:03:33

by James Morse

Subject: [stable:PATCH v4.19.235 03/22] arm64: Add Cortex-X2 CPU part definition

From: Anshuman Khandual <[email protected]>

commit 72bb9dcb6c33cfac80282713c2b4f2b254cd24d1 upstream.

Add the CPU part numbers for the new Arm designs.
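
As an aside (my own sketch, not part of the patch): the MIDR_CPU_MODEL()
values below pack the implementer and part number into the MIDR_EL1
layout, so the new define can be matched against a CPU's id register:

  /* Illustrative C; the kernel macro also sets the architecture
   * field to 0xf at bit 16. */
  #include <stdint.h>

  #define MIDR_IMPLEMENTOR_SHIFT  24
  #define MIDR_ARCHITECTURE_SHIFT 16
  #define MIDR_PARTNUM_SHIFT      4

  static inline uint32_t midr_cpu_model(uint32_t imp, uint32_t partnum)
  {
          return (imp << MIDR_IMPLEMENTOR_SHIFT) |
                 (0xfu << MIDR_ARCHITECTURE_SHIFT) |
                 (partnum << MIDR_PARTNUM_SHIFT);
  }

  /* midr_cpu_model(0x41, 0xD48) == 0x410fd480 for Cortex-X2 */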

Cc: Will Deacon <[email protected]>
Cc: Suzuki Poulose <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/cputype.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 7315f20c7ba2..fe011f78ad86 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -83,6 +83,7 @@
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
#define ARM_CPU_PART_CORTEX_A77 0xD0D
#define ARM_CPU_PART_CORTEX_A710 0xD47
+#define ARM_CPU_PART_CORTEX_X2 0xD48
#define ARM_CPU_PART_NEOVERSE_N2 0xD49

#define APM_CPU_PART_POTENZA 0x000
@@ -114,6 +115,7 @@
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
--
2.30.2

2022-03-18 23:04:40

by James Morse

Subject: [stable:PATCH v4.19.235 16/22] arm64: Add percpu vectors for EL1

commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream.

The Spectre-BHB workaround adds a firmware call to the vectors. This
is needed on some CPUs, but not others. To prevent the unaffected CPU in
a big/little pair from making the firmware call, create per-cpu vectors.

The per-cpu vectors only apply when returning from EL0.

Systems using KPTI can use the canonical 'full-fat' vectors directly at
EL1; the trampoline exit code will switch to this_cpu_vector on exit to
EL0. Systems not using KPTI should always use this_cpu_vector.

this_cpu_vector will point at a vector in tramp_vecs or
__bp_harden_el1_vectors, depending on whether KPTI is in use.
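
As a sketch (plain C with illustrative names, not kernel code), the
address arithmetic behind arm64_get_bp_hardening_vector() in the diff
below is: each vector table is 2K, so slot N sits at base + N * SZ_2K,
where base is the trampoline alias when KPTI is in use and
__bp_harden_el1_vectors otherwise:

  static const char *bp_hardening_vector(const char *base, int slot)
  {
          return base + 2048 * slot;      /* SZ_2K: 16 slots x 128 bytes */
  }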

Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/mmu.h | 2 +-
arch/arm64/include/asm/vectors.h | 27 +++++++++++++++++++++++++++
arch/arm64/kernel/cpufeature.c | 11 +++++++++++
arch/arm64/kernel/entry.S | 16 ++++++++++------
arch/arm64/kvm/hyp/switch.c | 8 ++++++--
5 files changed, 55 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index dd320df0d026..1f61519fc23f 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -38,7 +38,7 @@ typedef struct {
*/
#define ASID(mm) ((mm)->context.id.counter & 0xffff)

-static inline bool arm64_kernel_unmapped_at_el0(void)
+static __always_inline bool arm64_kernel_unmapped_at_el0(void)
{
return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
index 16ca74260375..3f76dfd9e074 100644
--- a/arch/arm64/include/asm/vectors.h
+++ b/arch/arm64/include/asm/vectors.h
@@ -5,6 +5,15 @@
#ifndef __ASM_VECTORS_H
#define __ASM_VECTORS_H

+#include <linux/bug.h>
+#include <linux/percpu.h>
+
+#include <asm/fixmap.h>
+
+extern char vectors[];
+extern char tramp_vectors[];
+extern char __bp_harden_el1_vectors[];
+
/*
* Note: the order of this enum corresponds to two arrays in entry.S:
* tramp_vecs and __bp_harden_el1_vectors. By default the canonical
@@ -31,4 +40,22 @@ enum arm64_bp_harden_el1_vectors {
EL1_VECTOR_KPTI,
};

+/* The vectors to use on return from EL0. e.g. to remap the kernel */
+DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
+
+#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+#define TRAMP_VALIAS 0
+#endif
+
+static inline const char *
+arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
+{
+ if (arm64_kernel_unmapped_at_el0())
+ return (char *)TRAMP_VALIAS + SZ_2K * slot;
+
+ WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
+
+ return __bp_harden_el1_vectors + SZ_2K * slot;
+}
+
#endif /* __ASM_VECTORS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 122d5e843ab6..0ec57bbea2e0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -20,11 +20,13 @@

#include <linux/bsearch.h>
#include <linux/cpumask.h>
+#include <linux/percpu.h>
#include <linux/sort.h>
#include <linux/stop_machine.h>
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/cpu.h>
+
#include <asm/cpu.h>
#include <asm/cpufeature.h>
#include <asm/cpu_ops.h>
@@ -33,6 +35,7 @@
#include <asm/processor.h>
#include <asm/sysreg.h>
#include <asm/traps.h>
+#include <asm/vectors.h>
#include <asm/virt.h>

unsigned long elf_hwcap __read_mostly;
@@ -51,6 +54,8 @@ unsigned int compat_elf_hwcap2 __read_mostly;
DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
EXPORT_SYMBOL(cpu_hwcaps);

+DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;
+
/*
* Flag to indicate if we have computed the system wide
* capabilities based on the boot time active CPUs. This
@@ -963,6 +968,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
static bool kpti_applied = false;
int cpu = smp_processor_id();

+ if (__this_cpu_read(this_cpu_vector) == vectors) {
+ const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
+
+ __this_cpu_write(this_cpu_vector, v);
+ }
+
if (kpti_applied)
return;

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 43bc04bd18d9..e64fcb9ead83 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -71,7 +71,6 @@
.macro kernel_ventry, el, label, regsize = 64
.align 7
.Lventry_start\@:
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
.if \el == 0
/*
* This must be the first instruction of the EL0 vector entries. It is
@@ -86,7 +85,6 @@
.endif
.Lskip_tramp_vectors_cleanup\@:
.endif
-#endif

sub sp, sp, #S_FRAME_SIZE
#ifdef CONFIG_VMAP_STACK
@@ -1033,10 +1031,14 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
.endm

.macro tramp_exit, regsize = 64
- adr x30, tramp_vectors
-#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
- add x30, x30, SZ_4K
-#endif
+ tramp_data_read_var x30, this_cpu_vector
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
+ mrs x29, tpidr_el1
+alternative_else
+ mrs x29, tpidr_el2
+alternative_endif
+ ldr x30, [x30, x29]
+
msr vbar_el1, x30
ldr lr, [sp, #S_LR]
tramp_unmap_kernel x29
@@ -1096,6 +1098,8 @@ __entry_tramp_data_vectors:
__entry_tramp_data___sdei_asm_handler:
.quad __sdei_asm_handler
#endif /* CONFIG_ARM_SDE_INTERFACE */
+__entry_tramp_data_this_cpu_vector:
+ .quad this_cpu_vector
.popsection // .rodata
#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 1d16ce0b7e0d..1c248c12a49e 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -34,6 +34,7 @@
#include <asm/debug-monitors.h>
#include <asm/processor.h>
#include <asm/thread_info.h>
+#include <asm/vectors.h>

extern struct exception_table_entry __start___kvm_ex_table;
extern struct exception_table_entry __stop___kvm_ex_table;
@@ -155,10 +156,13 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)

static void deactivate_traps_vhe(void)
{
- extern char vectors[]; /* kernel exception vectors */
+ const char *host_vectors = vectors;
write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
- write_sysreg(vectors, vbar_el1);
+
+ if (!arm64_kernel_unmapped_at_el0())
+ host_vectors = __this_cpu_read(this_cpu_vector);
+ write_sysreg(host_vectors, vbar_el1);
}
NOKPROBE_SYMBOL(deactivate_traps_vhe);

--
2.30.2

2022-03-19 07:30:43

by James Morse

Subject: [stable:PATCH v4.19.235 22/22] arm64: Use the clearbhb instruction in mitigations

commit 228a26b912287934789023b4132ba76065d9491c upstream.

Future CPUs may implement a clearbhb instruction that is sufficient
to mitigate Spectre-BHB. CPUs that implement this instruction, but
not CSV2.3, must be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs
that support it. The instruction is in the hint space, so it will
be treated as a NOP by older CPUs.
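
As a worked example (mine, not from the patch): A64 hint instructions
encode as 0xd503201f with the immediate in bits [11:5], so 'hint #22'
assembles to a single fixed word that older CPUs decode as a NOP:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* hint #0 (0xd503201f) is the plain NOP encoding */
          uint32_t clearbhb = 0xd503201fu | (22u << 5);

          printf("clearbhb encodes as 0x%08x\n", clearbhb); /* 0xd50322df */
          return 0;
  }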

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
[ modified for stable: Use a KVM vector template instead of alternatives ]
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/assembler.h | 7 +++++++
arch/arm64/include/asm/cpufeature.h | 13 +++++++++++++
arch/arm64/include/asm/sysreg.h | 3 +++
arch/arm64/include/asm/vectors.h | 7 +++++++
arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
arch/arm64/kernel/cpufeature.c | 1 +
arch/arm64/kernel/entry.S | 8 ++++++++
arch/arm64/kvm/hyp/hyp-entry.S | 6 ++++++
8 files changed, 59 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 3a5336ba745c..fc3d26c954a4 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -126,6 +126,13 @@
hint #20
.endm

+/*
+ * Clear Branch History instruction
+ */
+ .macro clearbhb
+ hint #22
+ .endm
+
/*
* Sanitise a 64-bit bounded index wrt speculation, returning zero if out
* of bounds.
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 86a7dda74535..05f41d8f7db3 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -497,6 +497,19 @@ static inline bool supports_csv2p3(int scope)
return csv2_val == 3;
}

+static inline bool supports_clearbhb(int scope)
+{
+ u64 isar2;
+
+ if (scope == SCOPE_LOCAL_CPU)
+ isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
+ else
+ isar2 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1);
+
+ return cpuid_feature_extract_unsigned_field(isar2,
+ ID_AA64ISAR2_CLEARBHB_SHIFT);
+}
+
static inline bool system_supports_32bit_el0(void)
{
return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 2afd6f23bef2..e90cf51b87ec 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -527,6 +527,9 @@
#define ID_AA64ISAR1_JSCVT_SHIFT 12
#define ID_AA64ISAR1_DPB_SHIFT 0

+/* id_aa64isar2 */
+#define ID_AA64ISAR2_CLEARBHB_SHIFT 28
+
/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56
diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
index f222d8e033b3..695583b9a145 100644
--- a/arch/arm64/include/asm/vectors.h
+++ b/arch/arm64/include/asm/vectors.h
@@ -33,6 +33,12 @@ enum arm64_bp_harden_el1_vectors {
* canonical vectors.
*/
EL1_VECTOR_BHB_FW,
+
+ /*
+ * Use the ClearBHB instruction, before branching to the canonical
+ * vectors.
+ */
+ EL1_VECTOR_BHB_CLEAR_INSN,
#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */

/*
@@ -44,6 +50,7 @@ enum arm64_bp_harden_el1_vectors {
#ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
#define EL1_VECTOR_BHB_LOOP -1
#define EL1_VECTOR_BHB_FW -1
+#define EL1_VECTOR_BHB_CLEAR_INSN -1
#endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */

/* The vectors to use on return from EL0. e.g. to remap the kernel */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index f96ab92545d7..d0b7dd60861b 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -106,6 +106,8 @@ extern char __spectre_bhb_loop_k24_start[];
extern char __spectre_bhb_loop_k24_end[];
extern char __spectre_bhb_loop_k32_start[];
extern char __spectre_bhb_loop_k32_end[];
+extern char __spectre_bhb_clearbhb_start[];
+extern char __spectre_bhb_clearbhb_end[];

static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -969,6 +971,7 @@ static void update_mitigation_state(enum mitigation_state *oldp,
* - Mitigated by a branchy loop a CPU specific number of times, and listed
* in our "loop mitigated list".
* - Mitigated in software by the firmware Spectre v2 call.
+ * - Has the ClearBHB instruction to perform the mitigation.
* - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no
* software mitigation in the vectors is needed.
* - Has CSV2.3, so is unaffected.
@@ -1108,6 +1111,9 @@ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
if (supports_csv2p3(scope))
return false;

+ if (supports_clearbhb(scope))
+ return true;
+
if (spectre_bhb_loop_affected(scope))
return true;

@@ -1148,6 +1154,8 @@ static const char *kvm_bhb_get_vecs_end(const char *start)
return __spectre_bhb_loop_k24_end;
else if (start == __spectre_bhb_loop_k32_start)
return __spectre_bhb_loop_k32_end;
+ else if (start == __spectre_bhb_clearbhb_start)
+ return __spectre_bhb_clearbhb_end;

return NULL;
}
@@ -1187,6 +1195,7 @@ static void kvm_setup_bhb_slot(const char *hyp_vecs_start)
#define __spectre_bhb_loop_k8_start NULL
#define __spectre_bhb_loop_k24_start NULL
#define __spectre_bhb_loop_k32_start NULL
+#define __spectre_bhb_clearbhb_start NULL

static void kvm_setup_bhb_slot(const char *hyp_vecs_start) { };
#endif
@@ -1205,6 +1214,11 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
} else if (cpu_mitigations_off()) {
pr_info_once("spectre-bhb mitigation disabled by command line option\n");
} else if (supports_ecbhb(SCOPE_LOCAL_CPU)) {
+ state = SPECTRE_MITIGATED;
+ } else if (supports_clearbhb(SCOPE_LOCAL_CPU)) {
+ kvm_setup_bhb_slot(__spectre_bhb_clearbhb_start);
+ this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN);
+
state = SPECTRE_MITIGATED;
} else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) {
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 24ce5f5242c9..03b0fdccaf05 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -151,6 +151,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
};

static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
ARM64_FTR_END,
};

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e64fcb9ead83..85433a84783b 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -981,6 +981,7 @@ alternative_else_nop_endif
#define BHB_MITIGATION_NONE 0
#define BHB_MITIGATION_LOOP 1
#define BHB_MITIGATION_FW 2
+#define BHB_MITIGATION_INSN 3

.macro tramp_ventry, vector_start, regsize, kpti, bhb
.align 7
@@ -997,6 +998,11 @@ alternative_else_nop_endif
__mitigate_spectre_bhb_loop x30
.endif // \bhb == BHB_MITIGATION_LOOP

+ .if \bhb == BHB_MITIGATION_INSN
+ clearbhb
+ isb
+ .endif // \bhb == BHB_MITIGATION_INSN
+
.if \kpti == 1
/*
* Defend against branch aliasing attacks by pushing a dummy
@@ -1073,6 +1079,7 @@ ENTRY(tramp_vectors)
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP
generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_INSN
#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE
END(tramp_vectors)
@@ -1135,6 +1142,7 @@ ENTRY(__bp_harden_el1_vectors)
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
generate_el1_vector bhb=BHB_MITIGATION_LOOP
generate_el1_vector bhb=BHB_MITIGATION_FW
+ generate_el1_vector bhb=BHB_MITIGATION_INSN
#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
END(__bp_harden_el1_vectors)
.popsection
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 55ed2cb81dc6..01e518b82c49 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -392,4 +392,10 @@ ENTRY(__spectre_bhb_loop_k32_start)
ldp x0, x1, [sp, #(8 * 0)]
add sp, sp, #(8 * 2)
ENTRY(__spectre_bhb_loop_k32_end)
+
+ENTRY(__spectre_bhb_clearbhb_start)
+ esb
+ clearbhb
+ isb
+ENTRY(__spectre_bhb_clearbhb_end)
#endif
--
2.30.2

2022-03-19 18:59:02

by James Morse

Subject: [stable:PATCH v4.19.235 08/22] arm64: entry: Allow tramp_alias to access symbols after the 4K boundary

commit 6c5bf79b69f911560fbf82214c0971af6e58e682 upstream.

Systems using kpti enter and exit the kernel through a trampoline mapping
that is always mapped, even when the kernel is not. tramp_alias is a macro
to find the address of a symbol in the trampoline mapping.

Adding extra sets of vectors will expand the size of the entry.tramp.text
section to beyond 4K. tramp_alias will be unable to generate addresses
for symbols beyond 4K as it uses the 12-bit immediate of the add
instruction.

As there are now two registers available when tramp_alias is called,
use the extra register to avoid the 4K limit of the 12-bit immediate.
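
Expressed in C (a sketch with placeholder names, not kernel code), the
reworked macro computes:

  /* dst = TRAMP_VALIAS + (sym - .entry.tramp.text). The offset is
   * built in a scratch register with adr_l, so it is no longer limited
   * to the 4K range of an add instruction's 12-bit immediate. */
  static unsigned long tramp_alias_addr(unsigned long tramp_valias,
                                        unsigned long sym,
                                        unsigned long tramp_text_start)
  {
          return tramp_valias + (sym - tramp_text_start);
  }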

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 1df718f53a1f..5d174c119563 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -135,9 +135,12 @@
.org .Lventry_start\@ + 128 // Did we overflow the ventry slot?
.endm

- .macro tramp_alias, dst, sym
+ .macro tramp_alias, dst, sym, tmp
mov_q \dst, TRAMP_VALIAS
- add \dst, \dst, #(\sym - .entry.tramp.text)
+ adr_l \tmp, \sym
+ add \dst, \dst, \tmp
+ adr_l \tmp, .entry.tramp.text
+ sub \dst, \dst, \tmp
.endm

// This macro corrupts x0-x3. It is the caller's duty
@@ -362,10 +365,10 @@ alternative_else_nop_endif
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
bne 4f
msr far_el1, x29
- tramp_alias x30, tramp_exit_native
+ tramp_alias x30, tramp_exit_native, x29
br x30
4:
- tramp_alias x30, tramp_exit_compat
+ tramp_alias x30, tramp_exit_compat, x29
br x30
#endif
.else
@@ -1265,7 +1268,7 @@ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
alternative_else_nop_endif

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
- tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline
+ tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline, tmp=x3
br x5
#endif
ENDPROC(__sdei_asm_handler)
--
2.30.2

2022-03-20 06:14:07

by James Morse

Subject: [stable:PATCH v4.19.235 04/22] arm64: entry.S: Add ventry overflow sanity checks

commit 4330e2c5c04c27bebf89d34e0bc14e6943413067 upstream.

Subsequent patches add even more code to the ventry slots.
Ensure kernels that overflow a ventry slot don't get built.
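
For context: '.org' can only move the location counter forwards, so the
new directive makes the assembler fail if a slot's code grows past 128
bytes (32 A64 instructions). A C analogue of the invariant, purely for
illustration:

  #include <assert.h>

  #define VENTRY_SLOT_BYTES 128   /* each ventry slot is .align 7 */

  static void check_ventry_fits(unsigned long start, unsigned long end)
  {
          /* the real check is '.org .Lventry_start\@ + 128' at build time */
          assert(end - start <= VENTRY_SLOT_BYTES);
  }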

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5f800384cb9a..297da3055793 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -70,6 +70,7 @@

.macro kernel_ventry, el, label, regsize = 64
.align 7
+.Lventry_start\@:
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
alternative_if ARM64_UNMAP_KERNEL_AT_EL0
.if \el == 0
@@ -127,6 +128,7 @@ alternative_else_nop_endif
mrs x0, tpidrro_el0
#endif
b el\()\el\()_\label
+.org .Lventry_start\@ + 128 // Did we overflow the ventry slot?
.endm

.macro tramp_alias, dst, sym
@@ -984,6 +986,7 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
add x30, x30, #(1b - tramp_vectors)
isb
ret
+.org 1b + 128 // Did we overflow the ventry slot?
.endm

.macro tramp_exit, regsize = 64
--
2.30.2

2022-03-20 16:29:04

by James Morse

Subject: [stable:PATCH v4.19.235 07/22] arm64: entry: Move the trampoline data page before the text page

commit c091fb6ae059cda563b2a4d93fdbc548ef34e1d6 upstream.

The trampoline code has a data page that holds the address of the vectors,
which is unmapped when running in user-space. This ensures that with
CONFIG_RANDOMIZE_BASE, the randomised address of the kernel can't be
discovered until after the kernel has been mapped.

If the trampoline text page is extended to include multiple sets of
vectors, it will be larger than a single page, making it tricky to
find the data page without knowing the size of the trampoline text
pages, which will vary with PAGE_SIZE.

Move the data page to appear before the text page. This allows the
data page to be found without knowing the size of the trampoline text
pages. 'tramp_vectors' was being used to refer to the beginning of the
.entry.tramp.text section; refer to the section start explicitly instead.
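
In other words (sketch, not kernel code), finding the data page no
longer depends on the text size:

  /* Mirrors the new tramp_data_page macro: the data page now sits
   * immediately before the trampoline text, whatever PAGE_SIZE is. */
  static unsigned long tramp_data_page(unsigned long tramp_text_start,
                                       unsigned long page_size)
  {
          return tramp_text_start - page_size;
  }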

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/fixmap.h | 2 +-
arch/arm64/kernel/entry.S | 9 +++++++--
2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index ec1e6d6fa14c..c0cfc6d3bf9f 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -59,8 +59,8 @@ enum fixed_addresses {
#endif /* CONFIG_ACPI_APEI_GHES */

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
- FIX_ENTRY_TRAMP_DATA,
FIX_ENTRY_TRAMP_TEXT,
+ FIX_ENTRY_TRAMP_DATA,
#define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
__end_of_permanent_fixed_addresses,
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 16988a9d1d19..1df718f53a1f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -967,6 +967,11 @@ alternative_else_nop_endif
*/
.endm

+ .macro tramp_data_page dst
+ adr \dst, .entry.tramp.text
+ sub \dst, \dst, PAGE_SIZE
+ .endm
+
.macro tramp_ventry, regsize = 64
.align 7
1:
@@ -983,7 +988,7 @@ alternative_else_nop_endif
2:
tramp_map_kernel x30
#ifdef CONFIG_RANDOMIZE_BASE
- adr x30, tramp_vectors + PAGE_SIZE
+ tramp_data_page x30
alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
ldr x30, [x30]
#else
@@ -1131,7 +1136,7 @@ ENTRY(__sdei_asm_entry_trampoline)
1: str x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]

#ifdef CONFIG_RANDOMIZE_BASE
- adr x4, tramp_vectors + PAGE_SIZE
+ tramp_data_page x4
add x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
ldr x4, [x4]
#else
--
2.30.2

2022-03-20 23:41:42

by James Morse

Subject: [stable:PATCH v4.19.235 13/22] arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations

commit aff65393fa1401e034656e349abd655cfe272de0 upstream.

kpti is an optional feature; systems not using kpti still need a set of
vectors for the spectre-bhb mitigations.

Add another set of vectors, __bp_harden_el1_vectors, that will be
used if a mitigation is needed and kpti is not in use.

The EL1 ventries are repeated verbatim as there is no additional
work needed for entry from EL1.

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 35 ++++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 22d39c941a6d..54f7657f5d61 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -973,10 +973,11 @@ alternative_else_nop_endif
.macro tramp_ventry, vector_start, regsize, kpti
.align 7
1:
- .if \kpti == 1
.if \regsize == 64
msr tpidrro_el0, x30 // Restored in kernel_ventry
.endif
+
+ .if \kpti == 1
/*
* Defend against branch aliasing attacks by pushing a dummy
* entry onto the return stack and using a RET instruction to
@@ -1059,6 +1060,38 @@ __entry_tramp_data_start:
#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

+/*
+ * Exception vectors for spectre mitigations on entry from EL1 when
+ * kpti is not in use.
+ */
+ .macro generate_el1_vector
+.Lvector_start\@:
+ kernel_ventry 1, sync_invalid // Synchronous EL1t
+ kernel_ventry 1, irq_invalid // IRQ EL1t
+ kernel_ventry 1, fiq_invalid // FIQ EL1t
+ kernel_ventry 1, error_invalid // Error EL1t
+
+ kernel_ventry 1, sync // Synchronous EL1h
+ kernel_ventry 1, irq // IRQ EL1h
+ kernel_ventry 1, fiq_invalid // FIQ EL1h
+ kernel_ventry 1, error // Error EL1h
+
+ .rept 4
+ tramp_ventry .Lvector_start\@, 64, kpti=0
+ .endr
+ .rept 4
+ tramp_ventry .Lvector_start\@, 32, kpti=0
+ .endr
+ .endm
+
+ .pushsection ".entry.text", "ax"
+ .align 11
+ENTRY(__bp_harden_el1_vectors)
+ generate_el1_vector
+END(__bp_harden_el1_vectors)
+ .popsection
+
+
/*
* Register switch for AArch64. The callee-saved registers need to be saved
* and restored. On entry:
--
2.30.2

2022-03-21 07:16:07

by James Morse

Subject: [stable:PATCH v4.19.235 14/22] arm64: entry: Add vectors that have the bhb mitigation sequences

commit ba2689234be92024e5635d30fe744f4853ad97db upstream.

Some CPUs affected by Spectre-BHB need a sequence of branches, or a
firmware call to be run before any indirect branch. This needs to go
in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a
single set of vectors. If only one part of a big/little combination is
affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will
allow affected CPUs to select this set of vectors. Later patches will
modify the loop count to match what the CPU requires.
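
For reference, the effect of the __mitigate_spectre_bhb_loop macro added
below, as a hedged C sketch: run k taken branches to overwrite the
branch history with benign entries (k is fixed at 32 here; a later patch
patches the immediate to what each CPU requires):

  static inline void spectre_bhb_loop(unsigned int k)
  {
          while (k--)
                  asm volatile("b 1f\n1:" ::: "memory"); /* one taken branch */
          asm volatile("dsb nsh\n\tisb" ::: "memory");
  }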

Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/assembler.h | 25 ++++++++++++++
arch/arm64/include/asm/vectors.h | 34 +++++++++++++++++++
arch/arm64/kernel/entry.S | 53 +++++++++++++++++++++++++-----
include/linux/arm-smccc.h | 7 ++++
4 files changed, 110 insertions(+), 9 deletions(-)
create mode 100644 arch/arm64/include/asm/vectors.h

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 5a97ac853168..5bfc335224e7 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -711,4 +711,29 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU
.Lyield_out_\@ :
.endm

+ .macro __mitigate_spectre_bhb_loop tmp
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ mov \tmp, #32
+.Lspectre_bhb_loop\@:
+ b . + 4
+ subs \tmp, \tmp, #1
+ b.ne .Lspectre_bhb_loop\@
+ dsb nsh
+ isb
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+ .endm
+
+ /* Save/restores x0-x3 to the stack */
+ .macro __mitigate_spectre_bhb_fw
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ stp x0, x1, [sp, #-16]!
+ stp x2, x3, [sp, #-16]!
+ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3
+alternative_cb arm64_update_smccc_conduit
+ nop // Patched to SMC/HVC #0
+alternative_cb_end
+ ldp x2, x3, [sp], #16
+ ldp x0, x1, [sp], #16
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+ .endm
#endif /* __ASM_ASSEMBLER_H */
diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
new file mode 100644
index 000000000000..16ca74260375
--- /dev/null
+++ b/arch/arm64/include/asm/vectors.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_VECTORS_H
+#define __ASM_VECTORS_H
+
+/*
+ * Note: the order of this enum corresponds to two arrays in entry.S:
+ * tramp_vecs and __bp_harden_el1_vectors. By default the canonical
+ * 'full fat' vectors are used directly.
+ */
+enum arm64_bp_harden_el1_vectors {
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ /*
+ * Perform the BHB loop mitigation, before branching to the canonical
+ * vectors.
+ */
+ EL1_VECTOR_BHB_LOOP,
+
+ /*
+ * Make the SMC call for firmware mitigation, before branching to the
+ * canonical vectors.
+ */
+ EL1_VECTOR_BHB_FW,
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+
+ /*
+ * Remap the kernel before branching to the canonical vectors.
+ */
+ EL1_VECTOR_KPTI,
+};
+
+#endif /* __ASM_VECTORS_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 54f7657f5d61..3e391b1f6f3e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -970,13 +970,26 @@ alternative_else_nop_endif
sub \dst, \dst, PAGE_SIZE
.endm

- .macro tramp_ventry, vector_start, regsize, kpti
+
+#define BHB_MITIGATION_NONE 0
+#define BHB_MITIGATION_LOOP 1
+#define BHB_MITIGATION_FW 2
+
+ .macro tramp_ventry, vector_start, regsize, kpti, bhb
.align 7
1:
.if \regsize == 64
msr tpidrro_el0, x30 // Restored in kernel_ventry
.endif

+ .if \bhb == BHB_MITIGATION_LOOP
+ /*
+ * This sequence must appear before the first indirect branch. i.e. the
+ * ret out of tramp_ventry. It appears here because x30 is free.
+ */
+ __mitigate_spectre_bhb_loop x30
+ .endif // \bhb == BHB_MITIGATION_LOOP
+
.if \kpti == 1
/*
* Defend against branch aliasing attacks by pushing a dummy
@@ -1001,6 +1014,15 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
ldr x30, =vectors
.endif // \kpti == 1

+ .if \bhb == BHB_MITIGATION_FW
+ /*
+ * The firmware sequence must appear before the first indirect branch.
+ * i.e. the ret out of tramp_ventry. But it also needs the stack to be
+ * mapped to save/restore the registers the SMC clobbers.
+ */
+ __mitigate_spectre_bhb_fw
+ .endif // \bhb == BHB_MITIGATION_FW
+
add x30, x30, #(1b - \vector_start + 4)
ret
.org 1b + 128 // Did we overflow the ventry slot?
@@ -1008,6 +1030,9 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003

.macro tramp_exit, regsize = 64
adr x30, tramp_vectors
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ add x30, x30, SZ_4K
+#endif
msr vbar_el1, x30
ldr lr, [sp, #S_LR]
tramp_unmap_kernel x29
@@ -1018,26 +1043,32 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
eret
.endm

- .macro generate_tramp_vector, kpti
+ .macro generate_tramp_vector, kpti, bhb
.Lvector_start\@:
.space 0x400

.rept 4
- tramp_ventry .Lvector_start\@, 64, \kpti
+ tramp_ventry .Lvector_start\@, 64, \kpti, \bhb
.endr
.rept 4
- tramp_ventry .Lvector_start\@, 32, \kpti
+ tramp_ventry .Lvector_start\@, 32, \kpti, \bhb
.endr
.endm

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
/*
* Exception vectors trampoline.
+ * The order must match __bp_harden_el1_vectors and the
+ * arm64_bp_harden_el1_vectors enum.
*/
.pushsection ".entry.tramp.text", "ax"
.align 11
ENTRY(tramp_vectors)
- generate_tramp_vector kpti=1
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE
END(tramp_vectors)

ENTRY(tramp_exit_native)
@@ -1064,7 +1095,7 @@ __entry_tramp_data_start:
* Exception vectors for spectre mitigations on entry from EL1 when
* kpti is not in use.
*/
- .macro generate_el1_vector
+ .macro generate_el1_vector, bhb
.Lvector_start\@:
kernel_ventry 1, sync_invalid // Synchronous EL1t
kernel_ventry 1, irq_invalid // IRQ EL1t
@@ -1077,17 +1108,21 @@ __entry_tramp_data_start:
kernel_ventry 1, error // Error EL1h

.rept 4
- tramp_ventry .Lvector_start\@, 64, kpti=0
+ tramp_ventry .Lvector_start\@, 64, 0, \bhb
.endr
.rept 4
- tramp_ventry .Lvector_start\@, 32, kpti=0
+ tramp_ventry .Lvector_start\@, 32, 0, \bhb
.endr
.endm

+/* The order must match tramp_vecs and the arm64_bp_harden_el1_vectors enum. */
.pushsection ".entry.text", "ax"
.align 11
ENTRY(__bp_harden_el1_vectors)
- generate_el1_vector
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ generate_el1_vector bhb=BHB_MITIGATION_LOOP
+ generate_el1_vector bhb=BHB_MITIGATION_FW
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
END(__bp_harden_el1_vectors)
.popsection

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 6366b04c7d5f..040266891414 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -85,6 +85,13 @@
ARM_SMCCC_SMC_32, \
0, 0x7fff)

+#define ARM_SMCCC_ARCH_WORKAROUND_3 \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ 0, 0x3fff)
+
+#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED 1
+
#ifndef __ASSEMBLY__

#include <linux/linkage.h>
--
2.30.2

2022-03-21 11:11:15

by James Morse

Subject: [stable:PATCH v4.19.235 20/22] KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated

commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream.

KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC calls are
implemented, and to preserve that state during migration through its
firmware register interface.

Add the necessary boilerplate for SMCCC_ARCH_WORKAROUND_3.
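
A sketch of the guest's side of the discovery (the helper name is a
stand-in, not a real API; the return values match the handling added to
kvm_hvc_call_handler() below):

  #define SMCCC_RET_SUCCESS                       0
  #define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED    1

  /* stand-in for an SMCCC 1.1 ARCH_FEATURES query over the conduit */
  extern long smccc_arch_features(unsigned int func_id);

  static const char *wa3_status(unsigned int wa3_id)
  {
          switch (smccc_arch_features(wa3_id)) {
          case SMCCC_RET_SUCCESS:
                  return "mitigated";     /* SPECTRE_MITIGATED */
          case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
                  return "unaffected";    /* SPECTRE_UNAFFECTED */
          default:
                  return "vulnerable";    /* SPECTRE_VULNERABLE */
          }
  }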

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
[ kvm code moved to virt/kvm/arm, removed fw regs ABI. Added 32bit stub ]
Signed-off-by: James Morse <[email protected]>
---
arch/arm/include/asm/kvm_host.h | 7 +++++++
arch/arm64/include/asm/kvm_host.h | 5 +++++
virt/kvm/arm/psci.c | 12 ++++++++++++
3 files changed, 24 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ae073fceb3f0..db20534912cd 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -26,6 +26,7 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_mmio.h>
#include <asm/fpstate.h>
+#include <asm/spectre.h>
#include <kvm/arm_arch_timer.h>

#define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -367,4 +368,10 @@ void kvm_arch_free_vm(struct kvm *kvm);

#define kvm_arm_vcpu_loaded(vcpu) (false)

+static inline int kvm_arm_get_spectre_bhb_state(void)
+{
+ /* 32bit guests don't need firmware for this */
+ return SPECTRE_VULNERABLE; /* aka SMCCC_RET_NOT_SUPPORTED */
+}
+
#endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 07472c138ced..7e1b19ea07ea 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -542,4 +542,9 @@ void kvm_arch_free_vm(struct kvm *kvm);

#define kvm_arm_vcpu_loaded(vcpu) ((vcpu)->arch.sysregs_loaded_on_cpu)

+static inline enum mitigation_state kvm_arm_get_spectre_bhb_state(void)
+{
+ return arm64_get_spectre_bhb_state();
+}
+
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index 34d08ee63747..a161951e55a5 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -429,6 +429,18 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
break;
}
break;
+ case ARM_SMCCC_ARCH_WORKAROUND_3:
+ switch (kvm_arm_get_spectre_bhb_state()) {
+ case SPECTRE_VULNERABLE:
+ break;
+ case SPECTRE_MITIGATED:
+ val = SMCCC_RET_SUCCESS;
+ break;
+ case SPECTRE_UNAFFECTED:
+ val = SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED;
+ break;
+ }
+ break;
}
break;
default:
--
2.30.2

2022-03-21 11:59:20

by James Morse

Subject: [stable:PATCH v4.19.235 19/22] arm64: Mitigate spectre style branch history side channels

commit 558c303c9734af5a813739cd284879227f7297d2 upstream.

Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation.
When taking an exception from user-space, a sequence of branches
or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear
before the first indirect branch. For systems using KPTI the sequence
is added to the kpti trampoline where it has a free register as the exit
from the trampoline is via a 'ret'. For systems not using KPTI, the same
register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so
there is no choice but to save them to the EL1 stack. This only happens
for entry from EL0, so if we take an exception due to the stack access,
it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used.
When a spectre version of these vectors is in use, the firmware call
is sufficient to mitigate against Spectre-BHB. For the non-spectre
versions, the sequence of branches is added to the indirect vector.
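
The per-CPU selection order implemented by
spectre_bhb_enable_mitigation() in the diff, paraphrased as a standalone
C sketch (the parameters stand in for the probing functions below):

  enum bhb_choice { BHB_NONE, BHB_ECBHB, BHB_LOOP, BHB_FW };

  static enum bhb_choice choose_bhb_mitigation(int has_ecbhb, int loop_k,
                                               int fw_mitigated)
  {
          if (has_ecbhb)          /* exception entry clears the BHB */
                  return BHB_ECBHB;
          if (loop_k)             /* branchy loop, k = 8, 24 or 32 */
                  return BHB_LOOP;
          if (fw_mitigated)       /* ARM_SMCCC_ARCH_WORKAROUND_3 call */
                  return BHB_FW;
          return BHB_NONE;        /* vulnerable */
  }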

Reviewed-by: Catalin Marinas <[email protected]>
Cc: <[email protected]> # <v5.17.x 72bb9dcb6c33c arm64: Add Cortex-X2 CPU part definition
Cc: <[email protected]> # <v5.16.x 2d0d656700d67 arm64: Add Neoverse-N2, Cortex-A710 CPU part definition
Cc: <[email protected]> # <v5.10.x 8a6b88e66233f arm64: Add part number for Arm Cortex-A77
[ modified for stable: moved code to cpu_errata.c, removed bitmap of
mitigations, use kvm template infrastructure ]
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/Kconfig | 9 +
arch/arm64/include/asm/assembler.h | 4 +-
arch/arm64/include/asm/cpufeature.h | 18 ++
arch/arm64/include/asm/cputype.h | 8 +
arch/arm64/include/asm/sysreg.h | 1 +
arch/arm64/include/asm/vectors.h | 6 +
arch/arm64/kernel/cpu_errata.c | 268 +++++++++++++++++++++++++++-
arch/arm64/kvm/hyp/hyp-entry.S | 4 +
8 files changed, 315 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1daefa57e274..a101f5d2fbed 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -979,6 +979,15 @@ config ARM64_SSBD

If unsure, say Y.

+config MITIGATE_SPECTRE_BRANCH_HISTORY
+ bool "Mitigate Spectre style attacks against branch history" if EXPERT
+ default y
+ help
+ Speculation attacks against some high-performance processors can
+ make use of branch history to influence future speculation.
+ When taking an exception from user-space, a sequence of branches
+ or a firmware call overwrites the branch history.
+
menuconfig ARMV8_DEPRECATED
bool "Emulate deprecated/obsolete ARMv8 instructions"
depends on COMPAT
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 5bfc335224e7..3a5336ba745c 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -713,7 +713,9 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU

.macro __mitigate_spectre_bhb_loop tmp
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
- mov \tmp, #32
+alternative_cb spectre_bhb_patch_loop_iter
+ mov \tmp, #32 // Patched to correct the immediate
+alternative_cb_end
.Lspectre_bhb_loop\@:
b . + 4
subs \tmp, \tmp, #1
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e59b9ee43e4a..86a7dda74535 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -482,6 +482,21 @@ static inline bool cpu_supports_mixed_endian_el0(void)
return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
}

+static inline bool supports_csv2p3(int scope)
+{
+ u64 pfr0;
+ u8 csv2_val;
+
+ if (scope == SCOPE_LOCAL_CPU)
+ pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1);
+ else
+ pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+ csv2_val = cpuid_feature_extract_unsigned_field(pfr0,
+ ID_AA64PFR0_CSV2_SHIFT);
+ return csv2_val == 3;
+}
+
static inline bool system_supports_32bit_el0(void)
{
return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
@@ -535,6 +550,9 @@ enum mitigation_state {
};

enum mitigation_state arm64_get_spectre_bhb_state(void);
+bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+u8 spectre_bhb_loop_affected(int scope);
+void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
#endif /* __ASSEMBLY__ */

#endif
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index fe011f78ad86..50368f962213 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -82,9 +82,13 @@
#define ARM_CPU_PART_CORTEX_A76 0xD0B
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
#define ARM_CPU_PART_CORTEX_A77 0xD0D
+#define ARM_CPU_PART_NEOVERSE_V1 0xD40
+#define ARM_CPU_PART_CORTEX_A78 0xD41
+#define ARM_CPU_PART_CORTEX_X1 0xD44
#define ARM_CPU_PART_CORTEX_A710 0xD47
#define ARM_CPU_PART_CORTEX_X2 0xD48
#define ARM_CPU_PART_NEOVERSE_N2 0xD49
+#define ARM_CPU_PART_CORTEX_A78C 0xD4B

#define APM_CPU_PART_POTENZA 0x000

@@ -114,9 +118,13 @@
#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+#define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
+#define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
+#define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+#define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ed99d941c462..b7b102292498 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -583,6 +583,7 @@
#endif

/* id_aa64mmfr1 */
+#define ID_AA64MMFR1_ECBHB_SHIFT 60
#define ID_AA64MMFR1_PAN_SHIFT 20
#define ID_AA64MMFR1_LOR_SHIFT 16
#define ID_AA64MMFR1_HPD_SHIFT 12
diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
index 3f76dfd9e074..f222d8e033b3 100644
--- a/arch/arm64/include/asm/vectors.h
+++ b/arch/arm64/include/asm/vectors.h
@@ -9,6 +9,7 @@
#include <linux/percpu.h>

#include <asm/fixmap.h>
+#include <asm/mmu.h>

extern char vectors[];
extern char tramp_vectors[];
@@ -40,6 +41,11 @@ enum arm64_bp_harden_el1_vectors {
EL1_VECTOR_KPTI,
};

+#ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+#define EL1_VECTOR_BHB_LOOP -1
+#define EL1_VECTOR_BHB_FW -1
+#endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+
/* The vectors to use on return from EL0. e.g. to remap the kernel */
DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 791dd8dadd9e..f96ab92545d7 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -24,6 +24,7 @@
#include <asm/cputype.h>
#include <asm/cpufeature.h>
#include <asm/smp_plat.h>
+#include <asm/vectors.h>

static bool __maybe_unused
is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
@@ -848,6 +849,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.matches = has_ssbd_mitigation,
.midr_range_list = arm64_ssb_cpus,
},
+ {
+ .desc = "Spectre-BHB",
+ .capability = ARM64_SPECTRE_BHB,
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = is_spectre_bhb_affected,
+ .cpu_enable = spectre_bhb_enable_mitigation,
+ },
#ifdef CONFIG_ARM64_ERRATUM_1463225
{
.desc = "ARM erratum 1463225",
@@ -938,6 +946,33 @@ ssize_t cpu_show_spec_store_bypass(struct device *dev,
return sprintf(buf, "Vulnerable\n");
}

+/*
+ * We try to ensure that the mitigation state can never change as the result of
+ * onlining a late CPU.
+ */
+static void update_mitigation_state(enum mitigation_state *oldp,
+ enum mitigation_state new)
+{
+ enum mitigation_state state;
+
+ do {
+ state = READ_ONCE(*oldp);
+ if (new <= state)
+ break;
+ } while (cmpxchg_relaxed(oldp, state, new) != state);
+}
+
+/*
+ * Spectre BHB.
+ *
+ * A CPU is either:
+ * - Mitigated by a branchy loop a CPU specific number of times, and listed
+ * in our "loop mitigated list".
+ * - Mitigated in software by the firmware Spectre v2 call.
+ * - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no
+ * software mitigation in the vectors is needed.
+ * - Has CSV2.3, so is unaffected.
+ */
static enum mitigation_state spectre_bhb_state;

enum mitigation_state arm64_get_spectre_bhb_state(void)
@@ -945,6 +980,163 @@ enum mitigation_state arm64_get_spectre_bhb_state(void)
return spectre_bhb_state;
}

+/*
+ * This must be called with SCOPE_LOCAL_CPU for each type of CPU, before any
+ * SCOPE_SYSTEM call will give the right answer.
+ */
+u8 spectre_bhb_loop_affected(int scope)
+{
+ u8 k = 0;
+ static u8 max_bhb_k;
+
+ if (scope == SCOPE_LOCAL_CPU) {
+ static const struct midr_range spectre_bhb_k32_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+ {},
+ };
+ static const struct midr_range spectre_bhb_k24_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+ {},
+ };
+ static const struct midr_range spectre_bhb_k8_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+ {},
+ };
+
+ if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
+ k = 32;
+ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
+ k = 24;
+ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
+ k = 8;
+
+ max_bhb_k = max(max_bhb_k, k);
+ } else {
+ k = max_bhb_k;
+ }
+
+ return k;
+}
+
+static enum mitigation_state spectre_bhb_get_cpu_fw_mitigation_state(void)
+{
+ int ret;
+ struct arm_smccc_res res;
+
+ if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+ return SPECTRE_VULNERABLE;
+
+ switch (psci_ops.conduit) {
+ case PSCI_CONDUIT_HVC:
+ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ ARM_SMCCC_ARCH_WORKAROUND_3, &res);
+ break;
+
+ case PSCI_CONDUIT_SMC:
+ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ ARM_SMCCC_ARCH_WORKAROUND_3, &res);
+ break;
+
+ default:
+ return SPECTRE_VULNERABLE;
+ }
+
+ ret = res.a0;
+ switch (ret) {
+ case SMCCC_RET_SUCCESS:
+ return SPECTRE_MITIGATED;
+ case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
+ return SPECTRE_UNAFFECTED;
+ default:
+ case SMCCC_RET_NOT_SUPPORTED:
+ return SPECTRE_VULNERABLE;
+ }
+}
+
+static bool is_spectre_bhb_fw_affected(int scope)
+{
+ static bool system_affected;
+ enum mitigation_state fw_state;
+ bool has_smccc = (psci_ops.smccc_version >= SMCCC_VERSION_1_1);
+ static const struct midr_range spectre_bhb_firmware_mitigated_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+ {},
+ };
+ bool cpu_in_list = is_midr_in_range_list(read_cpuid_id(),
+ spectre_bhb_firmware_mitigated_list);
+
+ if (scope != SCOPE_LOCAL_CPU)
+ return system_affected;
+
+ fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
+ if (cpu_in_list || (has_smccc && fw_state == SPECTRE_MITIGATED)) {
+ system_affected = true;
+ return true;
+ }
+
+ return false;
+}
+
+static bool supports_ecbhb(int scope)
+{
+ u64 mmfr1;
+
+ if (scope == SCOPE_LOCAL_CPU)
+ mmfr1 = read_sysreg_s(SYS_ID_AA64MMFR1_EL1);
+ else
+ mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+
+ return cpuid_feature_extract_unsigned_field(mmfr1,
+ ID_AA64MMFR1_ECBHB_SHIFT);
+}
+
+bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+ int scope)
+{
+ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+ if (supports_csv2p3(scope))
+ return false;
+
+ if (spectre_bhb_loop_affected(scope))
+ return true;
+
+ if (is_spectre_bhb_fw_affected(scope))
+ return true;
+
+ return false;
+}
+
+static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
+{
+ const char *v = arm64_get_bp_hardening_vector(slot);
+
+ if (slot < 0)
+ return;
+
+ __this_cpu_write(this_cpu_vector, v);
+
+ /*
+ * When KPTI is in use, the vectors are switched when exiting to
+ * user-space.
+ */
+ if (arm64_kernel_unmapped_at_el0())
+ return;
+
+ write_sysreg(v, vbar_el1);
+ isb();
+}
+
#ifdef CONFIG_KVM_INDIRECT_VECTORS
static const char *kvm_bhb_get_vecs_end(const char *start)
{
@@ -960,7 +1152,7 @@ static const char *kvm_bhb_get_vecs_end(const char *start)
return NULL;
}

-void kvm_setup_bhb_slot(const char *hyp_vecs_start)
+static void kvm_setup_bhb_slot(const char *hyp_vecs_start)
{
int cpu, slot = -1;
const char *hyp_vecs_end;
@@ -996,5 +1188,77 @@ void kvm_setup_bhb_slot(const char *hyp_vecs_start)
#define __spectre_bhb_loop_k24_start NULL
#define __spectre_bhb_loop_k32_start NULL

-void kvm_setup_bhb_slot(const char *hyp_vecs_start) { };
+static void kvm_setup_bhb_slot(const char *hyp_vecs_start) { };
#endif
+
+void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+{
+ enum mitigation_state fw_state, state = SPECTRE_VULNERABLE;
+
+ if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU))
+ return;
+
+ if (!__spectrev2_safe && !__hardenbp_enab) {
+ /* No point mitigating Spectre-BHB alone. */
+ } else if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY)) {
+ pr_info_once("spectre-bhb mitigation disabled by compile time option\n");
+ } else if (cpu_mitigations_off()) {
+ pr_info_once("spectre-bhb mitigation disabled by command line option\n");
+ } else if (supports_ecbhb(SCOPE_LOCAL_CPU)) {
+ state = SPECTRE_MITIGATED;
+ } else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
+ switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) {
+ case 8:
+ kvm_setup_bhb_slot(__spectre_bhb_loop_k8_start);
+ break;
+ case 24:
+ kvm_setup_bhb_slot(__spectre_bhb_loop_k24_start);
+ break;
+ case 32:
+ kvm_setup_bhb_slot(__spectre_bhb_loop_k32_start);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ }
+ this_cpu_set_vectors(EL1_VECTOR_BHB_LOOP);
+
+ state = SPECTRE_MITIGATED;
+ } else if (is_spectre_bhb_fw_affected(SCOPE_LOCAL_CPU)) {
+ fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
+ if (fw_state == SPECTRE_MITIGATED) {
+ kvm_setup_bhb_slot(__smccc_workaround_3_smc_start);
+ this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
+
+ /*
+ * With WA3 in the vectors, the WA1 calls can be
+ * removed.
+ */
+ __this_cpu_write(bp_hardening_data.fn, NULL);
+
+ state = SPECTRE_MITIGATED;
+ }
+ }
+
+ update_mitigation_state(&spectre_bhb_state, state);
+}
+
+/* Patched to correct the immediate */
+void __init spectre_bhb_patch_loop_iter(struct alt_instr *alt,
+ __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+ u8 rd;
+ u32 insn;
+ u16 loop_count = spectre_bhb_loop_affected(SCOPE_SYSTEM);
+
+ BUG_ON(nr_inst != 1); /* MOV -> MOV */
+
+ if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY))
+ return;
+
+ insn = le32_to_cpu(*origptr);
+ rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn);
+ insn = aarch64_insn_gen_movewide(rd, loop_count, 0,
+ AARCH64_INSN_VARIANT_64BIT,
+ AARCH64_INSN_MOVEWIDE_ZERO);
+ *updptr++ = cpu_to_le32(insn);
+}
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 05593f631eca..55ed2cb81dc6 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -135,6 +135,10 @@ el1_hvc_guest:
/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
eor w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
ARM_SMCCC_ARCH_WORKAROUND_2)
+ cbz w1, wa_epilogue
+
+ eor w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_2 ^ \
+ ARM_SMCCC_ARCH_WORKAROUND_3)
cbnz w1, el1_trap

#ifdef CONFIG_ARM64_SSBD
--
2.30.2

2022-03-21 12:05:22

by James Morse

Subject: [stable:PATCH v4.19.235 06/22] arm64: entry: Free up another register on kpti's tramp_exit path

commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream.

Kpti stashes x30 in far_el1 while it uses x30 for all its work.

Making the vectors a per-cpu data structure will require a second
register.

Give tramp_exit two registers to work with before it unmaps the
kernel, by leaving x30 on the stack and stashing x29 in far_el1.

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 96a0dda176c5..16988a9d1d19 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -348,18 +348,20 @@ alternative_else_nop_endif
ldp x24, x25, [sp, #16 * 12]
ldp x26, x27, [sp, #16 * 13]
ldp x28, x29, [sp, #16 * 14]
- ldr lr, [sp, #S_LR]
- add sp, sp, #S_FRAME_SIZE // restore sp
/*
* ARCH_HAS_MEMBARRIER_SYNC_CORE rely on eret context synchronization
* when returning from IPI handler, and when returning to user-space.
*/

.if \el == 0
-alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
+alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+ ldr lr, [sp, #S_LR]
+ add sp, sp, #S_FRAME_SIZE // restore sp
+ eret
+alternative_else_nop_endif
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
bne 4f
- msr far_el1, x30
+ msr far_el1, x29
tramp_alias x30, tramp_exit_native
br x30
4:
@@ -367,6 +369,8 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
br x30
#endif
.else
+ ldr lr, [sp, #S_LR]
+ add sp, sp, #S_FRAME_SIZE // restore sp
eret
.endif
.endm
@@ -996,10 +1000,12 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
.macro tramp_exit, regsize = 64
adr x30, tramp_vectors
msr vbar_el1, x30
- tramp_unmap_kernel x30
+ ldr lr, [sp, #S_LR]
+ tramp_unmap_kernel x29
.if \regsize == 64
- mrs x30, far_el1
+ mrs x29, far_el1
.endif
+ add sp, sp, #S_FRAME_SIZE // restore sp
eret
.endm

--
2.30.2

2022-03-21 13:18:38

by James Morse

Subject: [stable:PATCH v4.19.235 21/22] arm64: add ID_AA64ISAR2_EL1 sys register

From: Joey Gouly <[email protected]>

commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream.

This is a new ID register, introduced in Armv8.7.
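
A minimal sketch of how the sanitised copy added below can be consumed
(read_sanitised_ftr_reg() and cpuid_feature_extract_unsigned_field()
already exist in this tree; the field at bit 28 is not described by
this patch, so treat the shift as illustrative):

	/* System-wide safe value, after all CPUs have booted */
	u64 isar2 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1);
	/* Illustrative: extract the unsigned 4-bit field at bit 28 */
	unsigned int field = cpuid_feature_extract_unsigned_field(isar2, 28);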

Signed-off-by: Joey Gouly <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: James Morse <[email protected]>
Cc: Alexandru Elisei <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Reiji Watanabe <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/cpu.h | 1 +
arch/arm64/include/asm/sysreg.h | 1 +
arch/arm64/kernel/cpufeature.c | 9 +++++++++
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/kvm/sys_regs.c | 2 +-
5 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index 88392272250e..3a9908a01219 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -36,6 +36,7 @@ struct cpuinfo_arm64 {
u64 reg_id_aa64dfr1;
u64 reg_id_aa64isar0;
u64 reg_id_aa64isar1;
+ u64 reg_id_aa64isar2;
u64 reg_id_aa64mmfr0;
u64 reg_id_aa64mmfr1;
u64 reg_id_aa64mmfr2;
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b7b102292498..2afd6f23bef2 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -161,6 +161,7 @@

#define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0)
#define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1)
+#define SYS_ID_AA64ISAR2_EL1 sys_reg(3, 0, 0, 6, 2)

#define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0)
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0ec57bbea2e0..24ce5f5242c9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -150,6 +150,10 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
ARM64_FTR_END,
};

+static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
+ ARM64_FTR_END,
+};
+
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
@@ -396,6 +400,7 @@ static const struct __ftr_reg_entry {
/* Op1 = 0, CRn = 0, CRm = 6 */
ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
ARM64_FTR_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1),
+ ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2),

/* Op1 = 0, CRn = 0, CRm = 7 */
ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
@@ -544,6 +549,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
+ init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1, info->reg_id_aa64isar2);
init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
@@ -661,6 +667,8 @@ void update_cpu_features(int cpu,
info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
+ taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu,
+ info->reg_id_aa64isar2, boot->reg_id_aa64isar2);

/*
* Differing PARange support is fine as long as all peripherals and
@@ -794,6 +802,7 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
+ read_sysreg_case(SYS_ID_AA64ISAR2_EL1);

read_sysreg_case(SYS_CNTFRQ_EL0);
read_sysreg_case(SYS_CTR_EL0);
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index dce971f2c167..36bd58d8ca11 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -334,6 +334,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1);
info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1);
+ info->reg_id_aa64isar2 = read_cpuid(ID_AA64ISAR2_EL1);
info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 98e8bc919583..f06629bf2be1 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1289,7 +1289,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* CRm=6 */
ID_SANITISED(ID_AA64ISAR0_EL1),
ID_SANITISED(ID_AA64ISAR1_EL1),
- ID_UNALLOCATED(6,2),
+ ID_SANITISED(ID_AA64ISAR2_EL1),
ID_UNALLOCATED(6,3),
ID_UNALLOCATED(6,4),
ID_UNALLOCATED(6,5),
--
2.30.2

2022-03-21 14:02:55

by James Morse

Subject: [stable:PATCH v4.19.235 18/22] KVM: arm64: Add templates for BHB mitigation sequences

KVM writes the Spectre-v2 mitigation template at the beginning of each
vector when a CPU requires a specific sequence to run.

Because the template is copied, it cannot be modified by the alternatives
at runtime. As the KVM template code is intertwined with the bp-hardening
callbacks, all templates must have a bp-hardening callback.

Add templates for calling ARCH_WORKAROUND_3 and one for each value of K
in the branchy loop. Identify these sequences by a new parameter
template_start, and add a copy of install_bp_hardening_cb() that is able to
install them.
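
A minimal usage sketch (kvm_setup_bhb_slot() and the template symbols
are all introduced by the hunks below; the call sites shown here stand
in for a later patch in this series):

	/* CPU needs the k=8 branchy loop: */
	kvm_setup_bhb_slot(__spectre_bhb_loop_k8_start);

	/* CPU's firmware implements ARCH_WORKAROUND_3: */
	kvm_setup_bhb_slot(__smccc_workaround_3_smc_start);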

Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/kvm_mmu.h | 6 ++-
arch/arm64/include/asm/mmu.h | 6 +++
arch/arm64/kernel/cpu_errata.c | 65 +++++++++++++++++++++++++++++++-
arch/arm64/kvm/hyp/hyp-entry.S | 54 ++++++++++++++++++++++++++
5 files changed, 130 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index df8fe8ecc37e..64ae14371cae 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,7 +54,8 @@
#define ARM64_WORKAROUND_1463225 33
#define ARM64_SSBS 34
#define ARM64_WORKAROUND_1542419 35
+#define ARM64_SPECTRE_BHB 36

-#define ARM64_NCAPS 36
+#define ARM64_NCAPS 37

#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b2558447c67d..44d3fdbcdf62 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -438,7 +438,8 @@ static inline void *kvm_get_hyp_vector(void)
void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
int slot = -1;

- if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
+ if ((cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) ||
+ cpus_have_const_cap(ARM64_SPECTRE_BHB)) && data->template_start) {
vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs_start));
slot = data->hyp_vectors_slot;
}
@@ -467,7 +468,8 @@ static inline int kvm_map_vectors(void)
* !HBP + HEL2 -> allocate one vector slot and use exec mapping
* HBP + HEL2 -> use hardened vertors and use exec mapping
*/
- if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
+ if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) ||
+ cpus_have_const_cap(ARM64_SPECTRE_BHB)) {
__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs_start);
__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
}
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 1f61519fc23f..b37d185e0e84 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -49,6 +49,12 @@ typedef void (*bp_hardening_cb_t)(void);
struct bp_hardening_data {
int hyp_vectors_slot;
bp_hardening_cb_t fn;
+
+ /*
+ * template_start is only used by the BHB mitigation to identify the
+ * hyp_vectors_slot sequence.
+ */
+ const char *template_start;
};

#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index d055ed21eeeb..791dd8dadd9e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -97,6 +97,14 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
#ifdef CONFIG_KVM_INDIRECT_VECTORS
extern char __smccc_workaround_1_smc_start[];
extern char __smccc_workaround_1_smc_end[];
+extern char __smccc_workaround_3_smc_start[];
+extern char __smccc_workaround_3_smc_end[];
+extern char __spectre_bhb_loop_k8_start[];
+extern char __spectre_bhb_loop_k8_end[];
+extern char __spectre_bhb_loop_k24_start[];
+extern char __spectre_bhb_loop_k24_end[];
+extern char __spectre_bhb_loop_k32_start[];
+extern char __spectre_bhb_loop_k32_end[];

static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -110,11 +118,11 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
}

+static DEFINE_SPINLOCK(bp_lock);
static void install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
const char *hyp_vecs_end)
{
- static DEFINE_SPINLOCK(bp_lock);
int cpu, slot = -1;

spin_lock(&bp_lock);
@@ -133,6 +141,7 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,

__this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
__this_cpu_write(bp_hardening_data.fn, fn);
+ __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start);
spin_unlock(&bp_lock);
}
#else
@@ -935,3 +944,57 @@ enum mitigation_state arm64_get_spectre_bhb_state(void)
{
return spectre_bhb_state;
}
+
+#ifdef CONFIG_KVM_INDIRECT_VECTORS
+static const char *kvm_bhb_get_vecs_end(const char *start)
+{
+ if (start == __smccc_workaround_3_smc_start)
+ return __smccc_workaround_3_smc_end;
+ else if (start == __spectre_bhb_loop_k8_start)
+ return __spectre_bhb_loop_k8_end;
+ else if (start == __spectre_bhb_loop_k24_start)
+ return __spectre_bhb_loop_k24_end;
+ else if (start == __spectre_bhb_loop_k32_start)
+ return __spectre_bhb_loop_k32_end;
+
+ return NULL;
+}
+
+void kvm_setup_bhb_slot(const char *hyp_vecs_start)
+{
+ int cpu, slot = -1;
+ const char *hyp_vecs_end;
+
+ if (!IS_ENABLED(CONFIG_KVM) || !is_hyp_mode_available())
+ return;
+
+ hyp_vecs_end = kvm_bhb_get_vecs_end(hyp_vecs_start);
+ if (WARN_ON_ONCE(!hyp_vecs_start || !hyp_vecs_end))
+ return;
+
+ spin_lock(&bp_lock);
+ for_each_possible_cpu(cpu) {
+ if (per_cpu(bp_hardening_data.template_start, cpu) == hyp_vecs_start) {
+ slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
+ break;
+ }
+ }
+
+ if (slot == -1) {
+ slot = atomic_inc_return(&arm64_el2_vector_last_slot);
+ BUG_ON(slot >= BP_HARDEN_EL2_SLOTS);
+ __copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
+ }
+
+ __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
+ __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start);
+ spin_unlock(&bp_lock);
+}
+#else
+#define __smccc_workaround_3_smc_start NULL
+#define __spectre_bhb_loop_k8_start NULL
+#define __spectre_bhb_loop_k24_start NULL
+#define __spectre_bhb_loop_k32_start NULL
+
+void kvm_setup_bhb_slot(const char *hyp_vecs_start) { };
+#endif
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index ea063312bca1..05593f631eca 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -334,4 +334,58 @@ ENTRY(__smccc_workaround_1_smc_start)
ldp x0, x1, [sp, #(8 * 2)]
add sp, sp, #(8 * 4)
ENTRY(__smccc_workaround_1_smc_end)
+
+ENTRY(__smccc_workaround_3_smc_start)
+ esb
+ sub sp, sp, #(8 * 4)
+ stp x2, x3, [sp, #(8 * 0)]
+ stp x0, x1, [sp, #(8 * 2)]
+ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3
+ smc #0
+ ldp x2, x3, [sp, #(8 * 0)]
+ ldp x0, x1, [sp, #(8 * 2)]
+ add sp, sp, #(8 * 4)
+ENTRY(__smccc_workaround_3_smc_end)
+
+ENTRY(__spectre_bhb_loop_k8_start)
+ esb
+ sub sp, sp, #(8 * 2)
+ stp x0, x1, [sp, #(8 * 0)]
+ mov x0, #8
+2: b . + 4
+ subs x0, x0, #1
+ b.ne 2b
+ dsb nsh
+ isb
+ ldp x0, x1, [sp, #(8 * 0)]
+ add sp, sp, #(8 * 2)
+ENTRY(__spectre_bhb_loop_k8_end)
+
+ENTRY(__spectre_bhb_loop_k24_start)
+ esb
+ sub sp, sp, #(8 * 2)
+ stp x0, x1, [sp, #(8 * 0)]
+ mov x0, #24
+2: b . + 4
+ subs x0, x0, #1
+ b.ne 2b
+ dsb nsh
+ isb
+ ldp x0, x1, [sp, #(8 * 0)]
+ add sp, sp, #(8 * 2)
+ENTRY(__spectre_bhb_loop_k24_end)
+
+ENTRY(__spectre_bhb_loop_k32_start)
+ esb
+ sub sp, sp, #(8 * 2)
+ stp x0, x1, [sp, #(8 * 0)]
+ mov x0, #32
+2: b . + 4
+ subs x0, x0, #1
+ b.ne 2b
+ dsb nsh
+ isb
+ ldp x0, x1, [sp, #(8 * 0)]
+ add sp, sp, #(8 * 2)
+ENTRY(__spectre_bhb_loop_k32_end)
#endif
--
2.30.2

2022-03-21 16:08:49

by James Morse

Subject: [stable:PATCH v4.19.235 09/22] arm64: entry: Don't assume tramp_vectors is the start of the vectors

commit ed50da7764535f1e24432ded289974f2bf2b0c5a upstream.

The tramp_ventry macro uses tramp_vectors as the address of the vectors
when calculating which ventry in the 'full fat' vectors to branch to.

While there is one set of tramp_vectors, this will be true.
Adding multiple sets of vectors will break this assumption.

Move the generation of the vectors to a macro, and pass the start
of the vectors as an argument to tramp_ventry.
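
(The .Lvector_start\@ label used below relies on the assembler's \@
pseudo-variable, which expands to the number of macros executed so far,
giving each expansion of the macro its own unique local label.)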

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5d174c119563..a13f7d30b7fa 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -975,7 +975,7 @@ alternative_else_nop_endif
sub \dst, \dst, PAGE_SIZE
.endm

- .macro tramp_ventry, regsize = 64
+ .macro tramp_ventry, vector_start, regsize
.align 7
1:
.if \regsize == 64
@@ -997,9 +997,9 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
#else
ldr x30, =vectors
#endif
- prfm plil1strm, [x30, #(1b - tramp_vectors)]
+ prfm plil1strm, [x30, #(1b - \vector_start)]
msr vbar_el1, x30
- add x30, x30, #(1b - tramp_vectors + 4)
+ add x30, x30, #(1b - \vector_start + 4)
isb
ret
.org 1b + 128 // Did we overflow the ventry slot?
@@ -1017,19 +1017,21 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
eret
.endm

- .align 11
-ENTRY(tramp_vectors)
+ .macro generate_tramp_vector
+.Lvector_start\@:
.space 0x400

- tramp_ventry
- tramp_ventry
- tramp_ventry
- tramp_ventry
+ .rept 4
+ tramp_ventry .Lvector_start\@, 64
+ .endr
+ .rept 4
+ tramp_ventry .Lvector_start\@, 32
+ .endr
+ .endm

- tramp_ventry 32
- tramp_ventry 32
- tramp_ventry 32
- tramp_ventry 32
+ .align 11
+ENTRY(tramp_vectors)
+ generate_tramp_vector
END(tramp_vectors)

ENTRY(tramp_exit_native)
--
2.30.2

2022-03-21 16:52:05

by James Morse

Subject: [stable:PATCH v4.19.235 11/22] arm64: entry: Make the kpti trampoline's kpti sequence optional

commit c47e4d04ba0f1ea17353d85d45f611277507e07a upstream.

Spectre-BHB needs to add sequences to the vectors. Having one global
set of vectors is a problem for big/little systems where the sequence
is costly on cpus that are not vulnerable.

Making the vectors per-cpu in the style of KVM's bp_harden_hyp_vecs
requires the vectors to be generated by macros.

Make the kpti re-mapping of the kernel optional, so the macros can be
used without kpti.

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 03c45538a93f..50843161607e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -970,9 +970,10 @@ alternative_else_nop_endif
sub \dst, \dst, PAGE_SIZE
.endm

- .macro tramp_ventry, vector_start, regsize
+ .macro tramp_ventry, vector_start, regsize, kpti
.align 7
1:
+ .if \kpti == 1
.if \regsize == 64
msr tpidrro_el0, x30 // Restored in kernel_ventry
.endif
@@ -994,8 +995,12 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
#endif
prfm plil1strm, [x30, #(1b - \vector_start)]
msr vbar_el1, x30
+ isb
+ .else
+ ldr x30, =vectors
+ .endif // \kpti == 1
+
add x30, x30, #(1b - \vector_start + 4)
- isb
ret
.org 1b + 128 // Did we overflow the ventry slot?
.endm
@@ -1012,15 +1017,15 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
eret
.endm

- .macro generate_tramp_vector
+ .macro generate_tramp_vector, kpti
.Lvector_start\@:
.space 0x400

.rept 4
- tramp_ventry .Lvector_start\@, 64
+ tramp_ventry .Lvector_start\@, 64, \kpti
.endr
.rept 4
- tramp_ventry .Lvector_start\@, 32
+ tramp_ventry .Lvector_start\@, 32, \kpti
.endr
.endm

@@ -1031,7 +1036,7 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
.pushsection ".entry.tramp.text", "ax"
.align 11
ENTRY(tramp_vectors)
- generate_tramp_vector
+ generate_tramp_vector kpti=1
END(tramp_vectors)

ENTRY(tramp_exit_native)
--
2.30.2

2022-03-21 17:07:27

by James Morse

Subject: [stable:PATCH v4.19.235 17/22] arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2

commit dee435be76f4117410bbd90573a881fd33488f37 upstream.

Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation as part of
a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that
previously reported 'Not affected' are now only moderately mitigated by
CSV2.

Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2
to also show the state of the BHB mitigation.
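
With this change the file can report combinations such as
"Mitigation: Branch predictor hardening, BHB" (both mitigated),
"Mitigation: CSV2, but not BHB" (a CSV2 CPU with no BHB mitigation), or
still "Not affected" when neither issue applies; the exact string
depends on __spectrev2_safe, __hardenbp_enab and the BHB state, as the
hunk below shows.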

Reviewed-by: Catalin Marinas <[email protected]>
[ code move to cpu_errata.c for backport ]
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 8 ++++++
arch/arm64/kernel/cpu_errata.c | 38 ++++++++++++++++++++++++++---
2 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dda6e5056810..e59b9ee43e4a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -527,6 +527,14 @@ static inline int arm64_get_ssbd_state(void)

void arm64_set_ssbd_mitigation(bool state);

+/* Watch out, ordering is important here. */
+enum mitigation_state {
+ SPECTRE_UNAFFECTED,
+ SPECTRE_MITIGATED,
+ SPECTRE_VULNERABLE,
+};
+
+enum mitigation_state arm64_get_spectre_bhb_state(void);
#endif /* __ASSEMBLY__ */

#endif
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index d191ce8410db..d055ed21eeeb 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -875,14 +875,39 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
}

+static const char *get_bhb_affected_string(enum mitigation_state bhb_state)
+{
+ switch (bhb_state) {
+ case SPECTRE_UNAFFECTED:
+ return "";
+ default:
+ case SPECTRE_VULNERABLE:
+ return ", but not BHB";
+ case SPECTRE_MITIGATED:
+ return ", BHB";
+ }
+}
+
ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
char *buf)
{
- if (__spectrev2_safe)
- return sprintf(buf, "Not affected\n");
+ enum mitigation_state bhb_state = arm64_get_spectre_bhb_state();
+ const char *bhb_str = get_bhb_affected_string(bhb_state);
+ const char *v2_str = "Branch predictor hardening";
+
+ if (__spectrev2_safe) {
+ if (bhb_state == SPECTRE_UNAFFECTED)
+ return sprintf(buf, "Not affected\n");
+
+ /*
+ * Platforms affected by Spectre-BHB can't report
+ * "Not affected" for Spectre-v2.
+ */
+ v2_str = "CSV2";
+ }

if (__hardenbp_enab)
- return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+ return sprintf(buf, "Mitigation: %s%s\n", v2_str, bhb_str);

return sprintf(buf, "Vulnerable\n");
}
@@ -903,3 +928,10 @@ ssize_t cpu_show_spec_store_bypass(struct device *dev,

return sprintf(buf, "Vulnerable\n");
}
+
+static enum mitigation_state spectre_bhb_state;
+
+enum mitigation_state arm64_get_spectre_bhb_state(void)
+{
+ return spectre_bhb_state;
+}
--
2.30.2

2022-03-21 17:38:44

by James Morse

Subject: [stable:PATCH v4.19.235 02/22] arm64: Add Neoverse-N2, Cortex-A710 CPU part definition

From: Suzuki K Poulose <[email protected]>

commit 2d0d656700d67239a57afaf617439143d8dac9be upstream.

Add the CPU part numbers for the new Arm designs.
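
A short sketch of how such definitions are typically consumed (struct
midr_range, MIDR_ALL_VERSIONS() and is_midr_in_range_list() already
exist in cputype.h; the list name here is illustrative):

	static const struct midr_range bhb_affected_list[] = {
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
		MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
		{},
	};

	bool affected = is_midr_in_range_list(read_cpuid_id(),
					      bhb_affected_list);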

Cc: Catalin Marinas <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/cputype.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index b147e40c4110..7315f20c7ba2 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -82,6 +82,8 @@
#define ARM_CPU_PART_CORTEX_A76 0xD0B
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
#define ARM_CPU_PART_CORTEX_A77 0xD0D
+#define ARM_CPU_PART_CORTEX_A710 0xD47
+#define ARM_CPU_PART_NEOVERSE_N2 0xD49

#define APM_CPU_PART_POTENZA 0x000

@@ -111,6 +113,8 @@
#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
--
2.30.2

2022-03-21 21:16:08

by Greg Kroah-Hartman

Subject: Re: [stable:PATCH v4.19.235 00/22] arm64: Mitigate spectre style branch history side channels

On Fri, Mar 18, 2022 at 05:48:20PM +0000, James Morse wrote:
> Hello!
>
> There is the v4.19 backport with the k=8 typo and SDEI name thing both
> fixed.
>
> Again, its the KVM templates patch that doesn't exist upstream, this is
> necessary because the infrastructure for older kernels is very
> different, and the dependencies for what was a rewrite are huge.

Many thanks for these, all now queued up!

greg k-h

2022-03-21 22:09:43

by James Morse

Subject: [stable:PATCH v4.19.235 10/22] arm64: entry: Move trampoline macros out of ifdef'd section

commit 13d7a08352a83ef2252aeb464a5e08dfc06b5dfd upstream.

The macros for building the kpti trampoline are all behind
CONFIG_UNMAP_KERNEL_AT_EL0, and in a region that outputs to the
.entry.tramp.text section.

Move the macros out so they can be used to generate other kinds of
trampoline. Only the symbols need to be guarded by
CONFIG_UNMAP_KERNEL_AT_EL0 and appear in the .entry.tramp.text section.

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a13f7d30b7fa..03c45538a93f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -933,12 +933,7 @@ ENDPROC(el0_svc)

.popsection // .entry.text

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-/*
- * Exception vectors trampoline.
- */
- .pushsection ".entry.tramp.text", "ax"
-
+ // Move from tramp_pg_dir to swapper_pg_dir
.macro tramp_map_kernel, tmp
mrs \tmp, ttbr1_el1
add \tmp, \tmp, #(PAGE_SIZE + RESERVED_TTBR0_SIZE)
@@ -1029,6 +1024,11 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
.endr
.endm

+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+/*
+ * Exception vectors trampoline.
+ */
+ .pushsection ".entry.tramp.text", "ax"
.align 11
ENTRY(tramp_vectors)
generate_tramp_vector
--
2.30.2

2022-03-21 22:43:26

by James Morse

Subject: [stable:PATCH v4.19.235 01/22] arm64: Add part number for Arm Cortex-A77

From: Rob Herring <[email protected]>

commit 8a6b88e66233f5f1779b0a1342aa9dc030dddcd5 upstream.

Add the MIDR part number info for the Arm Cortex-A77.

Signed-off-by: Rob Herring <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/include/asm/cputype.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 3cd936b1c79c..b147e40c4110 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -81,6 +81,7 @@
#define ARM_CPU_PART_CORTEX_A55 0xD05
#define ARM_CPU_PART_CORTEX_A76 0xD0B
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
+#define ARM_CPU_PART_CORTEX_A77 0xD0D

#define APM_CPU_PART_POTENZA 0x000

@@ -109,6 +110,7 @@
#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
--
2.30.2

2022-03-21 22:50:50

by James Morse

Subject: [stable:PATCH v4.19.235 05/22] arm64: entry: Make the trampoline cleanup optional

commit d739da1694a0eaef0358a42b76904b611539b77b upstream.

Subsequent patches will add additional sets of vectors that use
the same tricks as the kpti vectors to reach the full-fat vectors.
The full-fat vectors contain some cleanup for kpti that is patched
in by alternatives when kpti is in use. Once there are additional
vectors, the cleanup will be needed in more cases.

But on big/little systems, the cleanup would be harmful if no
trampoline vector were in use. Instead of forcing CPUs that don't
need a trampoline vector to use one, make the trampoline cleanup
optional.

Entry at the top of the vectors will skip the cleanup. The trampoline
vectors can then skip the first instruction, triggering the cleanup
to run.

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 297da3055793..96a0dda176c5 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -72,16 +72,20 @@
.align 7
.Lventry_start\@:
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-alternative_if ARM64_UNMAP_KERNEL_AT_EL0
.if \el == 0
+ /*
+ * This must be the first instruction of the EL0 vector entries. It is
+ * skipped by the trampoline vectors, to trigger the cleanup.
+ */
+ b .Lskip_tramp_vectors_cleanup\@
.if \regsize == 64
mrs x30, tpidrro_el0
msr tpidrro_el0, xzr
.else
mov x30, xzr
.endif
+.Lskip_tramp_vectors_cleanup\@:
.endif
-alternative_else_nop_endif
#endif

sub sp, sp, #S_FRAME_SIZE
@@ -983,7 +987,7 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
#endif
prfm plil1strm, [x30, #(1b - tramp_vectors)]
msr vbar_el1, x30
- add x30, x30, #(1b - tramp_vectors)
+ add x30, x30, #(1b - tramp_vectors + 4)
isb
ret
.org 1b + 128 // Did we overflow the ventry slot?
--
2.30.2

2022-03-21 23:14:12

by James Morse

Subject: [stable:PATCH v4.19.235 15/22] arm64: entry: Add macro for reading symbol addresses from the trampoline

commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream.

The trampoline code needs to use the address of symbols in the wider
kernel, e.g. vectors. PC-relative addressing wouldn't work as the
trampoline code doesn't run at the address the linker expected.

tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is
set, in which case it uses the data page as a literal pool because
the data page can be unmapped when running in user-space, which is
required for CPUs vulnerable to meltdown.

Pull this logic out as a macro, instead of adding a third copy
of it.

Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
arch/arm64/kernel/entry.S | 35 ++++++++++++++++-------------------
1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 3e391b1f6f3e..43bc04bd18d9 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -970,6 +970,15 @@ alternative_else_nop_endif
sub \dst, \dst, PAGE_SIZE
.endm

+ .macro tramp_data_read_var dst, var
+#ifdef CONFIG_RANDOMIZE_BASE
+ tramp_data_page \dst
+ add \dst, \dst, #:lo12:__entry_tramp_data_\var
+ ldr \dst, [\dst]
+#else
+ ldr \dst, =\var
+#endif
+ .endm

#define BHB_MITIGATION_NONE 0
#define BHB_MITIGATION_LOOP 1
@@ -1000,13 +1009,8 @@ alternative_else_nop_endif
b .
2:
tramp_map_kernel x30
-#ifdef CONFIG_RANDOMIZE_BASE
- tramp_data_page x30
alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
- ldr x30, [x30]
-#else
- ldr x30, =vectors
-#endif
+ tramp_data_read_var x30, vectors
prfm plil1strm, [x30, #(1b - \vector_start)]
msr vbar_el1, x30
isb
@@ -1086,7 +1090,12 @@ END(tramp_exit_compat)
.align PAGE_SHIFT
.globl __entry_tramp_data_start
__entry_tramp_data_start:
+__entry_tramp_data_vectors:
.quad vectors
+#ifdef CONFIG_ARM_SDE_INTERFACE
+__entry_tramp_data___sdei_asm_handler:
+ .quad __sdei_asm_handler
+#endif /* CONFIG_ARM_SDE_INTERFACE */
.popsection // .rodata
#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
@@ -1213,13 +1222,7 @@ ENTRY(__sdei_asm_entry_trampoline)
*/
1: str x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]

-#ifdef CONFIG_RANDOMIZE_BASE
- tramp_data_page x4
- add x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
- ldr x4, [x4]
-#else
- ldr x4, =__sdei_asm_handler
-#endif
+ tramp_data_read_var x4, __sdei_asm_handler
br x4
ENDPROC(__sdei_asm_entry_trampoline)
NOKPROBE(__sdei_asm_entry_trampoline)
@@ -1242,12 +1245,6 @@ ENDPROC(__sdei_asm_exit_trampoline)
NOKPROBE(__sdei_asm_exit_trampoline)
.ltorg
.popsection // .entry.tramp.text
-#ifdef CONFIG_RANDOMIZE_BASE
-.pushsection ".rodata", "a"
-__sdei_asm_trampoline_next_handler:
- .quad __sdei_asm_handler
-.popsection // .rodata
-#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

/*
--
2.30.2