2022-04-06 20:18:59

by Greg Kroah-Hartman

Subject: [PATCH 4.9 00/43] 4.9.310-rc1 review

This is the start of the stable review cycle for the 4.9.310 release.
There are 43 patches in this series, all of which will be posted as
responses to this one. If anyone has any issues with these being
applied, please let me know.

Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.310-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 4.9.310-rc1

James Morse <[email protected]>
arm64: Use the clearbhb instruction in mitigations

James Morse <[email protected]>
arm64: add ID_AA64ISAR2_EL1 sys register

James Morse <[email protected]>
KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated

James Morse <[email protected]>
arm64: Mitigate spectre style branch history side channels

James Morse <[email protected]>
KVM: arm64: Add templates for BHB mitigation sequences

James Morse <[email protected]>
arm64: Add percpu vectors for EL1

James Morse <[email protected]>
arm64: entry: Add macro for reading symbol addresses from the trampoline

James Morse <[email protected]>
arm64: entry: Add vectors that have the bhb mitigation sequences

James Morse <[email protected]>
arm64: Move arm64_update_smccc_conduit() out of SSBD ifdef

James Morse <[email protected]>
arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations

James Morse <[email protected]>
arm64: entry: Allow the trampoline text to occupy multiple pages

James Morse <[email protected]>
arm64: entry: Make the kpti trampoline's kpti sequence optional

James Morse <[email protected]>
arm64: entry: Move trampoline macros out of ifdef'd section

James Morse <[email protected]>
arm64: entry: Don't assume tramp_vectors is the start of the vectors

James Morse <[email protected]>
arm64: entry: Allow tramp_alias to access symbols after the 4K boundary

James Morse <[email protected]>
arm64: entry: Move the trampoline data page before the text page

James Morse <[email protected]>
arm64: entry: Free up another register on kpti's tramp_exit path

James Morse <[email protected]>
arm64: entry: Make the trampoline cleanup optional

James Morse <[email protected]>
arm64: entry.S: Add ventry overflow sanity checks

Suzuki K Poulose <[email protected]>
arm64: Add helper to decode register from instruction

Anshuman Khandual <[email protected]>
arm64: Add Cortex-X2 CPU part definition

Suzuki K Poulose <[email protected]>
arm64: Add Neoverse-N2, Cortex-A710 CPU part definition

Rob Herring <[email protected]>
arm64: Add part number for Arm Cortex-A77

Marc Zyngier <[email protected]>
arm64: Add part number for Neoverse N1

Marc Zyngier <[email protected]>
arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT

Marc Zyngier <[email protected]>
arm64: Add silicon-errata.txt entry for ARM erratum 1188873

Arnd Bergmann <[email protected]>
arm64: arch_timer: avoid unused function warning

Marc Zyngier <[email protected]>
arm64: arch_timer: Add workaround for ARM erratum 1188873

Marc Zyngier <[email protected]>
arm64: arch_timer: Add erratum handler for CPU-specific capability

Marc Zyngier <[email protected]>
arm64: arch_timer: Add infrastructure for multiple erratum detection methods

Ding Tianhong <[email protected]>
clocksource/drivers/arm_arch_timer: Introduce generic errata handling infrastructure

Ding Tianhong <[email protected]>
clocksource/drivers/arm_arch_timer: Remove fsl-a008585 parameter

Suzuki K Poulose <[email protected]>
arm64: capabilities: Add support for checks based on a list of MIDRs

Suzuki K Poulose <[email protected]>
arm64: Add helpers for checking CPU MIDR against a range

Suzuki K Poulose <[email protected]>
arm64: capabilities: Clean up midr range helpers

Suzuki K Poulose <[email protected]>
arm64: capabilities: Add flags to handle the conflicts on late CPU

Suzuki K Poulose <[email protected]>
arm64: capabilities: Prepare for fine grained capabilities

Suzuki K Poulose <[email protected]>
arm64: capabilities: Move errata processing code

Suzuki K Poulose <[email protected]>
arm64: capabilities: Move errata work around check on boot CPU

Dave Martin <[email protected]>
arm64: capabilities: Update prototype for enable call back

Suzuki K Poulose <[email protected]>
arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35

James Morse <[email protected]>
arm64: Remove useless UAO IPI and describe how this gets enabled

Robert Richter <[email protected]>
arm64: errata: Provide macro for major and minor cpu revisions


-------------

Diffstat:

Documentation/arm64/silicon-errata.txt | 1 +
Documentation/kernel-parameters.txt | 9 -
Makefile | 4 +-
arch/arm/include/asm/kvm_host.h | 5 +
arch/arm/kvm/psci.c | 4 +
arch/arm64/Kconfig | 24 ++
arch/arm64/include/asm/arch_timer.h | 44 ++-
arch/arm64/include/asm/assembler.h | 34 ++
arch/arm64/include/asm/cpu.h | 1 +
arch/arm64/include/asm/cpucaps.h | 4 +-
arch/arm64/include/asm/cpufeature.h | 232 ++++++++++++-
arch/arm64/include/asm/cputype.h | 63 ++++
arch/arm64/include/asm/fixmap.h | 6 +-
arch/arm64/include/asm/insn.h | 2 +
arch/arm64/include/asm/kvm_host.h | 4 +
arch/arm64/include/asm/kvm_mmu.h | 2 +-
arch/arm64/include/asm/mmu.h | 8 +-
arch/arm64/include/asm/processor.h | 6 +-
arch/arm64/include/asm/sections.h | 6 +
arch/arm64/include/asm/sysreg.h | 5 +
arch/arm64/include/asm/vectors.h | 74 +++++
arch/arm64/kernel/bpi.S | 55 +++
arch/arm64/kernel/cpu_errata.c | 591 ++++++++++++++++++++++++++-------
arch/arm64/kernel/cpufeature.c | 167 +++++++---
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/kernel/entry.S | 197 ++++++++---
arch/arm64/kernel/fpsimd.c | 1 +
arch/arm64/kernel/insn.c | 29 ++
arch/arm64/kernel/smp.c | 6 -
arch/arm64/kernel/traps.c | 4 +-
arch/arm64/kernel/vmlinux.lds.S | 2 +-
arch/arm64/kvm/hyp/hyp-entry.S | 4 +
arch/arm64/kvm/hyp/switch.c | 9 +-
arch/arm64/mm/fault.c | 17 +-
arch/arm64/mm/mmu.c | 11 +-
drivers/clocksource/Kconfig | 4 +
drivers/clocksource/arm_arch_timer.c | 192 ++++++++---
include/linux/arm-smccc.h | 7 +
38 files changed, 1504 insertions(+), 331 deletions(-)



2022-04-06 20:20:30

by Greg Kroah-Hartman

Subject: [PATCH 4.9 06/43] arm64: capabilities: Move errata processing code

From: Suzuki K Poulose <[email protected]>

[ Upstream commit 1e89baed5d50d2b8d9fd420830902570270703f1 ]

We have errata work around processing code in cpu_errata.c,
which calls back into helpers defined in cpufeature.c. Now
that we are going to make the handling of capabilities
generic, by adding the information to each capability, move
the errata work around specific processing code into cpufeature.c.
No functional changes.
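
As an illustration (not part of the patch), the table-walk pattern that
the errata and feature paths now share looks like this; a table is
terminated by an entry whose ->matches is NULL:

static void walk_caps(const struct arm64_cpu_capabilities *caps,
		      const char *info)
{
	for (; caps->matches; caps++) {
		if (!caps->matches(caps, caps->def_scope))
			continue;
		/* record caps->capability; log "info" with caps->desc */
	}
}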

Cc: Will Deacon <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Andre Przywara <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 7 -----
arch/arm64/kernel/cpu_errata.c | 33 ---------------------------
arch/arm64/kernel/cpufeature.c | 43 +++++++++++++++++++++++++++++++++---
3 files changed, 40 insertions(+), 43 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -200,15 +200,8 @@ static inline bool id_aa64pfr0_32bit_el0
}

void __init setup_cpu_features(void);
-
-void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
- const char *info);
-void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps);
void check_local_cpu_capabilities(void);

-void update_cpu_errata_workarounds(void);
-void __init enable_errata_workarounds(void);
-void verify_local_cpu_errata_workarounds(void);

u64 read_system_reg(u32 id);

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -512,36 +512,3 @@ const struct arm64_cpu_capabilities arm6
{
}
};
-
-/*
- * The CPU Errata work arounds are detected and applied at boot time
- * and the related information is freed soon after. If the new CPU requires
- * an errata not detected at boot, fail this CPU.
- */
-void verify_local_cpu_errata_workarounds(void)
-{
- const struct arm64_cpu_capabilities *caps = arm64_errata;
-
- for (; caps->matches; caps++) {
- if (cpus_have_cap(caps->capability)) {
- if (caps->cpu_enable)
- caps->cpu_enable(caps);
- } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
- pr_crit("CPU%d: Requires work around for %s, not detected"
- " at boot time\n",
- smp_processor_id(),
- caps->desc ? : "an erratum");
- cpu_die_early();
- }
- }
-}
-
-void update_cpu_errata_workarounds(void)
-{
- update_cpu_capabilities(arm64_errata, "enabling workaround for");
-}
-
-void __init enable_errata_workarounds(void)
-{
- enable_cpu_capabilities(arm64_errata);
-}
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -439,6 +439,9 @@ static void __init init_cpu_ftr_reg(u32
reg->strict_mask = strict_mask;
}

+extern const struct arm64_cpu_capabilities arm64_errata[];
+static void update_cpu_errata_workarounds(void);
+
void __init init_cpu_features(struct cpuinfo_arm64 *info)
{
/* Before we start using the tables, make sure it is sorted */
@@ -1066,8 +1069,8 @@ static bool __this_cpu_has_cap(const str
return false;
}

-void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
- const char *info)
+static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+ const char *info)
{
for (; caps->matches; caps++) {
if (!caps->matches(caps, caps->def_scope))
@@ -1091,7 +1094,8 @@ static int __enable_cpu_capability(void
* Run through the enabled capabilities and enable() it on all active
* CPUs
*/
-void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
+static void __init
+enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
{
for (; caps->matches; caps++) {
unsigned int num = caps->capability;
@@ -1174,6 +1178,39 @@ verify_local_cpu_features(const struct a
}

/*
+ * The CPU Errata work arounds are detected and applied at boot time
+ * and the related information is freed soon after. If the new CPU requires
+ * an errata not detected at boot, fail this CPU.
+ */
+static void verify_local_cpu_errata_workarounds(void)
+{
+ const struct arm64_cpu_capabilities *caps = arm64_errata;
+
+ for (; caps->matches; caps++) {
+ if (cpus_have_cap(caps->capability)) {
+ if (caps->cpu_enable)
+ caps->cpu_enable(caps);
+ } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ pr_crit("CPU%d: Requires work around for %s, not detected"
+ " at boot time\n",
+ smp_processor_id(),
+ caps->desc ? : "an erratum");
+ cpu_die_early();
+ }
+ }
+}
+
+static void update_cpu_errata_workarounds(void)
+{
+ update_cpu_capabilities(arm64_errata, "enabling workaround for");
+}
+
+static void __init enable_errata_workarounds(void)
+{
+ enable_cpu_capabilities(arm64_errata);
+}
+
+/*
* Run through the enabled system capabilities and enable() it on this CPU.
* The capabilities were decided based on the available CPUs at the boot time.
* Any new CPU should match the system wide status of the capability. If the


2022-04-06 20:23:35

by Greg Kroah-Hartman

Subject: [PATCH 4.9 11/43] arm64: capabilities: Add support for checks based on a list of MIDRs

From: Suzuki K Poulose <[email protected]>

[ Upstream commit be5b299830c63ed76e0357473c4218c85fb388b3 ]

Add helpers for detecting an erratum on a list of MIDR ranges
of affected CPUs that share the same work around.
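
For illustration only, a hypothetical erratum affecting two CPU models
would be expressed with the new helpers like so (the empty entry
terminates the list):

static const struct midr_range hypothetical_erratum_cpus[] = {
	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
	{},
};

/* true if the current CPU's MIDR falls in any listed range */
if (is_midr_in_range_list(read_cpuid_id(), hypothetical_erratum_cpus))
	/* ... apply the work around ... */;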

Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
[ardb: add Cortex-A35 to kpti_safe_list[] as well]
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 1
arch/arm64/include/asm/cputype.h | 9 +++++
arch/arm64/kernel/cpu_errata.c | 62 ++++++++++++++++++++----------------
arch/arm64/kernel/cpufeature.c | 21 ++++++------
4 files changed, 58 insertions(+), 35 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -233,6 +233,7 @@ struct arm64_cpu_capabilities {
struct midr_range midr_range;
};

+ const struct midr_range *midr_range_list;
struct { /* Feature register checking */
u32 sys_reg;
u8 field_pos;
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -143,6 +143,15 @@ static inline bool is_midr_in_range(u32
range->rv_min, range->rv_max);
}

+static inline bool
+is_midr_in_range_list(u32 midr, struct midr_range const *ranges)
+{
+ while (ranges->model)
+ if (is_midr_in_range(midr, ranges++))
+ return true;
+ return false;
+}
+
/*
* The CPU ID never changes at run time, so we might as well tell the
* compiler that it's constant. Use this function to read the CPU ID
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -33,6 +33,14 @@ is_affected_midr_range(const struct arm6
return is_midr_in_range(midr, &entry->midr_range);
}

+static bool __maybe_unused
+is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
+ int scope)
+{
+ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+ return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+}
+
static bool
has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
int scope)
@@ -383,6 +391,10 @@ static bool has_ssbd_mitigation(const st
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)

+#define CAP_MIDR_RANGE_LIST(list) \
+ .matches = is_affected_midr_range_list, \
+ .midr_range_list = list
+
/* Errata affecting a range of revisions of given model variant */
#define ERRATA_MIDR_REV_RANGE(m, var, r_min, r_max) \
ERRATA_MIDR_RANGE(m, var, r_min, var, r_max)
@@ -396,6 +408,29 @@ static bool has_ssbd_mitigation(const st
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
CAP_MIDR_ALL_VERSIONS(model)

+/* Errata affecting a list of midr ranges, with same work around */
+#define ERRATA_MIDR_RANGE_LIST(midr_list) \
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
+ CAP_MIDR_RANGE_LIST(midr_list)
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+
+/*
+ * List of CPUs where we need to issue a psci call to
+ * harden the branch predictor.
+ */
+static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+ MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+ {},
+};
+
+#endif
+
const struct arm64_cpu_capabilities arm64_errata[] = {
#if defined(CONFIG_ARM64_ERRATUM_826319) || \
defined(CONFIG_ARM64_ERRATUM_827319) || \
@@ -486,32 +521,7 @@ const struct arm64_cpu_capabilities arm6
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
- .cpu_enable = enable_smccc_arch_workaround_1,
- },
- {
- .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
- .cpu_enable = enable_smccc_arch_workaround_1,
- },
- {
- .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
- .cpu_enable = enable_smccc_arch_workaround_1,
- },
- {
- .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
- .cpu_enable = enable_smccc_arch_workaround_1,
- },
- {
- .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- ERRATA_MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
- .cpu_enable = enable_smccc_arch_workaround_1,
- },
- {
- .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- ERRATA_MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+ ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
.cpu_enable = enable_smccc_arch_workaround_1,
},
#endif
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -767,6 +767,17 @@ static int __kpti_forced; /* 0: not forc
static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
int __unused)
{
+ /* List of CPUs that are not vulnerable and don't need KPTI */
+ static const struct midr_range kpti_safe_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ };
char const *str = "command line option";
u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);

@@ -792,16 +803,8 @@ static bool unmap_kernel_at_el0(const st
return true;

/* Don't force KPTI for CPUs that are not vulnerable */
- switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
- case MIDR_CAVIUM_THUNDERX2:
- case MIDR_BRCM_VULCAN:
- case MIDR_CORTEX_A53:
- case MIDR_CORTEX_A55:
- case MIDR_CORTEX_A57:
- case MIDR_CORTEX_A72:
- case MIDR_CORTEX_A73:
+ if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
return false;
- }

/* Defer to CPU feature registers */
return !cpuid_feature_extract_unsigned_field(pfr0,


2022-04-06 20:23:36

by Greg Kroah-Hartman

Subject: [PATCH 4.9 43/43] arm64: Use the clearbhb instruction in mitigations

From: James Morse <[email protected]>

commit 228a26b912287934789023b4132ba76065d9491c upstream.

Future CPUs may implement a clearbhb instruction that is sufficient
to mitigate Spectre-BHB. CPUs that implement this instruction, but
not CSV2.3, must be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs
that support it. The instruction is in the hint space, so it will
be treated as a NOP by older CPUs.
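
As a sketch, the detection added below boils down to reading the 4-bit
CLEARBHB field (bits [31:28]) of ID_AA64ISAR2_EL1 and treating any
non-zero value as support:

u64 isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
bool has_clearbhb =
	cpuid_feature_extract_unsigned_field(isar2,
					     ID_AA64ISAR2_CLEARBHB_SHIFT) != 0;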

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
[ modified for stable: Use a KVM vector template instead of alternatives ]
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 7 +++++++
arch/arm64/include/asm/cpufeature.h | 13 +++++++++++++
arch/arm64/include/asm/sysreg.h | 3 +++
arch/arm64/include/asm/vectors.h | 7 +++++++
arch/arm64/kernel/bpi.S | 5 +++++
arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
arch/arm64/kernel/cpufeature.c | 1 +
arch/arm64/kernel/entry.S | 8 ++++++++
8 files changed, 58 insertions(+)

--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -95,6 +95,13 @@
.endm

/*
+ * Clear Branch History instruction
+ */
+ .macro clearbhb
+ hint #22
+ .endm
+
+/*
* Sanitise a 64-bit bounded index wrt speculation, returning zero if out
* of bounds.
*/
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -387,6 +387,19 @@ static inline bool supports_csv2p3(int s
return csv2_val == 3;
}

+static inline bool supports_clearbhb(int scope)
+{
+ u64 isar2;
+
+ if (scope == SCOPE_LOCAL_CPU)
+ isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
+ else
+ isar2 = read_system_reg(SYS_ID_AA64ISAR2_EL1);
+
+ return cpuid_feature_extract_unsigned_field(isar2,
+ ID_AA64ISAR2_CLEARBHB_SHIFT);
+}
+
static inline bool system_supports_32bit_el0(void)
{
return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -174,6 +174,9 @@
#define ID_AA64ISAR0_SHA1_SHIFT 8
#define ID_AA64ISAR0_AES_SHIFT 4

+/* id_aa64isar2 */
+#define ID_AA64ISAR2_CLEARBHB_SHIFT 28
+
/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56
--- a/arch/arm64/include/asm/vectors.h
+++ b/arch/arm64/include/asm/vectors.h
@@ -33,6 +33,12 @@ enum arm64_bp_harden_el1_vectors {
* canonical vectors.
*/
EL1_VECTOR_BHB_FW,
+
+ /*
+ * Use the ClearBHB instruction, before branching to the canonical
+ * vectors.
+ */
+ EL1_VECTOR_BHB_CLEAR_INSN,
#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */

/*
@@ -44,6 +50,7 @@ enum arm64_bp_harden_el1_vectors {
#ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
#define EL1_VECTOR_BHB_LOOP -1
#define EL1_VECTOR_BHB_FW -1
+#define EL1_VECTOR_BHB_CLEAR_INSN -1
#endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */

/* The vectors to use on return from EL0. e.g. to remap the kernel */
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -123,3 +123,8 @@ ENTRY(__spectre_bhb_loop_k32_start)
ldp x0, x1, [sp, #(8 * 0)]
add sp, sp, #(8 * 2)
ENTRY(__spectre_bhb_loop_k32_end)
+
+ENTRY(__spectre_bhb_clearbhb_start)
+ hint #22 /* aka clearbhb */
+ isb
+ENTRY(__spectre_bhb_clearbhb_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -84,6 +84,8 @@ extern char __spectre_bhb_loop_k24_start
extern char __spectre_bhb_loop_k24_end[];
extern char __spectre_bhb_loop_k32_start[];
extern char __spectre_bhb_loop_k32_end[];
+extern char __spectre_bhb_clearbhb_start[];
+extern char __spectre_bhb_clearbhb_end[];

static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -590,6 +592,7 @@ static void update_mitigation_state(enum
* - Mitigated by a branchy loop a CPU specific number of times, and listed
* in our "loop mitigated list".
* - Mitigated in software by the firmware Spectre v2 call.
+ * - Has the ClearBHB instruction to perform the mitigation.
* - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no
* software mitigation in the vectors is needed.
* - Has CSV2.3, so is unaffected.
@@ -729,6 +732,9 @@ bool is_spectre_bhb_affected(const struc
if (supports_csv2p3(scope))
return false;

+ if (supports_clearbhb(scope))
+ return true;
+
if (spectre_bhb_loop_affected(scope))
return true;

@@ -769,6 +775,8 @@ static const char *kvm_bhb_get_vecs_end(
return __spectre_bhb_loop_k24_end;
else if (start == __spectre_bhb_loop_k32_start)
return __spectre_bhb_loop_k32_end;
+ else if (start == __spectre_bhb_clearbhb_start)
+ return __spectre_bhb_clearbhb_end;

return NULL;
}
@@ -810,6 +818,7 @@ static void kvm_setup_bhb_slot(const cha
#define __spectre_bhb_loop_k8_start NULL
#define __spectre_bhb_loop_k24_start NULL
#define __spectre_bhb_loop_k32_start NULL
+#define __spectre_bhb_clearbhb_start NULL

static void kvm_setup_bhb_slot(const char *hyp_vecs_start) { };
#endif
@@ -835,6 +844,11 @@ void spectre_bhb_enable_mitigation(const
pr_info_once("spectre-bhb mitigation disabled by command line option\n");
} else if (supports_ecbhb(SCOPE_LOCAL_CPU)) {
state = SPECTRE_MITIGATED;
+ } else if (supports_clearbhb(SCOPE_LOCAL_CPU)) {
+ kvm_setup_bhb_slot(__spectre_bhb_clearbhb_start);
+ this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN);
+
+ state = SPECTRE_MITIGATED;
} else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) {
case 8:
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -99,6 +99,7 @@ static const struct arm64_ftr_bits ftr_i
};

static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
+ ARM64_FTR_BITS(FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
ARM64_FTR_END,
};

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -932,6 +932,7 @@ __ni_sys_trace:
#define BHB_MITIGATION_NONE 0
#define BHB_MITIGATION_LOOP 1
#define BHB_MITIGATION_FW 2
+#define BHB_MITIGATION_INSN 3

.macro tramp_ventry, vector_start, regsize, kpti, bhb
.align 7
@@ -948,6 +949,11 @@ __ni_sys_trace:
__mitigate_spectre_bhb_loop x30
.endif // \bhb == BHB_MITIGATION_LOOP

+ .if \bhb == BHB_MITIGATION_INSN
+ clearbhb
+ isb
+ .endif // \bhb == BHB_MITIGATION_INSN
+
.if \kpti == 1
/*
* Defend against branch aliasing attacks by pushing a dummy
@@ -1023,6 +1029,7 @@ ENTRY(tramp_vectors)
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP
generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_INSN
#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE
END(tramp_vectors)
@@ -1085,6 +1092,7 @@ ENTRY(__bp_harden_el1_vectors)
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
generate_el1_vector bhb=BHB_MITIGATION_LOOP
generate_el1_vector bhb=BHB_MITIGATION_FW
+ generate_el1_vector bhb=BHB_MITIGATION_INSN
#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
END(__bp_harden_el1_vectors)
.popsection


2022-04-06 20:29:21

by Greg Kroah-Hartman

Subject: [PATCH 4.9 36/43] arm64: entry: Add vectors that have the bhb mitigation sequences

From: James Morse <[email protected]>

commit ba2689234be92024e5635d30fe744f4853ad97db upstream.

Some CPUs affected by Spectre-BHB need a sequence of branches or a
firmware call to be run before any indirect branch. This needs to go
in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a
single set of vectors. If only one part of a big/little combination is
affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will
allow affected CPUs to select this set of vectors. Later patches will
modify the loop count to match what the CPU requires.
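
In C-like pseudocode, the loop sequence amounts to the following, where
K is CPU specific and is fixed up by a later patch in this series:

/* each taken branch displaces one attacker-controlled BHB entry */
for (tmp = K; tmp != 0; tmp--)
	;
dsb(nsh);	/* complete the branches ... */
isb();		/* ... before any indirect branch can execute */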

Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 25 +++++++++++++++++
arch/arm64/include/asm/vectors.h | 34 +++++++++++++++++++++++
arch/arm64/kernel/entry.S | 53 ++++++++++++++++++++++++++++++-------
include/linux/arm-smccc.h | 7 ++++
4 files changed, 110 insertions(+), 9 deletions(-)
create mode 100644 arch/arm64/include/asm/vectors.h

--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -494,4 +494,29 @@ alternative_endif
.Ldone\@:
.endm

+ .macro __mitigate_spectre_bhb_loop tmp
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ mov \tmp, #32
+.Lspectre_bhb_loop\@:
+ b . + 4
+ subs \tmp, \tmp, #1
+ b.ne .Lspectre_bhb_loop\@
+ dsb nsh
+ isb
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+ .endm
+
+ /* Save/restores x0-x3 to the stack */
+ .macro __mitigate_spectre_bhb_fw
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ stp x0, x1, [sp, #-16]!
+ stp x2, x3, [sp, #-16]!
+ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3
+alternative_cb arm64_update_smccc_conduit
+ nop // Patched to SMC/HVC #0
+alternative_cb_end
+ ldp x2, x3, [sp], #16
+ ldp x0, x1, [sp], #16
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+ .endm
#endif /* __ASM_ASSEMBLER_H */
--- /dev/null
+++ b/arch/arm64/include/asm/vectors.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_VECTORS_H
+#define __ASM_VECTORS_H
+
+/*
+ * Note: the order of this enum corresponds to two arrays in entry.S:
+ * tramp_vecs and __bp_harden_el1_vectors. By default the canonical
+ * 'full fat' vectors are used directly.
+ */
+enum arm64_bp_harden_el1_vectors {
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ /*
+ * Perform the BHB loop mitigation, before branching to the canonical
+ * vectors.
+ */
+ EL1_VECTOR_BHB_LOOP,
+
+ /*
+ * Make the SMC call for firmware mitigation, before branching to the
+ * canonical vectors.
+ */
+ EL1_VECTOR_BHB_FW,
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+
+ /*
+ * Remap the kernel before branching to the canonical vectors.
+ */
+ EL1_VECTOR_KPTI,
+};
+
+#endif /* __ASM_VECTORS_H */
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -921,13 +921,26 @@ __ni_sys_trace:
sub \dst, \dst, PAGE_SIZE
.endm

- .macro tramp_ventry, vector_start, regsize, kpti
+
+#define BHB_MITIGATION_NONE 0
+#define BHB_MITIGATION_LOOP 1
+#define BHB_MITIGATION_FW 2
+
+ .macro tramp_ventry, vector_start, regsize, kpti, bhb
.align 7
1:
.if \regsize == 64
msr tpidrro_el0, x30 // Restored in kernel_ventry
.endif

+ .if \bhb == BHB_MITIGATION_LOOP
+ /*
+ * This sequence must appear before the first indirect branch. i.e. the
+ * ret out of tramp_ventry. It appears here because x30 is free.
+ */
+ __mitigate_spectre_bhb_loop x30
+ .endif // \bhb == BHB_MITIGATION_LOOP
+
.if \kpti == 1
/*
* Defend against branch aliasing attacks by pushing a dummy
@@ -952,6 +965,15 @@ __ni_sys_trace:
ldr x30, =vectors
.endif // \kpti == 1

+ .if \bhb == BHB_MITIGATION_FW
+ /*
+ * The firmware sequence must appear before the first indirect branch.
+ * i.e. the ret out of tramp_ventry. But it also needs the stack to be
+ * mapped to save/restore the registers the SMC clobbers.
+ */
+ __mitigate_spectre_bhb_fw
+ .endif // \bhb == BHB_MITIGATION_FW
+
add x30, x30, #(1b - \vector_start + 4)
ret
.org 1b + 128 // Did we overflow the ventry slot?
@@ -959,6 +981,9 @@ __ni_sys_trace:

.macro tramp_exit, regsize = 64
adr x30, tramp_vectors
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ add x30, x30, SZ_4K
+#endif
msr vbar_el1, x30
ldr lr, [sp, #S_LR]
tramp_unmap_kernel x29
@@ -969,26 +994,32 @@ __ni_sys_trace:
eret
.endm

- .macro generate_tramp_vector, kpti
+ .macro generate_tramp_vector, kpti, bhb
.Lvector_start\@:
.space 0x400

.rept 4
- tramp_ventry .Lvector_start\@, 64, \kpti
+ tramp_ventry .Lvector_start\@, 64, \kpti, \bhb
.endr
.rept 4
- tramp_ventry .Lvector_start\@, 32, \kpti
+ tramp_ventry .Lvector_start\@, 32, \kpti, \bhb
.endr
.endm

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
/*
* Exception vectors trampoline.
+ * The order must match __bp_harden_el1_vectors and the
+ * arm64_bp_harden_el1_vectors enum.
*/
.pushsection ".entry.tramp.text", "ax"
.align 11
ENTRY(tramp_vectors)
- generate_tramp_vector kpti=1
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
+ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE
END(tramp_vectors)

ENTRY(tramp_exit_native)
@@ -1015,7 +1046,7 @@ __entry_tramp_data_start:
* Exception vectors for spectre mitigations on entry from EL1 when
* kpti is not in use.
*/
- .macro generate_el1_vector
+ .macro generate_el1_vector, bhb
.Lvector_start\@:
kernel_ventry 1, sync_invalid // Synchronous EL1t
kernel_ventry 1, irq_invalid // IRQ EL1t
@@ -1028,17 +1059,21 @@ __entry_tramp_data_start:
kernel_ventry 1, error_invalid // Error EL1h

.rept 4
- tramp_ventry .Lvector_start\@, 64, kpti=0
+ tramp_ventry .Lvector_start\@, 64, 0, \bhb
.endr
.rept 4
- tramp_ventry .Lvector_start\@, 32, kpti=0
+ tramp_ventry .Lvector_start\@, 32, 0, \bhb
.endr
.endm

+/* The order must match tramp_vecs and the arm64_bp_harden_el1_vectors enum. */
.pushsection ".entry.text", "ax"
.align 11
ENTRY(__bp_harden_el1_vectors)
- generate_el1_vector
+#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
+ generate_el1_vector bhb=BHB_MITIGATION_LOOP
+ generate_el1_vector bhb=BHB_MITIGATION_FW
+#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
END(__bp_harden_el1_vectors)
.popsection

--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -85,6 +85,13 @@
ARM_SMCCC_SMC_32, \
0, 0x7fff)

+#define ARM_SMCCC_ARCH_WORKAROUND_3 \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ 0, 0x3fff)
+
+#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED 1
+
#ifndef __ASSEMBLY__

#include <linux/linkage.h>


2022-04-06 20:29:33

by Greg Kroah-Hartman

Subject: [PATCH 4.9 26/43] arm64: entry: Make the trampoline cleanup optional

From: James Morse <[email protected]>

commit d739da1694a0eaef0358a42b76904b611539b77b upstream.

Subsequent patches will add additional sets of vectors that use
the same tricks as the kpti vectors to reach the full-fat vectors.
The full-fat vectors contain some cleanup for kpti that is patched
in by alternatives when kpti is in use. Once there are additional
vectors, the cleanup will be needed in more cases.

But on big/little systems, the cleanup would be harmful if no
trampoline vector were in use. Instead of forcing CPUs that don't
need a trampoline vector to use one, make the trampoline cleanup
optional.

Entry at the top of the vectors will skip the cleanup. The trampoline
vectors can then skip the first instruction, triggering the cleanup
to run.
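
Concretely (a sketch using the 128-byte ventry slot size from entry.S,
helper name hypothetical): direct entry executes the leading branch and
skips the cleanup, while the trampoline enters four bytes past the slot
start so the cleanup runs:

static unsigned long ventry_addr(unsigned long vbar, int slot,
				 bool via_trampoline)
{
	return vbar + slot * 128 + (via_trampoline ? 4 : 0);
}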

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -76,16 +76,20 @@
.align 7
.Lventry_start\@:
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-alternative_if ARM64_UNMAP_KERNEL_AT_EL0
.if \el == 0
+ /*
+ * This must be the first instruction of the EL0 vector entries. It is
+ * skipped by the trampoline vectors, to trigger the cleanup.
+ */
+ b .Lskip_tramp_vectors_cleanup\@
.if \regsize == 64
mrs x30, tpidrro_el0
msr tpidrro_el0, xzr
.else
mov x30, xzr
.endif
+.Lskip_tramp_vectors_cleanup\@:
.endif
-alternative_else_nop_endif
#endif

sub sp, sp, #S_FRAME_SIZE
@@ -934,7 +938,7 @@ __ni_sys_trace:
#endif
prfm plil1strm, [x30, #(1b - tramp_vectors)]
msr vbar_el1, x30
- add x30, x30, #(1b - tramp_vectors)
+ add x30, x30, #(1b - tramp_vectors + 4)
isb
ret
.org 1b + 128 // Did we overflow the ventry slot?


2022-04-06 20:43:14

by Greg Kroah-Hartman

Subject: [PATCH 4.9 18/43] arm64: Add silicon-errata.txt entry for ARM erratum 1188873

From: Marc Zyngier <[email protected]>

commit e03a4e5bb7430f9294c12f02c69eb045d010e942 upstream.

Document that we actually work around ARM erratum 1188873.

Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/arm64/silicon-errata.txt | 1 +
1 file changed, 1 insertion(+)

--- a/Documentation/arm64/silicon-errata.txt
+++ b/Documentation/arm64/silicon-errata.txt
@@ -55,6 +55,7 @@ stable kernels.
| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 |
| ARM | Cortex-A72 | #853709 | N/A |
| ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 |
+| ARM | Cortex-A76 | #1188873 | ARM64_ERRATUM_1188873 |
| ARM | MMU-500 | #841119,#826419 | N/A |
| | | | |
| Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |


2022-04-06 20:51:34

by Greg Kroah-Hartman

Subject: [PATCH 4.9 37/43] arm64: entry: Add macro for reading symbol addresses from the trampoline

From: James Morse <[email protected]>

commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream.

The trampoline code needs to use the address of symbols in the wider
kernel, e.g. vectors. PC-relative addressing wouldn't work as the
trampoline code doesn't run at the address the linker expected.

tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is
set, in which case it uses the data page as a literal pool because
the data page can be unmapped when running in user-space, which is
required for CPUs vulnerable to meltdown.

Pull this logic out as a macro, instead of adding a third copy
of it.
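
Schematically (illustrative C, names hypothetical), the macro selects
between the two addressing schemes like this:

#ifdef CONFIG_RANDOMIZE_BASE
	/* load the pointer from the always-mapped trampoline data page */
	dst = *(u64 *)(tramp_data_page + var_offset);
#else
	/* an ordinary literal-pool load of the symbol's address */
	dst = (u64)&var;
#endif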

Reviewed-by: Catalin Marinas <[email protected]>
[ Removed SDEI for stable backport ]
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -921,6 +921,15 @@ __ni_sys_trace:
sub \dst, \dst, PAGE_SIZE
.endm

+ .macro tramp_data_read_var dst, var
+#ifdef CONFIG_RANDOMIZE_BASE
+ tramp_data_page \dst
+ add \dst, \dst, #:lo12:__entry_tramp_data_\var
+ ldr \dst, [\dst]
+#else
+ ldr \dst, =\var
+#endif
+ .endm

#define BHB_MITIGATION_NONE 0
#define BHB_MITIGATION_LOOP 1
@@ -951,13 +960,7 @@ __ni_sys_trace:
b .
2:
tramp_map_kernel x30
-#ifdef CONFIG_RANDOMIZE_BASE
- tramp_data_page x30
- isb
- ldr x30, [x30]
-#else
- ldr x30, =vectors
-#endif
+ tramp_data_read_var x30, vectors
prfm plil1strm, [x30, #(1b - \vector_start)]
msr vbar_el1, x30
isb
@@ -1037,7 +1040,12 @@ END(tramp_exit_compat)
.align PAGE_SHIFT
.globl __entry_tramp_data_start
__entry_tramp_data_start:
+__entry_tramp_data_vectors:
.quad vectors
+#ifdef CONFIG_ARM_SDE_INTERFACE
+__entry_tramp_data___sdei_asm_trampoline_next_handler:
+ .quad __sdei_asm_handler
+#endif /* CONFIG_ARM_SDE_INTERFACE */
.popsection // .rodata
#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */


2022-04-06 20:52:04

by Greg Kroah-Hartman

Subject: [PATCH 4.9 39/43] KVM: arm64: Add templates for BHB mitigation sequences

From: James Morse <[email protected]>

KVM writes the Spectre-v2 mitigation template at the beginning of each
vector when a CPU requires a specific sequence to run.

Because the template is copied, it cannot be modified by the alternatives
at runtime. As the KVM template code is intertwined with the bp-hardening
callbacks, all templates must have a bp-hardening callback.

Add templates for calling ARCH_WORKAROUND_3 and one for each value of K
in the branchy loop. Identify these sequences by a new parameter
template_start, and add a copy of install_bp_hardening_cb() that is able
to install them.
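
Each template is copied into its own 2K slot of the hyp vectors area, so
a recorded slot resolves back to a vector address as in the
kvm_get_hyp_vector() hunk below:

void *vect = __bp_harden_hyp_vecs_start +
	     data->hyp_vectors_slot * SZ_2K;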

Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 3 +
arch/arm64/include/asm/kvm_mmu.h | 2 -
arch/arm64/include/asm/mmu.h | 6 +++
arch/arm64/kernel/bpi.S | 50 +++++++++++++++++++++++++++
arch/arm64/kernel/cpu_errata.c | 71 +++++++++++++++++++++++++++++++++++++--
5 files changed, 128 insertions(+), 4 deletions(-)

--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -39,7 +39,8 @@
#define ARM64_SSBD 18
#define ARM64_MISMATCHED_CACHE_TYPE 19
#define ARM64_WORKAROUND_1188873 20
+#define ARM64_SPECTRE_BHB 21

-#define ARM64_NCAPS 21
+#define ARM64_NCAPS 22

#endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -362,7 +362,7 @@ static inline void *kvm_get_hyp_vector(v
struct bp_hardening_data *data = arm64_get_bp_hardening_data();
void *vect = kvm_ksym_ref(__kvm_hyp_vector);

- if (data->fn) {
+ if (data->template_start) {
vect = __bp_harden_hyp_vecs_start +
data->hyp_vectors_slot * SZ_2K;

--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -45,6 +45,12 @@ typedef void (*bp_hardening_cb_t)(void);
struct bp_hardening_data {
int hyp_vectors_slot;
bp_hardening_cb_t fn;
+
+ /*
+ * template_start is only used by the BHB mitigation to identify the
+ * hyp_vectors_slot sequence.
+ */
+ const char *template_start;
};

#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -73,3 +73,53 @@ ENTRY(__smccc_workaround_1_smc_end)
ENTRY(__smccc_workaround_1_hvc_start)
smccc_workaround_1 hvc
ENTRY(__smccc_workaround_1_hvc_end)
+
+ENTRY(__smccc_workaround_3_smc_start)
+ sub sp, sp, #(8 * 4)
+ stp x2, x3, [sp, #(8 * 0)]
+ stp x0, x1, [sp, #(8 * 2)]
+ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3
+ smc #0
+ ldp x2, x3, [sp, #(8 * 0)]
+ ldp x0, x1, [sp, #(8 * 2)]
+ add sp, sp, #(8 * 4)
+ENTRY(__smccc_workaround_3_smc_end)
+
+ENTRY(__spectre_bhb_loop_k8_start)
+ sub sp, sp, #(8 * 2)
+ stp x0, x1, [sp, #(8 * 0)]
+ mov x0, #8
+2: b . + 4
+ subs x0, x0, #1
+ b.ne 2b
+ dsb nsh
+ isb
+ ldp x0, x1, [sp, #(8 * 0)]
+ add sp, sp, #(8 * 2)
+ENTRY(__spectre_bhb_loop_k8_end)
+
+ENTRY(__spectre_bhb_loop_k24_start)
+ sub sp, sp, #(8 * 2)
+ stp x0, x1, [sp, #(8 * 0)]
+ mov x0, #24
+2: b . + 4
+ subs x0, x0, #1
+ b.ne 2b
+ dsb nsh
+ isb
+ ldp x0, x1, [sp, #(8 * 0)]
+ add sp, sp, #(8 * 2)
+ENTRY(__spectre_bhb_loop_k24_end)
+
+ENTRY(__spectre_bhb_loop_k32_start)
+ sub sp, sp, #(8 * 2)
+ stp x0, x1, [sp, #(8 * 0)]
+ mov x0, #32
+2: b . + 4
+ subs x0, x0, #1
+ b.ne 2b
+ dsb nsh
+ isb
+ ldp x0, x1, [sp, #(8 * 0)]
+ add sp, sp, #(8 * 2)
+ENTRY(__spectre_bhb_loop_k32_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -74,6 +74,14 @@ extern char __smccc_workaround_1_smc_sta
extern char __smccc_workaround_1_smc_end[];
extern char __smccc_workaround_1_hvc_start[];
extern char __smccc_workaround_1_hvc_end[];
+extern char __smccc_workaround_3_smc_start[];
+extern char __smccc_workaround_3_smc_end[];
+extern char __spectre_bhb_loop_k8_start[];
+extern char __spectre_bhb_loop_k8_end[];
+extern char __spectre_bhb_loop_k24_start[];
+extern char __spectre_bhb_loop_k24_end[];
+extern char __spectre_bhb_loop_k32_start[];
+extern char __spectre_bhb_loop_k32_end[];

static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -87,12 +95,14 @@ static void __copy_hyp_vect_bpi(int slot
flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
}

+static DEFINE_SPINLOCK(bp_lock);
+static int last_slot = -1;
+
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
const char *hyp_vecs_end)
{
- static int last_slot = -1;
- static DEFINE_SPINLOCK(bp_lock);
+
int cpu, slot = -1;

spin_lock(&bp_lock);
@@ -113,6 +123,7 @@ static void __install_bp_hardening_cb(bp

__this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
__this_cpu_write(bp_hardening_data.fn, fn);
+ __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start);
spin_unlock(&bp_lock);
}
#else
@@ -544,3 +555,59 @@ const struct arm64_cpu_capabilities arm6
{
}
};
+
+#ifdef CONFIG_KVM
+static const char *kvm_bhb_get_vecs_end(const char *start)
+{
+ if (start == __smccc_workaround_3_smc_start)
+ return __smccc_workaround_3_smc_end;
+ else if (start == __spectre_bhb_loop_k8_start)
+ return __spectre_bhb_loop_k8_end;
+ else if (start == __spectre_bhb_loop_k24_start)
+ return __spectre_bhb_loop_k24_end;
+ else if (start == __spectre_bhb_loop_k32_start)
+ return __spectre_bhb_loop_k32_end;
+
+ return NULL;
+}
+
+void kvm_setup_bhb_slot(const char *hyp_vecs_start)
+{
+ int cpu, slot = -1;
+ const char *hyp_vecs_end;
+
+ if (!IS_ENABLED(CONFIG_KVM) || !is_hyp_mode_available())
+ return;
+
+ hyp_vecs_end = kvm_bhb_get_vecs_end(hyp_vecs_start);
+ if (WARN_ON_ONCE(!hyp_vecs_start || !hyp_vecs_end))
+ return;
+
+ spin_lock(&bp_lock);
+ for_each_possible_cpu(cpu) {
+ if (per_cpu(bp_hardening_data.template_start, cpu) == hyp_vecs_start) {
+ slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
+ break;
+ }
+ }
+
+ if (slot == -1) {
+ last_slot++;
+ BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start)
+ / SZ_2K) <= last_slot);
+ slot = last_slot;
+ __copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
+ }
+
+ __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
+ __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start);
+ spin_unlock(&bp_lock);
+}
+#else
+#define __smccc_workaround_3_smc_start NULL
+#define __spectre_bhb_loop_k8_start NULL
+#define __spectre_bhb_loop_k24_start NULL
+#define __spectre_bhb_loop_k32_start NULL
+
+void kvm_setup_bhb_slot(const char *hyp_vecs_start) { };
+#endif


2022-04-06 20:57:59

by Greg Kroah-Hartman

Subject: [PATCH 4.9 19/43] arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT

From: Marc Zyngier <[email protected]>

commit c2b5bba3967a000764e9148e6f020d776b7ecd82 upstream.

Since ARM64_ERRATUM_1188873 only affects AArch32 EL0, it makes some
sense that it should depend on COMPAT.

Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -444,6 +444,7 @@ config ARM64_ERRATUM_1024718
config ARM64_ERRATUM_1188873
bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result"
default y
+ depends on COMPAT
select ARM_ARCH_TIMER_OOL_WORKAROUND
help
This option adds work arounds for ARM Cortex-A76 erratum 1188873


2022-04-06 21:06:48

by Greg Kroah-Hartman

Subject: [PATCH 4.9 05/43] arm64: capabilities: Move errata work around check on boot CPU

From: Suzuki K Poulose <[email protected]>

[ Upstream commit 5e91107b06811f0ca147cebbedce53626c9c4443 ]

We trigger the CPU errata work around check on the boot CPU from
smp_prepare_boot_cpu() to make sure that we run the checks only
after the CPU feature infrastructure is initialised. While this
is correct, we can also do this from init_cpu_features(), which
initialises the infrastructure and is called only on the
boot CPU. This helps to consolidate the CPU capability handling
to cpufeature.c. No functional changes.

Cc: Will Deacon <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Mark Rutland <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 5 +++++
arch/arm64/kernel/smp.c | 6 ------
2 files changed, 5 insertions(+), 6 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -476,6 +476,11 @@ void __init init_cpu_features(struct cpu
init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
}

+ /*
+ * Run the errata work around checks on the boot CPU, once we have
+ * initialised the cpu feature infrastructure.
+ */
+ update_cpu_errata_workarounds();
}

static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -444,12 +444,6 @@ void __init smp_prepare_boot_cpu(void)
jump_label_init();
cpuinfo_store_boot_cpu();
save_boot_cpu_run_el();
- /*
- * Run the errata work around checks on the boot CPU, once we have
- * initialised the cpu feature infrastructure from
- * cpuinfo_store_boot_cpu() above.
- */
- update_cpu_errata_workarounds();
}

static u64 __init of_get_cpu_mpidr(struct device_node *dn)


2022-04-06 21:07:04

by Greg Kroah-Hartman

Subject: [PATCH 4.9 17/43] arm64: arch_timer: avoid unused function warning

From: Arnd Bergmann <[email protected]>

commit 040f340134751d73bd03ee92fabb992946c55b3d upstream.

arm64_1188873_read_cntvct_el0() is protected by the correct
CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is
also inside a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section,
and this causes a warning if that option is disabled:

drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function]

Since the erratum requires that we always apply the workaround
in the timer driver, select that symbol as we do for SoC
specific errata.

Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -444,6 +444,7 @@ config ARM64_ERRATUM_1024718
config ARM64_ERRATUM_1188873
bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result"
default y
+ select ARM_ARCH_TIMER_OOL_WORKAROUND
help
This option adds work arounds for ARM Cortex-A76 erratum 1188873



2022-04-06 21:09:57

by Greg Kroah-Hartman

Subject: [PATCH 4.9 04/43] arm64: capabilities: Update prototype for enable call back

From: Dave Martin <[email protected]>

[ Upstream commit c0cda3b8ee6b4b6851b2fd8b6db91fd7b0e2524a ]

We issue the enable() call back for all CPU hwcaps capabilities
available on the system, on all the CPUs. So far we have ignored
the argument passed to the call back, which had a prototype to
accept a "void *" for use with on_each_cpu() and later with
stop_machine(). However, with commit 0a0d111d40fd1
("arm64: cpufeature: Pass capability structure to ->enable callback"),
there are some users of the argument who want the matching capability
struct pointer where there are multiple matching criteria for a single
capability. Clean up the declaration of the call back to make it clear.

1) Renamed to cpu_enable(), to imply taking necessary actions on the
called CPU for the entry.
2) Pass a const pointer to the capability, to allow the call back to
check the entry. (e.g., to check if any action is needed on the CPU)
3) We don't care about the result of the call back, turning this into
a void.
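
In short, the callback's prototype changes as follows:

int  (*enable)(void *);		/* old: opaque argument, result ignored */
void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);	/* new */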

Cc: Will Deacon <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Andre Przywara <[email protected]>
Cc: James Morse <[email protected]>
Acked-by: Robin Murphy <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Dave Martin <[email protected]>
[suzuki: convert more users, rename call back and drop results]
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 7 ++++-
arch/arm64/include/asm/processor.h | 5 ++--
arch/arm64/kernel/cpu_errata.c | 44 ++++++++++++++++++------------------
arch/arm64/kernel/cpufeature.c | 34 +++++++++++++++++----------
arch/arm64/kernel/fpsimd.c | 1
arch/arm64/kernel/traps.c | 4 +--
arch/arm64/mm/fault.c | 3 --
7 files changed, 56 insertions(+), 42 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -77,7 +77,12 @@ struct arm64_cpu_capabilities {
u16 capability;
int def_scope; /* default scope */
bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
- int (*enable)(void *); /* Called on all active CPUs */
+ /*
+ * Take the appropriate actions to enable this capability for this CPU.
+ * For each successfully booted CPU, this method is called for each
+ * globally detected capability.
+ */
+ void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
union {
struct { /* To be used for erratum handling only */
u32 midr_model;
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -37,6 +37,7 @@
#include <linux/string.h>

#include <asm/alternative.h>
+#include <asm/cpufeature.h>
#include <asm/fpsimd.h>
#include <asm/hw_breakpoint.h>
#include <asm/lse.h>
@@ -219,8 +220,8 @@ static inline void spin_lock_prefetch(co

#endif

-int cpu_enable_pan(void *__unused);
-int cpu_enable_cache_maint_trap(void *__unused);
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused);
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused);

#endif /* __ASSEMBLY__ */
#endif /* __ASM_PROCESSOR_H */
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -48,11 +48,11 @@ has_mismatched_cache_type(const struct a
(arm64_ftr_reg_ctrel0.sys_val & mask);
}

-static int cpu_enable_trap_ctr_access(void *__unused)
+static void
+cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
{
/* Clear SCTLR_EL1.UCT */
config_sctlr_el1(SCTLR_EL1_UCT, 0);
- return 0;
}

#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -152,25 +152,25 @@ static void call_hvc_arch_workaround_1(v
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

-static int enable_smccc_arch_workaround_1(void *data)
+static void
+enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
{
- const struct arm64_cpu_capabilities *entry = data;
bp_hardening_cb_t cb;
void *smccc_start, *smccc_end;
struct arm_smccc_res res;

if (!entry->matches(entry, SCOPE_LOCAL_CPU))
- return 0;
+ return;

if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
- return 0;
+ return;

switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 < 0)
- return 0;
+ return;
cb = call_hvc_arch_workaround_1;
smccc_start = __smccc_workaround_1_hvc_start;
smccc_end = __smccc_workaround_1_hvc_end;
@@ -180,19 +180,19 @@ static int enable_smccc_arch_workaround_
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 < 0)
- return 0;
+ return;
cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc_start;
smccc_end = __smccc_workaround_1_smc_end;
break;

default:
- return 0;
+ return;
}

install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);

- return 0;
+ return;
}
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */

@@ -391,7 +391,7 @@ const struct arm64_cpu_capabilities arm6
.desc = "ARM errata 826319, 827319, 824069",
.capability = ARM64_WORKAROUND_CLEAN_CACHE,
MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02),
- .enable = cpu_enable_cache_maint_trap,
+ .cpu_enable = cpu_enable_cache_maint_trap,
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_819472
@@ -400,7 +400,7 @@ const struct arm64_cpu_capabilities arm6
.desc = "ARM errata 819472",
.capability = ARM64_WORKAROUND_CLEAN_CACHE,
MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01),
- .enable = cpu_enable_cache_maint_trap,
+ .cpu_enable = cpu_enable_cache_maint_trap,
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_832075
@@ -460,45 +460,45 @@ const struct arm64_cpu_capabilities arm6
.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
.matches = has_mismatched_cache_type,
.def_scope = SCOPE_LOCAL_CPU,
- .enable = cpu_enable_trap_ctr_access,
+ .cpu_enable = cpu_enable_trap_ctr_access,
},
{
.desc = "Mismatched cache type",
.capability = ARM64_MISMATCHED_CACHE_TYPE,
.matches = has_mismatched_cache_type,
.def_scope = SCOPE_LOCAL_CPU,
- .enable = cpu_enable_trap_ctr_access,
+ .cpu_enable = cpu_enable_trap_ctr_access,
},
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
- .enable = enable_smccc_arch_workaround_1,
+ .cpu_enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
- .enable = enable_smccc_arch_workaround_1,
+ .cpu_enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
- .enable = enable_smccc_arch_workaround_1,
+ .cpu_enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
- .enable = enable_smccc_arch_workaround_1,
+ .cpu_enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
- .enable = enable_smccc_arch_workaround_1,
+ .cpu_enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
- .enable = enable_smccc_arch_workaround_1,
+ .cpu_enable = enable_smccc_arch_workaround_1,
},
#endif
#ifdef CONFIG_ARM64_SSBD
@@ -524,8 +524,8 @@ void verify_local_cpu_errata_workarounds

for (; caps->matches; caps++) {
if (cpus_have_cap(caps->capability)) {
- if (caps->enable)
- caps->enable((void *)caps);
+ if (caps->cpu_enable)
+ caps->cpu_enable(caps);
} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
pr_crit("CPU%d: Requires work around for %s, not detected"
" at boot time\n",
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -800,7 +800,8 @@ static bool unmap_kernel_at_el0(const st
ID_AA64PFR0_CSV3_SHIFT);
}

-static int kpti_install_ng_mappings(void *__unused)
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
{
typedef void (kpti_remap_fn)(int, int, phys_addr_t);
extern kpti_remap_fn idmap_kpti_install_ng_mappings;
@@ -810,7 +811,7 @@ static int kpti_install_ng_mappings(void
int cpu = smp_processor_id();

if (kpti_applied)
- return 0;
+ return;

remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);

@@ -821,7 +822,7 @@ static int kpti_install_ng_mappings(void
if (!cpu)
kpti_applied = true;

- return 0;
+ return;
}

static int __init parse_kpti(char *str)
@@ -838,7 +839,7 @@ static int __init parse_kpti(char *str)
early_param("kpti", parse_kpti);
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

-static int cpu_copy_el2regs(void *__unused)
+static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
{
/*
* Copy register values that aren't redirected by hardware.
@@ -850,8 +851,6 @@ static int cpu_copy_el2regs(void *__unus
*/
if (!alternatives_applied)
write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
-
- return 0;
}

static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -875,7 +874,7 @@ static const struct arm64_cpu_capabiliti
.field_pos = ID_AA64MMFR1_PAN_SHIFT,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
- .enable = cpu_enable_pan,
+ .cpu_enable = cpu_enable_pan,
},
#endif /* CONFIG_ARM64_PAN */
#if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
@@ -923,7 +922,7 @@ static const struct arm64_cpu_capabiliti
.capability = ARM64_HAS_VIRT_HOST_EXTN,
.def_scope = SCOPE_SYSTEM,
.matches = runs_at_el2,
- .enable = cpu_copy_el2regs,
+ .cpu_enable = cpu_copy_el2regs,
},
{
.desc = "32-bit EL0 Support",
@@ -947,7 +946,7 @@ static const struct arm64_cpu_capabiliti
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
.def_scope = SCOPE_SYSTEM,
.matches = unmap_kernel_at_el0,
- .enable = kpti_install_ng_mappings,
+ .cpu_enable = kpti_install_ng_mappings,
},
#endif
{},
@@ -1075,6 +1074,14 @@ void update_cpu_capabilities(const struc
}
}

+static int __enable_cpu_capability(void *arg)
+{
+ const struct arm64_cpu_capabilities *cap = arg;
+
+ cap->cpu_enable(cap);
+ return 0;
+}
+
/*
* Run through the enabled capabilities and enable() it on all active
* CPUs
@@ -1090,14 +1097,15 @@ void __init enable_cpu_capabilities(cons
/* Ensure cpus_have_const_cap(num) works */
static_branch_enable(&cpu_hwcap_keys[num]);

- if (caps->enable) {
+ if (caps->cpu_enable) {
/*
* Use stop_machine() as it schedules the work allowing
* us to modify PSTATE, instead of on_each_cpu() which
* uses an IPI, giving us a PSTATE that disappears when
* we return.
*/
- stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+ stop_machine(__enable_cpu_capability, (void *)caps,
+ cpu_online_mask);
}
}
}
@@ -1155,8 +1163,8 @@ verify_local_cpu_features(const struct a
smp_processor_id(), caps->desc);
cpu_die_early();
}
- if (caps->enable)
- caps->enable((void *)caps);
+ if (caps->cpu_enable)
+ caps->cpu_enable(caps);
}
}

--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -26,6 +26,7 @@
#include <linux/hardirq.h>

#include <asm/fpsimd.h>
+#include <asm/cpufeature.h>
#include <asm/cputype.h>

#define FPEXC_IOF (1 << 0)
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -34,6 +34,7 @@

#include <asm/atomic.h>
#include <asm/bug.h>
+#include <asm/cpufeature.h>
#include <asm/debug-monitors.h>
#include <asm/esr.h>
#include <asm/insn.h>
@@ -432,10 +433,9 @@ asmlinkage void __exception do_undefinst
force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
}

-int cpu_enable_cache_maint_trap(void *__unused)
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
{
config_sctlr_el1(SCTLR_EL1_UCI, 0);
- return 0;
}

#define __user_cache_maint(insn, address, res) \
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -727,7 +727,7 @@ asmlinkage int __exception do_debug_exce
NOKPROBE_SYMBOL(do_debug_exception);

#ifdef CONFIG_ARM64_PAN
-int cpu_enable_pan(void *__unused)
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
/*
* We modify PSTATE. This won't work from irq context as the PSTATE
@@ -737,6 +737,5 @@ int cpu_enable_pan(void *__unused)

config_sctlr_el1(SCTLR_EL1_SPAN, 0);
asm(SET_PSTATE_PAN(1));
- return 0;
}
#endif /* CONFIG_ARM64_PAN */
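
The __enable_cpu_capability() hunk above exists because stop_machine()
wants an int (*)(void *) callback while the new cpu_enable hooks take a
typed const pointer and return void. A minimal userspace model of that
adapter pattern (all names here are invented for illustration, not the
kernel's):

	#include <stdio.h>

	struct capability {
		const char *desc;
		void (*cpu_enable)(const struct capability *cap);
	};

	/* Adapter with the int (*)(void *) shape stop_machine() expects. */
	static int enable_cpu_capability(void *arg)
	{
		const struct capability *cap = arg;

		cap->cpu_enable(cap);
		return 0;
	}

	static void enable_demo(const struct capability *cap)
	{
		printf("enabling %s\n", cap->desc);
	}

	/* Stand-in for stop_machine(fn, data, cpu_online_mask). */
	static int run_stopped(int (*fn)(void *), void *data)
	{
		return fn(data);
	}

	int main(void)
	{
		struct capability cap = { "demo", enable_demo };

		return run_stopped(enable_cpu_capability, &cap);
	}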


2022-04-06 21:09:57

by Greg Kroah-Hartman

Subject: [PATCH 4.9 13/43] clocksource/drivers/arm_arch_timer: Introduce generic errata handling infrastructure

From: Ding Tianhong <[email protected]>

commit 16d10ef29f25aba923779234bb93a451b14d20e6 upstream.

Currently we have code inline in the arch timer probe path to cater for
Freescale erratum A-008585, complete with ifdeffery. This is a little
ugly, and will get worse as we try to add more errata handling.

This patch refactors the handling of Freescale erratum A-008585. Now the
erratum is described in a generic arch_timer_erratum_workaround
structure, and the probe path can iterate over these to detect errata
and enable workarounds.

This will simplify the addition and maintenance of code handling
Hisilicon erratum 161010101.
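
As a rough userspace model of the pattern this patch introduces - one
descriptor per erratum and a probe loop that walks the table instead of
open-coding each erratum - consider the sketch below (identifiers are
illustrative, not the kernel's):

	#include <stdio.h>
	#include <string.h>

	struct erratum_workaround {
		const char *id;		/* device-tree property to match */
		unsigned long (*read_counter)(void);
	};

	static unsigned long stable_read_counter(void)
	{
		return 0;		/* stand-in for the retry-based read */
	}

	static const struct erratum_workaround workarounds[] = {
		{ "fsl,erratum-a008585", stable_read_counter },
		/* a new erratum only needs a new entry, no new ifdeffery */
	};

	static const struct erratum_workaround *probe_errata(const char *prop)
	{
		for (size_t i = 0; i < sizeof(workarounds) / sizeof(workarounds[0]); i++)
			if (strcmp(workarounds[i].id, prop) == 0)
				return &workarounds[i];
		return NULL;
	}

	int main(void)
	{
		const struct erratum_workaround *wa;

		wa = probe_errata("fsl,erratum-a008585");
		if (wa)
			printf("Enabling workaround for %s\n", wa->id);
		return 0;
	}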

Signed-off-by: Ding Tianhong <[email protected]>
[Mark: split patch, correct Kconfig, reword commit message]
Signed-off-by: Mark Rutland <[email protected]>
Acked-by: Daniel Lezcano <[email protected]>
Signed-off-by: Daniel Lezcano <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/arch_timer.h | 40 +++++----------
drivers/clocksource/Kconfig | 4 +
drivers/clocksource/arm_arch_timer.c | 90 ++++++++++++++++++++++++-----------
3 files changed, 80 insertions(+), 54 deletions(-)

--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -29,41 +29,29 @@

#include <clocksource/arm_arch_timer.h>

-#if IS_ENABLED(CONFIG_FSL_ERRATUM_A008585)
+#if IS_ENABLED(CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND)
extern struct static_key_false arch_timer_read_ool_enabled;
-#define needs_fsl_a008585_workaround() \
+#define needs_unstable_timer_counter_workaround() \
static_branch_unlikely(&arch_timer_read_ool_enabled)
#else
-#define needs_fsl_a008585_workaround() false
+#define needs_unstable_timer_counter_workaround() false
#endif

-u32 __fsl_a008585_read_cntp_tval_el0(void);
-u32 __fsl_a008585_read_cntv_tval_el0(void);
-u64 __fsl_a008585_read_cntvct_el0(void);
-
-/*
- * The number of retries is an arbitrary value well beyond the highest number
- * of iterations the loop has been observed to take.
- */
-#define __fsl_a008585_read_reg(reg) ({ \
- u64 _old, _new; \
- int _retries = 200; \
- \
- do { \
- _old = read_sysreg(reg); \
- _new = read_sysreg(reg); \
- _retries--; \
- } while (unlikely(_old != _new) && _retries); \
- \
- WARN_ON_ONCE(!_retries); \
- _new; \
-})
+
+struct arch_timer_erratum_workaround {
+ const char *id; /* Indicate the Erratum ID */
+ u32 (*read_cntp_tval_el0)(void);
+ u32 (*read_cntv_tval_el0)(void);
+ u64 (*read_cntvct_el0)(void);
+};
+
+extern const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround;

#define arch_timer_reg_read_stable(reg) \
({ \
u64 _val; \
- if (needs_fsl_a008585_workaround()) \
- _val = __fsl_a008585_read_##reg(); \
+ if (needs_unstable_timer_counter_workaround()) \
+ _val = timer_unstable_counter_workaround->read_##reg();\
else \
_val = read_sysreg(reg); \
_val; \
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -305,10 +305,14 @@ config ARM_ARCH_TIMER_EVTSTREAM
This must be disabled for hardware validation purposes to detect any
hardware anomalies of missing events.

+config ARM_ARCH_TIMER_OOL_WORKAROUND
+ bool
+
config FSL_ERRATUM_A008585
bool "Workaround for Freescale/NXP Erratum A-008585"
default y
depends on ARM_ARCH_TIMER && ARM64
+ select ARM_ARCH_TIMER_OOL_WORKAROUND
help
This option enables a workaround for Freescale/NXP Erratum
A-008585 ("ARM generic timer may contain an erroneous
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -96,27 +96,58 @@ early_param("clocksource.arm_arch_timer.
*/

#ifdef CONFIG_FSL_ERRATUM_A008585
-DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled);
-EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled);
-
-static int fsl_a008585_enable = -1;
+/*
+ * The number of retries is an arbitrary value well beyond the highest number
+ * of iterations the loop has been observed to take.
+ */
+#define __fsl_a008585_read_reg(reg) ({ \
+ u64 _old, _new; \
+ int _retries = 200; \
+ \
+ do { \
+ _old = read_sysreg(reg); \
+ _new = read_sysreg(reg); \
+ _retries--; \
+ } while (unlikely(_old != _new) && _retries); \
+ \
+ WARN_ON_ONCE(!_retries); \
+ _new; \
+})

-u32 __fsl_a008585_read_cntp_tval_el0(void)
+static u32 notrace fsl_a008585_read_cntp_tval_el0(void)
{
return __fsl_a008585_read_reg(cntp_tval_el0);
}

-u32 __fsl_a008585_read_cntv_tval_el0(void)
+static u32 notrace fsl_a008585_read_cntv_tval_el0(void)
{
return __fsl_a008585_read_reg(cntv_tval_el0);
}

-u64 __fsl_a008585_read_cntvct_el0(void)
+static u64 notrace fsl_a008585_read_cntvct_el0(void)
{
return __fsl_a008585_read_reg(cntvct_el0);
}
-EXPORT_SYMBOL(__fsl_a008585_read_cntvct_el0);
-#endif /* CONFIG_FSL_ERRATUM_A008585 */
+#endif
+
+#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
+const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL;
+EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
+
+DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled);
+EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled);
+
+static const struct arch_timer_erratum_workaround ool_workarounds[] = {
+#ifdef CONFIG_FSL_ERRATUM_A008585
+ {
+ .id = "fsl,erratum-a008585",
+ .read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0,
+ .read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0,
+ .read_cntvct_el0 = fsl_a008585_read_cntvct_el0,
+ },
+#endif
+};
+#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */

static __always_inline
void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val,
@@ -267,8 +298,8 @@ static __always_inline void set_next_eve
arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk);
}

-#ifdef CONFIG_FSL_ERRATUM_A008585
-static __always_inline void fsl_a008585_set_next_event(const int access,
+#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
+static __always_inline void erratum_set_next_event_generic(const int access,
unsigned long evt, struct clock_event_device *clk)
{
unsigned long ctrl;
@@ -286,20 +317,20 @@ static __always_inline void fsl_a008585_
arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk);
}

-static int fsl_a008585_set_next_event_virt(unsigned long evt,
+static int erratum_set_next_event_virt(unsigned long evt,
struct clock_event_device *clk)
{
- fsl_a008585_set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk);
+ erratum_set_next_event_generic(ARCH_TIMER_VIRT_ACCESS, evt, clk);
return 0;
}

-static int fsl_a008585_set_next_event_phys(unsigned long evt,
+static int erratum_set_next_event_phys(unsigned long evt,
struct clock_event_device *clk)
{
- fsl_a008585_set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk);
+ erratum_set_next_event_generic(ARCH_TIMER_PHYS_ACCESS, evt, clk);
return 0;
}
-#endif /* CONFIG_FSL_ERRATUM_A008585 */
+#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */

static int arch_timer_set_next_event_virt(unsigned long evt,
struct clock_event_device *clk)
@@ -329,16 +360,16 @@ static int arch_timer_set_next_event_phy
return 0;
}

-static void fsl_a008585_set_sne(struct clock_event_device *clk)
+static void erratum_workaround_set_sne(struct clock_event_device *clk)
{
-#ifdef CONFIG_FSL_ERRATUM_A008585
+#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
if (!static_branch_unlikely(&arch_timer_read_ool_enabled))
return;

if (arch_timer_uses_ppi == VIRT_PPI)
- clk->set_next_event = fsl_a008585_set_next_event_virt;
+ clk->set_next_event = erratum_set_next_event_virt;
else
- clk->set_next_event = fsl_a008585_set_next_event_phys;
+ clk->set_next_event = erratum_set_next_event_phys;
#endif
}

@@ -371,7 +402,7 @@ static void __arch_timer_setup(unsigned
BUG();
}

- fsl_a008585_set_sne(clk);
+ erratum_workaround_set_sne(clk);
} else {
clk->features |= CLOCK_EVT_FEAT_DYNIRQ;
clk->name = "arch_mem_timer";
@@ -600,7 +631,7 @@ static void __init arch_counter_register

clocksource_counter.archdata.vdso_direct = true;

-#ifdef CONFIG_FSL_ERRATUM_A008585
+#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
/*
* Don't use the vdso fastpath if errata require using
* the out-of-line counter accessor.
@@ -888,12 +919,15 @@ static int __init arch_timer_of_init(str

arch_timer_c3stop = !of_property_read_bool(np, "always-on");

-#ifdef CONFIG_FSL_ERRATUM_A008585
- if (fsl_a008585_enable < 0)
- fsl_a008585_enable = of_property_read_bool(np, "fsl,erratum-a008585");
- if (fsl_a008585_enable) {
- static_branch_enable(&arch_timer_read_ool_enabled);
- pr_info("Enabling workaround for FSL erratum A-008585\n");
+#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
+ for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) {
+ if (of_property_read_bool(np, ool_workarounds[i].id)) {
+ timer_unstable_counter_workaround = &ool_workarounds[i];
+ static_branch_enable(&arch_timer_read_ool_enabled);
+ pr_info("arch_timer: Enabling workaround for %s\n",
+ timer_unstable_counter_workaround->id);
+ break;
+ }
}
#endif
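
All three out-of-line accessors moved above funnel through
__fsl_a008585_read_reg(), which simply re-reads the register until two
consecutive reads agree. A hedged userspace sketch of that retry idiom
(the volatile global stands in for the unstable counter):

	#include <stdio.h>

	static volatile unsigned long flaky_counter;	/* models cntvct_el0 */

	static unsigned long stable_read(void)
	{
		unsigned long old, new;
		int retries = 200;	/* well beyond the observed worst case */

		do {
			old = flaky_counter;
			new = flaky_counter;
			retries--;
		} while (old != new && retries);

		if (!retries)
			fprintf(stderr, "counter never stabilised\n");
		return new;
	}

	int main(void)
	{
		flaky_counter = 1234;
		printf("%lu\n", stable_read());
		return 0;
	}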



2022-04-06 21:12:20

by Greg Kroah-Hartman

Subject: [PATCH 4.9 14/43] arm64: arch_timer: Add infrastructure for multiple erratum detection methods

From: Marc Zyngier <[email protected]>

commit 651bb2e9dca6e6dbad3fba5f6e6086a23575b8b5 upstream.

We're currently stuck with DT when it comes to handling errata, which
is pretty restrictive. In order to make things more flexible, let's
introduce an infrastructure that could support alternative discovery
methods. No change in functionality.
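
A compact model of the dispatch this patch sets up - each workaround
records how it is matched, and the probe path picks a match function per
method - might look like this (made-up names, not the kernel code):

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	enum match_type { match_dt, match_cpu_cap };

	struct workaround {
		enum match_type match_type;
		const void *id;		/* meaning depends on match_type */
		const char *desc;
	};

	typedef bool (*match_fn_t)(const struct workaround *wa, const void *arg);

	static bool match_dt_prop(const struct workaround *wa, const void *arg)
	{
		return strcmp(wa->id, arg) == 0;	/* arg: DT property name */
	}

	static const struct workaround table[] = {
		{ match_dt, "fsl,erratum-a008585", "Freescale erratum a008585" },
	};

	static const struct workaround *find(enum match_type type,
					     match_fn_t fn, const void *arg)
	{
		for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
			if (table[i].match_type == type && fn(&table[i], arg))
				return &table[i];
		return NULL;
	}

	int main(void)
	{
		const struct workaround *wa;

		wa = find(match_dt, match_dt_prop, "fsl,erratum-a008585");
		if (wa)
			printf("Enabling workaround for %s\n", wa->desc);
		return 0;
	}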

Acked-by: Thomas Gleixner <[email protected]>
Reviewed-by: Hanjun Guo <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
[ morse: Removed the changes to HiSilicon erratum 161010101, which isn't
present in v4.9 ]
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/arch_timer.h | 7 ++-
drivers/clocksource/arm_arch_timer.c | 81 ++++++++++++++++++++++++++++++-----
2 files changed, 76 insertions(+), 12 deletions(-)

--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -37,9 +37,14 @@ extern struct static_key_false arch_time
#define needs_unstable_timer_counter_workaround() false
#endif

+enum arch_timer_erratum_match_type {
+ ate_match_dt,
+};

struct arch_timer_erratum_workaround {
- const char *id; /* Indicate the Erratum ID */
+ enum arch_timer_erratum_match_type match_type;
+ const void *id;
+ const char *desc;
u32 (*read_cntp_tval_el0)(void);
u32 (*read_cntv_tval_el0)(void);
u64 (*read_cntvct_el0)(void);
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -140,13 +140,81 @@ EXPORT_SYMBOL_GPL(arch_timer_read_ool_en
static const struct arch_timer_erratum_workaround ool_workarounds[] = {
#ifdef CONFIG_FSL_ERRATUM_A008585
{
+ .match_type = ate_match_dt,
.id = "fsl,erratum-a008585",
+		.desc = "Freescale erratum a008585",
.read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0,
.read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0,
.read_cntvct_el0 = fsl_a008585_read_cntvct_el0,
},
#endif
};
+
+typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
+ const void *);
+
+static
+bool arch_timer_check_dt_erratum(const struct arch_timer_erratum_workaround *wa,
+ const void *arg)
+{
+ const struct device_node *np = arg;
+
+ return of_property_read_bool(np, wa->id);
+}
+
+static const struct arch_timer_erratum_workaround *
+arch_timer_iterate_errata(enum arch_timer_erratum_match_type type,
+ ate_match_fn_t match_fn,
+ void *arg)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) {
+ if (ool_workarounds[i].match_type != type)
+ continue;
+
+ if (match_fn(&ool_workarounds[i], arg))
+ return &ool_workarounds[i];
+ }
+
+ return NULL;
+}
+
+static
+void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa)
+{
+ timer_unstable_counter_workaround = wa;
+ static_branch_enable(&arch_timer_read_ool_enabled);
+}
+
+static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type type,
+ void *arg)
+{
+ const struct arch_timer_erratum_workaround *wa;
+ ate_match_fn_t match_fn = NULL;
+
+ if (static_branch_unlikely(&arch_timer_read_ool_enabled))
+ return;
+
+ switch (type) {
+ case ate_match_dt:
+ match_fn = arch_timer_check_dt_erratum;
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ wa = arch_timer_iterate_errata(type, match_fn, arg);
+ if (!wa)
+ return;
+
+ arch_timer_enable_workaround(wa);
+ pr_info("Enabling global workaround for %s\n", wa->desc);
+}
+
+#else
+#define arch_timer_check_ool_workaround(t,a) do { } while(0)
#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */

static __always_inline
@@ -919,17 +987,8 @@ static int __init arch_timer_of_init(str

arch_timer_c3stop = !of_property_read_bool(np, "always-on");

-#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
- for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) {
- if (of_property_read_bool(np, ool_workarounds[i].id)) {
- timer_unstable_counter_workaround = &ool_workarounds[i];
- static_branch_enable(&arch_timer_read_ool_enabled);
- pr_info("arch_timer: Enabling workaround for %s\n",
- timer_unstable_counter_workaround->id);
- break;
- }
- }
-#endif
+ /* Check for globally applicable workarounds */
+ arch_timer_check_ool_workaround(ate_match_dt, np);

/*
* If we cannot rely on firmware initializing the timer registers then


2022-04-06 21:17:38

by Greg Kroah-Hartman

Subject: [PATCH 4.9 27/43] arm64: entry: Free up another register on kpti's tramp_exit path

From: James Morse <[email protected]>

commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream.

Kpti stashes x30 in far_el1 while it uses x30 for all its work.

Making the vectors a per-cpu data structure will require a second
register.

Allow tramp_exit two registers before it unmaps the kernel, by
leaving x30 on the stack, and stashing x29 in far_el1.

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -244,14 +244,16 @@ alternative_else_nop_endif
ldp x24, x25, [sp, #16 * 12]
ldp x26, x27, [sp, #16 * 13]
ldp x28, x29, [sp, #16 * 14]
- ldr lr, [sp, #S_LR]
- add sp, sp, #S_FRAME_SIZE // restore sp

.if \el == 0
-alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
+alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+ ldr lr, [sp, #S_LR]
+ add sp, sp, #S_FRAME_SIZE // restore sp
+ eret
+alternative_else_nop_endif
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
bne 4f
- msr far_el1, x30
+ msr far_el1, x29
tramp_alias x30, tramp_exit_native
br x30
4:
@@ -259,6 +261,8 @@ alternative_insn eret, nop, ARM64_UNMAP_
br x30
#endif
.else
+ ldr lr, [sp, #S_LR]
+ add sp, sp, #S_FRAME_SIZE // restore sp
eret
.endif
.endm
@@ -947,10 +951,12 @@ __ni_sys_trace:
.macro tramp_exit, regsize = 64
adr x30, tramp_vectors
msr vbar_el1, x30
- tramp_unmap_kernel x30
+ ldr lr, [sp, #S_LR]
+ tramp_unmap_kernel x29
.if \regsize == 64
- mrs x30, far_el1
+ mrs x29, far_el1
.endif
+ add sp, sp, #S_FRAME_SIZE // restore sp
eret
.endm



2022-04-06 21:33:19

by Greg Kroah-Hartman

Subject: [PATCH 4.9 23/43] arm64: Add Cortex-X2 CPU part definition

From: Anshuman Khandual <[email protected]>

commit 72bb9dcb6c33cfac80282713c2b4f2b254cd24d1 upstream.

Add the CPU part numbers for the new Arm designs.

Cc: Will Deacon <[email protected]>
Cc: Suzuki Poulose <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cputype.h | 2 ++
1 file changed, 2 insertions(+)

--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -89,6 +89,7 @@
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
#define ARM_CPU_PART_CORTEX_A77 0xD0D
#define ARM_CPU_PART_CORTEX_A710 0xD47
+#define ARM_CPU_PART_CORTEX_X2 0xD48
#define ARM_CPU_PART_NEOVERSE_N2 0xD49

#define APM_CPU_PART_POTENZA 0x000
@@ -111,6 +112,7 @@
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)


2022-04-06 21:34:34

by Greg Kroah-Hartman

Subject: [PATCH 4.9 01/43] arm64: errata: Provide macro for major and minor cpu revisions

From: Robert Richter <[email protected]>

commit fa5ce3d1928c441c3d241c34a00c07c8f5880b1a upstream

Definitions of cpu ranges are hard to read if the cpu variant is not
zero. Provide a MIDR_CPU_VAR_REV() macro to describe the full hardware
revision of a cpu, including variant and (minor) revision.
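
For reference, a self-contained sketch of what such an encoding looks
like (shift and mask values follow the MIDR_EL1 layout, simplified here
rather than copied from the kernel header):

	#include <assert.h>
	#include <stdio.h>

	#define MIDR_REVISION_MASK	0xf	/* revision: bits [3:0] */
	#define MIDR_VARIANT_SHIFT	20	/* variant: bits [23:20] */

	/* Full hardware revision: variant plus (minor) revision. */
	#define MIDR_CPU_VAR_REV(var, rev) \
		(((var) << MIDR_VARIANT_SHIFT) | ((rev) & MIDR_REVISION_MASK))

	int main(void)
	{
		/* r1p2 reads as variant 1, revision 2 */
		unsigned int r1p2 = MIDR_CPU_VAR_REV(1, 2);

		assert(r1p2 == ((1u << 20) | 2));
		printf("r1p2 -> %#x\n", r1p2);
		return 0;
	}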

Signed-off-by: Robert Richter <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
[ morse: some parts of this patch were already backported as part of
b8c320884eff003581ee61c5970a2e83f513eff1 ]
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 15 +++++++++------
arch/arm64/kernel/cpufeature.c | 8 +++-----
2 files changed, 12 insertions(+), 11 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -408,8 +408,9 @@ const struct arm64_cpu_capabilities arm6
/* Cortex-A57 r0p0 - r1p2 */
.desc = "ARM erratum 832075",
.capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
- MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
- (1 << MIDR_VARIANT_SHIFT) | 2),
+ MIDR_RANGE(MIDR_CORTEX_A57,
+ MIDR_CPU_VAR_REV(0, 0),
+ MIDR_CPU_VAR_REV(1, 2)),
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_834220
@@ -417,8 +418,9 @@ const struct arm64_cpu_capabilities arm6
/* Cortex-A57 r0p0 - r1p2 */
.desc = "ARM erratum 834220",
.capability = ARM64_WORKAROUND_834220,
- MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
- (1 << MIDR_VARIANT_SHIFT) | 2),
+ MIDR_RANGE(MIDR_CORTEX_A57,
+ MIDR_CPU_VAR_REV(0, 0),
+ MIDR_CPU_VAR_REV(1, 2)),
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_845719
@@ -442,8 +444,9 @@ const struct arm64_cpu_capabilities arm6
/* Cavium ThunderX, T88 pass 1.x - 2.1 */
.desc = "Cavium erratum 27456",
.capability = ARM64_WORKAROUND_CAVIUM_27456,
- MIDR_RANGE(MIDR_THUNDERX, 0x00,
- (1 << MIDR_VARIANT_SHIFT) | 1),
+ MIDR_RANGE(MIDR_THUNDERX,
+ MIDR_CPU_VAR_REV(0, 0),
+ MIDR_CPU_VAR_REV(1, 1)),
},
{
/* Cavium ThunderX, T81 pass 1.0 */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -728,13 +728,11 @@ static bool has_useable_gicv3_cpuif(cons
static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry, int __unused)
{
u32 midr = read_cpuid_id();
- u32 rv_min, rv_max;

/* Cavium ThunderX pass 1.x and 2.x */
- rv_min = 0;
- rv_max = (1 << MIDR_VARIANT_SHIFT) | MIDR_REVISION_MASK;
-
- return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX, rv_min, rv_max);
+ return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX,
+ MIDR_CPU_VAR_REV(0, 0),
+ MIDR_CPU_VAR_REV(1, MIDR_REVISION_MASK));
}

static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)


2022-04-06 21:39:20

by Greg Kroah-Hartman

Subject: [PATCH 4.9 12/43] clocksource/drivers/arm_arch_timer: Remove fsl-a008585 parameter

From: Ding Tianhong <[email protected]>

commit 5444ea6a7f46276876e94ecf8d44615af1ef22f7 upstream.

Having a command line option to flip the errata handling for a
particular erratum is a little bit unusual, and it's vastly superior to
pass this in the DT. By common consensus, it's best to kill off the
command line parameter.

Signed-off-by: Ding Tianhong <[email protected]>
[Mark: split patch, reword commit message]
Signed-off-by: Mark Rutland <[email protected]>
Acked-by: Daniel Lezcano <[email protected]>
Signed-off-by: Daniel Lezcano <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/kernel-parameters.txt | 9 ---------
drivers/clocksource/arm_arch_timer.c | 14 --------------
2 files changed, 23 deletions(-)

--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -751,15 +751,6 @@ bytes respectively. Such letter suffixes
loops can be debugged more effectively on production
systems.

- clocksource.arm_arch_timer.fsl-a008585=
- [ARM64]
- Format: <bool>
- Enable/disable the workaround of Freescale/NXP
- erratum A-008585. This can be useful for KVM
- guests, if the guest device tree doesn't show the
- erratum. If unspecified, the workaround is
- enabled based on the device tree.
-
clearcpuid=BITNUM [X86]
Disable CPUID feature X for the kernel. See
arch/x86/include/asm/cpufeatures.h for the valid bit
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -101,20 +101,6 @@ EXPORT_SYMBOL_GPL(arch_timer_read_ool_en

static int fsl_a008585_enable = -1;

-static int __init early_fsl_a008585_cfg(char *buf)
-{
- int ret;
- bool val;
-
- ret = strtobool(buf, &val);
- if (ret)
- return ret;
-
- fsl_a008585_enable = val;
- return 0;
-}
-early_param("clocksource.arm_arch_timer.fsl-a008585", early_fsl_a008585_cfg);
-
u32 __fsl_a008585_read_cntp_tval_el0(void)
{
return __fsl_a008585_read_reg(cntp_tval_el0);


2022-04-06 23:24:20

by Greg Kroah-Hartman

Subject: [PATCH 4.9 41/43] KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated

From: James Morse <[email protected]>

commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream.

KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC calls
are implemented, and to preserve that state during migration through its
firmware register interface.

Add the necessary boilerplate for SMCCC_ARCH_WORKAROUND_3.
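
For context, a guest probes for such a workaround by passing the
workaround's function ID to SMCCC_ARCH_FEATURES. A sketch of the
guest-side check, with the SMC/HVC conduit abstracted as a callback
(constants per the SMCCC spec; the fake hypervisor is invented):

	#include <stdbool.h>
	#include <stdio.h>

	#define SMCCC_ARCH_FEATURES	0x80000001u
	#define SMCCC_ARCH_WORKAROUND_3	0x80003fffu
	#define SMCCC_RET_SUCCESS	0

	typedef long (*smccc_conduit_t)(unsigned int fn, unsigned long arg);

	static bool have_workaround_3(smccc_conduit_t conduit)
	{
		return conduit(SMCCC_ARCH_FEATURES,
			       SMCCC_ARCH_WORKAROUND_3) == SMCCC_RET_SUCCESS;
	}

	static long fake_hypervisor(unsigned int fn, unsigned long arg)
	{
		if (fn == SMCCC_ARCH_FEATURES && arg == SMCCC_ARCH_WORKAROUND_3)
			return SMCCC_RET_SUCCESS;	/* mitigated */
		return -1;				/* NOT_SUPPORTED */
	}

	int main(void)
	{
		printf("WORKAROUND_3 %s\n",
		       have_workaround_3(fake_hypervisor) ?
		       "implemented" : "absent");
		return 0;
	}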

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
[ kvm code moved to arch/arm/kvm, removed fw regs ABI. Added 32bit stub ]
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/kvm_host.h | 5 +++++
arch/arm/kvm/psci.c | 4 ++++
arch/arm64/include/asm/kvm_host.h | 4 ++++
3 files changed, 13 insertions(+)

--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -349,4 +349,9 @@ static inline int kvm_arm_have_ssbd(void
return KVM_SSBD_UNKNOWN;
}

+static inline bool kvm_arm_spectre_bhb_mitigated(void)
+{
+ /* 32bit guests don't need firmware for this */
+ return false;
+}
#endif /* __ARM_KVM_HOST_H__ */
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -431,6 +431,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu
break;
}
break;
+ case ARM_SMCCC_ARCH_WORKAROUND_3:
+ if (kvm_arm_spectre_bhb_mitigated())
+ val = SMCCC_RET_SUCCESS;
+ break;
}
break;
default:
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -452,4 +452,8 @@ static inline int kvm_arm_have_ssbd(void
}
}

+static inline bool kvm_arm_spectre_bhb_mitigated(void)
+{
+ return arm64_get_spectre_bhb_state() == SPECTRE_MITIGATED;
+}
#endif /* __ARM64_KVM_HOST_H__ */


2022-04-06 23:29:30

by Greg Kroah-Hartman

Subject: [PATCH 4.9 25/43] arm64: entry.S: Add ventry overflow sanity checks

From: James Morse <[email protected]>

commit 4330e2c5c04c27bebf89d34e0bc14e6943413067 upstream.

Subsequent patches add even more code to the ventry slots.
Ensure kernels that overflow a ventry slot don't get built.
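
The mechanism is the .org directive in the hunk below: the assembler
errors out if a vector entry grows past its 128-byte slot. A loose C
analogue of a build-time size check (struct and sizes invented for
illustration):

	#include <assert.h>

	#define VENTRY_SLOT_SIZE 128	/* each entry is .align 7 */

	/* Hypothetical handler body: 30 4-byte instructions. */
	struct ventry_payload {
		unsigned int insns[30];
	};

	/* Fails at compile time, not at runtime, if the payload outgrows
	 * its slot - the same effect the .org check has in entry.S. */
	static_assert(sizeof(struct ventry_payload) <= VENTRY_SLOT_SIZE,
		      "vector entry overflows its 128-byte slot");

	int main(void) { return 0; }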

Reviewed-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 3 +++
1 file changed, 3 insertions(+)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -74,6 +74,7 @@

.macro kernel_ventry, el, label, regsize = 64
.align 7
+.Lventry_start\@:
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
alternative_if ARM64_UNMAP_KERNEL_AT_EL0
.if \el == 0
@@ -89,6 +90,7 @@ alternative_else_nop_endif

sub sp, sp, #S_FRAME_SIZE
b el\()\el\()_\label
+.org .Lventry_start\@ + 128 // Did we overflow the ventry slot?
.endm

.macro tramp_alias, dst, sym
@@ -935,6 +937,7 @@ __ni_sys_trace:
add x30, x30, #(1b - tramp_vectors)
isb
ret
+.org 1b + 128 // Did we overflow the ventry slot?
.endm

.macro tramp_exit, regsize = 64


2022-04-06 23:29:30

by Greg Kroah-Hartman

Subject: [PATCH 4.9 03/43] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35

From: Suzuki K Poulose <[email protected]>

commit 6e616864f21160d8d503523b60a53a29cecc6f24 upstream.

Update the MIDR encodings for the Cortex-A55 and Cortex-A35.

Cc: Mark Rutland <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cputype.h | 4 ++++
1 file changed, 4 insertions(+)

--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -83,6 +83,8 @@
#define ARM_CPU_PART_CORTEX_A53 0xD03
#define ARM_CPU_PART_CORTEX_A73 0xD09
#define ARM_CPU_PART_CORTEX_A75 0xD0A
+#define ARM_CPU_PART_CORTEX_A35 0xD04
+#define ARM_CPU_PART_CORTEX_A55 0xD05

#define APM_CPU_PART_POTENZA 0x000

@@ -98,6 +100,8 @@
#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
#define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
#define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
+#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
+#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)


2022-04-07 01:21:44

by Greg Kroah-Hartman

Subject: [PATCH 4.9 15/43] arm64: arch_timer: Add erratum handler for CPU-specific capability

From: Marc Zyngier <[email protected]>

commit 0064030c6fd4ca6cfab42de037b2a89445beeead upstream.

Should we ever have a workaround for an erratum that is detected using
a capability and affects only a particular CPU, it'd be nice to have
a way to probe for it directly.
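
A small model of what gets added here: a second match path keyed on a
CPU capability, plus a clash check so a different workaround is not
stacked on top of one already enabled (hypothetical names throughout):

	#include <stdio.h>

	struct workaround { const char *desc; };

	static const struct workaround dt_wa = { "DT-described erratum" };
	static const struct workaround cap_wa = { "capability-described erratum" };

	static const struct workaround *enabled_wa;	/* active, if any */

	static void try_enable(const struct workaround *wa)
	{
		if (enabled_wa) {
			if (wa != enabled_wa)
				fprintf(stderr,
					"Can't enable workaround for %s (clashes with %s)\n",
					wa->desc, enabled_wa->desc);
			return;
		}
		enabled_wa = wa;
		printf("Enabling workaround for %s\n", wa->desc);
	}

	int main(void)
	{
		try_enable(&dt_wa);	/* global, matched via the device tree */
		try_enable(&cap_wa);	/* local capability: clashes, refused */
		return 0;
	}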

Acked-by: Thomas Gleixner <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/arch_timer.h | 1 +
drivers/clocksource/arm_arch_timer.c | 28 ++++++++++++++++++++++++----
2 files changed, 25 insertions(+), 4 deletions(-)

--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -39,6 +39,7 @@ extern struct static_key_false arch_time

enum arch_timer_erratum_match_type {
ate_match_dt,
+ ate_match_local_cap_id,
};

struct arch_timer_erratum_workaround {
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -162,6 +162,13 @@ bool arch_timer_check_dt_erratum(const s
return of_property_read_bool(np, wa->id);
}

+static
+bool arch_timer_check_local_cap_erratum(const struct arch_timer_erratum_workaround *wa,
+ const void *arg)
+{
+ return this_cpu_has_cap((uintptr_t)wa->id);
+}
+
static const struct arch_timer_erratum_workaround *
arch_timer_iterate_errata(enum arch_timer_erratum_match_type type,
ate_match_fn_t match_fn,
@@ -192,14 +199,16 @@ static void arch_timer_check_ool_workaro
{
const struct arch_timer_erratum_workaround *wa;
ate_match_fn_t match_fn = NULL;
-
- if (static_branch_unlikely(&arch_timer_read_ool_enabled))
- return;
+ bool local = false;

switch (type) {
case ate_match_dt:
match_fn = arch_timer_check_dt_erratum;
break;
+ case ate_match_local_cap_id:
+ match_fn = arch_timer_check_local_cap_erratum;
+ local = true;
+ break;
default:
WARN_ON(1);
return;
@@ -209,8 +218,17 @@ static void arch_timer_check_ool_workaro
if (!wa)
return;

+ if (needs_unstable_timer_counter_workaround()) {
+ if (wa != timer_unstable_counter_workaround)
+			pr_warn("Can't enable workaround for %s (clashes with %s)\n",
+ wa->desc,
+ timer_unstable_counter_workaround->desc);
+ return;
+ }
+
arch_timer_enable_workaround(wa);
- pr_info("Enabling global workaround for %s\n", wa->desc);
+ pr_info("Enabling %s workaround for %s\n",
+ local ? "local" : "global", wa->desc);
}

#else
@@ -470,6 +488,8 @@ static void __arch_timer_setup(unsigned
BUG();
}

+ arch_timer_check_ool_workaround(ate_match_local_cap_id, NULL);
+
erratum_workaround_set_sne(clk);
} else {
clk->features |= CLOCK_EVT_FEAT_DYNIRQ;


2022-04-07 01:32:59

by Greg Kroah-Hartman

Subject: [PATCH 4.9 20/43] arm64: Add part number for Neoverse N1

From: Marc Zyngier <[email protected]>

commit 0cf57b86859c49381addb3ce47be70aadf5fd2c0 upstream.

New CPU, new part number. You know the drill.

Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cputype.h | 2 ++
1 file changed, 2 insertions(+)

--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -86,6 +86,7 @@
#define ARM_CPU_PART_CORTEX_A35 0xD04
#define ARM_CPU_PART_CORTEX_A55 0xD05
#define ARM_CPU_PART_CORTEX_A76 0xD0B
+#define ARM_CPU_PART_NEOVERSE_N1 0xD0C

#define APM_CPU_PART_POTENZA 0x000

@@ -104,6 +105,7 @@
#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
+#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)


2022-04-07 01:33:07

by Florian Fainelli

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

On 4/6/22 11:26, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.9.310 release.
> There are 43 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.310-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels:

Tested-by: Florian Fainelli <[email protected]>
--
Florian

2022-04-07 01:43:05

by Shuah Khan

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

On 4/6/22 12:26 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.9.310 release.
> There are 43 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.310-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compiled and booted on my test system. No dmesg regressions.

Tested-by: Shuah Khan <[email protected]>

thanks,
-- Shuah

2022-04-07 07:58:31

by Pavel Machek

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

Hi!

> This is the start of the stable review cycle for the 4.9.310 release.
> There are 43 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.

CIP testing did not find any problems here:

https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-4.9.y

Tested-by: Pavel Machek (CIP) <[email protected]>

Best regards,
Pavel
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany



2022-04-07 19:41:18

by Naresh Kamboju

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

On Wed, 6 Apr 2022 at 23:57, Greg Kroah-Hartman
<[email protected]> wrote:
>
> This is the start of the stable review cycle for the 4.9.310 release.
> There are 43 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.310-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Results from Linaro’s test farm.

As Guenter reported,
On stable-rc 4.9 arm64 build configs allnoconfig and tinyconfig failed [1].

## Build
* kernel: 4.9.310-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-4.9.y
* git commit: b5f0e9d665c30ceb3bee566518a1020e54d7bc1f
* git describe: v4.9.309-44-gb5f0e9d665c3
* test details:
https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-4.9.y-sanity/build/v4.9.309-44-gb5f0e9d665c3

## Test Regressions (compared to v4.9.309-163-geeae539a0d5c)
* arm64, build
- arm64-clang-11-allnoconfig
- arm64-clang-11-tinyconfig
- arm64-clang-12-allnoconfig
- arm64-clang-12-tinyconfig
- arm64-clang-13-allnoconfig
- arm64-clang-13-tinyconfig
- arm64-clang-nightly-allnoconfig
- arm64-clang-nightly-tinyconfig
- arm64-gcc-10-allnoconfig
- arm64-gcc-10-tinyconfig
- arm64-gcc-11-allnoconfig
- arm64-gcc-11-tinyconfig
- arm64-gcc-8-allnoconfig
- arm64-gcc-8-tinyconfig
- arm64-gcc-9-allnoconfig
- arm64-gcc-9-tinyconfig


Build error:
------------

arch/arm64/kernel/cpu_errata.c: In function 'is_spectrev2_safe':
arch/arm64/kernel/cpu_errata.c:829:39: error: 'arm64_bp_harden_smccc_cpus' undeclared (first use in this function); did you mean 'arm64_bp_harden_el1_vectors'?
  829 |                 arm64_bp_harden_smccc_cpus);
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~
      |                 arm64_bp_harden_el1_vectors
arch/arm64/kernel/cpu_errata.c:829:39: note: each undeclared identifier is reported only once for each function it appears in
arch/arm64/kernel/cpu_errata.c: In function 'spectre_bhb_enable_mitigation':
arch/arm64/kernel/cpu_errata.c:839:39: error: '__hardenbp_enab' undeclared (first use in this function)
  839 |         if (!is_spectrev2_safe() && !__hardenbp_enab) {
      |                                      ^~~~~~~~~~~~~~~
In file included from include/asm-generic/percpu.h:6,
                 from arch/arm64/include/asm/percpu.h:279,
                 from include/linux/percpu.h:12,
                 from arch/arm64/include/asm/mmu.h:23,
                 from include/linux/mm_types.h:17,
                 from include/linux/sched.h:27,
                 from include/linux/ratelimit.h:5,
                 from include/linux/device.h:27,
                 from include/linux/node.h:17,
                 from include/linux/cpu.h:16,
                 from arch/arm64/include/asm/cpu.h:19,
                 from arch/arm64/kernel/cpu_errata.c:23:
arch/arm64/kernel/cpu_errata.c:879:42: error: 'bp_hardening_data' undeclared (first use in this function)
  879 |         __this_cpu_write(bp_hardening_data.fn, NULL);
      |                          ^~~~~~~~~~~~~~~~~
include/linux/percpu-defs.h:236:54: note: in definition of macro '__verify_pcpu_ptr'
  236 |         const void __percpu *__vpp_verify = (typeof((ptr) + 0))NULL;   \
      |                                                      ^~~
include/linux/percpu-defs.h:438:41: note: in expansion of macro '__pcpu_size_call'
  438 | #define raw_cpu_write(pcp, val)         __pcpu_size_call(raw_cpu_write_, pcp, val)
      |                                         ^~~~~~~~~~~~~~~~
include/linux/percpu-defs.h:469:9: note: in expansion of macro 'raw_cpu_write'
  469 |         raw_cpu_write(pcp, val);                                \
      |         ^~~~~~~~~~~~~
arch/arm64/kernel/cpu_errata.c:879:25: note: in expansion of macro '__this_cpu_write'
  879 |         __this_cpu_write(bp_hardening_data.fn, NULL);
      |         ^~~~~~~~~~~~~~~~
make[2]: *** [scripts/Makefile.build:307: arch/arm64/kernel/cpu_errata.o] Error 1


Reported-by: Linux Kernel Functional Testing <[email protected]>

steps to reproduce:

# To install tuxmake on your system globally:
# sudo pip3 install -U tuxmake

tuxmake --runtime podman --target-arch arm64 --toolchain gcc-11 --kconfig tinyconfig


--
Linaro LKFT
https://lkft.linaro.org

[1] https://builds.tuxbuild.com/27R7yjNqw1ahQiOJ16BgcTn7BcZ/


2022-04-07 20:15:07

by Guenter Roeck

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

On Wed, Apr 06, 2022 at 08:26:09PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.9.310 release.
> There are 43 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
> Anything received after that time might be too late.
>

Build results:
total: 163 pass: 161 fail: 2
Failed builds:
arm64:allnoconfig
arm64:tinyconfig
Qemu test results:
total: 397 pass: 397 fail: 0

arch/arm64/kernel/cpu_errata.c: In function 'is_spectrev2_safe':
arch/arm64/kernel/cpu_errata.c:829:39: error: 'arm64_bp_harden_smccc_cpus' undeclared

arch/arm64/kernel/cpu_errata.c: In function 'spectre_bhb_enable_mitigation':
arch/arm64/kernel/cpu_errata.c:839:39: error: '__hardenbp_enab' undeclared

arch/arm64/kernel/cpu_errata.c:879:42: error: 'bp_hardening_data' undeclared

Guenter

2022-04-07 20:52:56

by James Morse

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

Hi Greg,

On 07/04/2022 11:23, Greg Kroah-Hartman wrote:
> On Thu, Apr 07, 2022 at 02:32:38AM -0700, Guenter Roeck wrote:
>> On Wed, Apr 06, 2022 at 08:26:09PM +0200, Greg Kroah-Hartman wrote:
>>> This is the start of the stable review cycle for the 4.9.310 release.
>>> There are 43 patches in this series, all will be posted as a response
>>> to this one. If anyone has any issues with these being applied, please
>>> let me know.
>>>
>>> Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
>>> Anything received after that time might be too late.
>>>
>>
>> Build results:
>> total: 163 pass: 161 fail: 2
>> Failed builds:
>> arm64:allnoconfig
>> arm64:tinyconfig
>> Qemu test results:
>> total: 397 pass: 397 fail: 0
>>
>> arch/arm64/kernel/cpu_errata.c: In function 'is_spectrev2_safe':
>> arch/arm64/kernel/cpu_errata.c:829:39: error: 'arm64_bp_harden_smccc_cpus' undeclared
>>
>> arch/arm64/kernel/cpu_errata.c: In function 'spectre_bhb_enable_mitigation':
>> arch/arm64/kernel/cpu_errata.c:839:39: error: '__hardenbp_enab' undeclared
>>
>> arch/arm64/kernel/cpu_errata.c:879:42: error: 'bp_hardening_data' undeclared
>
> Ick, odds are James did not build with those two configs :(

I'm sure I tried allnoconfig - but evidently messed something up.


> James, any chance you can look into this and see what needs to be added
> or changed with your patch series?

Will do,...


Thanks,

James

2022-04-07 21:16:25

by Greg Kroah-Hartman

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

On Thu, Apr 07, 2022 at 02:32:38AM -0700, Guenter Roeck wrote:
> On Wed, Apr 06, 2022 at 08:26:09PM +0200, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.9.310 release.
> > There are 43 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
> > Anything received after that time might be too late.
> >
>
> Build results:
> total: 163 pass: 161 fail: 2
> Failed builds:
> arm64:allnoconfig
> arm64:tinyconfig
> Qemu test results:
> total: 397 pass: 397 fail: 0
>
> arch/arm64/kernel/cpu_errata.c: In function 'is_spectrev2_safe':
> arch/arm64/kernel/cpu_errata.c:829:39: error: 'arm64_bp_harden_smccc_cpus' undeclared
>
> arch/arm64/kernel/cpu_errata.c: In function 'spectre_bhb_enable_mitigation':
> arch/arm64/kernel/cpu_errata.c:839:39: error: '__hardenbp_enab' undeclared
>
> arch/arm64/kernel/cpu_errata.c:879:42: error: 'bp_hardening_data' undeclared

Ick, odds are James did not build with those two configs :(

James, any chance you can look into this and see what needs to be added
or changed with your patch series?

thanks,

greg k-h

2022-04-12 11:42:28

by Greg Kroah-Hartman

Subject: Re: [PATCH 4.9 00/43] 4.9.310-rc1 review

On Thu, Apr 07, 2022 at 06:20:28PM +0100, James Morse wrote:
> Hi Greg,
>
> On 07/04/2022 11:23, Greg Kroah-Hartman wrote:
> > On Thu, Apr 07, 2022 at 02:32:38AM -0700, Guenter Roeck wrote:
> >> On Wed, Apr 06, 2022 at 08:26:09PM +0200, Greg Kroah-Hartman wrote:
> >>> This is the start of the stable review cycle for the 4.9.310 release.
> >>> There are 43 patches in this series, all will be posted as a response
> >>> to this one. If anyone has any issues with these being applied, please
> >>> let me know.
> >>>
> >>> Responses should be made by Fri, 08 Apr 2022 18:24:27 +0000.
> >>> Anything received after that time might be too late.
> >>>
> >>
> >> Build results:
> >> total: 163 pass: 161 fail: 2
> >> Failed builds:
> >> arm64:allnoconfig
> >> arm64:tinyconfig
> >> Qemu test results:
> >> total: 397 pass: 397 fail: 0
> >>
> >> arch/arm64/kernel/cpu_errata.c: In function 'is_spectrev2_safe':
> >> arch/arm64/kernel/cpu_errata.c:829:39: error: 'arm64_bp_harden_smccc_cpus' undeclared
> >>
> >> arch/arm64/kernel/cpu_errata.c: In function 'spectre_bhb_enable_mitigation':
> >> arch/arm64/kernel/cpu_errata.c:839:39: error: '__hardenbp_enab' undeclared
> >>
> >> arch/arm64/kernel/cpu_errata.c:879:42: error: 'bp_hardening_data' undeclared
> >
> > Ick, odds are James did not build with those two configs :(
>
> I'm sure I tried allnoconfig - but evidently messed something up.
>
>
> > James, any chance you can look into this and see what needs to be added
> > or changed with your patch series?
>
> Will do,...

Thanks for the fixup patch, this passes my local build tests so I think
I'll push out the release now.

greg k-h