2021-02-08 10:16:30

by Marc Zyngier

Subject: [PATCH v7 00/23] arm64: Early CPU feature override, and applications to VHE, BTI and PAuth

It recently came to light that there is a need to be able to override
some CPU features very early on, before the kernel is fully up and
running. The reasons for this range from specific feature support
(such as using Protected KVM on VHE HW, which is the main motivation
for this work) to errata workarounds (a feature is broken on a CPU and
needs to be turned off, or rather not enabled).

This series tries to offer a limited framework for this kind of
problem, by allowing a set of options to be passed on the
command-line and altering the feature set that the cpufeature
subsystem exposes to the rest of the kernel. Note that this doesn't
change anything for code that directly uses the CPU ID registers.
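
As a concrete illustration, the overrides introduced over the course
of this series can be spelled either as raw register.field pairs or
as friendlier aliases on the kernel command line:

  id_aa64mmfr1.vh=0        (raw form: force the VH field to 0)
  arm64.nobti              (alias for id_aa64pfr1.bt=0)
  arm64.nopauth            (alias covering the four PAuth fields)
  kvm-arm.mode=protected   (alias implying id_aa64mmfr1.vh=0)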

The series completely changes the way a VHE-capable system boots, by
*always* booting non-VHE first, and then upgrading to VHE when deemed
capable. Although it sounds scary, this is actually simple to
implement (and I wish I had done that five years ago). The "upgrade to
VHE" path is then conditioned on the VHE feature not being disabled
from the command-line.

Said command-line parsing borrows a lot from the kaslr code, and
subsequently allows the "nokaslr" option to be moved to the new
infrastructure (though it all looks a bit... odd).

Further patches now add support for disabling BTI and PAuth, the
latter being based on an initial series by Srinivas Ramana[0]. There
are some ongoing discussions about being able to disable MTE, but no
clear resolution on that subject yet.

WARNING: this series breaks Apple M1 badly, as it is stuck in VHE
mode. The last patch in this series papers over the problem, but it
*isn't* a candidate for merging yet.

This has been tested on multiple VHE and non-VHE systems.

Branch available at [7].

* From v6 [6]:
- Greatly simplify SPE setup with VHE
- Simplify option parsing by reusing some of the helpers used by
parse_args(). The whole function cannot be used though, as it
does things that can't be done at the point where we parse the
overrides.
- Add a patch allowing M1 CPUs to boot. This patch shouldn't be
merged until we decide to support this non-architectural behaviour.

* From v5 [5]:
- Turn most __initdata into __initconst
- Ensure that all strings are part of the __initconst section.
This is a bit ugly, but saves memory once up and running
- Make overrides __ro_after_init
- Change the command-line parsing so that the same feature can
be overridden multiple times, with the expected left-to-right
parsing order being respected
- Handle all space-like characters as option delimiters
- Collected Acks, RBs and TBs

* From v4 [4]:
- Documentation fixes
- Moved the val/mask pair into an arm64_ftr_override structure,
leading to simpler code
- All arm64_ftr_reg now have a default override, which simplifies
the code a bit further
- Dropped some of the "const" attributes
- Renamed init_shadow_regs() to init_feature_override()
- Renamed struct reg_desc to struct ftr_set_desc
- Refactored command-line parsing
- Simplified handling of VHE being disabled on the cmdline
- Turn EL1 S1 MMU off on switch to VHE
- HVC_VHE_RESTART now returns an error code on failure
- Added missing asmlinkage and dummy prototypes
- Collected Acks and RBs from David, Catalin and Suzuki

* From v3 [3]:
- Fixed the VHE_RESTART stub (duh!)
- Switched to using arm64_ftr_safe_value() instead of the user
provided value
- Per-feature override warning

* From v2 [2]:
- Simplify the VHE_RESTART stub
- Fixed a number of spelling mistakes, and hopefully introduced a
few more
- Override features in __read_sysreg_by_encoding()
- Allow both BTI and PAuth to be overridden on the command line
- Rebased on -rc3

* From v1 [1]:
- Fix SPE init on VHE when EL2 doesn't own SPE
- Fix re-init when KASLR is used
- Handle the resume path
- Rebased to 5.11-rc2

[0] https://lore.kernel.org/r/[email protected]
[1] https://lore.kernel.org/r/[email protected]
[2] https://lore.kernel.org/r/[email protected]
[3] https://lore.kernel.org/r/[email protected]
[4] https://lore.kernel.org/r/[email protected]
[5] https://lore.kernel.org/r/[email protected]
[6] https://lore.kernel.org/r/[email protected]
[7] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=hack/arm64-early-cpufeature

Marc Zyngier (22):
arm64: Fix labels in el2_setup macros
arm64: Fix outdated TCR setup comment
arm64: Turn the MMU-on sequence into a macro
arm64: Provide an 'upgrade to VHE' stub hypercall
arm64: Initialise as nVHE before switching to VHE
arm64: Drop early setting of MDSCR_EL2.TPMS
arm64: Move VHE-specific SPE setup to mutate_to_vhe()
arm64: Simplify init_el2_state to be non-VHE only
arm64: Move SCTLR_EL1 initialisation to EL-agnostic code
arm64: cpufeature: Add global feature override facility
arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()
arm64: Extract early FDT mapping from kaslr_early_init()
arm64: cpufeature: Add an early command-line cpufeature override
facility
arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command
line
arm64: Honor VHE being disabled from the command-line
arm64: Add an aliasing facility for the idreg override
arm64: Make kvm-arm.mode={nvhe, protected} an alias of
id_aa64mmfr1.vh=0
KVM: arm64: Document HVC_VHE_RESTART stub hypercall
arm64: Move "nokaslr" over to the early cpufeature infrastructure
arm64: cpufeatures: Allow disabling of BTI from the command-line
arm64: cpufeatures: Allow disabling of Pointer Auth from the
command-line
[DO NOT MERGE] arm64: Cope with CPUs stuck in VHE mode

Srinivas Ramana (1):
arm64: Defer enabling pointer authentication on boot core

.../admin-guide/kernel-parameters.txt | 9 +
Documentation/virt/kvm/arm/hyp-abi.rst | 9 +
arch/arm64/include/asm/assembler.h | 17 ++
arch/arm64/include/asm/cpufeature.h | 11 +
arch/arm64/include/asm/el2_setup.h | 60 ++---
arch/arm64/include/asm/pointer_auth.h | 10 +
arch/arm64/include/asm/setup.h | 11 +
arch/arm64/include/asm/stackprotector.h | 1 +
arch/arm64/include/asm/virt.h | 7 +-
arch/arm64/kernel/Makefile | 2 +-
arch/arm64/kernel/asm-offsets.c | 3 +
arch/arm64/kernel/cpufeature.c | 73 +++++-
arch/arm64/kernel/head.S | 94 +++-----
arch/arm64/kernel/hyp-stub.S | 141 +++++++++++-
arch/arm64/kernel/idreg-override.c | 216 ++++++++++++++++++
arch/arm64/kernel/kaslr.c | 43 +---
arch/arm64/kernel/setup.c | 15 ++
arch/arm64/kernel/sleep.S | 1 +
arch/arm64/kvm/arm.c | 3 +
arch/arm64/kvm/hyp/nvhe/hyp-init.S | 2 +-
arch/arm64/mm/mmu.c | 2 +-
arch/arm64/mm/proc.S | 16 +-
22 files changed, 576 insertions(+), 170 deletions(-)
create mode 100644 arch/arm64/include/asm/setup.h
create mode 100644 arch/arm64/kernel/idreg-override.c

--
2.29.2


2021-02-08 10:18:13

by Marc Zyngier

Subject: [PATCH v7 16/23] arm64: Add an aliasing facility for the idreg override

In order to map the override of idregs to options that a user
can easily understand, let's introduce yet another option
array, which maps an option to the corresponding idreg options.
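
The array starts out empty here; later patches in the series
populate it with entries such as:

  { "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" },
  { "arm64.nobti", "id_aa64pfr1.bt=0" },
  { "nokaslr", "kaslr.disabled=1" },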

Signed-off-by: Marc Zyngier <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
arch/arm64/kernel/idreg-override.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 2da11bf60195..226bac544e20 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -16,6 +16,8 @@

#define FTR_DESC_NAME_LEN 20
#define FTR_DESC_FIELD_LEN 10
+#define FTR_ALIAS_NAME_LEN 30
+#define FTR_ALIAS_OPTION_LEN 80

struct ftr_set_desc {
char name[FTR_DESC_NAME_LEN];
@@ -39,6 +41,12 @@ static const struct ftr_set_desc * const regs[] __initconst = {
&mmfr1,
};

+static const struct {
+ char alias[FTR_ALIAS_NAME_LEN];
+ char feature[FTR_ALIAS_OPTION_LEN];
+} aliases[] __initconst = {
+};
+
static int __init find_field(const char *cmdline,
const struct ftr_set_desc *reg, int f, u64 *v)
{
@@ -81,7 +89,7 @@ static void __init match_options(const char *cmdline)
}
}

-static __init void __parse_cmdline(const char *cmdline)
+static __init void __parse_cmdline(const char *cmdline, bool parse_aliases)
{
do {
char buf[256];
@@ -105,6 +113,9 @@ static __init void __parse_cmdline(const char *cmdline)

match_options(buf);

+ for (i = 0; parse_aliases && i < ARRAY_SIZE(aliases); i++)
+ if (parameq(buf, aliases[i].alias))
+ __parse_cmdline(aliases[i].feature, false);
} while (1);
}

@@ -127,14 +138,14 @@ static __init void parse_cmdline(void)
if (!prop)
goto out;

- __parse_cmdline(prop);
+ __parse_cmdline(prop, true);

if (!IS_ENABLED(CONFIG_CMDLINE_EXTEND))
return;
}

out:
- __parse_cmdline(CONFIG_CMDLINE);
+ __parse_cmdline(CONFIG_CMDLINE, true);
}

/* Keep checkers quiet */
--
2.29.2

2021-02-08 10:20:35

by Marc Zyngier

Subject: [PATCH v7 01/23] arm64: Fix labels in el2_setup macros

If someone happens to write the following code:

b 1f
init_el2_state vhe
1:
[...]

they will be in for a long debugging session, as the label "1f"
will be resolved *inside* the init_el2_state macro instead of
after it. Not really what one expects.

Instead, rewrite the EL2 setup macros to use unambiguous local
labels, generated with the usual macro counter trick (the
assembler's \@ pseudo-variable, which is unique to each macro
invocation).

Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
arch/arm64/include/asm/el2_setup.h | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index a7f5a1bbc8ac..540116de80bf 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -45,24 +45,24 @@
mrs x1, id_aa64dfr0_el1
sbfx x0, x1, #ID_AA64DFR0_PMUVER_SHIFT, #4
cmp x0, #1
- b.lt 1f // Skip if no PMU present
+ b.lt .Lskip_pmu_\@ // Skip if no PMU present
mrs x0, pmcr_el0 // Disable debug access traps
ubfx x0, x0, #11, #5 // to EL2 and allow access to
-1:
+.Lskip_pmu_\@:
csel x2, xzr, x0, lt // all PMU counters from EL1

/* Statistical profiling */
ubfx x0, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
- cbz x0, 3f // Skip if SPE not present
+ cbz x0, .Lskip_spe_\@ // Skip if SPE not present

.ifeqs "\mode", "nvhe"
mrs_s x0, SYS_PMBIDR_EL1 // If SPE available at EL2,
and x0, x0, #(1 << SYS_PMBIDR_EL1_P_SHIFT)
- cbnz x0, 2f // then permit sampling of physical
+ cbnz x0, .Lskip_spe_el2_\@ // then permit sampling of physical
mov x0, #(1 << SYS_PMSCR_EL2_PCT_SHIFT | \
1 << SYS_PMSCR_EL2_PA_SHIFT)
msr_s SYS_PMSCR_EL2, x0 // addresses and physical counter
-2:
+.Lskip_spe_el2_\@:
mov x0, #(MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT)
orr x2, x2, x0 // If we don't have VHE, then
// use EL1&0 translation.
@@ -71,7 +71,7 @@
// and disable access from EL1
.endif

-3:
+.Lskip_spe_\@:
msr mdcr_el2, x2 // Configure debug traps
.endm

@@ -79,9 +79,9 @@
.macro __init_el2_lor
mrs x1, id_aa64mmfr1_el1
ubfx x0, x1, #ID_AA64MMFR1_LOR_SHIFT, 4
- cbz x0, 1f
+ cbz x0, .Lskip_lor_\@
msr_s SYS_LORC_EL1, xzr
-1:
+.Lskip_lor_\@:
.endm

/* Stage-2 translation */
@@ -93,7 +93,7 @@
.macro __init_el2_gicv3
mrs x0, id_aa64pfr0_el1
ubfx x0, x0, #ID_AA64PFR0_GIC_SHIFT, #4
- cbz x0, 1f
+ cbz x0, .Lskip_gicv3_\@

mrs_s x0, SYS_ICC_SRE_EL2
orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1
@@ -103,7 +103,7 @@
mrs_s x0, SYS_ICC_SRE_EL2 // Read SRE back,
- tbz x0, #0, 1f // and check that it sticks
+ tbz x0, #0, .Lskip_gicv3_\@ // and check that it sticks
msr_s SYS_ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults
-1:
+.Lskip_gicv3_\@:
.endm

.macro __init_el2_hstr
@@ -128,14 +128,14 @@
.macro __init_el2_nvhe_sve
mrs x1, id_aa64pfr0_el1
ubfx x1, x1, #ID_AA64PFR0_SVE_SHIFT, #4
- cbz x1, 1f
+ cbz x1, .Lskip_sve_\@

bic x0, x0, #CPTR_EL2_TZ // Also disable SVE traps
msr cptr_el2, x0 // Disable copro. traps to EL2
isb
mov x1, #ZCR_ELx_LEN_MASK // SVE: Enable full vector
msr_s SYS_ZCR_EL2, x1 // length for EL1.
-1:
+.Lskip_sve_\@:
.endm

.macro __init_el2_nvhe_prepare_eret
--
2.29.2

2021-02-08 10:20:39

by Marc Zyngier

Subject: [PATCH v7 11/23] arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()

__read_sysreg_by_encoding() is used by a bunch of cpufeature helpers,
which should take the feature override into account. Let's do that.

For good measure (and because we are likely to need it further
down the line), make this helper available to the rest of the
non-modular kernel.

Code that needs to know the *real* features of a CPU can still
use read_sysreg_s(), and find the bare, ugly truth.
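
For reference, applying an override boils down to a masked merge,
as visible in the hunk below. A minimal sketch of the semantics
(the helper name is hypothetical, the patch open-codes these two
lines):

  /* Bits covered by the override mask come from the override value,
   * everything else comes straight from the hardware. */
  static u64 apply_ftr_override(u64 hw_val,
                                const struct arm64_ftr_override *ovr)
  {
          return (hw_val & ~ovr->mask) | (ovr->val & ovr->mask);
  }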

Signed-off-by: Marc Zyngier <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Acked-by: David Brazdil <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/kernel/cpufeature.c | 15 +++++++++++++--
2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b1f53147e2b2..b5bf7af68691 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -606,6 +606,7 @@ void __init setup_cpu_features(void);
void check_local_cpu_capabilities(void);

u64 read_sanitised_ftr_reg(u32 id);
+u64 __read_sysreg_by_encoding(u32 sys_id);

static inline bool cpu_supports_mixed_endian_el0(void)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a4e5c619a516..97da9ed4b79d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1148,14 +1148,17 @@ u64 read_sanitised_ftr_reg(u32 id)
EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);

#define read_sysreg_case(r) \
- case r: return read_sysreg_s(r)
+ case r: val = read_sysreg_s(r); break;

/*
* __read_sysreg_by_encoding() - Used by a STARTING cpu before cpuinfo is populated.
* Read the system register on the current CPU
*/
-static u64 __read_sysreg_by_encoding(u32 sys_id)
+u64 __read_sysreg_by_encoding(u32 sys_id)
{
+ struct arm64_ftr_reg *regp;
+ u64 val;
+
switch (sys_id) {
read_sysreg_case(SYS_ID_PFR0_EL1);
read_sysreg_case(SYS_ID_PFR1_EL1);
@@ -1198,6 +1201,14 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
BUG();
return 0;
}
+
+ regp = get_arm64_ftr_reg(sys_id);
+ if (regp) {
+ val &= ~regp->override->mask;
+ val |= (regp->override->val & regp->override->mask);
+ }
+
+ return val;
}

#include <linux/irqchip/arm-gic-v3.h>
--
2.29.2

2021-02-08 10:21:47

by Marc Zyngier

Subject: [PATCH v7 22/23] arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line

In order to be able to disable Pointer Authentication at runtime,
whether it is for testing purposes, or to work around HW issues,
let's add support for overriding the ID_AA64ISAR1_EL1.{GPI,GPA,API,APA}
fields.

This is further mapped onto the arm64.nopauth command-line alias.

Signed-off-by: Marc Zyngier <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
Tested-by: Srinivas Ramana <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 3 +++
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/kernel/cpufeature.c | 4 +++-
arch/arm64/kernel/idreg-override.c | 16 ++++++++++++++++
4 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 7599fd0f1ad7..f9cb28a39bd0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -376,6 +376,9 @@
arm64.nobti [ARM64] Unconditionally disable Branch Target
Identification support

+ arm64.nopauth [ARM64] Unconditionally disable Pointer Authentication
+ support
+
ataflop= [HW,M68k]

atarimouse= [HW,MOUSE] Atari Mouse
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 30917b9a760b..61177bac49fa 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -820,6 +820,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)

extern struct arm64_ftr_override id_aa64mmfr1_override;
extern struct arm64_ftr_override id_aa64pfr1_override;
+extern struct arm64_ftr_override id_aa64isar1_override;

u32 get_kvm_ipa_limit(void);
void dump_cpu_features(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7fbeab497adb..3bce87a03717 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -559,6 +559,7 @@ static const struct arm64_ftr_bits ftr_raz[] = {

struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override;
struct arm64_ftr_override __ro_after_init id_aa64pfr1_override;
+struct arm64_ftr_override __ro_after_init id_aa64isar1_override;

static const struct __ftr_reg_entry {
u32 sys_id;
@@ -604,7 +605,8 @@ static const struct __ftr_reg_entry {

/* Op1 = 0, CRn = 0, CRm = 6 */
ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
- ARM64_FTR_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1),
+ ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1,
+ &id_aa64isar1_override),

/* Op1 = 0, CRn = 0, CRm = 7 */
ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index d691e9015c62..dffb16682330 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -46,6 +46,18 @@ static const struct ftr_set_desc pfr1 __initconst = {
},
};

+static const struct ftr_set_desc isar1 __initconst = {
+ .name = "id_aa64isar1",
+ .override = &id_aa64isar1_override,
+ .fields = {
+ { "gpi", ID_AA64ISAR1_GPI_SHIFT },
+ { "gpa", ID_AA64ISAR1_GPA_SHIFT },
+ { "api", ID_AA64ISAR1_API_SHIFT },
+ { "apa", ID_AA64ISAR1_APA_SHIFT },
+ {}
+ },
+};
+
extern struct arm64_ftr_override kaslr_feature_override;

static const struct ftr_set_desc kaslr __initconst = {
@@ -62,6 +74,7 @@ static const struct ftr_set_desc kaslr __initconst = {
static const struct ftr_set_desc * const regs[] __initconst = {
&mmfr1,
&pfr1,
+ &isar1,
&kaslr,
};

@@ -72,6 +85,9 @@ static const struct {
{ "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" },
{ "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" },
{ "arm64.nobti", "id_aa64pfr1.bt=0" },
+ { "arm64.nopauth",
+ "id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 "
+ "id_aa64isar1.api=0 id_aa64isar1.apa=0" },
{ "nokaslr", "kaslr.disabled=1" },
};

--
2.29.2

2021-02-08 10:21:48

by Marc Zyngier

Subject: [PATCH v7 20/23] arm64: cpufeatures: Allow disabling of BTI from the command-line

In order to be able to disable BTI at runtime, whether it is
for testing purposes, or to work around HW issues, let's add
support for overriding the ID_AA64PFR1_EL1.BTI field.

This is further mapped onto the arm64.nobti command-line alias.

Signed-off-by: Marc Zyngier <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
Tested-by: Srinivas Ramana <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 3 +++
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/kernel/cpufeature.c | 4 +++-
arch/arm64/kernel/idreg-override.c | 11 +++++++++++
arch/arm64/mm/mmu.c | 2 +-
5 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 2786fd39a047..7599fd0f1ad7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -373,6 +373,9 @@
arcrimi= [HW,NET] ARCnet - "RIM I" (entirely mem-mapped) cards
Format: <io>,<irq>,<nodeID>

+ arm64.nobti [ARM64] Unconditionally disable Branch Target
+ Identification support
+
ataflop= [HW,M68k]

atarimouse= [HW,MOUSE] Atari Mouse
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 570f1b4ba3cc..30917b9a760b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -819,6 +819,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
}

extern struct arm64_ftr_override id_aa64mmfr1_override;
+extern struct arm64_ftr_override id_aa64pfr1_override;

u32 get_kvm_ipa_limit(void);
void dump_cpu_features(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index faada5d8bea6..7fbeab497adb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -558,6 +558,7 @@ static const struct arm64_ftr_bits ftr_raz[] = {
#define ARM64_FTR_REG(id, table) ARM64_FTR_REG_OVERRIDE(id, table, &no_override)

struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override;
+struct arm64_ftr_override __ro_after_init id_aa64pfr1_override;

static const struct __ftr_reg_entry {
u32 sys_id;
@@ -593,7 +594,8 @@ static const struct __ftr_reg_entry {

/* Op1 = 0, CRn = 0, CRm = 4 */
ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
- ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
+ ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1,
+ &id_aa64pfr1_override),
ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0),

/* Op1 = 0, CRn = 0, CRm = 5 */
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 70dd70eee7a2..d691e9015c62 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -37,6 +37,15 @@ static const struct ftr_set_desc mmfr1 __initconst = {
},
};

+static const struct ftr_set_desc pfr1 __initconst = {
+ .name = "id_aa64pfr1",
+ .override = &id_aa64pfr1_override,
+ .fields = {
+ { "bt", ID_AA64PFR1_BT_SHIFT },
+ {}
+ },
+};
+
extern struct arm64_ftr_override kaslr_feature_override;

static const struct ftr_set_desc kaslr __initconst = {
@@ -52,6 +61,7 @@ static const struct ftr_set_desc kaslr __initconst = {

static const struct ftr_set_desc * const regs[] __initconst = {
&mmfr1,
+ &pfr1,
&kaslr,
};

@@ -61,6 +71,7 @@ static const struct {
} aliases[] __initconst = {
{ "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" },
{ "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" },
+ { "arm64.nobti", "id_aa64pfr1.bt=0" },
{ "nokaslr", "kaslr.disabled=1" },
};

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ae0c3d023824..617e704c980b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -628,7 +628,7 @@ static bool arm64_early_this_cpu_has_bti(void)
if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
return false;

- pfr1 = read_sysreg_s(SYS_ID_AA64PFR1_EL1);
+ pfr1 = __read_sysreg_by_encoding(SYS_ID_AA64PFR1_EL1);
return cpuid_feature_extract_unsigned_field(pfr1,
ID_AA64PFR1_BT_SHIFT);
}
--
2.29.2

2021-02-08 10:23:01

by Marc Zyngier

Subject: [PATCH v7 19/23] arm64: Move "nokaslr" over to the early cpufeature infrastructure

Given that the early cpufeature infrastructure has borrowed quite
a lot of code from the kaslr implementation, let's reimplement
the matching of the "nokaslr" option with it.

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
arch/arm64/kernel/idreg-override.c | 15 +++++++++++++
arch/arm64/kernel/kaslr.c | 36 ++----------------------------
2 files changed, 17 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index b994d689d6fb..70dd70eee7a2 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -37,8 +37,22 @@ static const struct ftr_set_desc mmfr1 __initconst = {
},
};

+extern struct arm64_ftr_override kaslr_feature_override;
+
+static const struct ftr_set_desc kaslr __initconst = {
+ .name = "kaslr",
+#ifdef CONFIG_RANDOMIZE_BASE
+ .override = &kaslr_feature_override,
+#endif
+ .fields = {
+ { "disabled", 0 },
+ {}
+ },
+};
+
static const struct ftr_set_desc * const regs[] __initconst = {
&mmfr1,
+ &kaslr,
};

static const struct {
@@ -47,6 +61,7 @@ static const struct {
} aliases[] __initconst = {
{ "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" },
{ "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" },
+ { "nokaslr", "kaslr.disabled=1" },
};

static int __init find_field(const char *cmdline,
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 5fc86e7d01a1..27f8939deb1b 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -51,39 +51,7 @@ static __init u64 get_kaslr_seed(void *fdt)
return ret;
}

-static __init bool cmdline_contains_nokaslr(const u8 *cmdline)
-{
- const u8 *str;
-
- str = strstr(cmdline, "nokaslr");
- return str == cmdline || (str > cmdline && *(str - 1) == ' ');
-}
-
-static __init bool is_kaslr_disabled_cmdline(void *fdt)
-{
- if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
- int node;
- const u8 *prop;
-
- node = fdt_path_offset(fdt, "/chosen");
- if (node < 0)
- goto out;
-
- prop = fdt_getprop(fdt, node, "bootargs", NULL);
- if (!prop)
- goto out;
-
- if (cmdline_contains_nokaslr(prop))
- return true;
-
- if (IS_ENABLED(CONFIG_CMDLINE_EXTEND))
- goto out;
-
- return false;
- }
-out:
- return cmdline_contains_nokaslr(CONFIG_CMDLINE);
-}
+struct arm64_ftr_override kaslr_feature_override __initdata;

/*
* This routine will be executed with the kernel mapped at its default virtual
@@ -126,7 +94,7 @@ u64 __init kaslr_early_init(void)
* Check if 'nokaslr' appears on the command line, and
* return 0 if that is the case.
*/
- if (is_kaslr_disabled_cmdline(fdt)) {
+ if (kaslr_feature_override.val & kaslr_feature_override.mask & 0xf) {
kaslr_status = KASLR_DISABLED_CMDLINE;
return 0;
}
--
2.29.2

2021-02-08 10:24:18

by Marc Zyngier

Subject: [PATCH v7 12/23] arm64: Extract early FDT mapping from kaslr_early_init()

As we want to parse more options very early in the kernel lifetime,
let's always map the FDT early. This is achieved by moving that
code out of kaslr_early_init().

No functional change expected.

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
arch/arm64/include/asm/setup.h | 11 +++++++++++
arch/arm64/kernel/head.S | 3 ++-
arch/arm64/kernel/kaslr.c | 7 +++----
arch/arm64/kernel/setup.c | 15 +++++++++++++++
4 files changed, 31 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/include/asm/setup.h

diff --git a/arch/arm64/include/asm/setup.h b/arch/arm64/include/asm/setup.h
new file mode 100644
index 000000000000..d3320618ed14
--- /dev/null
+++ b/arch/arm64/include/asm/setup.h
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#ifndef __ARM64_ASM_SETUP_H
+#define __ARM64_ASM_SETUP_H
+
+#include <uapi/asm/setup.h>
+
+void *get_early_fdt_ptr(void);
+void early_fdt_map(u64 dt_phys);
+
+#endif
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b425d2587cdb..d74e5f84042e 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -433,6 +433,8 @@ SYM_FUNC_START_LOCAL(__primary_switched)
bl __pi_memset
dsb ishst // Make zero page visible to PTW

+ mov x0, x21 // pass FDT address in x0
+ bl early_fdt_map // Try mapping the FDT early
bl switch_to_vhe
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
bl kasan_early_init
@@ -440,7 +442,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
#ifdef CONFIG_RANDOMIZE_BASE
tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized?
b.ne 0f
- mov x0, x21 // pass FDT address in x0
bl kaslr_early_init // parse FDT for KASLR options
cbz x0, 0f // KASLR disabled? just proceed
orr x23, x23, x0 // record KASLR offset
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..5fc86e7d01a1 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -19,6 +19,7 @@
#include <asm/memory.h>
#include <asm/mmu.h>
#include <asm/sections.h>
+#include <asm/setup.h>

enum kaslr_status {
KASLR_ENABLED,
@@ -92,12 +93,11 @@ static __init bool is_kaslr_disabled_cmdline(void *fdt)
* containing function pointers) to be reinitialized, and zero-initialized
* .bss variables will be reset to 0.
*/
-u64 __init kaslr_early_init(u64 dt_phys)
+u64 __init kaslr_early_init(void)
{
void *fdt;
u64 seed, offset, mask, module_range;
unsigned long raw;
- int size;

/*
* Set a reasonable default for module_alloc_base in case
@@ -111,8 +111,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
* and proceed with KASLR disabled. We will make another
* attempt at mapping the FDT in setup_machine()
*/
- early_fixmap_init();
- fdt = fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL);
+ fdt = get_early_fdt_ptr();
if (!fdt) {
kaslr_status = KASLR_DISABLED_FDT_REMAP;
return 0;
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index c18aacde8bb0..61845c0821d9 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -168,6 +168,21 @@ static void __init smp_build_mpidr_hash(void)
pr_warn("Large number of MPIDR hash buckets detected\n");
}

+static void *early_fdt_ptr __initdata;
+
+void __init *get_early_fdt_ptr(void)
+{
+ return early_fdt_ptr;
+}
+
+asmlinkage void __init early_fdt_map(u64 dt_phys)
+{
+ int fdt_size;
+
+ early_fixmap_init();
+ early_fdt_ptr = fixmap_remap_fdt(dt_phys, &fdt_size, PAGE_KERNEL);
+}
+
static void __init setup_machine_fdt(phys_addr_t dt_phys)
{
int size;
--
2.29.2

2021-02-08 10:24:41

by Marc Zyngier

Subject: [PATCH v7 14/23] arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line

As we want to be able to disable VHE at runtime, let's match
"id_aa64mmfr1.vh=" from the command line as an override.
This doesn't have much effect yet as our boot code doesn't look
at the cpufeature, but only at the HW registers.

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: David Brazdil <[email protected]>
Acked-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 2 ++
arch/arm64/kernel/cpufeature.c | 5 ++++-
arch/arm64/kernel/idreg-override.c | 11 +++++++++++
3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b5bf7af68691..570f1b4ba3cc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -818,6 +818,8 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
return 8;
}

+extern struct arm64_ftr_override id_aa64mmfr1_override;
+
u32 get_kvm_ipa_limit(void);
void dump_cpu_features(void);

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 97da9ed4b79d..faada5d8bea6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -557,6 +557,8 @@ static const struct arm64_ftr_bits ftr_raz[] = {

#define ARM64_FTR_REG(id, table) ARM64_FTR_REG_OVERRIDE(id, table, &no_override)

+struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override;
+
static const struct __ftr_reg_entry {
u32 sys_id;
struct arm64_ftr_reg *reg;
@@ -604,7 +606,8 @@ static const struct __ftr_reg_entry {

/* Op1 = 0, CRn = 0, CRm = 7 */
ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
- ARM64_FTR_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1),
+ ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1,
+ &id_aa64mmfr1_override),
ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2),

/* Op1 = 0, CRn = 1, CRm = 2 */
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 3a347b42d07e..2da11bf60195 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -11,6 +11,7 @@
#include <linux/libfdt.h>

#include <asm/cacheflush.h>
+#include <asm/cpufeature.h>
#include <asm/setup.h>

#define FTR_DESC_NAME_LEN 20
@@ -25,7 +26,17 @@ struct ftr_set_desc {
} fields[];
};

+static const struct ftr_set_desc mmfr1 __initconst = {
+ .name = "id_aa64mmfr1",
+ .override = &id_aa64mmfr1_override,
+ .fields = {
+ { "vh", ID_AA64MMFR1_VHE_SHIFT },
+ {}
+ },
+};
+
static const struct ftr_set_desc * const regs[] __initconst = {
+ &mmfr1,
};

static int __init find_field(const char *cmdline,
--
2.29.2

2021-02-08 10:24:54

by Marc Zyngier

Subject: [PATCH v7 21/23] arm64: Defer enabling pointer authentication on boot core

From: Srinivas Ramana <[email protected]>

Defer enabling pointer authentication on the boot core until
after it is required to be enabled by the cpufeature framework.
This will help in controlling the feature dynamically
with a boot parameter.

Signed-off-by: Ajay Patil <[email protected]>
Signed-off-by: Prasad Sodagudi <[email protected]>
Signed-off-by: Srinivas Ramana <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 10 ++++++++++
arch/arm64/include/asm/stackprotector.h | 1 +
arch/arm64/kernel/head.S | 4 ----
3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index c6b4f0603024..b112a11e9302 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -76,6 +76,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
return ptrauth_clear_pac(ptr);
}

+static __always_inline void ptrauth_enable(void)
+{
+ if (!system_supports_address_auth())
+ return;
+ sysreg_clear_set(sctlr_el1, 0, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
+ SCTLR_ELx_ENDA | SCTLR_ELx_ENDB));
+ isb();
+}
+
#define ptrauth_thread_init_user(tsk) \
ptrauth_keys_init_user(&(tsk)->thread.keys_user)
#define ptrauth_thread_init_kernel(tsk) \
@@ -84,6 +93,7 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)

#else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_enable()
#define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
#define ptrauth_strip_insn_pac(lr) (lr)
#define ptrauth_thread_init_user(tsk)
diff --git a/arch/arm64/include/asm/stackprotector.h b/arch/arm64/include/asm/stackprotector.h
index 7263e0bac680..33f1bb453150 100644
--- a/arch/arm64/include/asm/stackprotector.h
+++ b/arch/arm64/include/asm/stackprotector.h
@@ -41,6 +41,7 @@ static __always_inline void boot_init_stack_canary(void)
#endif
ptrauth_thread_init_kernel(current);
ptrauth_thread_switch_kernel(current);
+ ptrauth_enable();
}

#endif /* _ASM_STACKPROTECTOR_H */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3243e3ae9bd8..2e116ef255e1 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -404,10 +404,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
adr_l x5, init_task
msr sp_el0, x5 // Save thread_info

-#ifdef CONFIG_ARM64_PTR_AUTH
- __ptrauth_keys_init_cpu x5, x6, x7, x8
-#endif
-
adr_l x8, vectors // load VBAR_EL1 with virtual
msr vbar_el1, x8 // vector table address
isb
--
2.29.2

2021-02-08 10:26:00

by Marc Zyngier

Subject: [PATCH v7 15/23] arm64: Honor VHE being disabled from the command-line

Finally we can check whether VHE is disabled on the command line,
and not enable it if that's the user's wish.
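
The assembly check added below amounts to the following C logic
(a sketch only, with a hypothetical helper name): VHE is skipped
only when the override actually covers the VH field and forces it
to zero.

  static bool vhe_disabled_on_cmdline(const struct arm64_ftr_override *o)
  {
          u64 mask = (o->mask >> ID_AA64MMFR1_VHE_SHIFT) & 0xf;
          u64 val  = (o->val  >> ID_AA64MMFR1_VHE_SHIFT) & 0xf;

          return mask && !(val & mask);
  }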

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: David Brazdil <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
---
arch/arm64/kernel/asm-offsets.c | 3 +++
arch/arm64/kernel/hyp-stub.S | 11 +++++++++++
2 files changed, 14 insertions(+)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index f42fd9e33981..1add0f21bffe 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -99,6 +99,9 @@ int main(void)
DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack));
DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task));
BLANK();
+ DEFINE(FTR_OVR_VAL_OFFSET, offsetof(struct arm64_ftr_override, val));
+ DEFINE(FTR_OVR_MASK_OFFSET, offsetof(struct arm64_ftr_override, mask));
+ BLANK();
#ifdef CONFIG_KVM
DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt));
DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 6229315d533d..3e08dcc924b5 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -87,6 +87,17 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
ubfx x1, x1, #ID_AA64MMFR1_VHE_SHIFT, #4
cbz x1, 1f

+ // Check whether VHE is disabled from the command line
+ adr_l x1, id_aa64mmfr1_override
+ ldr x2, [x1, FTR_OVR_VAL_OFFSET]
+ ldr x1, [x1, FTR_OVR_MASK_OFFSET]
+ ubfx x2, x2, #ID_AA64MMFR1_VHE_SHIFT, #4
+ ubfx x1, x1, #ID_AA64MMFR1_VHE_SHIFT, #4
+ cmp x1, xzr
+ and x2, x2, x1
+ csinv x2, x2, xzr, ne
+ cbz x2, 1f
+
// Engage the VHE magic!
mov_q x0, HCR_HOST_VHE_FLAGS
msr hcr_el2, x0
--
2.29.2

2021-02-08 10:26:02

by Marc Zyngier

Subject: [PATCH v7 13/23] arm64: cpufeature: Add an early command-line cpufeature override facility

In order to be able to override CPU features at boot time,
let's add a command line parser that matches options of the
form "cpureg.feature=value", and store the corresponding
value into the override val/mask pair.

No features are currently defined, so no expected change in
functionality.
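
The accepted grammar is a space-separated list of such options, and
parsing stops at a standalone "--", matching the kernel's convention
that everything after it is handed to init. Once fields are defined
by later patches, a command line could carry, for example:

  id_aa64mmfr1.vh=0 id_aa64pfr1.bt=0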

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: David Brazdil <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
---
arch/arm64/kernel/Makefile | 2 +-
arch/arm64/kernel/head.S | 1 +
arch/arm64/kernel/idreg-override.c | 150 +++++++++++++++++++++++++++++
3 files changed, 152 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/idreg-override.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 86364ab6f13f..2262f0392857 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -17,7 +17,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
return_address.o cpuinfo.o cpu_errata.o \
cpufeature.o alternative.o cacheinfo.o \
smp.o smp_spin_table.o topology.o smccc-call.o \
- syscall.o proton-pack.o
+ syscall.o proton-pack.o idreg-override.o

targets += efi-entry.o

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index d74e5f84042e..3243e3ae9bd8 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -435,6 +435,7 @@ SYM_FUNC_START_LOCAL(__primary_switched)

mov x0, x21 // pass FDT address in x0
bl early_fdt_map // Try mapping the FDT early
+ bl init_feature_override
bl switch_to_vhe
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
bl kasan_early_init
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
new file mode 100644
index 000000000000..3a347b42d07e
--- /dev/null
+++ b/arch/arm64/kernel/idreg-override.c
@@ -0,0 +1,150 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Early cpufeature override framework
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Marc Zyngier <[email protected]>
+ */
+
+#include <linux/ctype.h>
+#include <linux/kernel.h>
+#include <linux/libfdt.h>
+
+#include <asm/cacheflush.h>
+#include <asm/setup.h>
+
+#define FTR_DESC_NAME_LEN 20
+#define FTR_DESC_FIELD_LEN 10
+
+struct ftr_set_desc {
+ char name[FTR_DESC_NAME_LEN];
+ struct arm64_ftr_override *override;
+ struct {
+ char name[FTR_DESC_FIELD_LEN];
+ u8 shift;
+ } fields[];
+};
+
+static const struct ftr_set_desc * const regs[] __initconst = {
+};
+
+static int __init find_field(const char *cmdline,
+ const struct ftr_set_desc *reg, int f, u64 *v)
+{
+ char opt[FTR_DESC_NAME_LEN + FTR_DESC_FIELD_LEN + 2];
+ int len;
+
+ len = snprintf(opt, ARRAY_SIZE(opt), "%s.%s=",
+ reg->name, reg->fields[f].name);
+
+ if (!parameqn(cmdline, opt, len))
+ return -1;
+
+ return kstrtou64(cmdline + len, 0, v);
+}
+
+static void __init match_options(const char *cmdline)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
+ int f;
+
+ if (!regs[i]->override)
+ continue;
+
+ for (f = 0; strlen(regs[i]->fields[f].name); f++) {
+ u64 shift = regs[i]->fields[f].shift;
+ u64 mask = 0xfUL << shift;
+ u64 v;
+
+ if (find_field(cmdline, regs[i], f, &v))
+ continue;
+
+ regs[i]->override->val &= ~mask;
+ regs[i]->override->val |= (v << shift) & mask;
+ regs[i]->override->mask |= mask;
+
+ return;
+ }
+ }
+}
+
+static __init void __parse_cmdline(const char *cmdline)
+{
+ do {
+ char buf[256];
+ size_t len;
+ int i;
+
+ cmdline = skip_spaces(cmdline);
+
+ for (len = 0; cmdline[len] && !isspace(cmdline[len]); len++);
+ if (!len)
+ return;
+
+ len = min(len, ARRAY_SIZE(buf) - 1);
+ strncpy(buf, cmdline, len);
+ buf[len] = 0;
+
+ if (strcmp(buf, "--") == 0)
+ return;
+
+ cmdline += len;
+
+ match_options(buf);
+
+ } while (1);
+}
+
+static __init void parse_cmdline(void)
+{
+ if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
+ const u8 *prop;
+ void *fdt;
+ int node;
+
+ fdt = get_early_fdt_ptr();
+ if (!fdt)
+ goto out;
+
+ node = fdt_path_offset(fdt, "/chosen");
+ if (node < 0)
+ goto out;
+
+ prop = fdt_getprop(fdt, node, "bootargs", NULL);
+ if (!prop)
+ goto out;
+
+ __parse_cmdline(prop);
+
+ if (!IS_ENABLED(CONFIG_CMDLINE_EXTEND))
+ return;
+ }
+
+out:
+ __parse_cmdline(CONFIG_CMDLINE);
+}
+
+/* Keep checkers quiet */
+void init_feature_override(void);
+
+asmlinkage void __init init_feature_override(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
+ if (regs[i]->override) {
+ regs[i]->override->val = 0;
+ regs[i]->override->mask = 0;
+ }
+ }
+
+ parse_cmdline();
+
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
+ if (regs[i]->override)
+ __flush_dcache_area(regs[i]->override,
+ sizeof(*regs[i]->override));
+ }
+}
--
2.29.2

2021-02-08 10:26:31

by Marc Zyngier

Subject: [PATCH v7 10/23] arm64: cpufeature: Add global feature override facility

Add a facility to globally override a feature, no matter what
the HW says. Yes, this sounds dangerous, but we do respect the
"safe" value for a given feature. This doesn't mean the user
doesn't need to know what they are doing.

Nothing uses this yet, so we are pretty safe. For now.
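
The per-field arbitration added to init_cpu_ftr_reg() below can be
summarised by the following sketch (names follow the patch, and
arm64_ftr_safe_value() returns the safer of the two field values):

  static s64 arbitrate_field(const struct arm64_ftr_bits *ftrp,
                             s64 ftr_ovr, s64 ftr_new)
  {
          s64 safe = arm64_ftr_safe_value(ftrp, ftr_ovr, ftr_new);

          if (ftr_ovr != safe)    /* unsafe override: ignore it */
                  return ftr_new;
          return safe;            /* honour the override */
  }

For the common LOWER_SAFE fields, this means an override can turn a
feature off, but cannot advertise one the HW doesn't implement.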

Signed-off-by: Marc Zyngier <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Acked-by: David Brazdil <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 6 ++++
arch/arm64/kernel/cpufeature.c | 45 +++++++++++++++++++++++++----
2 files changed, 45 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9a555809b89c..b1f53147e2b2 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -63,6 +63,11 @@ struct arm64_ftr_bits {
s64 safe_val; /* safe value for FTR_EXACT features */
};

+struct arm64_ftr_override {
+ u64 val;
+ u64 mask;
+};
+
/*
* @arm64_ftr_reg - Feature register
* @strict_mask Bits which should match across all CPUs for sanity.
@@ -74,6 +79,7 @@ struct arm64_ftr_reg {
u64 user_mask;
u64 sys_val;
u64 user_val;
+ struct arm64_ftr_override *override;
const struct arm64_ftr_bits *ftr_bits;
};

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99eddec0a46..a4e5c619a516 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -352,9 +352,12 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
ARM64_FTR_END,
};

+static struct arm64_ftr_override __ro_after_init no_override = { };
+
struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
.name = "SYS_CTR_EL0",
- .ftr_bits = ftr_ctr
+ .ftr_bits = ftr_ctr,
+ .override = &no_override,
};

static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
@@ -544,13 +547,16 @@ static const struct arm64_ftr_bits ftr_raz[] = {
ARM64_FTR_END,
};

-#define ARM64_FTR_REG(id, table) { \
- .sys_id = id, \
- .reg = &(struct arm64_ftr_reg){ \
- .name = #id, \
- .ftr_bits = &((table)[0]), \
+#define ARM64_FTR_REG_OVERRIDE(id, table, ovr) { \
+ .sys_id = id, \
+ .reg = &(struct arm64_ftr_reg){ \
+ .name = #id, \
+ .override = (ovr), \
+ .ftr_bits = &((table)[0]), \
}}

+#define ARM64_FTR_REG(id, table) ARM64_FTR_REG_OVERRIDE(id, table, &no_override)
+
static const struct __ftr_reg_entry {
u32 sys_id;
struct arm64_ftr_reg *reg;
@@ -770,6 +776,33 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
u64 ftr_mask = arm64_ftr_mask(ftrp);
s64 ftr_new = arm64_ftr_value(ftrp, new);
+ s64 ftr_ovr = arm64_ftr_value(ftrp, reg->override->val);
+
+ if ((ftr_mask & reg->override->mask) == ftr_mask) {
+ s64 tmp = arm64_ftr_safe_value(ftrp, ftr_ovr, ftr_new);
+ char *str = NULL;
+
+ if (ftr_ovr != tmp) {
+ /* Unsafe, remove the override */
+ reg->override->mask &= ~ftr_mask;
+ reg->override->val &= ~ftr_mask;
+ tmp = ftr_ovr;
+ str = "ignoring override";
+ } else if (ftr_new != tmp) {
+ /* Override was valid */
+ ftr_new = tmp;
+ str = "forced";
+ } else if (ftr_ovr == tmp) {
+ /* Override was the safe value */
+ str = "already set";
+ }
+
+ if (str)
+ pr_warn("%s[%d:%d]: %s to %llx\n",
+ reg->name,
+ ftrp->shift + ftrp->width - 1,
+ ftrp->shift, str, tmp);
+ }

val = arm64_ftr_set_value(ftrp, val, ftr_new);

--
2.29.2

2021-02-08 10:27:12

by Marc Zyngier

Subject: [PATCH v7 17/23] arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0

Admittedly, passing id_aa64mmfr1.vh=0 on the command-line isn't
that easy to understand, and it is likely that users would much
prefer to write "kvm-arm.mode=nvhe", or "...=protected".

So here you go. This has the added advantage that we can now
always honor the "kvm-arm.mode=protected" option, even when
booting on a VHE system.

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 3 +++
arch/arm64/kernel/idreg-override.c | 2 ++
arch/arm64/kvm/arm.c | 3 +++
3 files changed, 8 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9e3cdb271d06..2786fd39a047 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2257,6 +2257,9 @@
kvm-arm.mode=
[KVM,ARM] Select one of KVM/arm64's modes of operation.

+ nvhe: Standard nVHE-based mode, without support for
+ protected guests.
+
protected: nVHE-based mode with support for guests whose
state is kept private from the host.
Not valid if the kernel is running in EL2.
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 226bac544e20..b994d689d6fb 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -45,6 +45,8 @@ static const struct {
char alias[FTR_ALIAS_NAME_LEN];
char feature[FTR_ALIAS_OPTION_LEN];
} aliases[] __initconst = {
+ { "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" },
+ { "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" },
};

static int __init find_field(const char *cmdline,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 04c44853b103..597565a65ca2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1966,6 +1966,9 @@ static int __init early_kvm_mode_cfg(char *arg)
return 0;
}

+ if (strcmp(arg, "nvhe") == 0 && !WARN_ON(is_kernel_in_hyp_mode()))
+ return 0;
+
return -EINVAL;
}
early_param("kvm-arm.mode", early_kvm_mode_cfg);
--
2.29.2

2021-02-08 10:27:25

by Marc Zyngier

Subject: [PATCH v7 23/23] [DO NOT MERGE] arm64: Cope with CPUs stuck in VHE mode

It seems that the CPU known as Apple M1 has the terrible habit
of being stuck with HCR_EL2.E2H==1, in violation of the architecture.

Try and work around this deplorable state of affairs by detecting
the stuck bit early and short-circuiting the nVHE dance. It is still
unknown whether there are many more such nuggets to be found...

Reported-by: Hector Martin <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
---
arch/arm64/kernel/head.S | 33 ++++++++++++++++++++++++++++++---
arch/arm64/kernel/hyp-stub.S | 28 ++++++++++++++++++++++++----
2 files changed, 54 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2e116ef255e1..bce66d6bda74 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -477,14 +477,13 @@ EXPORT_SYMBOL(kimage_vaddr)
* booted in EL1 or EL2 respectively.
*/
SYM_FUNC_START(init_kernel_el)
- mov_q x0, INIT_SCTLR_EL1_MMU_OFF
- msr sctlr_el1, x0
-
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
b.eq init_el2

SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
+ mov_q x0, INIT_SCTLR_EL1_MMU_OFF
+ msr sctlr_el1, x0
isb
mov_q x0, INIT_PSTATE_EL1
msr spsr_el1, x0
@@ -504,6 +503,34 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
msr vbar_el2, x0
isb

+ /*
+ * Fruity CPUs seem to have HCR_EL2.E2H set to RES1,
+ * making it impossible to start in nVHE mode. Is that
+ * compliant with the architecture? Absolutely not!
+ */
+ mrs x0, hcr_el2
+ and x0, x0, #HCR_E2H
+ cbz x0, 1f
+
+ /* Switching to VHE requires a sane SCTLR_EL1 as a start */
+ mov_q x0, INIT_SCTLR_EL1_MMU_OFF
+ msr_s SYS_SCTLR_EL12, x0
+
+ /*
+ * Force an eret into a helper "function", and let it return
+ * to our original caller... This makes sure that we have
+ * initialised the basic PSTATE state.
+ */
+ mov x0, #INIT_PSTATE_EL2
+ msr spsr_el1, x0
+ adr_l x0, stick_to_vhe
+ msr elr_el1, x0
+ eret
+
+1:
+ mov_q x0, INIT_SCTLR_EL1_MMU_OFF
+ msr sctlr_el1, x0
+
msr elr_el2, lr
mov w0, #BOOT_CPU_MODE_EL2
eret
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 3e08dcc924b5..b55ed4af4c4a 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -27,12 +27,12 @@ SYM_CODE_START(__hyp_stub_vectors)
ventry el2_fiq_invalid // FIQ EL2t
ventry el2_error_invalid // Error EL2t

- ventry el2_sync_invalid // Synchronous EL2h
+ ventry elx_sync // Synchronous EL2h
ventry el2_irq_invalid // IRQ EL2h
ventry el2_fiq_invalid // FIQ EL2h
ventry el2_error_invalid // Error EL2h

- ventry el1_sync // Synchronous 64-bit EL1
+ ventry elx_sync // Synchronous 64-bit EL1
ventry el1_irq_invalid // IRQ 64-bit EL1
ventry el1_fiq_invalid // FIQ 64-bit EL1
ventry el1_error_invalid // Error 64-bit EL1
@@ -45,7 +45,7 @@ SYM_CODE_END(__hyp_stub_vectors)

.align 11

-SYM_CODE_START_LOCAL(el1_sync)
+SYM_CODE_START_LOCAL(elx_sync)
cmp x0, #HVC_SET_VECTORS
b.ne 1f
msr vbar_el2, x1
@@ -71,7 +71,7 @@ SYM_CODE_START_LOCAL(el1_sync)

9: mov x0, xzr
eret
-SYM_CODE_END(el1_sync)
+SYM_CODE_END(elx_sync)

// nVHE? No way! Give me the real thing!
SYM_CODE_START_LOCAL(mutate_to_vhe)
@@ -227,3 +227,23 @@ SYM_FUNC_START(switch_to_vhe)
#endif
ret
SYM_FUNC_END(switch_to_vhe)
+
+SYM_FUNC_START(stick_to_vhe)
+ /*
+ * Make sure the switch to VHE cannot fail, by overriding the
+ * override. This is hilarious.
+ */
+ adr_l x1, id_aa64mmfr1_override
+ add x1, x1, #FTR_OVR_MASK_OFFSET
+ dc civac, x1
+ dsb sy
+ isb
+ ldr x0, [x1]
+ bic x0, x0, #(0xf << ID_AA64MMFR1_VHE_SHIFT)
+ str x0, [x1]
+
+ mov x0, #HVC_VHE_RESTART
+ hvc #0
+ mov x0, #BOOT_CPU_MODE_EL2
+ ret
+SYM_FUNC_END(stick_to_vhe)
--
2.29.2

2021-02-08 14:48:52

by Will Deacon

Subject: Re: [PATCH v7 00/23] arm64: Early CPU feature override, and applications to VHE, BTI and PAuth

Hi Marc,

On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
> It recently came to light that there is a need to be able to override
> some CPU features very early on, before the kernel is fully up and
> running. The reasons for this range from specific feature support
> (such as using Protected KVM on VHE HW, which is the main motivation
> for this work) to errata workaround (a feature is broken on a CPU and
> needs to be turned off, or rather not enabled).
>
> This series tries to offer a limited framework for this kind of
> problems, by allowing a set of options to be passed on the
> command-line and altering the feature set that the cpufeature
> subsystem exposes to the rest of the kernel. Note that this doesn't
> change anything for code that directly uses the CPU ID registers.

I applied this locally, but I'm seeing consistent boot failure under QEMU when
KASAN is enabled. I tried sprinkling some __no_sanitize_address annotations
around (see below) but it didn't help. The culprit appears to be
early_fdt_map(), but looking a bit more closely, I'm really nervous about the
way we call into C functions from __primary_switched. Remember -- this code
runs _twice_ when KASLR is active: before and after the randomization. This
also means that any memory writes the first time around can be lost due to
the D-cache invalidation when (re-)creating the kernel page-tables.

Will

--->8

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index dffb16682330..751ed55261b5 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -195,7 +195,7 @@ static __init void parse_cmdline(void)
/* Keep checkers quiet */
void init_feature_override(void);

-asmlinkage void __init init_feature_override(void)
+asmlinkage void __init __no_sanitize_address init_feature_override(void)
{
int i;

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 61845c0821d9..33581de05d2e 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -170,12 +170,12 @@ static void __init smp_build_mpidr_hash(void)

static void *early_fdt_ptr __initdata;

-void __init *get_early_fdt_ptr(void)
+void __init __no_sanitize_address *get_early_fdt_ptr(void)
{
return early_fdt_ptr;
}

-asmlinkage void __init early_fdt_map(u64 dt_phys)
+asmlinkage void __init __no_sanitize_address early_fdt_map(u64 dt_phys)
{
int fdt_size;


2021-02-08 14:53:35

by Ard Biesheuvel

Subject: Re: [PATCH v7 00/23] arm64: Early CPU feature override, and applications to VHE, BTI and PAuth

On Mon, 8 Feb 2021 at 15:32, Will Deacon <[email protected]> wrote:
>
> Hi Marc,
>
> On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
> > It recently came to light that there is a need to be able to override
> > some CPU features very early on, before the kernel is fully up and
> > running. The reasons for this range from specific feature support
> > (such as using Protected KVM on VHE HW, which is the main motivation
> > for this work) to errata workaround (a feature is broken on a CPU and
> > needs to be turned off, or rather not enabled).
> >
> > This series tries to offer a limited framework for this kind of
> > problems, by allowing a set of options to be passed on the
> > command-line and altering the feature set that the cpufeature
> > subsystem exposes to the rest of the kernel. Note that this doesn't
> > change anything for code that directly uses the CPU ID registers.
>
> I applied this locally, but I'm seeing consistent boot failure under QEMU when
> KASAN is enabled. I tried sprinkling some __no_sanitize_address annotations
> around (see below) but it didn't help. The culprit appears to be
> early_fdt_map(), but looking a bit more closely, I'm really nervous about the
> way we call into C functions from __primary_switched. Remember -- this code
> runs _twice_ when KASLR is active: before and after the randomization. This
> also means that any memory writes the first time around can be lost due to
> the D-cache invalidation when (re-)creating the kernel page-tables.
>

Not just cache invalidation - BSS gets wiped again as well.

--
Ard.

2021-02-08 15:13:27

by Marc Zyngier

Subject: Re: [PATCH v7 00/23] arm64: Early CPU feature override, and applications to VHE, BTI and PAuth

Hi Will,

On 2021-02-08 14:32, Will Deacon wrote:
> Hi Marc,
>
> On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
>> It recently came to light that there is a need to be able to override
>> some CPU features very early on, before the kernel is fully up and
>> running. The reasons for this range from specific feature support
>> (such as using Protected KVM on VHE HW, which is the main motivation
>> for this work) to errata workaround (a feature is broken on a CPU and
>> needs to be turned off, or rather not enabled).
>>
>> This series tries to offer a limited framework for this kind of
>> problems, by allowing a set of options to be passed on the
>> command-line and altering the feature set that the cpufeature
>> subsystem exposes to the rest of the kernel. Note that this doesn't
>> change anything for code that directly uses the CPU ID registers.
>
> I applied this locally, but I'm seeing consistent boot failure under
> QEMU when KASAN is enabled. I tried sprinkling some
> __no_sanitize_address annotations around (see below) but it didn't
> help. The culprit appears to be early_fdt_map(), but looking a bit
> more closely, I'm really nervous about the way we call into C
> functions from __primary_switched. Remember -- this code runs _twice_
> when KASLR is active: before and after the randomization. This also
> means that any memory writes the first time around can be lost due to
> the D-cache invalidation when (re-)creating the kernel page-tables.

Well, we already call into C functions with KASLR, and nothing explodes
with that, so I must be doing something else wrong.

I do have cache maintenance for the writes to the shadow registers,
so that part should be fine. But I think I'm missing some cache
maintenance around the FDT base itself, and I wonder what happens
when we go around the loop.

I'll chase this down now.

Thanks for the heads up.

M.
--
Jazz is not dead. It just smells funny...

2021-02-08 18:41:15

by Marc Zyngier

Subject: Re: [PATCH v7 00/23] arm64: Early CPU feature override, and applications to VHE, BTI and PAuth

On 2021-02-08 14:32, Will Deacon wrote:
> Hi Marc,
>
> On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
>> It recently came to light that there is a need to be able to override
>> some CPU features very early on, before the kernel is fully up and
>> running. The reasons for this range from specific feature support
>> (such as using Protected KVM on VHE HW, which is the main motivation
>> for this work) to errata workaround (a feature is broken on a CPU and
>> needs to be turned off, or rather not enabled).
>>
>> This series tries to offer a limited framework for this kind of
>> problems, by allowing a set of options to be passed on the
>> command-line and altering the feature set that the cpufeature
>> subsystem exposes to the rest of the kernel. Note that this doesn't
>> change anything for code that directly uses the CPU ID registers.
>
> I applied this locally, but I'm seeing consistent boot failure under
> QEMU when KASAN is enabled. I tried sprinkling some
> __no_sanitize_address annotations around (see below) but it didn't
> help. The culprit appears to be early_fdt_map(), but looking a bit
> more closely, I'm really nervous about the way we call into C
> functions from __primary_switched. Remember -- this code runs _twice_
> when KASLR is active: before and after the randomization. This also
> means that any memory writes the first time around can be lost due to
> the D-cache invalidation when (re-)creating the kernel page-tables.

Nailed it. Of course, before anything starts writing from C code, we
need to have initialised KASAN. kasan_init.c itself is compiled
without any address sanitising, but we can't repaint all the stuff
that is called from early_fdt_map() (quite a lot).

So the natural thing to do is to keep kasan_early_init() as the first
thing we do in C code, and everything falls from that.

Any chance you could try that on top and see if that cures your problem?
If that works for you, I'll push an updated series.

Thanks,

M.

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index bce66d6bda74..09a5b603c950 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -429,13 +429,13 @@ SYM_FUNC_START_LOCAL(__primary_switched)
bl __pi_memset
dsb ishst // Make zero page visible to PTW

+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+ bl kasan_early_init
+#endif
mov x0, x21 // pass FDT address in x0
bl early_fdt_map // Try mapping the FDT early
bl init_feature_override
bl switch_to_vhe
-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
- bl kasan_early_init
-#endif
#ifdef CONFIG_RANDOMIZE_BASE
tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized?
b.ne 0f

--
Jazz is not dead. It just smells funny...

2021-02-08 22:39:42

by Marc Zyngier

Subject: [PATCH v7 18/23] KVM: arm64: Document HVC_VHE_RESTART stub hypercall

For completeness, let's document the HVC_VHE_RESTART stub.

Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: David Brazdil <[email protected]>
---
Documentation/virt/kvm/arm/hyp-abi.rst | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/Documentation/virt/kvm/arm/hyp-abi.rst b/Documentation/virt/kvm/arm/hyp-abi.rst
index 83cadd8186fa..4d43fbc25195 100644
--- a/Documentation/virt/kvm/arm/hyp-abi.rst
+++ b/Documentation/virt/kvm/arm/hyp-abi.rst
@@ -58,6 +58,15 @@ these functions (see arch/arm{,64}/include/asm/virt.h):
into place (arm64 only), and jump to the restart address while at HYP/EL2.
This hypercall is not expected to return to its caller.

+* ::
+
+ x0 = HVC_VHE_RESTART (arm64 only)
+
+ Attempt to upgrade the kernel's exception level from EL1 to EL2 by enabling
+ the VHE mode. This is conditioned by the CPU supporting VHE, the EL2 MMU
+ being off, and VHE not being disabled by any other means (command line
+ option, for example).
+
Any other value of r0/x0 triggers a hypervisor-specific handling,
which is not documented here.

--
2.29.2

2021-02-22 09:37:56

by Jonathan Neuschäfer

Subject: Re: [PATCH v7 23/23] [DO NOT MERGE] arm64: Cope with CPUs stuck in VHE mode

Hi,

On Mon, Feb 08, 2021 at 09:57:32AM +0000, Marc Zyngier wrote:
> It seems that the CPU known as Apple M1 has the terrible habit
> of being stuck with HCR_EL2.E2H==1, in violation of the architecture.

Minor nitpick from the sideline: The M1 SoC has two kinds of CPU in it
(Icestorm and Firestorm), which makes "CPU known as Apple M1" a bit
imprecise.

In practice, though, it seems unlikely that Icestorm and Firestorm
act differently with regard to the code in this patch.


Best regards,
Jonathan Neuschäfer

>
> Try and work around this deplorable state of affairs by detecting
> the stuck bit early and short-circuit the nVHE dance. It is still
> unknown whether there are many more such nuggets to be found...
>
> Reported-by: Hector Martin <[email protected]>
> Signed-off-by: Marc Zyngier <[email protected]>
> ---
> arch/arm64/kernel/head.S | 33 ++++++++++++++++++++++++++++++---
> arch/arm64/kernel/hyp-stub.S | 28 ++++++++++++++++++++++++----
> 2 files changed, 54 insertions(+), 7 deletions(-)
[...]



2021-02-22 09:50:10

by Marc Zyngier

Subject: Re: [PATCH v7 23/23] [DO NOT MERGE] arm64: Cope with CPUs stuck in VHE mode

Hi Jonathan,

On 2021-02-22 09:35, Jonathan Neuschäfer wrote:
> Hi,
>
> On Mon, Feb 08, 2021 at 09:57:32AM +0000, Marc Zyngier wrote:
>> It seems that the CPU known as Apple M1 has the terrible habit
>> of being stuck with HCR_EL2.E2H==1, in violation of the architecture.
>
> Minor nitpick from the sideline: The M1 SoC has two kinds of CPU in it
> (Icestorm and Firestorm), which makes "CPU known as Apple M1" a bit
> imprecise.

Fair enough. How about something along the lines of:
"At least some of the CPUs integrated in the Apple M1 SoC have
the terrible habit..."

> In practicality it seems unlikely though, that Icestorm and Firestorm
> act differently with regards to the code in this patch.

This is my hunch as well. And if they did, it shouldn't be a big deal:
the "architecture compliant" CPUs would simply transition via EL1
as expected, and join their buggy friends running at EL2 slightly later.

Thanks,

M.
--
Jazz is not dead. It just smells funny...