2024-04-15 06:54:32

by Liao, Chang

Subject: [PATCH v3 0/8] Rework the DAIF mask, unmask and track API

This patch series reworks the DAIF mask, unmask, and track API for the
upcoming FEAT_NMI extension added in Armv8.8.

Platform and virtualization[1] support for FEAT_NMI is emerging, and Mark
Brown's FEAT_NMI patch series[2] highlighted the need to clean up the
existing ad-hoc DAIF management code before adding NMI functionality.
Furthermore, we discovered some subtle bugs while transitioning 'perf' and
'ipi_backtrace' from PSEUDO_NMI to FEAT_NMI. All of this emphasizes the
importance of the rework.

This series of reworking patches follows the suggestion Mark Rutland made
in Mark Brown's patchset. In summary, he thinks the better way to manage
DAIF looks like the following:

(a) Adding entry-specific helpers to manipulate abstract exception masks
covering DAIF + PMR + ALLINT. Those need unmask-at-entry and
mask-at-exit behaviour, and today only need to manage DAIF + PMR.

It should be possible to do this ahead of ALLINT / NMI support.

(b) Adding new "logical exception mask" helpers that treat DAIF + PMR +
ALLINT as separate elements.

This series cherry-picks part of Mark Brown's FEAT_NMI series in order to
pass compilation and basic testing, including perf and ipi_backtrace.
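
For reference, a simplified sketch of the "logical exception mask" from point
(b) as it ends up in this series (patch 4); each architectural element gets
its own field rather than being folded into PSTATE.DAIF:

union arch_irqflags {
	unsigned long flags;
	struct {
		unsigned long pmr    : 8;	/* SYS_ICC_PMR_EL1 */
		unsigned long daif   : 10;	/* PSTATE.DAIF     */
		unsigned long allint : 14;	/* PSTATE.ALLINT   */
	} fields;
};
typedef union arch_irqflags arch_irqflags_t;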

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/

v2 -> v3:
1. Squash two commits that address two minor issues into Mark Brown's
original patch for detecting FEAT_NMI.
2. Add one patch that resolves the kprobe re-enter panic hit while testing
FEAT_NMI on QEMU.

v1 -> v2:
Add SoB tags following the original author's SoBs.

Liao Chang (5):
arm64: daifflags: Add logical exception masks covering DAIF + PMR +
ALLINT
arm64: Unify exception masking at entry and exit of exception
arm64: Deprecate old local_daif_{mask,save,restore}
irqchip/gic-v3: Improve the maintainability of NMI masking in GIC
driver
arm64: kprobe: Keep NMI masked while kprobe is stepping xol

Mark Brown (3):
arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
arm64/cpufeature: Detect PE support for FEAT_NMI
arm64/nmi: Add Kconfig for NMI

arch/arm64/Kconfig | 17 ++
arch/arm64/include/asm/cpufeature.h | 6 +
arch/arm64/include/asm/daifflags.h | 298 ++++++++++++++++++++++-----
arch/arm64/include/asm/nmi.h | 27 +++
arch/arm64/include/asm/sysreg.h | 2 +
arch/arm64/include/uapi/asm/ptrace.h | 1 +
arch/arm64/kernel/acpi.c | 10 +-
arch/arm64/kernel/cpufeature.c | 58 +++++-
arch/arm64/kernel/debug-monitors.c | 7 +-
arch/arm64/kernel/entry-common.c | 96 +++++----
arch/arm64/kernel/entry.S | 2 -
arch/arm64/kernel/hibernate.c | 6 +-
arch/arm64/kernel/irq.c | 2 +-
arch/arm64/kernel/machine_kexec.c | 2 +-
arch/arm64/kernel/probes/kprobes.c | 4 +-
arch/arm64/kernel/setup.c | 2 +-
arch/arm64/kernel/smp.c | 6 +-
arch/arm64/kernel/suspend.c | 6 +-
arch/arm64/kvm/hyp/vgic-v3-sr.c | 6 +-
arch/arm64/kvm/hyp/vhe/switch.c | 4 +-
arch/arm64/mm/mmu.c | 6 +-
arch/arm64/tools/cpucaps | 2 +
drivers/irqchip/irq-gic-v3.c | 6 +-
23 files changed, 442 insertions(+), 134 deletions(-)
create mode 100644 arch/arm64/include/asm/nmi.h

--
2.34.1



2024-04-15 06:54:43

by Liao, Chang

Subject: [PATCH v3 2/8] arm64/cpufeature: Detect PE support for FEAT_NMI

From: Mark Brown <[email protected]>

Use of FEAT_NMI requires that all the PEs in the system and the GIC have
NMI support. This patch implements the PE part of that detection.

In order to avoid problematic interactions between real and pseudo NMIs
we disable the architected feature if the user has enabled pseudo NMIs
on the command line. If this is done on a system where support for the
architected feature is detected then a warning is printed during boot in
order to help users spot what is likely to be a misconfiguration.

In order to allow KVM to offer the feature to guests even if pseudo NMIs
are in use by the host we have a separate feature for the raw feature
which is used in KVM.
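
As an informal illustration of how the two capabilities are meant to be
consumed (the callers below are hypothetical; system_uses_nmi() is added by
this patch and cpus_have_final_cap() is the usual capability check):

/* Host code: true only if the ARM64_USES_NMI capability was enabled. */
static void example_host_path(void)
{
	if (system_uses_nmi()) {
		/* architected NMIs (ALLINT) are in use on this system */
	}
}

/*
 * KVM-style code: ARM64_HAS_NMI reports that the PEs implement FEAT_NMI,
 * even when the host itself runs with pseudo NMIs instead.
 */
static bool example_can_offer_nmi_to_guest(void)
{
	return cpus_have_final_cap(ARM64_HAS_NMI);
}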

Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Liao Chang <[email protected]>
Signed-off-by: Jinjie Ruan <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 6 +++
arch/arm64/kernel/cpufeature.c | 58 ++++++++++++++++++++++++++++-
arch/arm64/tools/cpucaps | 2 +
3 files changed, 65 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b904a757bd3..4c35565ad656 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -800,6 +800,12 @@ static __always_inline bool system_uses_irq_prio_masking(void)
return alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
}

+static __always_inline bool system_uses_nmi(void)
+{
+ return IS_ENABLED(CONFIG_ARM64_NMI) &&
+ alternative_has_cap_likely(ARM64_USES_NMI);
+}
+
static inline bool system_supports_mte(void)
{
return alternative_has_cap_unlikely(ARM64_MTE);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 56583677c1f2..99c3bc74008d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
#include <asm/mte.h>
+#include <asm/nmi.h>
#include <asm/processor.h>
#include <asm/smp.h>
#include <asm/sysreg.h>
@@ -291,6 +292,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
};

static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_NMI_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
@@ -1076,9 +1078,11 @@ static void init_32bit_cpu_features(struct cpuinfo_32bit *info)
init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
}

-#ifdef CONFIG_ARM64_PSEUDO_NMI
+#if IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) || IS_ENABLED(CONFIG_ARM64_NMI)
static bool enable_pseudo_nmi;
+#endif

+#ifdef CONFIG_ARM64_PSEUDO_NMI
static int __init early_enable_pseudo_nmi(char *p)
{
return kstrtobool(p, &enable_pseudo_nmi);
@@ -2263,6 +2267,41 @@ static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry
}
#endif

+#ifdef CONFIG_ARM64_NMI
+static bool use_nmi(const struct arm64_cpu_capabilities *entry, int scope)
+{
+ if (!has_cpuid_feature(entry, scope))
+ return false;
+
+ /*
+ * Having both real and pseudo NMIs enabled simultaneously is
+ * likely to cause confusion. Since pseudo NMIs must be
+ * enabled with an explicit command line option, if the user
+ * has set that option on a system with real NMIs for some
+ * reason assume they know what they're doing.
+ */
+ if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && enable_pseudo_nmi) {
+ pr_info("Pseudo NMI enabled, not using architected NMI\n");
+ return false;
+ }
+
+ return true;
+}
+
+static void nmi_enable(const struct arm64_cpu_capabilities *__unused)
+{
+ /*
+ * Enable use of NMIs controlled by ALLINT, SPINTMASK should
+ * be clear by default but make it explicit that we are using
+ * this mode. Ensure that ALLINT is clear first in order to
+ * avoid leaving things masked.
+ */
+ _allint_clear();
+ sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPINTMASK, SCTLR_EL1_NMI);
+ isb();
+}
+#endif
+
#ifdef CONFIG_ARM64_BTI
static void bti_enable(const struct arm64_cpu_capabilities *__unused)
{
@@ -2861,6 +2900,23 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_nv1,
ARM64_CPUID_FIELDS_NEG(ID_AA64MMFR4_EL1, E2H0, NI_NV1)
},
+#ifdef CONFIG_ARM64_NMI
+ {
+ .desc = "Non-maskable Interrupts present",
+ .capability = ARM64_HAS_NMI,
+ .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
+ },
+ {
+ .desc = "Non-maskable Interrupts enabled",
+ .capability = ARM64_USES_NMI,
+ .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+ .matches = use_nmi,
+ .cpu_enable = nmi_enable,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
+ },
+#endif
{},
};

diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 62b2838a231a..bb62c487ef99 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -43,6 +43,7 @@ HAS_LPA2
HAS_LSE_ATOMICS
HAS_MOPS
HAS_NESTED_VIRT
+HAS_NMI
HAS_PAN
HAS_S1PIE
HAS_RAS_EXTN
@@ -71,6 +72,7 @@ SPECTRE_BHB
SSBS
SVE
UNMAP_KERNEL_AT_EL0
+USES_NMI
WORKAROUND_834220
WORKAROUND_843419
WORKAROUND_845719
--
2.34.1


2024-04-15 06:54:54

by Liao, Chang

Subject: [PATCH v3 3/8] arm64/nmi: Add Kconfig for NMI

From: Mark Brown <[email protected]>

Since NMI handling is in some fairly hot paths we provide a Kconfig option
which allows support to be compiled out when not needed.
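
For illustration, the pattern this enables in those hot paths (hypothetical
caller; system_uses_nmi() comes from the previous patch): with
CONFIG_ARM64_NMI=n, the IS_ENABLED() check inside system_uses_nmi() lets the
compiler drop the NMI-specific branch entirely.

static void example_hot_path(void)
{
	/* Compiled out when CONFIG_ARM64_NMI is not set. */
	if (system_uses_nmi())
		_allint_set();	/* PSTATE.ALLINT = 1: mask architected NMIs */
}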

Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/Kconfig | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..c7d00d0cae9e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2095,6 +2095,23 @@ config ARM64_EPAN
if the cpu does not implement the feature.
endmenu # "ARMv8.7 architectural features"

+menu "ARMv8.8 architectural features"
+
+config ARM64_NMI
+ bool "Enable support for Non-maskable Interrupts (NMI)"
+ default y
+ help
+ Non-maskable interrupts are an architecture and GIC feature
+ which allow the system to configure some interrupts to have
+ superpriority, allowing them to be handled before other
+ interrupts and masked for shorter periods of time.
+
+ The feature is detected at runtime, and will remain disabled
+ if the cpu does not implement the feature. It will also be
+ disabled if pseudo NMIs are enabled at runtime.
+
+endmenu # "ARMv8.8 architectural features"
+
config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
--
2.34.1


2024-04-15 06:55:30

by Liao, Chang

Subject: [PATCH v3 7/8] irqchip/gic-v3: Improve the maintainability of NMI masking in GIC driver

It is more maintainable to use local_nmi_enable() in the GIC driver to
unmask NMIs while keeping regular IRQs and FIQs masked, rather than
writing raw values into DAIF, PMR and ALLINT directly.
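
Roughly, the handler tail goes from an open-coded, pseudo-NMI-only sequence
to a single helper (simplified sketch; example_handle_irq_tail() is
hypothetical, the rest is taken from the hunk below and the new helper):

static void example_handle_irq_tail(void)
{
	/*
	 * Previously, only the pseudo NMI case was handled:
	 *
	 *	if (gic_prio_masking_enabled()) {
	 *		gic_pmr_mask_irqs();
	 *		gic_arch_enable_irqs();
	 *	}
	 *
	 * local_nmi_enable() re-masks IRQ/FIQ but leaves NMIs deliverable,
	 * using PMR or ALLINT depending on what the system actually uses.
	 */
	local_nmi_enable();
}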

Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/include/asm/daifflags.h | 13 +++++++++++++
drivers/irqchip/irq-gic-v3.c | 6 ++----
2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index b831def08bb3..1196eb85aa8d 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -330,4 +330,17 @@ static inline void local_errnmi_enable(void)
irqflags.fields.allint = 0;
local_allint_restore(irqflags);
}
+
+/*
+ * local_nmi_enable - Enable NMI with or without superpriority.
+ */
+static inline void local_nmi_enable(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.daif = read_sysreg(daif);
+ irqflags.fields.pmr = GIC_PRIO_IRQOFF;
+ irqflags.fields.allint = 0;
+ local_allint_restore(irqflags);
+}
#endif
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 6fb276504bcc..ed7d8d87768f 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -33,6 +33,7 @@
#include <asm/exception.h>
#include <asm/smp_plat.h>
#include <asm/virt.h>
+#include <asm/daifflags.h>

#include "irq-gic-common.h"

@@ -813,10 +814,7 @@ static void __gic_handle_irq_from_irqson(struct pt_regs *regs)
nmi_exit();
}

- if (gic_prio_masking_enabled()) {
- gic_pmr_mask_irqs();
- gic_arch_enable_irqs();
- }
+ local_nmi_enable();

if (!is_nmi)
__gic_handle_irq(irqnr, regs);
--
2.34.1


2024-04-15 06:55:38

by Liao, Chang

Subject: [PATCH v3 6/8] arm64: Deprecate old local_daif_{mask,save,restore}

The new exception masking helpers offer a simpler, more consistent, and
potentially more maintainable interface for managing DAIF + PMR + ALLINT,
whose relevant parts are selected by CONFIG_ARM64_NMI or
CONFIG_ARM64_PSEUDO_NMI.

This patch begins the deprecation of the local_daif_{mask,save,restore}
functions in favor of the newly introduced exception masking helpers on
arm64.
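
For reference, the conversions below follow a simple mapping onto the helpers
introduced earlier in this series (example_conversion() is a hypothetical
caller shown only to illustrate the type change of the saved flags):

/*
 * local_daif_mask()                      -> local_allint_mask()
 * local_daif_save_flags()                -> local_allint_save_flags()
 * local_daif_save()                      -> local_allint_save()
 * local_daif_restore(flags)              -> local_allint_restore(flags)
 * local_daif_restore(DAIF_PROCCTX)       -> local_errint_enable()
 * local_daif_restore(DAIF_PROCCTX_NOIRQ) -> local_errnmi_enable()
 * local_daif_restore(DAIF_ERRCTX)        -> local_errint_disable()
 */
static void example_conversion(void)
{
	arch_irqflags_t flags;		/* was: unsigned long flags; */

	flags = local_allint_save();	/* was: flags = local_daif_save(); */
	/* ... region that must run with everything masked ... */
	local_allint_restore(flags);	/* was: local_daif_restore(flags); */
}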

Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/include/asm/daifflags.h | 118 ++++-------------------------
arch/arm64/kernel/acpi.c | 10 +--
arch/arm64/kernel/debug-monitors.c | 7 +-
arch/arm64/kernel/hibernate.c | 6 +-
arch/arm64/kernel/irq.c | 2 +-
arch/arm64/kernel/machine_kexec.c | 2 +-
arch/arm64/kernel/setup.c | 2 +-
arch/arm64/kernel/smp.c | 6 +-
arch/arm64/kernel/suspend.c | 6 +-
arch/arm64/kvm/hyp/vgic-v3-sr.c | 6 +-
arch/arm64/kvm/hyp/vhe/switch.c | 4 +-
arch/arm64/mm/mmu.c | 6 +-
12 files changed, 44 insertions(+), 131 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 6d391d221432..b831def08bb3 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -18,109 +18,6 @@
#define DAIF_ERRCTX (PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)

-
-/* mask/save/unmask/restore all exceptions, including interrupts. */
-static inline void local_daif_mask(void)
-{
- WARN_ON(system_has_prio_mask_debugging() &&
- (read_sysreg_s(SYS_ICC_PMR_EL1) == (GIC_PRIO_IRQOFF |
- GIC_PRIO_PSR_I_SET)));
-
- asm volatile(
- "msr daifset, #0xf // local_daif_mask\n"
- :
- :
- : "memory");
-
- /* Don't really care for a dsb here, we don't intend to enable IRQs */
- if (system_uses_irq_prio_masking())
- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
- trace_hardirqs_off();
-}
-
-static inline unsigned long local_daif_save_flags(void)
-{
- unsigned long flags;
-
- flags = read_sysreg(daif);
-
- if (system_uses_irq_prio_masking()) {
- /* If IRQs are masked with PMR, reflect it in the flags */
- if (read_sysreg_s(SYS_ICC_PMR_EL1) != GIC_PRIO_IRQON)
- flags |= PSR_I_BIT | PSR_F_BIT;
- }
-
- return flags;
-}
-
-static inline unsigned long local_daif_save(void)
-{
- unsigned long flags;
-
- flags = local_daif_save_flags();
-
- local_daif_mask();
-
- return flags;
-}
-
-static inline void local_daif_restore(unsigned long flags)
-{
- bool irq_disabled = flags & PSR_I_BIT;
-
- WARN_ON(system_has_prio_mask_debugging() &&
- (read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) != (PSR_I_BIT | PSR_F_BIT));
-
- if (!irq_disabled) {
- trace_hardirqs_on();
-
- if (system_uses_irq_prio_masking()) {
- gic_write_pmr(GIC_PRIO_IRQON);
- pmr_sync();
- }
- } else if (system_uses_irq_prio_masking()) {
- u64 pmr;
-
- if (!(flags & PSR_A_BIT)) {
- /*
- * If interrupts are disabled but we can take
- * asynchronous errors, we can take NMIs
- */
- flags &= ~(PSR_I_BIT | PSR_F_BIT);
- pmr = GIC_PRIO_IRQOFF;
- } else {
- pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
- }
-
- /*
- * There has been concern that the write to daif
- * might be reordered before this write to PMR.
- * From the ARM ARM DDI 0487D.a, section D1.7.1
- * "Accessing PSTATE fields":
- * Writes to the PSTATE fields have side-effects on
- * various aspects of the PE operation. All of these
- * side-effects are guaranteed:
- * - Not to be visible to earlier instructions in
- * the execution stream.
- * - To be visible to later instructions in the
- * execution stream
- *
- * Also, writes to PMR are self-synchronizing, so no
- * interrupts with a lower priority than PMR is signaled
- * to the PE after the write.
- *
- * So we don't need additional synchronization here.
- */
- gic_write_pmr(pmr);
- }
-
- write_sysreg(flags, daif);
-
- if (irq_disabled)
- trace_hardirqs_off();
-}
-
/*
* For Arm64 processors supporting Armv8.8 or later, the kernel supports three
* types of irqflags, used for the corresponding configurations depicted below:
@@ -146,6 +43,7 @@ union arch_irqflags {
};

typedef union arch_irqflags arch_irqflags_t;
+#define ARCH_IRQFLAGS_INITIALIZER { .flags = 0UL }

static inline void __pmr_local_allint_mask(void)
{
@@ -164,6 +62,7 @@ static inline void __nmi_local_allint_mask(void)
_allint_set();
}

+/* mask/save/unmask/restore all exceptions, including interrupts. */
static inline void local_allint_mask(void)
{
asm volatile(
@@ -418,4 +317,17 @@ static inline void local_errint_enable(void)
irqflags.fields.allint = 0;
local_allint_restore(irqflags);
}
+
+/*
+ * local_errnmi_enable - Enable Serror and NMI with or without superpriority.
+ */
+static inline void local_errnmi_enable(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.daif = DAIF_PROCCTX_NOIRQ;
+ irqflags.fields.pmr = GIC_PRIO_IRQOFF;
+ irqflags.fields.allint = 0;
+ local_allint_restore(irqflags);
+}
#endif
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index dba8fcec7f33..0cda765b2ae8 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -365,12 +365,12 @@ int apei_claim_sea(struct pt_regs *regs)
{
int err = -ENOENT;
bool return_to_irqs_enabled;
- unsigned long current_flags;
+ arch_irqflags_t current_flags;

if (!IS_ENABLED(CONFIG_ACPI_APEI_GHES))
return err;

- current_flags = local_daif_save_flags();
+ current_flags = local_allint_save_flags();

/* current_flags isn't useful here as daif doesn't tell us about pNMI */
return_to_irqs_enabled = !irqs_disabled_flags(arch_local_save_flags());
@@ -382,7 +382,7 @@ int apei_claim_sea(struct pt_regs *regs)
* SEA can interrupt SError, mask it and describe this as an NMI so
* that APEI defers the handling.
*/
- local_daif_restore(DAIF_ERRCTX);
+ local_errint_disable();
nmi_enter();
err = ghes_notify_sea();
nmi_exit();
@@ -393,7 +393,7 @@ int apei_claim_sea(struct pt_regs *regs)
*/
if (!err) {
if (return_to_irqs_enabled) {
- local_daif_restore(DAIF_PROCCTX_NOIRQ);
+ local_errnmi_enable();
__irq_enter();
irq_work_run();
__irq_exit();
@@ -403,7 +403,7 @@ int apei_claim_sea(struct pt_regs *regs)
}
}

- local_daif_restore(current_flags);
+ local_allint_restore(current_flags);

return err;
}
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 64f2ecbdfe5c..559162a89a69 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -36,10 +36,11 @@ u8 debug_monitors_arch(void)
*/
static void mdscr_write(u32 mdscr)
{
- unsigned long flags;
- flags = local_daif_save();
+ arch_irqflags_t flags;
+
+ flags = local_allint_save();
write_sysreg(mdscr, mdscr_el1);
- local_daif_restore(flags);
+ local_allint_restore(flags);
}
NOKPROBE_SYMBOL(mdscr_write);

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 02870beb271e..3f0d276121d3 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -327,7 +327,7 @@ static void swsusp_mte_restore_tags(void)
int swsusp_arch_suspend(void)
{
int ret = 0;
- unsigned long flags;
+ arch_irqflags_t flags;
struct sleep_stack_data state;

if (cpus_are_stuck_in_kernel()) {
@@ -335,7 +335,7 @@ int swsusp_arch_suspend(void)
return -EBUSY;
}

- flags = local_daif_save();
+ flags = local_allint_save();

if (__cpu_suspend_enter(&state)) {
/* make the crash dump kernel image visible/saveable */
@@ -385,7 +385,7 @@ int swsusp_arch_suspend(void)
spectre_v4_enable_mitigation(NULL);
}

- local_daif_restore(flags);
+ local_allint_restore(flags);

return ret;
}
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 85087e2df564..610e6249871a 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -132,6 +132,6 @@ void __init init_IRQ(void)
* the PMR/PSR pair to a consistent state.
*/
WARN_ON(read_sysreg(daif) & PSR_A_BIT);
- local_daif_restore(DAIF_PROCCTX_NOIRQ);
+ local_errnmi_enable();
}
}
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 82e2203d86a3..412f90c188dc 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -176,7 +176,7 @@ void machine_kexec(struct kimage *kimage)

pr_info("Bye!\n");

- local_daif_mask();
+ local_allint_mask();

/*
* Both restart and kernel_reloc will shutdown the MMU, disable data
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 65a052bf741f..7f1805231efb 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -301,7 +301,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
* Unmask SError as soon as possible after initializing earlycon so
* that we can report any SErrors immediately.
*/
- local_daif_restore(DAIF_PROCCTX_NOIRQ);
+ local_errnmi_enable();

/*
* TTBR0 is only used for the identity mapping at this stage. Make it
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 4ced34f62dab..bc5191e52fee 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -264,7 +264,7 @@ asmlinkage notrace void secondary_start_kernel(void)
set_cpu_online(cpu, true);
complete(&cpu_running);

- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();

/*
* OK, it's off to the idle thread for us
@@ -371,7 +371,7 @@ void __noreturn cpu_die(void)

idle_task_exit();

- local_daif_mask();
+ local_allint_mask();

/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
cpuhp_ap_report_dead();
@@ -810,7 +810,7 @@ static void __noreturn local_cpu_stop(void)
{
set_cpu_online(smp_processor_id(), false);

- local_daif_mask();
+ local_allint_mask();
sdei_mask_local_cpu();
cpu_park_loop();
}
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index eaaff94329cd..4736015be55d 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -97,7 +97,7 @@ void notrace __cpu_suspend_exit(void)
int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
{
int ret = 0;
- unsigned long flags;
+ arch_irqflags_t flags;
struct sleep_stack_data state;
struct arm_cpuidle_irq_context context;

@@ -122,7 +122,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
* hardirqs should be firmly off by now. This really ought to use
* something like raw_local_daif_save().
*/
- flags = local_daif_save();
+ flags = local_allint_save();

/*
* Function graph tracer state gets inconsistent when the kernel
@@ -168,7 +168,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
* restored, so from this point onwards, debugging is fully
* reenabled if it was enabled when core started shutdown.
*/
- local_daif_restore(flags);
+ local_allint_restore(flags);

return ret;
}
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 6cb638b184b1..6a0d1b8cb8ef 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -414,7 +414,7 @@ void __vgic_v3_init_lrs(void)
u64 __vgic_v3_get_gic_config(void)
{
u64 val, sre = read_gicreg(ICC_SRE_EL1);
- unsigned long flags = 0;
+ arch_irqflags_t flags = ARCH_IRQFLAGS_INITIALIZER;

/*
* To check whether we have a MMIO-based (GICv2 compatible)
@@ -427,7 +427,7 @@ u64 __vgic_v3_get_gic_config(void)
* EL2.
*/
if (has_vhe())
- flags = local_daif_save();
+ flags = local_allint_save();

/*
* Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates
@@ -447,7 +447,7 @@ u64 __vgic_v3_get_gic_config(void)
isb();

if (has_vhe())
- local_daif_restore(flags);
+ local_allint_restore(flags);

val = (val & ICC_SRE_EL1_SRE) ? 0 : (1ULL << 63);
val |= read_gicreg(ICH_VTR_EL2);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..ace4fd6bce46 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -271,7 +271,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
{
int ret;

- local_daif_mask();
+ local_allint_mask();

/*
* Having IRQs masked via PMR when entering the guest means the GIC
@@ -290,7 +290,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
* local_daif_restore() takes care to properly restore PSTATE.DAIF
* and the GIC PMR if the host is using IRQ priorities.
*/
- local_daif_restore(DAIF_PROCCTX_NOIRQ);
+ local_errnmi_enable();

/*
* When we exit from the guest we change a number of CPU configuration
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 495b732d5af3..eab7608cf88d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1513,7 +1513,7 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp)
typedef void (ttbr_replace_func)(phys_addr_t);
extern ttbr_replace_func idmap_cpu_replace_ttbr1;
ttbr_replace_func *replace_phys;
- unsigned long daif;
+ arch_irqflags_t flags;

/* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */
phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp));
@@ -1529,9 +1529,9 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp)
* We really don't want to take *any* exceptions while TTBR1 is
* in the process of being replaced so mask everything.
*/
- daif = local_daif_save();
+ flags = local_allint_save();
replace_phys(ttbr1);
- local_daif_restore(daif);
+ local_allint_restore(flags);

cpu_uninstall_idmap();
}
--
2.34.1


2024-04-15 06:55:42

by Liao, Chang

Subject: [PATCH v3 5/8] arm64: Unify exception masking at entry and exit of exception

Currently, different exception types require specific masking. For example:

- Interrupt handlers: Mask IRQ, FIQ, and NMI on entry.
- Synchronous handlers: Restore exception masks to their pre-exception values.
- SError handler: Mask all interrupts and SError on entry (the strictest).
- Debug handler: Keep all exceptions masked, as at the time the exception was taken.

This patch introduces new helper functions to unify exception masking
behavior at the entry and exit of exceptions on arm64. This approach
improves code clarity and maintainability.
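
The resulting behaviour per exception type, as implemented below (informal
summary; example_el1_sync() is a hypothetical handler modelled on el1_undef()
in this patch):

/*
 * IRQ/FIQ entry:          local_allint_disable()      - IRQ, FIQ, NMI masked
 * SError entry:           local_errint_disable()      - everything masked
 * Return to process ctx:  local_errint_enable()       - everything unmasked
 * Debug entry:            local_allint_mark_enabled() - semi-masked via PMR
 */
static void noinstr example_el1_sync(struct pt_regs *regs, unsigned long esr)
{
	enter_from_kernel_mode(regs);
	local_allint_inherit(regs);	/* restore the pre-exception mask */
	/* ... handle the exception ... */
	local_allint_mask();		/* mask everything again for the exit */
	exit_to_kernel_mode(regs);
}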

Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/include/asm/daifflags.h | 81 ++++++++++++++++++-------
arch/arm64/kernel/entry-common.c | 96 ++++++++++++++----------------
arch/arm64/kernel/entry.S | 2 -
3 files changed, 105 insertions(+), 74 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index df4c4989babd..6d391d221432 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -121,28 +121,6 @@ static inline void local_daif_restore(unsigned long flags)
trace_hardirqs_off();
}

-/*
- * Called by synchronous exception handlers to restore the DAIF bits that were
- * modified by taking an exception.
- */
-static inline void local_daif_inherit(struct pt_regs *regs)
-{
- unsigned long flags = regs->pstate & DAIF_MASK;
-
- if (interrupts_enabled(regs))
- trace_hardirqs_on();
-
- if (system_uses_irq_prio_masking())
- gic_write_pmr(regs->pmr_save);
-
- /*
- * We can't use local_daif_restore(regs->pstate) here as
- * system_has_prio_mask_debugging() won't restore the I bit if it can
- * use the pmr instead.
- */
- write_sysreg(flags, daif);
-}
-
/*
* For Arm64 processors supporting Armv8.8 or later, the kernel supports three
* types of irqflags, used for the corresponding configurations depicted below:
@@ -381,4 +359,63 @@ static inline void local_allint_inherit(struct pt_regs *regs)
_allint_clear();
}
}
+
+/*
+ * local_allint_disable - Disable IRQ, FIQ and NMI, with or without
+ * superpriority.
+ */
+static inline void local_allint_disable(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.daif = DAIF_PROCCTX_NOIRQ;
+ irqflags.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
+ irqflags.fields.allint = 1;
+ local_allint_restore(irqflags);
+}
+
+/*
+ * local_allint_mark_enabled - When the kernel enables priority masking,
+ * interrupts cannot be handled until ICC_PMR_EL1 is set to GIC_PRIO_IRQON
+ * and PSTATE.IF is cleared. This helper function indicates that interrupts
+ * remain in a semi-masked state, requiring a further clearing of PSTATE.IF.
+ *
+ * The kernel will give a warning if some function tries to enable semi-masked
+ * interrupts via the arch_local_irq_enable() defined in <asm/irqflags.h>.
+ *
+ * This function is typically used before handling the Debug exception.
+ */
+static inline void local_allint_mark_enabled(void)
+{
+ if (system_uses_irq_prio_masking())
+ gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+/*
+ * local_errint_disable - Disable all types of interrupt including IRQ, FIQ,
+ * Serror and NMI, with or without superpriority.
+ */
+static inline void local_errint_disable(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.daif = DAIF_ERRCTX;
+ irqflags.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
+ irqflags.fields.allint = 1;
+ local_allint_restore(irqflags);
+}
+
+/*
+ * local_errint_enable - Enable all types of interrupt including IRQ, FIQ,
+ * Serror and NMI, with or without superpriority.
+ */
+static inline void local_errint_enable(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.daif = DAIF_PROCCTX;
+ irqflags.fields.pmr = GIC_PRIO_IRQON;
+ irqflags.fields.allint = 0;
+ local_allint_restore(irqflags);
+}
#endif
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b77a15955f28..99168223508b 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -168,7 +168,7 @@ static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
if (unlikely(flags & _TIF_WORK_MASK))
do_notify_resume(regs, flags);

- local_daif_mask();
+ local_allint_mask();

lockdep_sys_exit();
}
@@ -428,9 +428,9 @@ static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
unsigned long far = read_sysreg(far_el1);

enter_from_kernel_mode(regs);
- local_daif_inherit(regs);
+ local_allint_inherit(regs);
do_mem_abort(far, esr, regs);
- local_daif_mask();
+ local_allint_mask();
exit_to_kernel_mode(regs);
}

@@ -439,33 +439,36 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
unsigned long far = read_sysreg(far_el1);

enter_from_kernel_mode(regs);
- local_daif_inherit(regs);
+ local_allint_inherit(regs);
do_sp_pc_abort(far, esr, regs);
- local_daif_mask();
+ local_allint_mask();
exit_to_kernel_mode(regs);
}

static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
{
enter_from_kernel_mode(regs);
- local_daif_inherit(regs);
+ local_allint_inherit(regs);
do_el1_undef(regs, esr);
- local_daif_mask();
+ local_allint_mask();
exit_to_kernel_mode(regs);
}

static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
{
enter_from_kernel_mode(regs);
- local_daif_inherit(regs);
+ local_allint_inherit(regs);
do_el1_bti(regs, esr);
- local_daif_mask();
+ local_allint_mask();
exit_to_kernel_mode(regs);
}

static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
{
- unsigned long far = read_sysreg(far_el1);
+ unsigned long far;
+
+ local_allint_mark_enabled();
+ far = read_sysreg(far_el1);

arm64_enter_el1_dbg(regs);
if (!cortex_a76_erratum_1463225_debug_handler(regs))
@@ -476,9 +479,9 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
{
enter_from_kernel_mode(regs);
- local_daif_inherit(regs);
+ local_allint_inherit(regs);
do_el1_fpac(regs, esr);
- local_daif_mask();
+ local_allint_mask();
exit_to_kernel_mode(regs);
}

@@ -543,7 +546,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
static void noinstr el1_interrupt(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
- write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+ local_allint_disable();

if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
__el1_pnmi(regs, handler);
@@ -565,7 +568,7 @@ asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
{
unsigned long esr = read_sysreg(esr_el1);

- local_daif_restore(DAIF_ERRCTX);
+ local_errint_disable();
arm64_enter_nmi(regs);
do_serror(regs, esr);
arm64_exit_nmi(regs);
@@ -576,7 +579,7 @@ static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
unsigned long far = read_sysreg(far_el1);

enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_mem_abort(far, esr, regs);
exit_to_user_mode(regs);
}
@@ -594,7 +597,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
arm64_apply_bp_hardening();

enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_mem_abort(far, esr, regs);
exit_to_user_mode(regs);
}
@@ -602,7 +605,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_fpsimd_acc(esr, regs);
exit_to_user_mode(regs);
}
@@ -610,7 +613,7 @@ static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_sve_acc(esr, regs);
exit_to_user_mode(regs);
}
@@ -618,7 +621,7 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_sme_acc(esr, regs);
exit_to_user_mode(regs);
}
@@ -626,7 +629,7 @@ static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_fpsimd_exc(esr, regs);
exit_to_user_mode(regs);
}
@@ -634,7 +637,7 @@ static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_sys(esr, regs);
exit_to_user_mode(regs);
}
@@ -647,7 +650,7 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
arm64_apply_bp_hardening();

enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_sp_pc_abort(far, esr, regs);
exit_to_user_mode(regs);
}
@@ -655,7 +658,7 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_sp_pc_abort(regs->sp, esr, regs);
exit_to_user_mode(regs);
}
@@ -663,7 +666,7 @@ static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_undef(regs, esr);
exit_to_user_mode(regs);
}
@@ -671,7 +674,7 @@ static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_bti(struct pt_regs *regs)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_bti(regs);
exit_to_user_mode(regs);
}
@@ -679,7 +682,7 @@ static void noinstr el0_bti(struct pt_regs *regs)
static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_mops(regs, esr);
exit_to_user_mode(regs);
}
@@ -687,7 +690,7 @@ static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
bad_el0_sync(regs, 0, esr);
exit_to_user_mode(regs);
}
@@ -695,11 +698,14 @@ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
{
/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
- unsigned long far = read_sysreg(far_el1);
+ unsigned long far;
+
+ local_allint_mark_enabled();
+ far = read_sysreg(far_el1);

enter_from_user_mode(regs);
do_debug_exception(far, esr, regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
exit_to_user_mode(regs);
}

@@ -708,7 +714,7 @@ static void noinstr el0_svc(struct pt_regs *regs)
enter_from_user_mode(regs);
cortex_a76_erratum_1463225_svc_handler();
fp_user_discard();
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_svc(regs);
exit_to_user_mode(regs);
}
@@ -716,7 +722,7 @@ static void noinstr el0_svc(struct pt_regs *regs)
static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_fpac(regs, esr);
exit_to_user_mode(regs);
}
@@ -785,7 +791,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
{
enter_from_user_mode(regs);

- write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+ local_allint_disable();

if (regs->pc & BIT(55))
arm64_apply_bp_hardening();
@@ -797,24 +803,14 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
exit_to_user_mode(regs);
}

-static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
-{
- el0_interrupt(regs, handle_arch_irq);
-}
-
asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
{
- __el0_irq_handler_common(regs);
-}
-
-static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
-{
- el0_interrupt(regs, handle_arch_fiq);
+ el0_interrupt(regs, handle_arch_irq);
}

asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
{
- __el0_fiq_handler_common(regs);
+ el0_interrupt(regs, handle_arch_fiq);
}

static void noinstr __el0_error_handler_common(struct pt_regs *regs)
@@ -822,11 +818,11 @@ static void noinstr __el0_error_handler_common(struct pt_regs *regs)
unsigned long esr = read_sysreg(esr_el1);

enter_from_user_mode(regs);
- local_daif_restore(DAIF_ERRCTX);
+ local_errint_disable();
arm64_enter_nmi(regs);
do_serror(regs, esr);
arm64_exit_nmi(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
exit_to_user_mode(regs);
}

@@ -839,7 +835,7 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_cp15(esr, regs);
exit_to_user_mode(regs);
}
@@ -848,7 +844,7 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
{
enter_from_user_mode(regs);
cortex_a76_erratum_1463225_svc_handler();
- local_daif_restore(DAIF_PROCCTX);
+ local_errint_enable();
do_el0_svc_compat(regs);
exit_to_user_mode(regs);
}
@@ -899,12 +895,12 @@ asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)

asmlinkage void noinstr el0t_32_irq_handler(struct pt_regs *regs)
{
- __el0_irq_handler_common(regs);
+ el0_interrupt(regs, handle_arch_irq);
}

asmlinkage void noinstr el0t_32_fiq_handler(struct pt_regs *regs)
{
- __el0_fiq_handler_common(regs);
+ el0_interrupt(regs, handle_arch_fiq);
}

asmlinkage void noinstr el0t_32_error_handler(struct pt_regs *regs)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7ef0e127b149..0b311fefedc2 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -316,8 +316,6 @@ alternative_else_nop_endif

mrs_s x20, SYS_ICC_PMR_EL1
str x20, [sp, #S_PMR_SAVE]
- mov x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
- msr_s SYS_ICC_PMR_EL1, x20

.Lskip_pmr_save\@:
#endif
--
2.34.1


2024-04-15 06:55:59

by Liao, Chang

Subject: [PATCH v3 4/8] arm64: daifflags: Add logical exception masks covering DAIF + PMR + ALLINT

In Mark Brown's FEAT_NMI support patchset [1], Mark Rutland suggested
refactoring the DAIF management code by adding new "logical exception
mask" helpers that treat DAIF + PMR + ALLINT as separate elements.

This patch adds a series of new exception mask helpers, prefixed with
"local_allint_", with interfaces similar to their existing counterparts.
The usage and behavior of the new helpers are meant to align with the old
ones; otherwise, unexpected results will occur.

[1] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/
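
As a usage sketch (example_user() is hypothetical), the new helpers are meant
to be drop-in analogues of the local_daif_*() ones, with the saved state
widened to cover PMR and ALLINT as well as DAIF:

static void example_user(void)
{
	arch_irqflags_t flags;

	/*
	 * Masks IRQ/FIQ/SError/Debug and, depending on configuration, also
	 * uses PMR (pseudo NMI) or ALLINT (FEAT_NMI) so NMIs are masked too.
	 */
	flags = local_allint_save();

	/* ... nothing, not even an NMI, should interrupt this region ... */

	local_allint_restore(flags);
}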

Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/include/asm/daifflags.h | 240 +++++++++++++++++++++++++++
arch/arm64/include/uapi/asm/ptrace.h | 1 +
2 files changed, 241 insertions(+)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 55f57dfa8e2f..df4c4989babd 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -11,6 +11,7 @@
#include <asm/barrier.h>
#include <asm/cpufeature.h>
#include <asm/ptrace.h>
+#include <asm/nmi.h>

#define DAIF_PROCCTX 0
#define DAIF_PROCCTX_NOIRQ (PSR_I_BIT | PSR_F_BIT)
@@ -141,4 +142,243 @@ static inline void local_daif_inherit(struct pt_regs *regs)
*/
write_sysreg(flags, daif);
}
+
+/*
+ * For Arm64 processors supporting Armv8.8 or later, the kernel supports three
+ * types of irqflags, used for the corresponding configurations depicted below:
+ *
+ * 1. When CONFIG_ARM64_PSEUDO_NMI and CONFIG_ARM64_NMI are not 'y', kernel
+ * does not support handling NMI.
+ *
+ * 2. When CONFIG_ARM64_PSEUDO_NMI=y and irqchip.gicv3_pseudo_nmi=1, kernel
+ * makes use of the CPU Interface PMR and GIC priority feature to support
+ * handling NMI.
+ *
+ * 3. When CONFIG_ARM64_NMI=y and irqchip.gicv3_pseudo_nmi is not enabled,
+ * kernel makes use of the FEAT_NMI extension added since Armv8.8 to
+ * support handling NMI.
+ */
+union arch_irqflags {
+ unsigned long flags;
+ struct {
+ unsigned long pmr : 8; // SYS_ICC_PMR_EL1
+ unsigned long daif : 10; // PSTATE.DAIF at bits[6-9]
+ unsigned long allint : 14; // PSTATE.ALLINT at bits[13]
+ } fields;
+};
+
+typedef union arch_irqflags arch_irqflags_t;
+
+static inline void __pmr_local_allint_mask(void)
+{
+ WARN_ON(system_has_prio_mask_debugging() &&
+ (read_sysreg_s(SYS_ICC_PMR_EL1) ==
+ (GIC_PRIO_IRQOFF | GIC_PRIO_PSR_I_SET)));
+ /*
+ * Don't really care for a dsb here, we don't intend to enable
+ * IRQs.
+ */
+ gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+static inline void __nmi_local_allint_mask(void)
+{
+ _allint_set();
+}
+
+static inline void local_allint_mask(void)
+{
+ asm volatile(
+ "msr daifset, #0xf // local_daif_mask\n"
+ :
+ :
+ : "memory");
+
+ if (system_uses_irq_prio_masking())
+ __pmr_local_allint_mask();
+ else if (system_uses_nmi())
+ __nmi_local_allint_mask();
+
+ trace_hardirqs_off();
+}
+
+static inline arch_irqflags_t __pmr_local_allint_save_flags(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
+ irqflags.fields.daif = read_sysreg(daif);
+ irqflags.fields.allint = 0;
+ /*
+ * If IRQs are masked with PMR, reflect it in the daif of irqflags.
+ * If NMIs and IRQs are masked with PMR, reflect it in both the daif and
+ * allint of irqflags; this avoids the need to check PSTATE.A in
+ * local_allint_restore() to determine whether NMIs are masked.
+ */
+ switch (irqflags.fields.pmr) {
+ case GIC_PRIO_IRQON:
+ break;
+
+ case __GIC_PRIO_IRQOFF:
+ case __GIC_PRIO_IRQOFF_NS:
+ irqflags.fields.daif |= PSR_I_BIT | PSR_F_BIT;
+ break;
+
+ case GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET:
+ irqflags.fields.allint = 1;
+ break;
+
+ default:
+ WARN_ON(1);
+ }
+
+ return irqflags;
+}
+
+static inline arch_irqflags_t __nmi_local_allint_save_flags(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags.fields.daif = read_sysreg(daif);
+ irqflags.fields.allint = read_sysreg_s(SYS_ALLINT);
+
+ return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save_flags(void)
+{
+ arch_irqflags_t irqflags = { .flags = 0UL };
+
+ if (system_uses_irq_prio_masking())
+ return __pmr_local_allint_save_flags();
+ else if (system_uses_nmi())
+ return __nmi_local_allint_save_flags();
+
+ irqflags.fields.daif = read_sysreg(daif);
+ return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save(void)
+{
+ arch_irqflags_t irqflags;
+
+ irqflags = local_allint_save_flags();
+
+ local_allint_mask();
+
+ return irqflags;
+}
+
+static inline void gic_pmr_prio_check(void)
+{
+ WARN_ON(system_has_prio_mask_debugging() &&
+ (read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) !=
+ (PSR_I_BIT | PSR_F_BIT));
+}
+
+static inline void __pmr_local_allint_restore(arch_irqflags_t irqflags)
+{
+ unsigned long pmr = irqflags.fields.pmr;
+ unsigned long daif = irqflags.fields.daif;
+ unsigned long allint = irqflags.fields.allint;
+
+ gic_pmr_prio_check();
+
+ gic_write_pmr(pmr);
+
+ if (!(daif & PSR_I_BIT)) {
+ pmr_sync();
+ } else if (!allint) {
+ /*
+ * Use irqflags.fields.allint to indicate whether we can take
+ * NMIs, instead of the old hack that uses PSTATE.A.
+ *
+ * There has been concern that the write to daif
+ * might be reordered before this write to PMR.
+ * From the ARM ARM DDI 0487D.a, section D1.7.1
+ * "Accessing PSTATE fields":
+ * Writes to the PSTATE fields have side-effects on
+ * various aspects of the PE operation. All of these
+ * side-effects are guaranteed:
+ * - Not to be visible to earlier instructions in
+ * the execution stream.
+ * - To be visible to later instructions in the
+ * execution stream
+ *
+ * Also, writes to PMR are self-synchronizing, so no
+ * interrupts with a lower priority than PMR is signaled
+ * to the PE after the write.
+ *
+ * So we don't need additional synchronization here.
+ */
+ daif &= ~(PSR_I_BIT | PSR_F_BIT);
+ }
+ write_sysreg(daif, daif);
+}
+
+static inline void __nmi_local_allint_restore(arch_irqflags_t irqflags)
+{
+ if (irqflags.fields.allint)
+ _allint_set();
+ else
+ _allint_clear();
+
+ write_sysreg(irqflags.fields.daif, daif);
+}
+
+static inline int local_allint_disabled(arch_irqflags_t irqflags)
+{
+ return irqflags.fields.allint || (irqflags.fields.daif & PSR_I_BIT);
+}
+
+/*
+ * This has to consider the different kernel configurations and parameters
+ * that need corresponding operations to mask interrupts properly. For example,
+ * the kernel has NMI support disabled, the kernel uses priority masking to
+ * support pseudo NMI, or the kernel uses the FEAT_NMI extension to support NMI.
+ */
+static inline void local_allint_restore(arch_irqflags_t irqflags)
+{
+ int irq_disabled = local_allint_disabled(irqflags);
+
+ if (!irq_disabled)
+ trace_hardirqs_on();
+
+ if (system_uses_irq_prio_masking())
+ __pmr_local_allint_restore(irqflags);
+ else if (system_uses_nmi())
+ __nmi_local_allint_restore(irqflags);
+ else
+ write_sysreg(irqflags.fields.daif, daif);
+
+ if (irq_disabled)
+ trace_hardirqs_off();
+}
+
+/*
+ * Called by synchronous exception handlers to restore the DAIF bits that were
+ * modified by taking an exception.
+ */
+static inline void local_allint_inherit(struct pt_regs *regs)
+{
+ if (interrupts_enabled(regs))
+ trace_hardirqs_on();
+
+ if (system_uses_irq_prio_masking())
+ gic_write_pmr(regs->pmr_save);
+
+ /*
+ * We can't use local_daif_restore(regs->pstate) here as
+ * system_has_prio_mask_debugging() won't restore the I bit if it can
+ * use the pmr instead.
+ */
+ write_sysreg(regs->pstate & DAIF_MASK, daif);
+
+ if (system_uses_nmi()) {
+ if (regs->pstate & PSR_ALLINT_BIT)
+ _allint_set();
+ else
+ _allint_clear();
+ }
+}
#endif
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 7fa2f7036aa7..8a125a1986be 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -48,6 +48,7 @@
#define PSR_D_BIT 0x00000200
#define PSR_BTYPE_MASK 0x00000c00
#define PSR_SSBS_BIT 0x00001000
+#define PSR_ALLINT_BIT 0x00002000
#define PSR_PAN_BIT 0x00400000
#define PSR_UAO_BIT 0x00800000
#define PSR_DIT_BIT 0x01000000
--
2.34.1


2024-04-15 06:56:25

by Liao, Chang

Subject: [PATCH v3 1/8] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT

From: Mark Brown <[email protected]>

Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
using an immediate rather than requiring that a register be loaded with
the value to write. Since these don't currently fit within the scheme we
have for sysreg generation, add manual encodings like we currently do for
other similar registers such as SVCR.

Since it is required that these immediate versions be encoded with xzr
as the source register, provide asm wrappers which ensure this is the
case.
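
For illustration, a minimal (hypothetical) use of the wrappers; both expand
to an MSR immediate with XZR as the source register:

static void example_nmi_critical_section(void)
{
	_allint_set();		/* PSTATE.ALLINT = 1: architected NMIs masked   */
	/* ... code that must not be interrupted by an NMI ... */
	_allint_clear();	/* PSTATE.ALLINT = 0: architected NMIs unmasked */
}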

Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/include/asm/nmi.h | 27 +++++++++++++++++++++++++++
arch/arm64/include/asm/sysreg.h | 2 ++
2 files changed, 29 insertions(+)
create mode 100644 arch/arm64/include/asm/nmi.h

diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
new file mode 100644
index 000000000000..0c566c649485
--- /dev/null
+++ b/arch/arm64/include/asm/nmi.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_NMI_H
+#define __ASM_NMI_H
+
+#ifndef __ASSEMBLER__
+
+#include <linux/cpumask.h>
+
+extern bool arm64_supports_nmi(void);
+
+#endif /* !__ASSEMBLER__ */
+
+static __always_inline void _allint_clear(void)
+{
+ asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
+}
+
+static __always_inline void _allint_set(void)
+{
+ asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
+}
+
+#endif
+
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9e8999592f3a..b105773c57ca 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -167,6 +167,8 @@
* System registers, organised loosely by encoding but grouped together
* where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
*/
+#define SYS_ALLINT_CLR sys_reg(0, 1, 4, 0, 0)
+#define SYS_ALLINT_SET sys_reg(0, 1, 4, 1, 0)
#define SYS_SVCR_SMSTOP_SM_EL0 sys_reg(0, 3, 4, 2, 3)
#define SYS_SVCR_SMSTART_SM_EL0 sys_reg(0, 3, 4, 3, 3)
#define SYS_SVCR_SMSTOP_SMZA_EL0 sys_reg(0, 3, 4, 6, 3)
--
2.34.1


2024-04-15 06:56:38

by Liao, Chang

Subject: [PATCH v3 8/8] arm64: kprobe: Keep NMI masked while kprobe is stepping xol

Keep NMIs masked while executing an instruction out of line; otherwise,
adding a kprobe to a function invoked while handling an NMI will cause a
kprobe re-entry bug and a kernel panic.
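
Roughly, the single-step flow after this change (simplified sketch;
example_single_step() condenses the two helpers changed below into one
function for readability):

static void example_single_step(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
{
	/*
	 * Before stepping the copied instruction: save the current DAIF
	 * state and mask D, A, I, F and, when FEAT_NMI is in use, ALLINT,
	 * so an NMI cannot hit a kprobed function and re-enter the handler.
	 */
	kcb->saved_irqflag = regs->pstate & DAIF_MASK;
	regs->pstate |= DAIF_ALLINT_MASK;

	/* ... single-step the instruction in the xol slot ... */

	/* After stepping: drop the temporary mask, restore the saved state. */
	regs->pstate &= ~DAIF_ALLINT_MASK;
	regs->pstate |= kcb->saved_irqflag;
}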

Signed-off-by: Liao Chang <[email protected]>
---
arch/arm64/include/asm/daifflags.h | 2 ++
arch/arm64/kernel/probes/kprobes.c | 4 ++--
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 1196eb85aa8d..60fd3b25fd73 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -17,6 +17,8 @@
#define DAIF_PROCCTX_NOIRQ (PSR_I_BIT | PSR_F_BIT)
#define DAIF_ERRCTX (PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+#define DAIF_ALLINT_MASK \
+ (system_uses_nmi() ? (ALLINT_ALLINT | DAIF_MASK) : (DAIF_MASK))

/*
* For Arm64 processor support Armv8.8 or later, kernel supports three types
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 327855a11df2..e8c2b993bbb8 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -187,13 +187,13 @@ static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
kcb->saved_irqflag = regs->pstate & DAIF_MASK;
- regs->pstate |= DAIF_MASK;
+ regs->pstate |= DAIF_ALLINT_MASK;
}

static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- regs->pstate &= ~DAIF_MASK;
+ regs->pstate &= ~DAIF_ALLINT_MASK;
regs->pstate |= kcb->saved_irqflag;
}

--
2.34.1


2024-05-03 16:01:13

by Mark Rutland

Subject: Re: [PATCH v3 1/8] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT

On Mon, Apr 15, 2024 at 06:47:51AM +0000, Liao Chang wrote:
> From: Mark Brown <[email protected]>
>
> Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
> using an immediate rather than requiring that a register be loaded with
> the value to write. Since these don't currently fit within the scheme we
> have for sysreg generation, add manual encodings like we currently do for
> other similar registers such as SVCR.
>
> Since it is required that these immediate versions be encoded with xzr
> as the source register, provide asm wrappers which ensure this is the
> case.
>
> Signed-off-by: Mark Brown <[email protected]>
> Signed-off-by: Liao Chang <[email protected]>
> ---
> arch/arm64/include/asm/nmi.h | 27 +++++++++++++++++++++++++++
> arch/arm64/include/asm/sysreg.h | 2 ++
> 2 files changed, 29 insertions(+)
> create mode 100644 arch/arm64/include/asm/nmi.h

We have helpers for manipulating PSTATE bits; AFAICT we only need the three
lines below:

----8<----
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9e8999592f3af..5c209d07ae57e 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -94,18 +94,21 @@

#define PSTATE_PAN pstate_field(0, 4)
#define PSTATE_UAO pstate_field(0, 3)
+#define PSTATE_ALLINT pstate_field(1, 0)
#define PSTATE_SSBS pstate_field(3, 1)
#define PSTATE_DIT pstate_field(3, 2)
#define PSTATE_TCO pstate_field(3, 4)

#define SET_PSTATE_PAN(x) SET_PSTATE((x), PAN)
#define SET_PSTATE_UAO(x) SET_PSTATE((x), UAO)
+#define SET_PSTATE_ALLINT(x) SET_PSTATE((x), ALLINT)
#define SET_PSTATE_SSBS(x) SET_PSTATE((x), SSBS)
#define SET_PSTATE_DIT(x) SET_PSTATE((x), DIT)
#define SET_PSTATE_TCO(x) SET_PSTATE((x), TCO)

#define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x))
#define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x))
+#define set_pstate_allint(x) asm volatile(SET_PSTATE_ALLINT(x))
#define set_pstate_ssbs(x) asm volatile(SET_PSTATE_SSBS(x))
#define set_pstate_dit(x) asm volatile(SET_PSTATE_DIT(x))
---->8----
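
With those definitions, the ALLINT manipulation in the series could
presumably be reduced to something like (illustrative only):

static inline void example_allint_mask(void)
{
	set_pstate_allint(1);	/* PSTATE.ALLINT = 1: NMIs masked   */
}

static inline void example_allint_unmask(void)
{
	set_pstate_allint(0);	/* PSTATE.ALLINT = 0: NMIs unmasked */
}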

The addition of <asm/nmi.h> and references to <linux/cpumask.h> and
arm64_supports_nmi() don't seem like they should be part of this patch.

Mark.

>
> diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
> new file mode 100644
> index 000000000000..0c566c649485
> --- /dev/null
> +++ b/arch/arm64/include/asm/nmi.h
> @@ -0,0 +1,27 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +#ifndef __ASM_NMI_H
> +#define __ASM_NMI_H
> +
> +#ifndef __ASSEMBLER__
> +
> +#include <linux/cpumask.h>
> +
> +extern bool arm64_supports_nmi(void);
> +
> +#endif /* !__ASSEMBLER__ */
> +
> +static __always_inline void _allint_clear(void)
> +{
> + asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
> +}
> +
> +static __always_inline void _allint_set(void)
> +{
> + asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
> +}
> +
> +#endif
> +
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 9e8999592f3a..b105773c57ca 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -167,6 +167,8 @@
> * System registers, organised loosely by encoding but grouped together
> * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
> */
> +#define SYS_ALLINT_CLR sys_reg(0, 1, 4, 0, 0)
> +#define SYS_ALLINT_SET sys_reg(0, 1, 4, 1, 0)
> #define SYS_SVCR_SMSTOP_SM_EL0 sys_reg(0, 3, 4, 2, 3)
> #define SYS_SVCR_SMSTART_SM_EL0 sys_reg(0, 3, 4, 3, 3)
> #define SYS_SVCR_SMSTOP_SMZA_EL0 sys_reg(0, 3, 4, 6, 3)
> --
> 2.34.1
>

2024-05-03 17:10:36

by Mark Rutland

Subject: Re: [PATCH v3 0/8] Rework the DAIF mask, unmask and track API

Hi,

On Mon, Apr 15, 2024 at 06:47:50AM +0000, Liao Chang wrote:
> This patch series reworks the DAIF mask, unmask, and track API for the
> upcoming FEAT_NMI extension added in Armv8.8.
>
> Platform and virtualization[1] support for FEAT_NMI is emerging, and Mark
> Brown's FEAT_NMI patch series[2] highlighted the need to clean up the
> existing ad-hoc DAIF management code before adding NMI functionality.
> Furthermore, we discovered some subtle bugs while transitioning 'perf' and
> 'ipi_backtrace' from PSEUDO_NMI to FEAT_NMI. All of this emphasizes the
> importance of the rework.
>
> This series of reworking patches follows the suggestion Mark Rutland made
> in Mark Brown's patchset. In summary, he thinks the better way to manage
> DAIF looks like the following:
>
> (a) Adding entry-specific helpers to manipulate abstract exception masks
> covering DAIF + PMR + ALLINT. Those need unmask-at-entry and
> mask-at-exit behaviour, and today only need to manage DAIF + PMR.
>
> It should be possible to do this ahead of ALLINT / NMI support.
>
> (b) Adding new "logical exception mask" helpers that treat DAIF + PMR +
> ALLINT as separate elements.

I've started looking at this in the series. There are some subtleties here, and
I don't think the helpers in this series are quite right as-is. I will try to
get back to you next week with a description of those; it'll take a short while
to write that up correctly and clearly and I don't trust myself to rush that
last thing on a Friday.

Thanks,
Mark.

>
> This series cherry-picks part of Mark Brown's FEAT_NMI series in order to
> pass compilation and basic testing, including perf and ipi_backtrace.
>
> [1] https://lore.kernel.org/all/[email protected]/
> [2] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/
>
> v2 -> v3:
> 1. Squash two commits that address two minor issues into Mark Brown's
> original patch for detecting FEAT_NMI.
> 2. Add one patch that resolves the kprobe re-enter panic hit while testing
> FEAT_NMI on QEMU.
>
> v1 -> v2:
> Add SoB tags following the original author's SoBs.
>
> Liao Chang (5):
> arm64: daifflags: Add logical exception masks covering DAIF + PMR +
> ALLINT
> arm64: Unify exception masking at entry and exit of exception
> arm64: Deprecate old local_daif_{mask,save,restore}
> irqchip/gic-v3: Improve the maintainability of NMI masking in GIC
> driver
> arm64: kprobe: Keep NMI masked while kprobe is stepping xol
>
> Mark Brown (3):
> arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
> arm64/cpufeature: Detect PE support for FEAT_NMI
> arm64/nmi: Add Kconfig for NMI
>
> arch/arm64/Kconfig | 17 ++
> arch/arm64/include/asm/cpufeature.h | 6 +
> arch/arm64/include/asm/daifflags.h | 298 ++++++++++++++++++++++-----
> arch/arm64/include/asm/nmi.h | 27 +++
> arch/arm64/include/asm/sysreg.h | 2 +
> arch/arm64/include/uapi/asm/ptrace.h | 1 +
> arch/arm64/kernel/acpi.c | 10 +-
> arch/arm64/kernel/cpufeature.c | 58 +++++-
> arch/arm64/kernel/debug-monitors.c | 7 +-
> arch/arm64/kernel/entry-common.c | 96 +++++----
> arch/arm64/kernel/entry.S | 2 -
> arch/arm64/kernel/hibernate.c | 6 +-
> arch/arm64/kernel/irq.c | 2 +-
> arch/arm64/kernel/machine_kexec.c | 2 +-
> arch/arm64/kernel/probes/kprobes.c | 4 +-
> arch/arm64/kernel/setup.c | 2 +-
> arch/arm64/kernel/smp.c | 6 +-
> arch/arm64/kernel/suspend.c | 6 +-
> arch/arm64/kvm/hyp/vgic-v3-sr.c | 6 +-
> arch/arm64/kvm/hyp/vhe/switch.c | 4 +-
> arch/arm64/mm/mmu.c | 6 +-
> arch/arm64/tools/cpucaps | 2 +
> drivers/irqchip/irq-gic-v3.c | 6 +-
> 23 files changed, 442 insertions(+), 134 deletions(-)
> create mode 100644 arch/arm64/include/asm/nmi.h
>
> --
> 2.34.1
>
>