Sorry for resending this patch series; I forgot to Cc the open list before.
The formal content follows below.
The existing RISC-V kernel lacks an NMI mechanism, as the RISC-V community
has not yet ratified a resumable NMI extension. This cannot satisfy
scenarios such as high-precision perf sampling. There is an upcoming
hardware extension called Smrnmi which supports resumable NMIs by
providing new control registers to save status when an NMI happens.
However, it is still a draft, and it requires privilege level switches for
the kernel to utilize it, as NMIs are automatically trapped into machine mode.
This patch series introduces a software pseudo NMI mechanism for RISC-V.
The existing RISC-V kernel disables interrupts via the per-CPU control
register CSR_STATUS, whose SIE bit controls the enablement of all
interrupts on the CPU. When the SIE bit is clear, no interrupt is enabled.
This patch series implements NMIs by switching the interrupt disabling
mechanism to another per-CPU control register, CSR_IE. This register
controls the enablement of each individual interrupt: each bit of CSR_IE
corresponds to a single major interrupt, and a clear bit disables the
corresponding interrupt.
To implement pseudo NMI, we switch to CSR_IE masking when disabling
irqs. When interrupts are disabled, all bits of CSR_IE corresponding to
normal interrupts are cleared, while the bits corresponding to NMIs are
kept set. The SIE bit of CSR_STATUS is now left untouched and always
kept set.
We measured the performance of the pseudo NMI patches, based on v6.6-rc4,
on a SiFive FU740 SoC with hackbench as our benchmark. The result shows
a 1.90% performance degradation.
"hackbench 200 process 1000" (average over 10 runs)
+-----------+----------+------------+
| | v6.6-rc4 | Pseudo NMI |
+-----------+----------+------------+
| time | 251.646s | 256.416s |
+-----------+----------+------------+
The overhead mainly comes from two parts:
1. Saving and restoring the CSR_IE register during kernel entry/return.
This part introduces about 0.57% performance overhead.
2. The extra instructions introduced by 'irqs_enabled_ie', a special
value representing the normal CSR_IE contents when irqs are enabled. It
is implemented via ALTERNATIVE to adapt to platforms without a PMU. This
part introduces about 1.32% performance overhead.
Limitations:
CSR_IE is now used for disabling irqs, so no other code should touch
this register, to avoid corrupting the irq status. This means we do
not support masking a single interrupt for now.
We have tried to fix this by introducing a per-CPU variable that saves
the CSR_IE value when irqs are disabled. All operations on CSR_IE would
then be redirected to this variable, and CSR_IE's value would be
restored from it when irqs are re-enabled. Obviously, this method
introduces extra memory accesses on the hot code path.
TODO:
1. The adaptation to the hypervisor extension is ongoing.
2. The adaptation to the advanced interrupt architecture is ongoing.
This version of Pseudo NMI is rebased on v6.6-rc7.
Thanks in advance for comments.
Xu Lu (12):
riscv: Introduce CONFIG_RISCV_PSEUDO_NMI
riscv: Make CSR_IE register part of context
riscv: Switch to CSR_IE masking when disabling irqs
riscv: Switch back to CSR_STATUS masking when going idle
riscv: kvm: Switch back to CSR_STATUS masking when entering guest
riscv: Allow requesting irq as pseudo NMI
riscv: Handle pseudo NMI in arch irq handler
riscv: Enable NMIs during irqs disabled context
riscv: Enable NMIs during exceptions
riscv: Enable NMIs during interrupt handling
riscv: Request pmu overflow interrupt as NMI
riscv: Enable CONFIG_RISCV_PSEUDO_NMI in default
arch/riscv/Kconfig | 10 ++++
arch/riscv/include/asm/csr.h | 17 ++++++
arch/riscv/include/asm/irqflags.h | 91 ++++++++++++++++++++++++++++++
arch/riscv/include/asm/processor.h | 4 ++
arch/riscv/include/asm/ptrace.h | 7 +++
arch/riscv/include/asm/switch_to.h | 7 +++
arch/riscv/kernel/asm-offsets.c | 3 +
arch/riscv/kernel/entry.S | 18 ++++++
arch/riscv/kernel/head.S | 10 ++++
arch/riscv/kernel/irq.c | 17 ++++++
arch/riscv/kernel/process.c | 6 ++
arch/riscv/kernel/suspend_entry.S | 1 +
arch/riscv/kernel/traps.c | 54 ++++++++++++++----
arch/riscv/kvm/vcpu.c | 18 ++++--
drivers/clocksource/timer-clint.c | 4 ++
drivers/clocksource/timer-riscv.c | 4 ++
drivers/irqchip/irq-riscv-intc.c | 66 ++++++++++++++++++++++
drivers/perf/riscv_pmu_sbi.c | 21 ++++++-
18 files changed, 340 insertions(+), 18 deletions(-)
--
2.20.1
This commit allows NMIs to happen even when irqs are disabled. When
disabling irqs, we mask all normal irqs by clearing the corresponding
bits in CSR_IE while leaving the NMI bits alone.
Signed-off-by: Xu Lu <[email protected]>
---
arch/riscv/include/asm/irqflags.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
index 9700a17a003a..42f7803582df 100644
--- a/arch/riscv/include/asm/irqflags.h
+++ b/arch/riscv/include/asm/irqflags.h
@@ -54,13 +54,13 @@ static inline void arch_local_irq_enable(void)
/* unconditionally disable interrupts */
static inline void arch_local_irq_disable(void)
{
- csr_clear(CSR_IE, irqs_enabled_ie);
+ csr_clear(CSR_IE, ~ALLOWED_NMI_MASK);
}
/* get status and disable interrupts */
static inline unsigned long arch_local_irq_save(void)
{
- return csr_read_clear(CSR_IE, irqs_enabled_ie);
+ return csr_read_clear(CSR_IE, ~ALLOWED_NMI_MASK);
}
/* test flags */
--
2.20.1
The WFI instruction makes the current core stall until an interrupt
arrives. In WFI's implementation, the core can only be woken up by an
interrupt which is both pending in CSR_IP and enabled in CSR_IE. After
we switch to CSR_IE masking for irq disabling, the WFI instruction can
never resume execution if CSR_IE is masked.
This commit handles this special case. When the WFI instruction is
executed with CSR_IE masked, we unmask CSR_IE first and disable irqs in
the traditional CSR_STATUS way instead.
Signed-off-by: Xu Lu <[email protected]>
---
arch/riscv/include/asm/processor.h | 4 ++++
arch/riscv/kernel/irq.c | 17 +++++++++++++++++
2 files changed, 21 insertions(+)
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index 3e23e1786d05..ab9b2b974979 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -111,10 +111,14 @@ extern void start_thread(struct pt_regs *regs,
extern unsigned long __get_wchan(struct task_struct *p);
+#ifndef CONFIG_RISCV_PSEUDO_NMI
static inline void wait_for_interrupt(void)
{
__asm__ __volatile__ ("wfi");
}
+#else
+void wait_for_interrupt(void);
+#endif
struct device_node;
int riscv_of_processor_hartid(struct device_node *node, unsigned long *hartid);
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
index 9cc0a7669271..e7dfd68e9ca3 100644
--- a/arch/riscv/kernel/irq.c
+++ b/arch/riscv/kernel/irq.c
@@ -15,6 +15,23 @@
#include <asm/softirq_stack.h>
#include <asm/stacktrace.h>
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+
+void wait_for_interrupt(void)
+{
+ if (irqs_disabled()) {
+ local_irq_switch_off();
+ local_irq_enable();
+ __asm__ __volatile__ ("wfi");
+ local_irq_disable();
+ local_irq_switch_on();
+ } else {
+ __asm__ __volatile__ ("wfi");
+ }
+}
+
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
+
static struct fwnode_handle *(*__get_intc_node)(void);
void riscv_set_intc_hwnode_fn(struct fwnode_handle *(*fn)(void))
--
2.20.1
This commit enables CONFIG_RISCV_PSEUDO_NMI by default. The pseudo NMI
feature is now enabled by default on RISC-V.
Signed-off-by: Xu Lu <[email protected]>
---
arch/riscv/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 487e4293f31e..ecccdc91563f 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -672,7 +672,7 @@ config RISCV_BOOT_SPINWAIT
config RISCV_PSEUDO_NMI
bool "Support for NMI-like interrupts"
depends on !RISCV_M_MODE
- default n
+ default y
help
Adds support for mimicking Non-Maskable Interrupts through the use of
CSR_IE register.
--
2.20.1
We have switched the way of disabling irqs to CSR_IE masking. But
hardware still automatically clears the SIE field of CSR_STATUS whenever
a thread traps into the kernel, which disables all irqs including NMIs.
This commit re-enables NMIs and normal irqs during exceptions by setting
the SIE field in CSR_STATUS and restoring the NMI and irq bits in CSR_IE.
Signed-off-by: Xu Lu <[email protected]>
---
arch/riscv/include/asm/irqflags.h | 13 +++++++++++++
arch/riscv/include/asm/switch_to.h | 7 +++++++
arch/riscv/kernel/traps.c | 10 ++++++++++
3 files changed, 30 insertions(+)
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
index 42f7803582df..6a709e9c69ca 100644
--- a/arch/riscv/include/asm/irqflags.h
+++ b/arch/riscv/include/asm/irqflags.h
@@ -29,6 +29,16 @@ static inline void set_nmi(int irq) {}
static inline void unset_nmi(int irq) {}
+static inline void enable_nmis(void)
+{
+ csr_set(CSR_IE, ALLOWED_NMI_MASK);
+}
+
+static inline void disable_nmis(void)
+{
+ csr_clear(CSR_IE, ALLOWED_NMI_MASK);
+}
+
static inline void local_irq_switch_on(void)
{
csr_set(CSR_STATUS, SR_IE);
@@ -128,6 +138,9 @@ static inline void arch_local_irq_restore(unsigned long flags)
csr_set(CSR_STATUS, flags & SR_IE);
}
+static inline void enable_nmis(void) {}
+static inline void disable_nmis(void) {}
+
#endif /* !CONFIG_RISCV_PSEUDO_NMI */
#endif /* _ASM_RISCV_IRQFLAGS_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index a727be723c56..116cffeaa6bf 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -84,4 +84,11 @@ do { \
((last) = __switch_to(__prev, __next)); \
} while (0)
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+
+#define prepare_arch_switch(next) disable_nmis()
+#define finish_arch_post_lock_switch() enable_nmis()
+
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
+
#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index fae8f610d867..63d3c1417563 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -135,7 +135,9 @@ asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
{ \
if (user_mode(regs)) { \
irqentry_enter_from_user_mode(regs); \
+ enable_nmis(); \
do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
+ disable_nmis(); \
irqentry_exit_to_user_mode(regs); \
} else { \
irqentry_state_t state = irqentry_nmi_enter(regs); \
@@ -292,8 +294,12 @@ asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
if (user_mode(regs)) {
irqentry_enter_from_user_mode(regs);
+ enable_nmis();
+
handle_break(regs);
+ disable_nmis();
+
irqentry_exit_to_user_mode(regs);
} else {
irqentry_state_t state = irqentry_nmi_enter(regs);
@@ -338,10 +344,14 @@ asmlinkage __visible noinstr void do_page_fault(struct pt_regs *regs)
{
irqentry_state_t state = irqentry_enter(regs);
+ enable_nmis();
+
handle_page_fault(regs);
local_irq_disable();
+ disable_nmis();
+
irqentry_exit(regs, state);
}
#endif
--
2.20.1
This commit makes the CSR_IE register part of the thread context.
The kernel currently saves and restores the irq status of each thread
via the CSR_STATUS register. When a thread traps into the kernel, its
irq status is automatically stored in the SR_PIE field of CSR_STATUS by
hardware, and when the kernel returns, the irq status is automatically
restored from CSR_STATUS.
Things are different when we switch to CSR_IE masking for irq
disabling: hardware won't save or restore the CSR_IE value during traps.
In this case, when trapping into the kernel, we should save the CSR_IE
value of the previous thread manually and then clear all CSR_IE bits to
disable irqs during traps. Also, we should manually restore the SIE
field of CSR_STATUS, as we no longer depend on it to disable irqs. When
the kernel returns, we manually restore the CSR_IE value from the
previously saved value.
Signed-off-by: Xu Lu <[email protected]>
Signed-off-by: Hangjing Li <[email protected]>
Reviewed-by: Liang Deng <[email protected]>
Reviewed-by: Yu Li <[email protected]>
---
arch/riscv/include/asm/csr.h | 17 +++++++++++++++++
arch/riscv/include/asm/ptrace.h | 3 +++
arch/riscv/kernel/asm-offsets.c | 3 +++
arch/riscv/kernel/entry.S | 13 +++++++++++++
arch/riscv/kernel/process.c | 6 ++++++
arch/riscv/kernel/suspend_entry.S | 1 +
drivers/clocksource/timer-clint.c | 4 ++++
drivers/clocksource/timer-riscv.c | 4 ++++
drivers/irqchip/irq-riscv-intc.c | 4 ++++
drivers/perf/riscv_pmu_sbi.c | 4 ++++
10 files changed, 59 insertions(+)
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 777cb8299551..6520bd826d52 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -7,6 +7,7 @@
#define _ASM_RISCV_CSR_H
#include <asm/asm.h>
+#include <asm/hwcap.h>
#include <linux/bits.h>
/* Status register flags */
@@ -451,6 +452,22 @@
#define IE_TIE (_AC(0x1, UL) << RV_IRQ_TIMER)
#define IE_EIE (_AC(0x1, UL) << RV_IRQ_EXT)
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+#define IRQS_ENABLED_IE (IE_SIE | IE_TIE | IE_EIE)
+#define irqs_enabled_ie \
+({ \
+ unsigned long __v; \
+ asm (ALTERNATIVE( \
+ "li %0, " __stringify(IRQS_ENABLED_IE) "\n\t" \
+ "nop", \
+ "li %0, " __stringify(IRQS_ENABLED_IE | SIP_LCOFIP),\
+ 0, RISCV_ISA_EXT_SSCOFPMF, \
+ CONFIG_RISCV_PSEUDO_NMI) \
+ : "=r"(__v) : : ); \
+ __v; \
+})
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
+
#ifndef __ASSEMBLY__
#define csr_swap(csr, val) \
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
index b5b0adcc85c1..b57d3a6b232f 100644
--- a/arch/riscv/include/asm/ptrace.h
+++ b/arch/riscv/include/asm/ptrace.h
@@ -47,6 +47,9 @@ struct pt_regs {
unsigned long t6;
/* Supervisor/Machine CSRs */
unsigned long status;
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ unsigned long ie;
+#endif
unsigned long badaddr;
unsigned long cause;
/* a0 value before the syscall */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index d6a75aac1d27..165f6f9fc458 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -112,6 +112,9 @@ void asm_offsets(void)
OFFSET(PT_GP, pt_regs, gp);
OFFSET(PT_ORIG_A0, pt_regs, orig_a0);
OFFSET(PT_STATUS, pt_regs, status);
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ OFFSET(PT_IE, pt_regs, ie);
+#endif
OFFSET(PT_BADADDR, pt_regs, badaddr);
OFFSET(PT_CAUSE, pt_regs, cause);
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 143a2bb3e697..19ba7c4520b9 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -65,6 +65,10 @@ _save_context:
REG_S s3, PT_BADADDR(sp)
REG_S s4, PT_CAUSE(sp)
REG_S s5, PT_TP(sp)
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ csrr s0, CSR_IE
+ REG_S s0, PT_IE(sp)
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
/*
* Set the scratch register to 0, so that if a recursive exception
@@ -153,6 +157,11 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
csrw CSR_STATUS, a0
csrw CSR_EPC, a2
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ REG_L s0, PT_IE(sp)
+ csrw CSR_IE, s0
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
+
REG_L x1, PT_RA(sp)
REG_L x3, PT_GP(sp)
REG_L x4, PT_TP(sp)
@@ -251,6 +260,10 @@ restore_caller_reg:
REG_S s3, PT_BADADDR(sp)
REG_S s4, PT_CAUSE(sp)
REG_S s5, PT_TP(sp)
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ csrr s0, CSR_IE
+ REG_S s0, PT_IE(sp)
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
move a0, sp
tail handle_bad_stack
SYM_CODE_END(handle_kernel_stack_overflow)
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index e32d737e039f..9663bae23c57 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -115,6 +115,9 @@ void start_thread(struct pt_regs *regs, unsigned long pc,
unsigned long sp)
{
regs->status = SR_PIE;
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ regs->ie = irqs_enabled_ie;
+#endif
if (has_fpu()) {
regs->status |= SR_FS_INITIAL;
/*
@@ -189,6 +192,9 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
childregs->gp = gp_in_global;
/* Supervisor/Machine, irqs on: */
childregs->status = SR_PP | SR_PIE;
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ childregs->ie = irqs_enabled_ie;
+#endif
p->thread.s[0] = (unsigned long)args->fn;
p->thread.s[1] = (unsigned long)args->fn_arg;
diff --git a/arch/riscv/kernel/suspend_entry.S b/arch/riscv/kernel/suspend_entry.S
index f7960c7c5f9e..6825f4836be4 100644
--- a/arch/riscv/kernel/suspend_entry.S
+++ b/arch/riscv/kernel/suspend_entry.S
@@ -47,6 +47,7 @@ ENTRY(__cpu_suspend_enter)
REG_S t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0)
csrr t0, CSR_STATUS
REG_S t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0)
+ /* There is no need to save CSR_IE as it is maintained in memory */
csrr t0, CSR_TVAL
REG_S t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0)
csrr t0, CSR_CAUSE
diff --git a/drivers/clocksource/timer-clint.c b/drivers/clocksource/timer-clint.c
index 9a55e733ae99..bdc10be9d3b4 100644
--- a/drivers/clocksource/timer-clint.c
+++ b/drivers/clocksource/timer-clint.c
@@ -114,7 +114,9 @@ static int clint_clock_next_event(unsigned long delta,
void __iomem *r = clint_timer_cmp +
cpuid_to_hartid_map(smp_processor_id());
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_set(CSR_IE, IE_TIE);
+#endif
writeq_relaxed(clint_get_cycles64() + delta, r);
return 0;
}
@@ -155,7 +157,9 @@ static irqreturn_t clint_timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *evdev = this_cpu_ptr(&clint_clock_event);
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_clear(CSR_IE, IE_TIE);
+#endif
evdev->event_handler(evdev);
return IRQ_HANDLED;
diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
index da3071b387eb..b730e01a7f02 100644
--- a/drivers/clocksource/timer-riscv.c
+++ b/drivers/clocksource/timer-riscv.c
@@ -36,7 +36,9 @@ static int riscv_clock_next_event(unsigned long delta,
{
u64 next_tval = get_cycles64() + delta;
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_set(CSR_IE, IE_TIE);
+#endif
if (static_branch_likely(&riscv_sstc_available)) {
#if defined(CONFIG_32BIT)
csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
@@ -119,7 +121,9 @@ static irqreturn_t riscv_timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *evdev = this_cpu_ptr(&riscv_clock_event);
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_clear(CSR_IE, IE_TIE);
+#endif
evdev->event_handler(evdev);
return IRQ_HANDLED;
diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index e8d01b14ccdd..7fad1ba37e5c 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -39,12 +39,16 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
static void riscv_intc_irq_mask(struct irq_data *d)
{
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_clear(CSR_IE, BIT(d->hwirq));
+#endif
}
static void riscv_intc_irq_unmask(struct irq_data *d)
{
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_set(CSR_IE, BIT(d->hwirq));
+#endif
}
static void riscv_intc_irq_eoi(struct irq_data *d)
diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 96c7f670c8f0..995b501ec721 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -778,7 +778,9 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
if (riscv_pmu_use_irq) {
cpu_hw_evt->irq = riscv_pmu_irq;
csr_clear(CSR_IP, BIT(riscv_pmu_irq_num));
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_set(CSR_IE, BIT(riscv_pmu_irq_num));
+#endif
enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE);
}
@@ -789,7 +791,9 @@ static int pmu_sbi_dying_cpu(unsigned int cpu, struct hlist_node *node)
{
if (riscv_pmu_use_irq) {
disable_percpu_irq(riscv_pmu_irq);
+#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_clear(CSR_IE, BIT(riscv_pmu_irq_num));
+#endif
}
/* Disable all counters access for user mode now */
--
2.20.1
Hardware automatically clears the SIE field of CSR_STATUS whenever a
thread traps into the kernel via an interrupt, disabling all irqs
including NMIs during interrupt handling.
This commit re-enables NMIs during interrupt handling by setting the SIE
field in CSR_STATUS and restoring the NMI bits in CSR_IE. Normal
interrupts remain disabled during interrupt handling, and NMIs are
likewise disabled while an NMI is being handled, to avoid nesting.
Signed-off-by: Xu Lu <[email protected]>
Signed-off-by: Hangjing Li <[email protected]>
Reviewed-by: Liang Deng <[email protected]>
Reviewed-by: Yu Li <[email protected]>
---
arch/riscv/kernel/traps.c | 44 +++++++++++++++++++++++---------
drivers/irqchip/irq-riscv-intc.c | 2 ++
2 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 63d3c1417563..185743edfa09 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -356,20 +356,11 @@ asmlinkage __visible noinstr void do_page_fault(struct pt_regs *regs)
}
#endif
-static void noinstr handle_riscv_irq(struct pt_regs *regs)
+static void noinstr do_interrupt(struct pt_regs *regs)
{
struct pt_regs *old_regs;
- irq_enter_rcu();
old_regs = set_irq_regs(regs);
- handle_arch_irq(regs);
- set_irq_regs(old_regs);
- irq_exit_rcu();
-}
-
-asmlinkage void noinstr do_irq(struct pt_regs *regs)
-{
- irqentry_state_t state = irqentry_enter(regs);
#ifdef CONFIG_IRQ_STACKS
if (on_thread_stack()) {
ulong *sp = per_cpu(irq_stack_ptr, smp_processor_id())
@@ -382,7 +373,9 @@ asmlinkage void noinstr do_irq(struct pt_regs *regs)
"addi s0, sp, 2*"RISCV_SZPTR "\n"
"move sp, %[sp] \n"
"move a0, %[regs] \n"
- "call handle_riscv_irq \n"
+ "la t0, handle_arch_irq \n"
+ REG_L" t1, (t0) \n"
+ "jalr t1 \n"
"addi sp, s0, -2*"RISCV_SZPTR"\n"
REG_L" s0, (sp) \n"
"addi sp, sp, "RISCV_SZPTR "\n"
@@ -398,11 +391,38 @@ asmlinkage void noinstr do_irq(struct pt_regs *regs)
"memory");
} else
#endif
- handle_riscv_irq(regs);
+ handle_arch_irq(regs);
+ set_irq_regs(old_regs);
+}
+
+static __always_inline void __do_nmi(struct pt_regs *regs)
+{
+ irqentry_state_t state = irqentry_nmi_enter(regs);
+
+ do_interrupt(regs);
+
+ irqentry_nmi_exit(regs, state);
+}
+
+static __always_inline void __do_irq(struct pt_regs *regs)
+{
+ irqentry_state_t state = irqentry_enter(regs);
+
+ irq_enter_rcu();
+ do_interrupt(regs);
+ irq_exit_rcu();
irqentry_exit(regs, state);
}
+asmlinkage void noinstr do_irq(struct pt_regs *regs)
+{
+ if (IS_ENABLED(CONFIG_RISCV_PSEUDO_NMI) && regs_irqs_disabled(regs))
+ __do_nmi(regs);
+ else
+ __do_irq(regs);
+}
+
#ifdef CONFIG_GENERIC_BUG
int is_valid_bugaddr(unsigned long pc)
{
diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index c672c0c64d5d..80ed8606e04d 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -34,7 +34,9 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
generic_handle_domain_nmi(intc_domain, cause);
nmi_exit();
} else {
+ enable_nmis();
generic_handle_domain_irq(intc_domain, cause);
+ disable_nmis();
}
}
--
2.20.1
This commit registers the PMU overflow interrupt as an NMI to improve
the accuracy of perf sampling.
Signed-off-by: Xu Lu <[email protected]>
---
arch/riscv/include/asm/irqflags.h | 2 +-
drivers/perf/riscv_pmu_sbi.c | 23 +++++++++++++++++++----
2 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
index 6a709e9c69ca..be840e297559 100644
--- a/arch/riscv/include/asm/irqflags.h
+++ b/arch/riscv/include/asm/irqflags.h
@@ -12,7 +12,7 @@
#ifdef CONFIG_RISCV_PSEUDO_NMI
-#define __ALLOWED_NMI_MASK 0
+#define __ALLOWED_NMI_MASK BIT(IRQ_PMU_OVF)
#define ALLOWED_NMI_MASK (__ALLOWED_NMI_MASK & irqs_enabled_ie)
static inline bool nmi_allowed(int irq)
diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 995b501ec721..85abb7dd43b9 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -760,6 +760,7 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
{
+ int ret = 0;
struct riscv_pmu *pmu = hlist_entry_safe(node, struct riscv_pmu, node);
struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
@@ -778,20 +779,30 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
if (riscv_pmu_use_irq) {
cpu_hw_evt->irq = riscv_pmu_irq;
csr_clear(CSR_IP, BIT(riscv_pmu_irq_num));
-#ifndef CONFIG_RISCV_PSEUDO_NMI
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ ret = prepare_percpu_nmi(riscv_pmu_irq);
+ if (ret != 0) {
+ pr_err("Failed to prepare percpu nmi:%d\n", ret);
+ return ret;
+ }
+ enable_percpu_nmi(riscv_pmu_irq, IRQ_TYPE_NONE);
+#else
csr_set(CSR_IE, BIT(riscv_pmu_irq_num));
-#endif
enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE);
+#endif
}
- return 0;
+ return ret;
}
static int pmu_sbi_dying_cpu(unsigned int cpu, struct hlist_node *node)
{
if (riscv_pmu_use_irq) {
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ disable_percpu_nmi(riscv_pmu_irq);
+ teardown_percpu_nmi(riscv_pmu_irq);
+#else
disable_percpu_irq(riscv_pmu_irq);
-#ifndef CONFIG_RISCV_PSEUDO_NMI
csr_clear(CSR_IE, BIT(riscv_pmu_irq_num));
#endif
}
@@ -835,7 +846,11 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
return -ENODEV;
}
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ ret = request_percpu_nmi(riscv_pmu_irq, pmu_sbi_ovf_handler, "riscv-pmu", hw_events);
+#else
ret = request_percpu_irq(riscv_pmu_irq, pmu_sbi_ovf_handler, "riscv-pmu", hw_events);
+#endif
if (ret) {
pr_err("registering percpu irq failed [%d]\n", ret);
return ret;
--
2.20.1
This commit implements pseudo NMI callbacks for the riscv_intc_irq chip.
We use a compile-time constant mask to denote the NMIs of each cpu. Each
bit of it represents an irq: a set bit means the corresponding irq is
registered as an NMI, while a clear bit means it is not.
Signed-off-by: Xu Lu <[email protected]>
Signed-off-by: Hangjing Li <[email protected]>
Reviewed-by: Liang Deng <[email protected]>
Reviewed-by: Yu Li <[email protected]>
---
arch/riscv/include/asm/irqflags.h | 17 ++++++++++++++
drivers/irqchip/irq-riscv-intc.c | 38 +++++++++++++++++++++++++++++++
2 files changed, 55 insertions(+)
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
index 60c19f8b57f0..9700a17a003a 100644
--- a/arch/riscv/include/asm/irqflags.h
+++ b/arch/riscv/include/asm/irqflags.h
@@ -12,6 +12,23 @@
#ifdef CONFIG_RISCV_PSEUDO_NMI
+#define __ALLOWED_NMI_MASK 0
+#define ALLOWED_NMI_MASK (__ALLOWED_NMI_MASK & irqs_enabled_ie)
+
+static inline bool nmi_allowed(int irq)
+{
+ return (BIT(irq) & ALLOWED_NMI_MASK);
+}
+
+static inline bool is_nmi(int irq)
+{
+ return (BIT(irq) & ALLOWED_NMI_MASK);
+}
+
+static inline void set_nmi(int irq) {}
+
+static inline void unset_nmi(int irq) {}
+
static inline void local_irq_switch_on(void)
{
csr_set(CSR_STATUS, SR_IE);
diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index 7fad1ba37e5c..83a0a744fce6 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -67,11 +67,49 @@ static void riscv_intc_irq_eoi(struct irq_data *d)
*/
}
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+
+static int riscv_intc_irq_nmi_setup(struct irq_data *d)
+{
+ unsigned int hwirq = d->hwirq;
+ struct irq_desc *desc = irq_to_desc(d->irq);
+
+ if (WARN_ON((hwirq >= BITS_PER_LONG) || !nmi_allowed(hwirq)))
+ return -EINVAL;
+
+ desc->handle_irq = handle_percpu_devid_fasteoi_nmi;
+ set_nmi(hwirq);
+
+ return 0;
+}
+
+static void riscv_intc_irq_nmi_teardown(struct irq_data *d)
+{
+ unsigned int hwirq = d->hwirq;
+ struct irq_desc *desc = irq_to_desc(d->irq);
+
+ if (WARN_ON(hwirq >= BITS_PER_LONG))
+ return;
+
+ if (WARN_ON(!is_nmi(hwirq)))
+ return;
+
+ desc->handle_irq = handle_percpu_devid_irq;
+ unset_nmi(hwirq);
+}
+
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
+
static struct irq_chip riscv_intc_chip = {
.name = "RISC-V INTC",
.irq_mask = riscv_intc_irq_mask,
.irq_unmask = riscv_intc_irq_unmask,
.irq_eoi = riscv_intc_irq_eoi,
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+ .irq_nmi_setup = riscv_intc_irq_nmi_setup,
+ .irq_nmi_teardown = riscv_intc_irq_nmi_teardown,
+ .flags = IRQCHIP_SUPPORTS_NMI,
+#endif
};
static int riscv_intc_domain_map(struct irq_domain *d, unsigned int irq,
--
2.20.1
This commit handles pseudo NMIs in the arch irq handler. We enter NMI
context before handling an NMI and keep all interrupts disabled during
NMI handling to avoid interrupt nesting.
Signed-off-by: Xu Lu <[email protected]>
Signed-off-by: Hangjing Li <[email protected]>
---
drivers/irqchip/irq-riscv-intc.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index 83a0a744fce6..c672c0c64d5d 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -20,6 +20,26 @@
static struct irq_domain *intc_domain;
+#ifdef CONFIG_RISCV_PSEUDO_NMI
+
+static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
+{
+ unsigned long cause = regs->cause & ~CAUSE_IRQ_FLAG;
+
+ if (unlikely(cause >= BITS_PER_LONG))
+ panic("unexpected interrupt cause");
+
+ if (is_nmi(cause)) {
+ nmi_enter();
+ generic_handle_domain_nmi(intc_domain, cause);
+ nmi_exit();
+ } else {
+ generic_handle_domain_irq(intc_domain, cause);
+ }
+}
+
+#else /* CONFIG_RISCV_PSEUDO_NMI */
+
static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
{
unsigned long cause = regs->cause & ~CAUSE_IRQ_FLAG;
@@ -30,6 +50,8 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
generic_handle_domain_irq(intc_domain, cause);
}
+#endif /* CONFIG_RISCV_PSEUDO_NMI */
+
/*
* On RISC-V systems local interrupts are masked or unmasked by writing
* the SIE (Supervisor Interrupt Enable) CSR. As CSRs can only be written
--
2.20.1
On Mon, Oct 23, 2023 at 1:29 AM Xu Lu <[email protected]> wrote:
>
> Sorry for resending this patch series; I forgot to Cc the open list before.
> The formal content follows below.
>
> The existing RISC-V kernel lacks an NMI mechanism, as the RISC-V community
> has not yet ratified a resumable NMI extension. This cannot satisfy
> scenarios such as high-precision perf sampling. There is an upcoming
> hardware extension called Smrnmi which supports resumable NMIs by
> providing new control registers to save status when an NMI happens.
> However, it is still a draft, and it requires privilege level switches for
> the kernel to utilize it, as NMIs are automatically trapped into machine mode.
>
> This patch series introduces a software pseudo NMI mechanism for RISC-V.
> The existing RISC-V kernel disables interrupts via the per-CPU control
> register CSR_STATUS, whose SIE bit controls the enablement of all
> interrupts on the CPU. When the SIE bit is clear, no interrupt is enabled.
> This patch series implements NMIs by switching the interrupt disabling
> mechanism to another per-CPU control register, CSR_IE. This register
> controls the enablement of each individual interrupt: each bit of CSR_IE
> corresponds to a single major interrupt, and a clear bit disables the
> corresponding interrupt.
>
> To implement pseudo NMI, we switch to CSR_IE masking when disabling
> irqs. When interrupts are disabled, all bits of CSR_IE corresponding to
> normal interrupts are cleared, while the bits corresponding to NMIs are
> kept set. The SIE bit of CSR_STATUS is now left untouched and always
> kept set.
>
> We measured the performance of the pseudo NMI patches, based on v6.6-rc4,
> on a SiFive FU740 SoC with hackbench as our benchmark. The result shows
> a 1.90% performance degradation.
>
> "hackbench 200 process 1000" (average over 10 runs)
> +-----------+----------+------------+
> | | v6.6-rc4 | Pseudo NMI |
> +-----------+----------+------------+
> | time | 251.646s | 256.416s |
> +-----------+----------+------------+
>
> The overhead mainly comes from two parts:
>
> 1. Saving and restoring the CSR_IE register during kernel entry/return.
> This part introduces about 0.57% performance overhead.
>
> 2. The extra instructions introduced by 'irqs_enabled_ie', a special
> value representing the normal CSR_IE contents when irqs are enabled. It
> is implemented via ALTERNATIVE to adapt to platforms without a PMU. This
> part introduces about 1.32% performance overhead.
>
We had an evaluation of this approach earlier this year and came to
similar findings.
Pseudo NMI is only useful for the profiling use case, which doesn't
happen all the time in the system.
Adding cost to the hot path and sacrificing performance across the board
for the sake of performance profiling
is not desirable at all.
That's why, an SBI extension Supervisor Software Events (SSE) is under
development.
https://lists.riscv.org/g/tech-prs/message/515
Instead of selective disabling of interrupts, SSE takes an orthogonal
approach where M-mode invokes a special trap handler. That special
handler invokes a driver-specific handler registered by the driver
(e.g. the perf driver).
This covers both the firmware-first RAS and perf use cases.
The above version of the specification is a bit out-of-date and the
revised version will be sent soon.
Clement (cc'd) has also done a PoC of SSE and a perf driver using the SSE
framework. This resulted in actual performance savings for RAS/perf
without sacrificing normal performance.
Clement is planning to send the series soon with more details.
> Limits:
>
> CSR_IE is now used for disabling irqs, and other code should
> not touch this register to avoid corrupting the irq status, which means
> we do not support masking a single interrupt for now.
>
> We have tried to fix this by introducing a per-CPU variable that saves
> the CSR_IE value when disabling irqs. All operations on CSR_IE are then
> redirected to this variable, and CSR_IE's value is restored from it
> when enabling irqs. Obviously this method introduces extra memory
> accesses in the hot code path.
>
> TODO:
>
> 1. The adaptation to the hypervisor extension is ongoing.
>
> 2. The adaptation to the advanced interrupt architecture is ongoing.
>
> This version of Pseudo NMI is rebased on v6.6-rc7.
>
> Thanks in advance for comments.
>
> Xu Lu (12):
> riscv: Introduce CONFIG_RISCV_PSEUDO_NMI
> riscv: Make CSR_IE register part of context
> riscv: Switch to CSR_IE masking when disabling irqs
> riscv: Switch back to CSR_STATUS masking when going idle
> riscv: kvm: Switch back to CSR_STATUS masking when entering guest
> riscv: Allow requesting irq as pseudo NMI
> riscv: Handle pseudo NMI in arch irq handler
> riscv: Enable NMIs during irqs disabled context
> riscv: Enable NMIs during exceptions
> riscv: Enable NMIs during interrupt handling
> riscv: Request pmu overflow interrupt as NMI
> riscv: Enable CONFIG_RISCV_PSEUDO_NMI in default
>
> arch/riscv/Kconfig | 10 ++++
> arch/riscv/include/asm/csr.h | 17 ++++++
> arch/riscv/include/asm/irqflags.h | 91 ++++++++++++++++++++++++++++++
> arch/riscv/include/asm/processor.h | 4 ++
> arch/riscv/include/asm/ptrace.h | 7 +++
> arch/riscv/include/asm/switch_to.h | 7 +++
> arch/riscv/kernel/asm-offsets.c | 3 +
> arch/riscv/kernel/entry.S | 18 ++++++
> arch/riscv/kernel/head.S | 10 ++++
> arch/riscv/kernel/irq.c | 17 ++++++
> arch/riscv/kernel/process.c | 6 ++
> arch/riscv/kernel/suspend_entry.S | 1 +
> arch/riscv/kernel/traps.c | 54 ++++++++++++++----
> arch/riscv/kvm/vcpu.c | 18 ++++--
> drivers/clocksource/timer-clint.c | 4 ++
> drivers/clocksource/timer-riscv.c | 4 ++
> drivers/irqchip/irq-riscv-intc.c | 66 ++++++++++++++++++++++
> drivers/perf/riscv_pmu_sbi.c | 21 ++++++-
> 18 files changed, 340 insertions(+), 18 deletions(-)
>
> --
> 2.20.1
>
--
Regards,
Atish
On Thu, Oct 26, 2023 at 7:02 AM Atish Patra <[email protected]> wrote:
>
> On Mon, Oct 23, 2023 at 1:29 AM Xu Lu <[email protected]> wrote:
> >
> > [...]
>
> We had an evaluation of this approach earlier this year and concluded
> with the similar findings.
> The pseudo NMI is only useful for profiling use case which doesn't
> happen all the time in the system
> Adding the cost to the hotpath and sacrificing performance for
> everything for something for performance profiling
> is not desirable at all.
Thanks a lot for your reply!
First, please allow me to explain that CSR_IE Pseudo NMI can actually support
more than PMU profiling. For example, if we choose to make the external major
interrupt an NMI and use ithreshold or eithreshold in AIA to control which
minor external interrupts can be sent to the CPU, then we can support multiple
minor interrupts as NMIs while keeping the other minor interrupts as normal
irqs.
This is what we are working on now.
Also, if we take virtualization scenarios into account, CSR_IE Pseudo NMI can
support NMI passthrough to a VM without too much effort from the hypervisor,
as long as the corresponding interrupt can be delegated to VS-mode. I wonder
if SSE supports interrupt passthrough to a VM?
>
> That's why, an SBI extension Supervisor Software Events (SSE) is under
> development.
> https://lists.riscv.org/g/tech-prs/message/515
>
> Instead of selective disabling of interrupts, SSE takes an orthogonal
> approach where M-mode would invoke a special trap
> handler. That special handler will invoke the driver specific handler
> which would be registered by the driver (i.e. perf driver)
> This covers both firmware first RAS and perf use cases.
>
> The above version of the specification is a bit out-of-date and the
> revised version will be sent soon.
> Clement(cc'd) has also done a PoC of SSE and perf driver using the SSE
> framework. This resulted in actual saving
> in performance for RAS/perf without sacrificing the normal performance.
>
> Clement is planning to send the series soon with more details.
The SSE extension you mentioned is a brilliant design and does solve a lot of
problems!
We have considered implementing NMI via SBI calls before. The main problem
is that if a driver using NMI needs to cooperate with SBI code, extra
coupling will be introduced, as the driver vendor and the firmware vendor
may not be the same. We think it is perhaps better to keep SBI code as
simple and stable as possible.
Please correct me if there is any misunderstanding.
Thanks again and looking forward to your reply.
>
> > [...]
On Thu, Oct 26, 2023 at 6:56 AM Xu Lu <[email protected]> wrote:
>
> On Thu, Oct 26, 2023 at 7:02 AM Atish Patra <[email protected]> wrote:
> >
> > On Mon, Oct 23, 2023 at 1:29 AM Xu Lu <[email protected]> wrote:
> > >
> > > [...]
> >
> > We had an evaluation of this approach earlier this year and concluded
> > with the similar findings.
> > The pseudo NMI is only useful for profiling use case which doesn't
> > happen all the time in the system
> > Adding the cost to the hotpath and sacrificing performance for
> > everything for something for performance profiling
> > is not desirable at all.
>
> Thanks a lot for your reply!
>
> First, please allow me to explain that CSR_IE Pseudo NMI actually can support
> more than PMU profiling. For example, if we choose to make external major
> interrupt as NMI and use ithreshold or eithreshold in AIA to control which minor
> external interrupts can be sent to CPU, then we actually can support multiple
> minor interrupts as NMI while keeping the other minor interrupts still
> normal irqs.
> This is what we are working on now.
>
What's the use case for external interrupts to behave as NMIs?
Note: you can do the same thing with SSE as well if required. But I
want to understand the use case first.
> Also, if we take virtualization scenarios into account, CSR_IE Pseudo NMI can
> support NMI passthrough to VM without too much effort from hypervisor, if only
> corresponding interrupt can be delegated to VS-mode. I wonder if SSE supports
> interrupt passthrough to VM?
>
Not technically interrupt passthrough, but the hypervisor can invoke the
guest SSE handler with the same mechanism. In fact, the original proposal
specifies async page faults as another use case for SSE.
> >
> > That's why, an SBI extension Supervisor Software Events (SSE) is under
> > development.
> > https://lists.riscv.org/g/tech-prs/message/515
> >
> > Instead of selective disabling of interrupts, SSE takes an orthogonal
> > approach where M-mode would invoke a special trap
> > handler. That special handler will invoke the driver specific handler
> > which would be registered by the driver (i.e. perf driver)
> > This covers both firmware first RAS and perf use cases.
> >
> > The above version of the specification is a bit out-of-date and the
> > revised version will be sent soon.
> > Clement(cc'd) has also done a PoC of SSE and perf driver using the SSE
> > framework. This resulted in actual saving
> > in performance for RAS/perf without sacrificing the normal performance.
> >
> > Clement is planning to send the series soon with more details.
>
> The SSE extension you mentioned is a brilliant design and does solve a lot of
> problems!
>
> We have considered implementing NMI via SBI calls before. The main problem
> is that if a driver using NMI needs to cooperate with SBI code, extra
> coupling will
> be introduced as the driver vendor and firmware vendor may not be the same one.
> We think perhaps it is better to keep SBI code as simple and stable as possible.
>
Yes. However, we also gain significant performance with SSE, while we have
a 2% regression with the current pseudo-NMI approach. Quoting the numbers
from the SSE series[1]:

"Additionally, SSE event handling is faster that the
standard IRQ handling path with almost half executed instruction (700 vs
1590). Some complementary tests/perf measurements will be done."

Major infrastructure development is a one-time effort. Adding additional
SSE event sources will take minimal effort once the framework is in place.
The SSE extension is still in the draft stage and can accommodate any other
use case that you may have in mind. IMHO, it would be better to define one
performant mechanism to solve the high-priority interrupt use case.
[1] https://www.spinics.net/lists/kernel/msg4982224.html
> Please correct me if there is any misunderstanding.
>
> Thanks again and looking forward to your reply.
>
> >
> > > [...]
--
Regards,
Atish
On Fri, Oct 27, 2023 at 3:42 AM Atish Patra <[email protected]> wrote:
>
> On Thu, Oct 26, 2023 at 6:56 AM Xu Lu <[email protected]> wrote:
> >
> > On Thu, Oct 26, 2023 at 7:02 AM Atish Patra <[email protected]> wrote:
> > >
> > > On Mon, Oct 23, 2023 at 1:29 AM Xu Lu <[email protected]> wrote:
> > > >
> > > > [...]
> > >
> > > We had an evaluation of this approach earlier this year and concluded
> > > with the similar findings.
> > > The pseudo NMI is only useful for profiling use case which doesn't
> > > happen all the time in the system
> > > Adding the cost to the hotpath and sacrificing performance for
> > > everything for something for performance profiling
> > > is not desirable at all.
> >
> > Thanks a lot for your reply!
> >
> > First, please allow me to explain that CSR_IE Pseudo NMI actually can support
> > more than PMU profiling. For example, if we choose to make external major
> > interrupt as NMI and use ithreshold or eithreshold in AIA to control which minor
> > external interrupts can be sent to CPU, then we actually can support multiple
> > minor interrupts as NMI while keeping the other minor interrupts still
> > normal irqs.
> > This is what we are working on now.
> >
>
> What's the use case for external interrupts to behave as NMI ?
>
> Note: You can do the same thing with SSE as well if required. But I
> want to understand the
> use case first.
For example, some high precision event devices are designed as timer or
watchdog devices (please refer to [1][2]), which may not be per-CPU.
[1] https://lwn.net/Articles/924927/
[2] https://lore.kernel.org/lkml/[email protected]/T/
>
> > Also, if we take virtualization scenarios into account, CSR_IE Pseudo NMI can
> > support NMI passthrough to VM without too much effort from hypervisor, if only
> > corresponding interrupt can be delegated to VS-mode. I wonder if SSE supports
> > interrupt passthrough to VM?
> >
>
> Not technically interrupt pass through but hypervisor can invoke the
> guest SSE handler
> with the same mechanism. In fact, the original proposal specifies the
> async page fault
> as another use case for SSE.
>
> > >
> > > That's why, an SBI extension Supervisor Software Events (SSE) is under
> > > development.
> > > https://lists.riscv.org/g/tech-prs/message/515
> > >
> > > Instead of selective disabling of interrupts, SSE takes an orthogonal
> > > approach where M-mode would invoke a special trap
> > > handler. That special handler will invoke the driver specific handler
> > > which would be registered by the driver (i.e. perf driver)
> > > This covers both firmware first RAS and perf use cases.
> > >
> > > The above version of the specification is a bit out-of-date and the
> > > revised version will be sent soon.
> > > Clement(cc'd) has also done a PoC of SSE and perf driver using the SSE
> > > framework. This resulted in actual saving
> > > in performance for RAS/perf without sacrificing the normal performance.
> > >
> > > Clement is planning to send the series soon with more details.
> >
> > The SSE extension you mentioned is a brilliant design and does solve a lot of
> > problems!
> >
> > We have considered implementing NMI via SBI calls before. The main problem
> > is that if a driver using NMI needs to cooperate with SBI code, extra
> > coupling will
> > be introduced as the driver vendor and firmware vendor may not be the same one.
> > We think perhaps it is better to keep SBI code as simple and stable as possible.
> >
>
> Yes. However, we also gain significant performance while we have a 2%
> regression with
> current pseudo-NMI approach. Quoting the numbers from SSE series[1]:
>
> "Additionally, SSE event handling is faster that the
> standard IRQ handling path with almost half executed instruction (700 vs
> 1590). Some complementary tests/perf measurements will be done."
I think there are maybe two more issues to be considered.
1) The instruction count may increase as the number of supported event
ids grows. More instructions will be introduced to maintain the mapping
between event id and handler_context. Besides, some security check is
needed to ensure that the physical address passed by S-mode software
actually belongs to it (for example, the address may belong to an
enclave).
2) I am wondering whether the control flow from user thread -> M-mode
-> S-mode -> M-mode -> user thread will sacrifice locality and cause
more cache misses.
Looking forward to your further measurements!
>
> Major infrastructure development is one time effort. Adding additional
> sources of SSE effort will be minimal once
> the framework is in place. The SSE extension is still in draft stage
> and can accomodate any other use case
> that you may have in mind. IMHO, it would better to define one
> performant mechanism to solve the high priority
> interrupt use case.
I am concerned that every time a new event id is added, both the SBI
implementation and the driver code need to be modified simultaneously,
which may increase coupling and complexity.
Regards,
Xu Lu.
>
> [1] https://www.spinics.net/lists/kernel/msg4982224.html
> [...]
On Thu, Oct 26 2023 at 21:56, Xu Lu wrote:
> On Thu, Oct 26, 2023 at 7:02 AM Atish Patra <[email protected]> wrote:
> First, please allow me to explain that CSR_IE Pseudo NMI actually can support
> more than PMU profiling. For example, if we choose to make external major
> interrupt as NMI and use ithreshold or eithreshold in AIA to control which minor
> external interrupts can be sent to CPU, then we actually can support multiple
> minor interrupts as NMI while keeping the other minor interrupts still
> normal irqs.
What is the use case for these NMIs? Anything other than profiling is not
really possible in NMI context at all.
Thanks,
tglx