From: Guo Ren <[email protected]>
The patches convert riscv to use the generic entry infrastructure from
kernel/entry/*. Additionally, they add independent per-CPU irq stacks
(IRQ_STACKS) to prevent kernel stack overflows and add generic-entry
based STACKLEAK support. They also bring some optimizations for entry.S
with new .macro helpers and merge ret_from_kernel_thread into
ret_from_fork.
Patches 1-2 are preparation for generic entry, patches 3-7 are the main
generic entry conversion, and patches 8-10 are separate-irq-stack
optimizations built on top of generic entry.
All tested with rv64, rv32, and rv64 + 32-bit rootfs; all tests passed.
You can directly try it with:
[1] https://github.com/guoren83/linux/tree/generic_entry_v10
Any reviews and tests are helpful.
v10:
- Rebase on palmer/for-next branch (20221208)
- Remove unrelated patches from the series (Suggested-by: Bjorn)
- Fixup typos.
v9:
https://lore.kernel.org/linux-riscv/[email protected]/
- Fixup NR_syscalls check (by Ben Hutchings)
- Add Tested-by: Jisheng Zhang
v8:
https://lore.kernel.org/linux-riscv/[email protected]/
- Rebase on palmer/for-next branch (20221102)
- Add save/restore_from_x5_to_x31 .macro (Jisheng Zhang)
- Consolidate ret_from_kernel_thread into ret_from_fork (Jisheng Zhang)
- Optimize __noinstr_section comment (Lai Jiangshan)
v7:
https://lore.kernel.org/linux-riscv/[email protected]/
- Fixup regs_irqs_disabled with SR_PIE
- Optimize stackleak_erase -> stackleak_erase_on_task_stack (Thx Mark
Rutland)
- Add BUG_ON(!irqs_disabled()) in trap handlers
- Using regs_irqs_disabled in __do_page_fault
- Remove unnecessary irq disable in ret_from_exception and add comment
v6:
https://lore.kernel.org/linux-riscv/[email protected]/
- Use THREAD_SIZE_ORDER for thread size adjustment in kconfig (Thx Arnd)
- Move call_on_stack to inline style (Thx Peter Zijlstra)
- Fixup fp chain broken (Thx Chen Zhongjin)
- Remove common entry modification, and fixup page_fault entry (Thx
Peter Zijlstra)
- Treat some traps as nmi entry (Thx Peter Zijlstra)
v5:
https://lore.kernel.org/linux-riscv/[email protected]/
- Add riscv own stackleak patch instead of generic entry modification
(by Mark Rutland)
- Add EXPERT dependency for THREAD_SIZE (by Arnd)
- Add EXPERT dependency for IRQ_STACK (by Sebastian, David Laight)
- Corrected __trap_section (by Peter Zijlstra)
- Add Tested-by (Yipeng Zou)
- Use CONFIG_SOFTIRQ_ON_OWN_STACK to replace "#ifndef CONFIG_PREEMPT_RT"
- Fixup systrace_enter compile error
- Fixup exit_to_user_mode_prepare preempt_disable warning
V4:
https://lore.kernel.org/linux-riscv/[email protected]/
- Fixup entry.S with "la" bug (by Conor.Dooley)
- Fixup missing noinstr bug (by Peter Zijlstra)
V3:
https://lore.kernel.org/linux-riscv/[email protected]/
- Fixup CONFIG_COMPAT=n compile error
- Add THREAD_SIZE_ORDER config
- Optimize elf_kexec.c warning fixup
- Add static to irq_stack_ptr definition
V2:
https://lore.kernel.org/linux-riscv/[email protected]/
- Fixup compile error by include "riscv: ptrace: Remove duplicate
operation"
- Fixup compile warning
Reported-by: kernel test robot <[email protected]>
- Add test repo link in cover letter
V1:
https://lore.kernel.org/linux-riscv/[email protected]/
Guo Ren (6):
riscv: ptrace: Remove duplicate operation
riscv: entry: Add noinstr to prevent instrumentation inserted
riscv: entry: Convert to generic entry
riscv: stack: Support HAVE_IRQ_EXIT_ON_IRQ_STACK
riscv: stack: Support HAVE_SOFTIRQ_ON_OWN_STACK
riscv: stack: Add config of thread stack size
Jisheng Zhang (3):
riscv: entry: Remove extra level wrappers of trace_hardirqs_{on,off}
riscv: entry: Consolidate ret_from_kernel_thread into ret_from_fork
riscv: entry: Consolidate general regs saving/restoring
Lai Jiangshan (1):
compiler_types.h: Add __noinstr_section() for noinstr
arch/riscv/Kconfig | 20 ++
arch/riscv/include/asm/asm.h | 63 +++++
arch/riscv/include/asm/csr.h | 1 -
arch/riscv/include/asm/entry-common.h | 8 +
arch/riscv/include/asm/ptrace.h | 10 +-
arch/riscv/include/asm/stacktrace.h | 5 +
arch/riscv/include/asm/syscall.h | 6 +
arch/riscv/include/asm/thread_info.h | 27 +--
arch/riscv/include/asm/vmap_stack.h | 28 +++
arch/riscv/kernel/Makefile | 2 -
arch/riscv/kernel/entry.S | 325 +++-----------------------
arch/riscv/kernel/irq.c | 110 +++++++++
arch/riscv/kernel/mcount-dyn.S | 56 +----
arch/riscv/kernel/process.c | 5 +-
arch/riscv/kernel/ptrace.c | 44 ----
arch/riscv/kernel/signal.c | 21 +-
arch/riscv/kernel/sys_riscv.c | 29 +++
arch/riscv/kernel/trace_irq.c | 27 ---
arch/riscv/kernel/trace_irq.h | 11 -
arch/riscv/kernel/traps.c | 74 ++++--
arch/riscv/mm/fault.c | 16 +-
include/linux/compiler_types.h | 15 +-
22 files changed, 402 insertions(+), 501 deletions(-)
create mode 100644 arch/riscv/include/asm/entry-common.h
create mode 100644 arch/riscv/include/asm/vmap_stack.h
delete mode 100644 arch/riscv/kernel/trace_irq.c
delete mode 100644 arch/riscv/kernel/trace_irq.h
--
2.36.1
From: Guo Ren <[email protected]>
0cac21b02ba5 ("riscv: use 16KB kernel stack on 64-bit") increased the
thread size unconditionally, but some scenarios, such as D1 with a small
memory footprint, would suffer from that. After independent irq stack
support, let's give users a choice to determine their custom stack size.
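For reference (not part of the patch): THREAD_SIZE is derived as
PAGE_SIZE << THREAD_SIZE_ORDER, so with 4 KiB pages the Kconfig defaults
below work out to:

  order 1 ->  8 KiB (default for 32BIT && !KASAN)
  order 2 -> 16 KiB (general default)
  order 3 -> 32 KiB (default for 64BIT && KASAN)
  order 4 -> 64 KiB (upper end of the allowed range)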
Link: https://lore.kernel.org/linux-riscv/[email protected]/
Suggested-by: Arnd Bergmann <[email protected]>
Tested-by: Jisheng Zhang <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/Kconfig | 10 ++++++++++
arch/riscv/include/asm/thread_info.h | 12 +-----------
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index bd4c4ae4cdc9..60202cd5c5ae 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -455,6 +455,16 @@ config IRQ_STACKS
Add independent irq & softirq stacks for percpu to prevent kernel stack
overflows. We may save some memory footprint by disabling IRQ_STACKS.
+config THREAD_SIZE_ORDER
+ int "Kernel stack size (in power-of-two numbers of page size)" if VMAP_STACK && EXPERT
+ range 0 4
+ default 1 if 32BIT && !KASAN
+ default 3 if 64BIT && KASAN
+ default 2
+ help
+ Specify the Pages of thread stack size (from 4KB to 64KB), which also
+ affects irq stack size, which is equal to thread stack size.
+
endmenu # "Platform type"
menu "Kernel features"
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 043da8ccc7e6..c970d41dc4c6 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -11,18 +11,8 @@
#include <asm/page.h>
#include <linux/const.h>
-#ifdef CONFIG_KASAN
-#define KASAN_STACK_ORDER 1
-#else
-#define KASAN_STACK_ORDER 0
-#endif
-
/* thread information allocation */
-#ifdef CONFIG_64BIT
-#define THREAD_SIZE_ORDER (2 + KASAN_STACK_ORDER)
-#else
-#define THREAD_SIZE_ORDER (1 + KASAN_STACK_ORDER)
-#endif
+#define THREAD_SIZE_ORDER CONFIG_THREAD_SIZE_ORDER
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
/*
--
2.36.1
From: Guo Ren <[email protected]>
Without noinstr the compiler is free to insert instrumentation (think
all the k*SAN, KCov, GCov, ftrace, etc.) which can call code we're not
yet ready to run this early in the entry path; for instance, it could
rely on RCU which isn't on yet, or expect lockdep state. (by peterz)
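For context (sketch only, not part of this patch): after the
__noinstr_section() change later in this series, the noinstr family in
include/linux/compiler_types.h looks roughly like this; see Lai
Jiangshan's patch for the exact definition:

  /* Section for code which can't be instrumented at all */
  #define __noinstr_section(section)                                  \
          noinline notrace __attribute((__section__(section)))        \
          __no_kcsan __no_sanitize_address __no_profile               \
          __no_sanitize_coverage

  #define noinstr __noinstr_section(".noinstr.text")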
Link: https://lore.kernel.org/linux-riscv/[email protected]/
Suggested-by: Peter Zijlstra <[email protected]>
Tested-by: Jisheng Zhang <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/kernel/traps.c | 4 ++--
arch/riscv/mm/fault.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index f3e96d60a2ff..f7fa973558bc 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -95,9 +95,9 @@ static void do_trap_error(struct pt_regs *regs, int signo, int code,
}
#if defined(CONFIG_XIP_KERNEL) && defined(CONFIG_RISCV_ALTERNATIVE)
-#define __trap_section __section(".xip.traps")
+#define __trap_section __noinstr_section(".xip.traps")
#else
-#define __trap_section
+#define __trap_section noinstr
#endif
#define DO_ERROR_INFO(name, signo, code, str) \
asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index d86f7cebd4a7..b26f68eac61c 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -204,7 +204,7 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
* This routine handles page faults. It determines the address and the
* problem, and then passes it off to one of the appropriate routines.
*/
-asmlinkage void do_page_fault(struct pt_regs *regs)
+asmlinkage void noinstr do_page_fault(struct pt_regs *regs)
{
struct task_struct *tsk;
struct vm_area_struct *vma;
--
2.36.1
From: Jisheng Zhang <[email protected]>
The ret_from_kernel_thread() behaves similarly with ret_from_fork(),
the only difference is whether call the fn(arg) or not, this can be
achieved by testing fn is NULL or not, I.E s0 is 0 or not. Many
architectures have done the same thing, it make entry.S more clean.
Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Guo Ren <[email protected]>
Tested-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/kernel/entry.S | 12 +++---------
arch/riscv/kernel/process.c | 5 ++---
2 files changed, 5 insertions(+), 12 deletions(-)
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 69097dfffdc9..e4a9140a5b99 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -132,7 +132,6 @@ END(handle_exception)
* caller list:
* - handle_exception
* - ret_from_fork
- * - ret_from_kernel_thread
*/
SYM_CODE_START_NOALIGN(ret_from_exception)
REG_L s0, PT_STATUS(sp)
@@ -323,20 +322,15 @@ END(handle_kernel_stack_overflow)
ENTRY(ret_from_fork)
call schedule_tail
- move a0, sp /* pt_regs */
- la ra, ret_from_exception
- tail syscall_exit_to_user_mode
-ENDPROC(ret_from_fork)
-
-ENTRY(ret_from_kernel_thread)
- call schedule_tail
+ beqz s0, 1f /* not from kernel thread */
/* Call fn(arg) */
move a0, s1
jalr s0
+1:
move a0, sp /* pt_regs */
la ra, ret_from_exception
tail syscall_exit_to_user_mode
-ENDPROC(ret_from_kernel_thread)
+ENDPROC(ret_from_fork)
/*
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index b0c63e8e867e..5108c76a14dd 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -34,7 +34,6 @@ EXPORT_SYMBOL(__stack_chk_guard);
#endif
extern asmlinkage void ret_from_fork(void);
-extern asmlinkage void ret_from_kernel_thread(void);
void arch_cpu_idle(void)
{
@@ -172,7 +171,6 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
/* Supervisor/Machine, irqs on: */
childregs->status = SR_PP | SR_PIE;
- p->thread.ra = (unsigned long)ret_from_kernel_thread;
p->thread.s[0] = (unsigned long)args->fn;
p->thread.s[1] = (unsigned long)args->fn_arg;
} else {
@@ -182,8 +180,9 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
if (clone_flags & CLONE_SETTLS)
childregs->tp = tls;
childregs->a0 = 0; /* Return value of fork() */
- p->thread.ra = (unsigned long)ret_from_fork;
+ p->thread.s[0] = 0;
}
+ p->thread.ra = (unsigned long)ret_from_fork;
p->thread.sp = (unsigned long)childregs; /* kernel sp */
return 0;
}
--
2.36.1
From: Jisheng Zhang <[email protected]>
Since riscv is converted to generic entry, there's no need for the
extra wrappers of trace_hardirqs_{on,off}.
Tested with llvm + irqsoff.
Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Guo Ren <[email protected]>
Tested-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/kernel/Makefile | 2 --
arch/riscv/kernel/trace_irq.c | 27 ---------------------------
arch/riscv/kernel/trace_irq.h | 11 -----------
3 files changed, 40 deletions(-)
delete mode 100644 arch/riscv/kernel/trace_irq.c
delete mode 100644 arch/riscv/kernel/trace_irq.h
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 4cf303a779ab..392fa6e35d4a 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -68,8 +68,6 @@ obj-$(CONFIG_CPU_PM) += suspend_entry.o suspend.o
obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o
-obj-$(CONFIG_TRACE_IRQFLAGS) += trace_irq.o
-
obj-$(CONFIG_PERF_EVENTS) += perf_callchain.o
obj-$(CONFIG_HAVE_PERF_REGS) += perf_regs.o
obj-$(CONFIG_RISCV_SBI) += sbi.o
diff --git a/arch/riscv/kernel/trace_irq.c b/arch/riscv/kernel/trace_irq.c
deleted file mode 100644
index 095ac976d7da..000000000000
--- a/arch/riscv/kernel/trace_irq.c
+++ /dev/null
@@ -1,27 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (C) 2022 Changbin Du <[email protected]>
- */
-
-#include <linux/irqflags.h>
-#include <linux/kprobes.h>
-#include "trace_irq.h"
-
-/*
- * trace_hardirqs_on/off require the caller to setup frame pointer properly.
- * Otherwise, CALLER_ADDR1 might trigger an pagging exception in kernel.
- * Here we add one extra level so they can be safely called by low
- * level entry code which $fp is used for other purpose.
- */
-
-void __trace_hardirqs_on(void)
-{
- trace_hardirqs_on();
-}
-NOKPROBE_SYMBOL(__trace_hardirqs_on);
-
-void __trace_hardirqs_off(void)
-{
- trace_hardirqs_off();
-}
-NOKPROBE_SYMBOL(__trace_hardirqs_off);
diff --git a/arch/riscv/kernel/trace_irq.h b/arch/riscv/kernel/trace_irq.h
deleted file mode 100644
index 99fe67377e5e..000000000000
--- a/arch/riscv/kernel/trace_irq.h
+++ /dev/null
@@ -1,11 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2022 Changbin Du <[email protected]>
- */
-#ifndef __TRACE_IRQ_H
-#define __TRACE_IRQ_H
-
-void __trace_hardirqs_on(void);
-void __trace_hardirqs_off(void);
-
-#endif /* __TRACE_IRQ_H */
--
2.36.1
From: Guo Ren <[email protected]>
This patch converts riscv to use the generic entry infrastructure from
kernel/entry/*. The generic entry makes maintainers' work easier and
codes more elegant. Here are the changes than before:
- More clear entry.S with handle_exception and ret_from_exception
- Get rid of complex custom signal implementation
- More readable syscall procedure
- Little modification on ret_from_fork & ret_from_kernel_thread
- Wrap with irqentry_enter/exit and syscall_enter/exit_from_user_mode
- Use the standard preemption code instead of custom
Suggested-by: Huacai Chen <[email protected]>
Tested-by: Yipeng Zou <[email protected]>
Tested-by: Jisheng Zhang <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Cc: Ben Hutchings <[email protected]>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/csr.h | 1 -
arch/riscv/include/asm/entry-common.h | 8 +
arch/riscv/include/asm/ptrace.h | 10 +-
arch/riscv/include/asm/stacktrace.h | 5 +
arch/riscv/include/asm/syscall.h | 6 +
arch/riscv/include/asm/thread_info.h | 13 +-
arch/riscv/kernel/entry.S | 237 ++++----------------------
arch/riscv/kernel/irq.c | 15 ++
arch/riscv/kernel/ptrace.c | 43 -----
arch/riscv/kernel/signal.c | 21 +--
arch/riscv/kernel/sys_riscv.c | 29 ++++
arch/riscv/kernel/traps.c | 70 ++++++--
arch/riscv/mm/fault.c | 16 +-
14 files changed, 175 insertions(+), 300 deletions(-)
create mode 100644 arch/riscv/include/asm/entry-common.h
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index ef8d66de5f38..518e8523d41d 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -57,6 +57,7 @@ config RISCV
select GENERIC_ATOMIC64 if !64BIT
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_EARLY_IOREMAP
+ select GENERIC_ENTRY
select GENERIC_GETTIMEOFDAY if HAVE_GENERIC_VDSO
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IOREMAP if MMU
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 0e571f6483d9..7c2b8cdb7b77 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -40,7 +40,6 @@
#define SR_UXL _AC(0x300000000, UL) /* XLEN mask for U-mode */
#define SR_UXL_32 _AC(0x100000000, UL) /* XLEN = 32 for U-mode */
#define SR_UXL_64 _AC(0x200000000, UL) /* XLEN = 64 for U-mode */
-#define SR_UXL_SHIFT 32
#endif
/* SATP flags */
diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h
new file mode 100644
index 000000000000..1636ac2af28e
--- /dev/null
+++ b/arch/riscv/include/asm/entry-common.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_ENTRY_COMMON_H
+#define _ASM_RISCV_ENTRY_COMMON_H
+
+#include <asm/stacktrace.h>
+
+#endif /* _ASM_RISCV_ENTRY_COMMON_H */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
index 6ecd461129d2..b5b0adcc85c1 100644
--- a/arch/riscv/include/asm/ptrace.h
+++ b/arch/riscv/include/asm/ptrace.h
@@ -53,6 +53,9 @@ struct pt_regs {
unsigned long orig_a0;
};
+#define PTRACE_SYSEMU 0x1f
+#define PTRACE_SYSEMU_SINGLESTEP 0x20
+
#ifdef CONFIG_64BIT
#define REG_FMT "%016lx"
#else
@@ -121,8 +124,6 @@ extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
unsigned long frame_pointer);
-int do_syscall_trace_enter(struct pt_regs *regs);
-void do_syscall_trace_exit(struct pt_regs *regs);
/**
* regs_get_register() - get register value from its offset
@@ -172,6 +173,11 @@ static inline unsigned long regs_get_kernel_argument(struct pt_regs *regs,
return 0;
}
+static inline int regs_irqs_disabled(struct pt_regs *regs)
+{
+ return !(regs->status & SR_PIE);
+}
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/asm/stacktrace.h b/arch/riscv/include/asm/stacktrace.h
index 3450c1912afd..f7e8ef2418b9 100644
--- a/arch/riscv/include/asm/stacktrace.h
+++ b/arch/riscv/include/asm/stacktrace.h
@@ -16,4 +16,9 @@ extern void notrace walk_stackframe(struct task_struct *task, struct pt_regs *re
extern void dump_backtrace(struct pt_regs *regs, struct task_struct *task,
const char *loglvl);
+static inline bool on_thread_stack(void)
+{
+ return !(((unsigned long)(current->stack) ^ current_stack_pointer) & ~(THREAD_SIZE - 1));
+}
+
#endif /* _ASM_RISCV_STACKTRACE_H */
diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
index 384a63b86420..6c573f18030b 100644
--- a/arch/riscv/include/asm/syscall.h
+++ b/arch/riscv/include/asm/syscall.h
@@ -74,5 +74,11 @@ static inline int syscall_get_arch(struct task_struct *task)
#endif
}
+static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+{
+ return false;
+}
+
asmlinkage long sys_riscv_flush_icache(uintptr_t, uintptr_t, uintptr_t);
+asmlinkage void do_sys_ecall_u(struct pt_regs *regs);
#endif /* _ASM_RISCV_SYSCALL_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 67322f878e0d..7de4fb96f0b5 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -66,6 +66,7 @@ struct thread_info {
long kernel_sp; /* Kernel stack pointer */
long user_sp; /* User stack pointer */
int cpu;
+ unsigned long syscall_work; /* SYSCALL_WORK_ flags */
};
/*
@@ -88,26 +89,18 @@ struct thread_info {
* - pending work-to-be-done flags are in lowest half-word
* - other flags in upper half-word(s)
*/
-#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
#define TIF_NOTIFY_RESUME 1 /* callback before returning to user */
#define TIF_SIGPENDING 2 /* signal pending */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_RESTORE_SIGMASK 4 /* restore signal mask in do_signal() */
#define TIF_MEMDIE 5 /* is terminating due to OOM killer */
-#define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
-#define TIF_SYSCALL_AUDIT 7 /* syscall auditing */
-#define TIF_SECCOMP 8 /* syscall secure computing */
#define TIF_NOTIFY_SIGNAL 9 /* signal notifications exist */
#define TIF_UPROBE 10 /* uprobe breakpoint or singlestep */
#define TIF_32BIT 11 /* compat-mode 32bit process */
-#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
-#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
-#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
-#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
#define _TIF_UPROBE (1 << TIF_UPROBE)
@@ -115,8 +108,4 @@ struct thread_info {
(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | \
_TIF_NOTIFY_SIGNAL | _TIF_UPROBE)
-#define _TIF_SYSCALL_WORK \
- (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT | \
- _TIF_SECCOMP)
-
#endif /* _ASM_RISCV_THREAD_INFO_H */
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index da44fe2d0d82..69097dfffdc9 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -14,10 +14,6 @@
#include <asm/asm-offsets.h>
#include <asm/errata_list.h>
-#if !IS_ENABLED(CONFIG_PREEMPTION)
-.set resume_kernel, restore_all
-#endif
-
ENTRY(handle_exception)
/*
* If coming from userspace, preserve the user thread pointer and load
@@ -106,19 +102,8 @@ _save_context:
.option norelax
la gp, __global_pointer$
.option pop
-
-#ifdef CONFIG_TRACE_IRQFLAGS
- call __trace_hardirqs_off
-#endif
-
-#ifdef CONFIG_CONTEXT_TRACKING_USER
- /* If previous state is in user mode, call user_exit_callable(). */
- li a0, SR_PP
- and a0, s1, a0
- bnez a0, skip_context_tracking
- call user_exit_callable
-skip_context_tracking:
-#endif
+ move a0, sp /* pt_regs */
+ la ra, ret_from_exception
/*
* MSB of cause differentiates between
@@ -126,134 +111,32 @@ skip_context_tracking:
*/
bge s4, zero, 1f
- la ra, ret_from_exception
-
/* Handle interrupts */
- move a0, sp /* pt_regs */
- la a1, generic_handle_arch_irq
- jr a1
+ tail do_riscv_irq
1:
- /*
- * Exceptions run with interrupts enabled or disabled depending on the
- * state of SR_PIE in m/sstatus.
- */
- andi t0, s1, SR_PIE
- beqz t0, 1f
- /* kprobes, entered via ebreak, must have interrupts disabled. */
- li t0, EXC_BREAKPOINT
- beq s4, t0, 1f
-#ifdef CONFIG_TRACE_IRQFLAGS
- call __trace_hardirqs_on
-#endif
- csrs CSR_STATUS, SR_IE
-
-1:
- la ra, ret_from_exception
- /* Handle syscalls */
- li t0, EXC_SYSCALL
- beq s4, t0, handle_syscall
-
/* Handle other exceptions */
slli t0, s4, RISCV_LGPTR
la t1, excp_vect_table
la t2, excp_vect_table_end
- move a0, sp /* pt_regs */
add t0, t1, t0
/* Check if exception code lies within bounds */
- bgeu t0, t2, 1f
+ bgeu t0, t2, 2f
REG_L t0, 0(t0)
jr t0
-1:
- tail do_trap_unknown
-
-handle_syscall:
-#ifdef CONFIG_RISCV_M_MODE
- /*
- * When running is M-Mode (no MMU config), MPIE does not get set.
- * As a result, we need to force enable interrupts here because
- * handle_exception did not do set SR_IE as it always sees SR_PIE
- * being cleared.
- */
- csrs CSR_STATUS, SR_IE
-#endif
-#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING_USER)
- /* Recover a0 - a7 for system calls */
- REG_L a0, PT_A0(sp)
- REG_L a1, PT_A1(sp)
- REG_L a2, PT_A2(sp)
- REG_L a3, PT_A3(sp)
- REG_L a4, PT_A4(sp)
- REG_L a5, PT_A5(sp)
- REG_L a6, PT_A6(sp)
- REG_L a7, PT_A7(sp)
-#endif
- /* save the initial A0 value (needed in signal handlers) */
- REG_S a0, PT_ORIG_A0(sp)
- /*
- * Advance SEPC to avoid executing the original
- * scall instruction on sret
- */
- addi s2, s2, 0x4
- REG_S s2, PT_EPC(sp)
- /* Trace syscalls, but only if requested by the user. */
- REG_L t0, TASK_TI_FLAGS(tp)
- andi t0, t0, _TIF_SYSCALL_WORK
- bnez t0, handle_syscall_trace_enter
-check_syscall_nr:
- /* Check to make sure we don't jump to a bogus syscall number. */
- li t0, __NR_syscalls
- la s0, sys_ni_syscall
- /*
- * Syscall number held in a7.
- * If syscall number is above allowed value, redirect to ni_syscall.
- */
- bgeu a7, t0, 3f
-#ifdef CONFIG_COMPAT
- REG_L s0, PT_STATUS(sp)
- srli s0, s0, SR_UXL_SHIFT
- andi s0, s0, (SR_UXL >> SR_UXL_SHIFT)
- li t0, (SR_UXL_32 >> SR_UXL_SHIFT)
- sub t0, s0, t0
- bnez t0, 1f
-
- /* Call compat_syscall */
- la s0, compat_sys_call_table
- j 2f
-1:
-#endif
- /* Call syscall */
- la s0, sys_call_table
2:
- slli t0, a7, RISCV_LGPTR
- add s0, s0, t0
- REG_L s0, 0(s0)
-3:
- jalr s0
-
-ret_from_syscall:
- /* Set user a0 to kernel a0 */
- REG_S a0, PT_A0(sp)
- /*
- * We didn't execute the actual syscall.
- * Seccomp already set return value for the current task pt_regs.
- * (If it was configured with SECCOMP_RET_ERRNO/TRACE)
- */
-ret_from_syscall_rejected:
-#ifdef CONFIG_DEBUG_RSEQ
- move a0, sp
- call rseq_syscall
-#endif
- /* Trace syscalls, but only if requested by the user. */
- REG_L t0, TASK_TI_FLAGS(tp)
- andi t0, t0, _TIF_SYSCALL_WORK
- bnez t0, handle_syscall_trace_exit
+ tail do_trap_unknown
+END(handle_exception)
+/*
+ * The ret_from_exception must be called with interrupt disabled. Here is the
+ * caller list:
+ * - handle_exception
+ * - ret_from_fork
+ * - ret_from_kernel_thread
+ */
SYM_CODE_START_NOALIGN(ret_from_exception)
REG_L s0, PT_STATUS(sp)
- csrc CSR_STATUS, SR_IE
-#ifdef CONFIG_TRACE_IRQFLAGS
- call __trace_hardirqs_off
-#endif
+
#ifdef CONFIG_RISCV_M_MODE
/* the MPP value is too large to be used as an immediate arg for addi */
li t0, SR_MPP
@@ -261,18 +144,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
#else
andi s0, s0, SR_SPP
#endif
- bnez s0, resume_kernel
-SYM_CODE_END(ret_from_exception)
-
-resume_userspace:
- /* Interrupts must be disabled here so flags are checked atomically */
- REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
- andi s1, s0, _TIF_WORK_MASK
- bnez s1, work_pending
-
-#ifdef CONFIG_CONTEXT_TRACKING_USER
- call user_enter_callable
-#endif
+ bnez s0, 1f
/* Save unwound kernel stack pointer in thread_info */
addi s0, sp, PT_SIZE_ON_STACK
@@ -283,19 +155,7 @@ resume_userspace:
* structures again.
*/
csrw CSR_SCRATCH, tp
-
-restore_all:
-#ifdef CONFIG_TRACE_IRQFLAGS
- REG_L s1, PT_STATUS(sp)
- andi t0, s1, SR_PIE
- beqz t0, 1f
- call __trace_hardirqs_on
- j 2f
1:
- call __trace_hardirqs_off
-2:
-#endif
- REG_L a0, PT_STATUS(sp)
/*
* The current load reservation is effectively part of the processor's
* state, in the sense that load reservations cannot be shared between
@@ -316,9 +176,11 @@ restore_all:
REG_L a2, PT_EPC(sp)
REG_SC x0, a2, PT_EPC(sp)
- csrw CSR_STATUS, a0
csrw CSR_EPC, a2
+ REG_L a0, PT_STATUS(sp)
+ csrw CSR_STATUS, a0
+
REG_L x1, PT_RA(sp)
REG_L x3, PT_GP(sp)
REG_L x4, PT_TP(sp)
@@ -357,54 +219,10 @@ restore_all:
#else
sret
#endif
-
-#if IS_ENABLED(CONFIG_PREEMPTION)
-resume_kernel:
- REG_L s0, TASK_TI_PREEMPT_COUNT(tp)
- bnez s0, restore_all
- REG_L s0, TASK_TI_FLAGS(tp)
- andi s0, s0, _TIF_NEED_RESCHED
- beqz s0, restore_all
- call preempt_schedule_irq
- j restore_all
-#endif
-
-work_pending:
- /* Enter slow path for supplementary processing */
- la ra, ret_from_exception
- andi s1, s0, _TIF_NEED_RESCHED
- bnez s1, work_resched
-work_notifysig:
- /* Handle pending signals and notify-resume requests */
- csrs CSR_STATUS, SR_IE /* Enable interrupts for do_notify_resume() */
- move a0, sp /* pt_regs */
- move a1, s0 /* current_thread_info->flags */
- tail do_notify_resume
-work_resched:
- tail schedule
-
-/* Slow paths for ptrace. */
-handle_syscall_trace_enter:
- move a0, sp
- call do_syscall_trace_enter
- move t0, a0
- REG_L a0, PT_A0(sp)
- REG_L a1, PT_A1(sp)
- REG_L a2, PT_A2(sp)
- REG_L a3, PT_A3(sp)
- REG_L a4, PT_A4(sp)
- REG_L a5, PT_A5(sp)
- REG_L a6, PT_A6(sp)
- REG_L a7, PT_A7(sp)
- bnez t0, ret_from_syscall_rejected
- j check_syscall_nr
-handle_syscall_trace_exit:
- move a0, sp
- call do_syscall_trace_exit
- j ret_from_exception
+SYM_CODE_END(ret_from_exception)
#ifdef CONFIG_VMAP_STACK
-handle_kernel_stack_overflow:
+ENTRY(handle_kernel_stack_overflow)
la sp, shadow_stack
addi sp, sp, SHADOW_OVERFLOW_STACK_SIZE
@@ -500,21 +318,24 @@ restore_caller_reg:
REG_S s5, PT_TP(sp)
move a0, sp
tail handle_bad_stack
+END(handle_kernel_stack_overflow)
#endif
-END(handle_exception)
-
ENTRY(ret_from_fork)
+ call schedule_tail
+ move a0, sp /* pt_regs */
la ra, ret_from_exception
- tail schedule_tail
+ tail syscall_exit_to_user_mode
ENDPROC(ret_from_fork)
ENTRY(ret_from_kernel_thread)
call schedule_tail
/* Call fn(arg) */
- la ra, ret_from_exception
move a0, s1
- jr s0
+ jalr s0
+ move a0, sp /* pt_regs */
+ la ra, ret_from_exception
+ tail syscall_exit_to_user_mode
ENDPROC(ret_from_kernel_thread)
@@ -583,7 +404,7 @@ ENTRY(excp_vect_table)
RISCV_PTR do_trap_load_fault
RISCV_PTR do_trap_store_misaligned
RISCV_PTR do_trap_store_fault
- RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+ RISCV_PTR do_sys_ecall_u /* system call */
RISCV_PTR do_trap_ecall_s
RISCV_PTR do_trap_unknown
RISCV_PTR do_trap_ecall_m
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
index 7207fa08d78f..24c2e1bd756a 100644
--- a/arch/riscv/kernel/irq.c
+++ b/arch/riscv/kernel/irq.c
@@ -5,6 +5,7 @@
* Copyright (C) 2018 Christoph Hellwig
*/
+#include <linux/entry-common.h>
#include <linux/interrupt.h>
#include <linux/irqchip.h>
#include <linux/seq_file.h>
@@ -22,3 +23,17 @@ void __init init_IRQ(void)
if (!handle_arch_irq)
panic("No interrupt controller found.");
}
+
+asmlinkage void noinstr do_riscv_irq(struct pt_regs *regs)
+{
+ struct pt_regs *old_regs;
+ irqentry_state_t state = irqentry_enter(regs);
+
+ irq_enter_rcu();
+ old_regs = set_irq_regs(regs);
+ handle_arch_irq(regs);
+ set_irq_regs(old_regs);
+ irq_exit_rcu();
+
+ irqentry_exit(regs, state);
+}
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 44f4b1ca315d..23c48b14a0e7 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -19,9 +19,6 @@
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
-#define CREATE_TRACE_POINTS
-#include <trace/events/syscalls.h>
-
enum riscv_regset {
REGSET_X,
#ifdef CONFIG_FPU
@@ -228,46 +225,6 @@ long arch_ptrace(struct task_struct *child, long request,
return ret;
}
-/*
- * Allows PTRACE_SYSCALL to work. These are called from entry.S in
- * {handle,ret_from}_syscall.
- */
-__visible int do_syscall_trace_enter(struct pt_regs *regs)
-{
- if (test_thread_flag(TIF_SYSCALL_TRACE))
- if (ptrace_report_syscall_entry(regs))
- return -1;
-
- /*
- * Do the secure computing after ptrace; failures should be fast.
- * If this fails we might have return value in a0 from seccomp
- * (via SECCOMP_RET_ERRNO/TRACE).
- */
- if (secure_computing() == -1)
- return -1;
-
-#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
- if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
- trace_sys_enter(regs, syscall_get_nr(current, regs));
-#endif
-
- audit_syscall_entry(regs->a7, regs->a0, regs->a1, regs->a2, regs->a3);
- return 0;
-}
-
-__visible void do_syscall_trace_exit(struct pt_regs *regs)
-{
- audit_syscall_exit(regs);
-
- if (test_thread_flag(TIF_SYSCALL_TRACE))
- ptrace_report_syscall_exit(regs, 0);
-
-#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
- if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
- trace_sys_exit(regs, regs_return_value(regs));
-#endif
-}
-
#ifdef CONFIG_COMPAT
static int compat_riscv_gpr_get(struct task_struct *target,
const struct user_regset *regset,
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 5c591123c440..2e365084417e 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -12,6 +12,7 @@
#include <linux/syscalls.h>
#include <linux/resume_user_mode.h>
#include <linux/linkage.h>
+#include <linux/entry-common.h>
#include <asm/ucontext.h>
#include <asm/vdso.h>
@@ -274,7 +275,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
signal_setup_done(ret, ksig, 0);
}
-static void do_signal(struct pt_regs *regs)
+void arch_do_signal_or_restart(struct pt_regs *regs)
{
struct ksignal ksig;
@@ -311,21 +312,3 @@ static void do_signal(struct pt_regs *regs)
*/
restore_saved_sigmask();
}
-
-/*
- * notification of userspace execution resumption
- * - triggered by the _TIF_WORK_MASK flags
- */
-asmlinkage __visible void do_notify_resume(struct pt_regs *regs,
- unsigned long thread_info_flags)
-{
- if (thread_info_flags & _TIF_UPROBE)
- uprobe_notify_resume(regs);
-
- /* Handle pending signal delivery */
- if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
- do_signal(regs);
-
- if (thread_info_flags & _TIF_NOTIFY_RESUME)
- resume_user_mode_work(regs);
-}
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
index 5d3f2fbeb33c..459e24ed61fa 100644
--- a/arch/riscv/kernel/sys_riscv.c
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -5,8 +5,10 @@
* Copyright (C) 2017 SiFive
*/
+#include <linux/entry-common.h>
#include <linux/syscalls.h>
#include <asm/unistd.h>
+#include <asm/syscall.h>
#include <asm/cacheflush.h>
#include <asm-generic/mman-common.h>
@@ -69,3 +71,30 @@ SYSCALL_DEFINE3(riscv_flush_icache, uintptr_t, start, uintptr_t, end,
return 0;
}
+
+typedef long (*syscall_t)(ulong, ulong, ulong, ulong, ulong, ulong, ulong);
+
+asmlinkage void do_sys_ecall_u(struct pt_regs *regs)
+{
+ syscall_t syscall;
+ ulong nr = regs->a7;
+
+ regs->epc += 4;
+ regs->orig_a0 = regs->a0;
+ regs->a0 = -ENOSYS;
+
+ nr = syscall_enter_from_user_mode(regs, nr);
+
+ if (nr < NR_syscalls) {
+#ifdef CONFIG_COMPAT
+ if ((regs->status & SR_UXL) == SR_UXL_32)
+ syscall = compat_sys_call_table[nr];
+ else
+#endif
+ syscall = sys_call_table[nr];
+
+ regs->a0 = syscall(regs->orig_a0, regs->a1, regs->a2,
+ regs->a3, regs->a4, regs->a5, regs->a6);
+ }
+ syscall_exit_to_user_mode(regs);
+}
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index f7fa973558bc..ee9a0ef672e9 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -17,6 +17,7 @@
#include <linux/module.h>
#include <linux/irq.h>
#include <linux/kexec.h>
+#include <linux/entry-common.h>
#include <asm/asm-prototypes.h>
#include <asm/bug.h>
@@ -99,10 +100,19 @@ static void do_trap_error(struct pt_regs *regs, int signo, int code,
#else
#define __trap_section noinstr
#endif
-#define DO_ERROR_INFO(name, signo, code, str) \
-asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
-{ \
- do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
+#define DO_ERROR_INFO(name, signo, code, str) \
+asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
+{ \
+ if (user_mode(regs)) { \
+ irqentry_enter_from_user_mode(regs); \
+ do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
+ irqentry_exit_to_user_mode(regs); \
+ } else { \
+ irqentry_state_t irq_state = irqentry_nmi_enter(regs); \
+ do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
+ irqentry_nmi_exit(regs, irq_state); \
+ } \
+ BUG_ON(!irqs_disabled()); \
}
DO_ERROR_INFO(do_trap_unknown,
@@ -126,18 +136,38 @@ int handle_misaligned_store(struct pt_regs *regs);
asmlinkage void __trap_section do_trap_load_misaligned(struct pt_regs *regs)
{
- if (!handle_misaligned_load(regs))
- return;
- do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
- "Oops - load address misaligned");
+ if (user_mode(regs)) {
+ irqentry_enter_from_user_mode(regs);
+ if (handle_misaligned_load(regs))
+ do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+ "Oops - load address misaligned");
+ irqentry_exit_to_user_mode(regs);
+ } else {
+ irqentry_state_t irq_state = irqentry_nmi_enter(regs);
+ if (handle_misaligned_load(regs))
+ do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+ "Oops - load address misaligned");
+ irqentry_nmi_exit(regs, irq_state);
+ }
+ BUG_ON(!irqs_disabled());
}
asmlinkage void __trap_section do_trap_store_misaligned(struct pt_regs *regs)
{
- if (!handle_misaligned_store(regs))
- return;
- do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
- "Oops - store (or AMO) address misaligned");
+ if (user_mode(regs)) {
+ irqentry_enter_from_user_mode(regs);
+ if (handle_misaligned_store(regs))
+ do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+ "Oops - store (or AMO) address misaligned");
+ irqentry_exit_to_user_mode(regs);
+ } else {
+ irqentry_state_t irq_state = irqentry_nmi_enter(regs);
+ if (handle_misaligned_store(regs))
+ do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+ "Oops - store (or AMO) address misaligned");
+ irqentry_nmi_exit(regs, irq_state);
+ }
+ BUG_ON(!irqs_disabled());
}
#endif
DO_ERROR_INFO(do_trap_store_fault,
@@ -159,7 +189,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
return GET_INSN_LENGTH(insn);
}
-asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
+static void __do_trap_break(struct pt_regs *regs)
{
#ifdef CONFIG_KPROBES
if (kprobe_single_step_handler(regs))
@@ -189,6 +219,20 @@ asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
else
die(regs, "Kernel BUG");
}
+
+asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
+{
+ if (user_mode(regs)) {
+ irqentry_enter_from_user_mode(regs);
+ __do_trap_break(regs);
+ irqentry_exit_to_user_mode(regs);
+ } else {
+ irqentry_state_t irq_state = irqentry_nmi_enter(regs);
+ __do_trap_break(regs);
+ irqentry_nmi_exit(regs, irq_state);
+ }
+ BUG_ON(!irqs_disabled());
+}
NOKPROBE_SYMBOL(do_trap_break);
#ifdef CONFIG_GENERIC_BUG
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index b26f68eac61c..5cbea6c55a59 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -15,6 +15,7 @@
#include <linux/uaccess.h>
#include <linux/kprobes.h>
#include <linux/kfence.h>
+#include <linux/entry-common.h>
#include <asm/ptrace.h>
#include <asm/tlbflush.h>
@@ -204,7 +205,7 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
* This routine handles page faults. It determines the address and the
* problem, and then passes it off to one of the appropriate routines.
*/
-asmlinkage void noinstr do_page_fault(struct pt_regs *regs)
+static void __do_page_fault(struct pt_regs *regs)
{
struct task_struct *tsk;
struct vm_area_struct *vma;
@@ -251,7 +252,7 @@ asmlinkage void noinstr do_page_fault(struct pt_regs *regs)
}
#endif
/* Enable interrupts if they were enabled in the parent context. */
- if (likely(regs->status & SR_PIE))
+ if (!regs_irqs_disabled(regs))
local_irq_enable();
/*
@@ -351,4 +352,15 @@ asmlinkage void noinstr do_page_fault(struct pt_regs *regs)
}
return;
}
+
+asmlinkage void noinstr do_page_fault(struct pt_regs *regs)
+{
+ irqentry_state_t state = irqentry_enter(regs);
+
+ __do_page_fault(regs);
+
+ local_irq_disable();
+
+ irqentry_exit(regs, state);
+}
NOKPROBE_SYMBOL(do_page_fault);
--
2.36.1
From: Guo Ren <[email protected]>
Add the HAVE_SOFTIRQ_ON_OWN_STACK feature for the IRQ_STACKS config. The
irq and softirq use the same independent irq_stack of percpu by time
division multiplexing.
Tested-by: Jisheng Zhang <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/Kconfig | 7 ++++---
arch/riscv/kernel/irq.c | 33 +++++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0a9d4bdc0338..bd4c4ae4cdc9 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -447,12 +447,13 @@ config FPU
If you don't know what to do here, say Y.
config IRQ_STACKS
- bool "Independent irq stacks" if EXPERT
+ bool "Independent irq & softirq stacks" if EXPERT
default y
select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ select HAVE_SOFTIRQ_ON_OWN_STACK
help
- Add independent irq stacks for percpu to prevent kernel stack overflows.
- We may save some memory footprint by disabling IRQ_STACKS.
+ Add independent irq & softirq stacks for percpu to prevent kernel stack
+ overflows. We may save some memory footprint by disabling IRQ_STACKS.
endmenu # "Platform type"
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
index 5d77f692b198..a6406da34937 100644
--- a/arch/riscv/kernel/irq.c
+++ b/arch/riscv/kernel/irq.c
@@ -11,6 +11,7 @@
#include <linux/seq_file.h>
#include <asm/smp.h>
#include <asm/vmap_stack.h>
+#include <asm/softirq_stack.h>
#ifdef CONFIG_IRQ_STACKS
static DEFINE_PER_CPU(ulong *, irq_stack_ptr);
@@ -38,6 +39,38 @@ static void init_irq_stacks(void)
per_cpu(irq_stack_ptr, cpu) = per_cpu(irq_stack, cpu);
}
#endif /* CONFIG_VMAP_STACK */
+
+#ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
+void do_softirq_own_stack(void)
+{
+#ifdef CONFIG_IRQ_STACKS
+ if (on_thread_stack()) {
+ ulong *sp = per_cpu(irq_stack_ptr, smp_processor_id())
+ + IRQ_STACK_SIZE/sizeof(ulong);
+ __asm__ __volatile(
+ "addi sp, sp, -"RISCV_SZPTR "\n"
+ REG_S" ra, (sp) \n"
+ "addi sp, sp, -"RISCV_SZPTR "\n"
+ REG_S" s0, (sp) \n"
+ "addi s0, sp, 2*"RISCV_SZPTR "\n"
+ "move sp, %[sp] \n"
+ "call __do_softirq \n"
+ "addi sp, s0, -2*"RISCV_SZPTR"\n"
+ REG_L" s0, (sp) \n"
+ "addi sp, sp, "RISCV_SZPTR "\n"
+ REG_L" ra, (sp) \n"
+ "addi sp, sp, "RISCV_SZPTR "\n"
+ :
+ : [sp] "r" (sp)
+ : "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
+ "t0", "t1", "t2", "t3", "t4", "t5", "t6",
+ "memory");
+ } else
+#endif
+ __do_softirq();
+}
+#endif /* CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK */
+
#else
static void init_irq_stacks(void) {}
#endif /* CONFIG_IRQ_STACKS */
--
2.36.1
[email protected] writes:
> From: Jisheng Zhang <[email protected]>
>
> Since riscv is converted to generic entry, there's no need for the
> extra wrappers of trace_hardirqs_{on,off}.
>
> Tested with llvm + irqsoff.
What does this mean?
Björn
[email protected] writes:
The RISC-V entry.S is much more palatable after this patch! :-)
Some minor things...
> From: Guo Ren <[email protected]>
>
> This patch converts riscv to use the generic entry infrastructure from
> kernel/entry/*. The generic entry makes maintainers' work easier and
> codes more elegant. Here are the changes than before:
s/changes than before/changes/
> - More clear entry.S with handle_exception and ret_from_exception
> - Get rid of complex custom signal implementation
> - More readable syscall procedure
Maybe reword this a bit? It's a move from assembly to C (which is much
more readable!).
> - Little modification on ret_from_fork & ret_from_kernel_thread
What changes?
> - Wrap with irqentry_enter/exit and syscall_enter/exit_from_user_mode
> - Use the standard preemption code instead of custom
> Suggested-by: Huacai Chen <[email protected]>
> Tested-by: Yipeng Zou <[email protected]>
> Tested-by: Jisheng Zhang <[email protected]>
> Signed-off-by: Guo Ren <[email protected]>
> Signed-off-by: Guo Ren <[email protected]>
> Cc: Ben Hutchings <[email protected]>
> ---
> arch/riscv/Kconfig | 1 +
> arch/riscv/include/asm/csr.h | 1 -
> arch/riscv/include/asm/entry-common.h | 8 +
> arch/riscv/include/asm/ptrace.h | 10 +-
> arch/riscv/include/asm/stacktrace.h | 5 +
> arch/riscv/include/asm/syscall.h | 6 +
> arch/riscv/include/asm/thread_info.h | 13 +-
> arch/riscv/kernel/entry.S | 237 ++++----------------------
> arch/riscv/kernel/irq.c | 15 ++
> arch/riscv/kernel/ptrace.c | 43 -----
> arch/riscv/kernel/signal.c | 21 +--
> arch/riscv/kernel/sys_riscv.c | 29 ++++
> arch/riscv/kernel/traps.c | 70 ++++++--
> arch/riscv/mm/fault.c | 16 +-
> 14 files changed, 175 insertions(+), 300 deletions(-)
> create mode 100644 arch/riscv/include/asm/entry-common.h
[...]
> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index da44fe2d0d82..69097dfffdc9 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -14,10 +14,6 @@
> #include <asm/asm-offsets.h>
> #include <asm/errata_list.h>
>
> -#if !IS_ENABLED(CONFIG_PREEMPTION)
> -.set resume_kernel, restore_all
> -#endif
> -
> ENTRY(handle_exception)
> /*
> * If coming from userspace, preserve the user thread pointer and load
> @@ -106,19 +102,8 @@ _save_context:
> .option norelax
> la gp, __global_pointer$
> .option pop
> -
> -#ifdef CONFIG_TRACE_IRQFLAGS
> - call __trace_hardirqs_off
> -#endif
> -
> -#ifdef CONFIG_CONTEXT_TRACKING_USER
> - /* If previous state is in user mode, call user_exit_callable(). */
> - li a0, SR_PP
> - and a0, s1, a0
> - bnez a0, skip_context_tracking
> - call user_exit_callable
> -skip_context_tracking:
> -#endif
> + move a0, sp /* pt_regs */
> + la ra, ret_from_exception
Not for this series, but at some point it would be nice to get rid of
the "move" pseudoinsn in favor of "mv".
[...]
> diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> index f7fa973558bc..ee9a0ef672e9 100644
> --- a/arch/riscv/kernel/traps.c
> +++ b/arch/riscv/kernel/traps.c
> @@ -17,6 +17,7 @@
> #include <linux/module.h>
> #include <linux/irq.h>
> #include <linux/kexec.h>
> +#include <linux/entry-common.h>
>
> #include <asm/asm-prototypes.h>
> #include <asm/bug.h>
> @@ -99,10 +100,19 @@ static void do_trap_error(struct pt_regs *regs, int signo, int code,
> #else
> #define __trap_section noinstr
> #endif
> -#define DO_ERROR_INFO(name, signo, code, str) \
> -asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
> -{ \
> - do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
> +#define DO_ERROR_INFO(name, signo, code, str) \
> +asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
> +{ \
> + if (user_mode(regs)) { \
> + irqentry_enter_from_user_mode(regs); \
> + do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
> + irqentry_exit_to_user_mode(regs); \
> + } else { \
> + irqentry_state_t irq_state = irqentry_nmi_enter(regs); \
> + do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
> + irqentry_nmi_exit(regs, irq_state); \
> + } \
> + BUG_ON(!irqs_disabled()); \
> }
>
> DO_ERROR_INFO(do_trap_unknown,
> @@ -126,18 +136,38 @@ int handle_misaligned_store(struct pt_regs *regs);
>
> asmlinkage void __trap_section do_trap_load_misaligned(struct pt_regs *regs)
> {
> - if (!handle_misaligned_load(regs))
> - return;
> - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> - "Oops - load address misaligned");
> + if (user_mode(regs)) {
> + irqentry_enter_from_user_mode(regs);
> + if (handle_misaligned_load(regs))
> + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> + "Oops - load address misaligned");
> + irqentry_exit_to_user_mode(regs);
> + } else {
> + irqentry_state_t irq_state = irqentry_nmi_enter(regs);
Please add a newline.
> + if (handle_misaligned_load(regs))
> + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> + "Oops - load address misaligned");
> + irqentry_nmi_exit(regs, irq_state);
> + }
> + BUG_ON(!irqs_disabled());
> }
>
> asmlinkage void __trap_section do_trap_store_misaligned(struct pt_regs *regs)
> {
> - if (!handle_misaligned_store(regs))
> - return;
> - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> - "Oops - store (or AMO) address misaligned");
> + if (user_mode(regs)) {
> + irqentry_enter_from_user_mode(regs);
> + if (handle_misaligned_store(regs))
> + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> + "Oops - store (or AMO) address misaligned");
> + irqentry_exit_to_user_mode(regs);
> + } else {
> + irqentry_state_t irq_state = irqentry_nmi_enter(regs);
Please add a newline.
> + if (handle_misaligned_store(regs))
> + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> + "Oops - store (or AMO) address misaligned");
> + irqentry_nmi_exit(regs, irq_state);
> + }
> + BUG_ON(!irqs_disabled());
> }
> #endif
> DO_ERROR_INFO(do_trap_store_fault,
> @@ -159,7 +189,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
> return GET_INSN_LENGTH(insn);
> }
>
> -asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
> +static void __do_trap_break(struct pt_regs *regs)
> {
> #ifdef CONFIG_KPROBES
> if (kprobe_single_step_handler(regs))
> @@ -189,6 +219,20 @@ asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
> else
> die(regs, "Kernel BUG");
> }
> +
> +asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
> +{
> + if (user_mode(regs)) {
> + irqentry_enter_from_user_mode(regs);
> + __do_trap_break(regs);
> + irqentry_exit_to_user_mode(regs);
> + } else {
> + irqentry_state_t irq_state = irqentry_nmi_enter(regs);
Please add a newline.
Björn
[email protected] writes:
> From: Guo Ren <[email protected]>
>
> Add the HAVE_SOFTIRQ_ON_OWN_STACK feature for the IRQ_STACKS config. The
> irq and softirq use the same independent irq_stack of percpu by time
> division multiplexing.
>
> Tested-by: Jisheng Zhang <[email protected]>
> Signed-off-by: Guo Ren <[email protected]>
> Signed-off-by: Guo Ren <[email protected]>
> ---
> arch/riscv/Kconfig | 7 ++++---
> arch/riscv/kernel/irq.c | 33 +++++++++++++++++++++++++++++++++
> 2 files changed, 37 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 0a9d4bdc0338..bd4c4ae4cdc9 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -447,12 +447,13 @@ config FPU
> If you don't know what to do here, say Y.
>
> config IRQ_STACKS
> - bool "Independent irq stacks" if EXPERT
> + bool "Independent irq & softirq stacks" if EXPERT
> default y
> select HAVE_IRQ_EXIT_ON_IRQ_STACK
> + select HAVE_SOFTIRQ_ON_OWN_STACK
HAVE_IRQ_EXIT_ON_IRQ_STACK is used by softirq.c. Shouldn't that select
be introduced in this patch, instead of the previous one?
> help
> - Add independent irq stacks for percpu to prevent kernel stack overflows.
> - We may save some memory footprint by disabling IRQ_STACKS.
> + Add independent irq & softirq stacks for percpu to prevent kernel stack
> + overflows. We may save some memory footprint by disabling IRQ_STACKS.
Same comment from previous patch. Please use the same wording/config as
other archs.
> endmenu # "Platform type"
>
> diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
> index 5d77f692b198..a6406da34937 100644
> --- a/arch/riscv/kernel/irq.c
> +++ b/arch/riscv/kernel/irq.c
> @@ -11,6 +11,7 @@
> #include <linux/seq_file.h>
> #include <asm/smp.h>
> #include <asm/vmap_stack.h>
> +#include <asm/softirq_stack.h>
>
> #ifdef CONFIG_IRQ_STACKS
> static DEFINE_PER_CPU(ulong *, irq_stack_ptr);
> @@ -38,6 +39,38 @@ static void init_irq_stacks(void)
> per_cpu(irq_stack_ptr, cpu) = per_cpu(irq_stack, cpu);
> }
> #endif /* CONFIG_VMAP_STACK */
> +
> +#ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
> +void do_softirq_own_stack(void)
> +{
> +#ifdef CONFIG_IRQ_STACKS
> + if (on_thread_stack()) {
> + ulong *sp = per_cpu(irq_stack_ptr, smp_processor_id())
> + + IRQ_STACK_SIZE/sizeof(ulong);
> + __asm__ __volatile(
> + "addi sp, sp, -"RISCV_SZPTR "\n"
> + REG_S" ra, (sp) \n"
> + "addi sp, sp, -"RISCV_SZPTR "\n"
> + REG_S" s0, (sp) \n"
> + "addi s0, sp, 2*"RISCV_SZPTR "\n"
> + "move sp, %[sp] \n"
> + "call __do_softirq \n"
> + "addi sp, s0, -2*"RISCV_SZPTR"\n"
> + REG_L" s0, (sp) \n"
> + "addi sp, sp, "RISCV_SZPTR "\n"
> + REG_L" ra, (sp) \n"
> + "addi sp, sp, "RISCV_SZPTR "\n"
> + :
> + : [sp] "r" (sp)
> + : "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
> + "t0", "t1", "t2", "t3", "t4", "t5", "t6",
> + "memory");
Same as previous patch. Please avoid C&P and have a look at how
call_on_stack is done on x86.
Björn
[email protected] writes:
> From: Guo Ren <[email protected]>
>
> 0cac21b02ba5 ("riscv: use 16KB kernel stack on 64-bit") increased the
checkpatch complains here: Use "commit SHA...".
> thread size unconditionally, but some scenarios, such as D1 with a small
> memory footprint, would suffer from that. After independent irq stack
> support, let's give users a choice to determine their custom stack size.
...and again, my "why is this in the generic entry" series rant. :-)
Björn
[email protected] writes:
> From: Jisheng Zhang <[email protected]>
>
> The ret_from_kernel_thread() behaves similarly with ret_from_fork(),
> the only difference is whether call the fn(arg) or not, this can be
> achieved by testing fn is NULL or not, I.E s0 is 0 or not. Many
> architectures have done the same thing, it make entry.S more clean.
Nit: "it makes".
Björn
On Thu, Dec 8, 2022 at 6:12 PM Björn Töpel <[email protected]> wrote:
>
> [email protected] writes:
>
> > From: Jisheng Zhang <[email protected]>
> >
> > The ret_from_kernel_thread() behaves similarly with ret_from_fork(),
> > the only difference is whether call the fn(arg) or not, this can be
> > achieved by testing fn is NULL or not, I.E s0 is 0 or not. Many
> > architectures have done the same thing, it make entry.S more clean.
>
> Nit: "it makes".
Okay.
>
>
> Björn
--
Best Regards
Guo Ren
On Thu, Dec 8, 2022 at 6:11 PM Björn Töpel <[email protected]> wrote:
>
> [email protected] writes:
>
> > From: Jisheng Zhang <[email protected]>
> >
> > Since riscv is converted to generic entry, there's no need for the
> > extra wrappers of trace_hardirqs_{on,off}.
> >
> > Tested with llvm + irqsoff.
>
> What does this mean?
It's just a description of the test environment. The functionality is
covered by the generic entry; this patch only removes the now-unused code.
I will remove the "Tested with llvm + irqsoff." sentence; it's unnecessary.
>
>
> Björn
--
Best Regards
Guo Ren
On Thu, Dec 8, 2022 at 6:12 PM Björn Töpel <[email protected]> wrote:
>
> [email protected] writes:
>
> > From: Guo Ren <[email protected]>
> >
> > Add the HAVE_SOFTIRQ_ON_OWN_STACK feature for the IRQ_STACKS config. The
> > irq and softirq use the same independent irq_stack of percpu by time
> > division multiplexing.
> >
> > Tested-by: Jisheng Zhang <[email protected]>
> > Signed-off-by: Guo Ren <[email protected]>
> > Signed-off-by: Guo Ren <[email protected]>
> > ---
> > arch/riscv/Kconfig | 7 ++++---
> > arch/riscv/kernel/irq.c | 33 +++++++++++++++++++++++++++++++++
> > 2 files changed, 37 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index 0a9d4bdc0338..bd4c4ae4cdc9 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -447,12 +447,13 @@ config FPU
> > If you don't know what to do here, say Y.
> >
> > config IRQ_STACKS
> > - bool "Independent irq stacks" if EXPERT
> > + bool "Independent irq & softirq stacks" if EXPERT
> > default y
> > select HAVE_IRQ_EXIT_ON_IRQ_STACK
> > + select HAVE_SOFTIRQ_ON_OWN_STACK
>
> HAVE_IRQ_EXIT_ON_IRQ_STACK is used by softirq.c. Shouldn't that select
> be introduced in this patch, instead of the previous one?
This patch depends on the previous one. And the previous one could
work separately.
>
> > help
> > - Add independent irq stacks for percpu to prevent kernel stack overflows.
> > - We may save some memory footprint by disabling IRQ_STACKS.
> > + Add independent irq & softirq stacks for percpu to prevent kernel stack
> > + overflows. We may save some memory footprint by disabling IRQ_STACKS.
>
> Same comment from previous patch. Please use the same wording/config as
> other archs.
>
> > endmenu # "Platform type"
> >
> > diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
> > index 5d77f692b198..a6406da34937 100644
> > --- a/arch/riscv/kernel/irq.c
> > +++ b/arch/riscv/kernel/irq.c
> > @@ -11,6 +11,7 @@
> > #include <linux/seq_file.h>
> > #include <asm/smp.h>
> > #include <asm/vmap_stack.h>
> > +#include <asm/softirq_stack.h>
> >
> > #ifdef CONFIG_IRQ_STACKS
> > static DEFINE_PER_CPU(ulong *, irq_stack_ptr);
> > @@ -38,6 +39,38 @@ static void init_irq_stacks(void)
> > per_cpu(irq_stack_ptr, cpu) = per_cpu(irq_stack, cpu);
> > }
> > #endif /* CONFIG_VMAP_STACK */
> > +
> > +#ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
> > +void do_softirq_own_stack(void)
> > +{
> > +#ifdef CONFIG_IRQ_STACKS
> > + if (on_thread_stack()) {
> > + ulong *sp = per_cpu(irq_stack_ptr, smp_processor_id())
> > + + IRQ_STACK_SIZE/sizeof(ulong);
> > + __asm__ __volatile(
> > + "addi sp, sp, -"RISCV_SZPTR "\n"
> > + REG_S" ra, (sp) \n"
> > + "addi sp, sp, -"RISCV_SZPTR "\n"
> > + REG_S" s0, (sp) \n"
> > + "addi s0, sp, 2*"RISCV_SZPTR "\n"
> > + "move sp, %[sp] \n"
> > + "call __do_softirq \n"
> > + "addi sp, s0, -2*"RISCV_SZPTR"\n"
> > + REG_L" s0, (sp) \n"
> > + "addi sp, sp, "RISCV_SZPTR "\n"
> > + REG_L" ra, (sp) \n"
> > + "addi sp, sp, "RISCV_SZPTR "\n"
> > + :
> > + : [sp] "r" (sp)
> > + : "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
> > + "t0", "t1", "t2", "t3", "t4", "t5", "t6",
> > + "memory");
>
> Same as previous patch. Please avoid C&P and have a look at how
> call_on_stack is done on x86.
Okay.
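Something along these lines, perhaps? An untested sketch just to show
the direction (the macro name and layout are only an assumption; the
real helper needs the same care with the frame pointer and clobbers as
the open-coded version):

#define call_on_stack(to_sp, func)					\
	__asm__ __volatile__ (						\
	"addi	sp, sp, -2*"RISCV_SZPTR"	\n"			\
	REG_S"	ra, "RISCV_SZPTR"(sp)		\n"			\
	REG_S"	s0, 0(sp)			\n"			\
	"addi	s0, sp, 2*"RISCV_SZPTR"		\n"			\
	"move	sp, %[sp]			\n"			\
	"call	" #func "			\n"			\
	"addi	sp, s0, -2*"RISCV_SZPTR"	\n"			\
	REG_L"	s0, 0(sp)			\n"			\
	REG_L"	ra, "RISCV_SZPTR"(sp)		\n"			\
	"addi	sp, sp, 2*"RISCV_SZPTR"		\n"			\
	:								\
	: [sp] "r" (to_sp)						\
	: "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",		\
	  "t0", "t1", "t2", "t3", "t4", "t5", "t6", "memory")

Then do_softirq_own_stack() would just do call_on_stack(sp, __do_softirq);
the hardirq path would need a variant that also passes pt_regs in a0.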
>
>
> Björn
--
Best Regards
Guo Ren
On Thu, Dec 8, 2022 at 6:12 PM Björn Töpel <[email protected]> wrote:
>
> [email protected] writes:
>
> > From: Guo Ren <[email protected]>
> >
> > 0cac21b02ba5 ("riscv: use 16KB kernel stack on 64-bit") increased the
>
> checkpatch complains here: Use "commit SHA...".
Okay, I would check that.
>
> > thread size unconditionally, but some scenarios, such as D1 with a small
> > memory footprint, would suffer from that. After independent irq stack
> > support, let's give users a choice to determine their custom stack size.
>
> ...and again, my "why is this in the generic entry" series rant. :-)
I would remove it from the generic entry series.
>
>
> Björn
--
Best Regards
Guo Ren
Guo Ren <[email protected]> writes:
> On Thu, Dec 8, 2022 at 6:12 PM Björn Töpel <[email protected]> wrote:
>>
>> [email protected] writes:
>>
>> > From: Guo Ren <[email protected]>
>> >
>> > Add the HAVE_SOFTIRQ_ON_OWN_STACK feature for the IRQ_STACKS config. The
>> > irq and softirq share the same independent per-cpu irq_stack by time
>> > division multiplexing.
>> >
>> > Tested-by: Jisheng Zhang <[email protected]>
>> > Signed-off-by: Guo Ren <[email protected]>
>> > Signed-off-by: Guo Ren <[email protected]>
>> > ---
>> > arch/riscv/Kconfig | 7 ++++---
>> > arch/riscv/kernel/irq.c | 33 +++++++++++++++++++++++++++++++++
>> > 2 files changed, 37 insertions(+), 3 deletions(-)
>> >
>> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> > index 0a9d4bdc0338..bd4c4ae4cdc9 100644
>> > --- a/arch/riscv/Kconfig
>> > +++ b/arch/riscv/Kconfig
>> > @@ -447,12 +447,13 @@ config FPU
>> > If you don't know what to do here, say Y.
>> >
>> > config IRQ_STACKS
>> > - bool "Independent irq stacks" if EXPERT
>> > + bool "Independent irq & softirq stacks" if EXPERT
>> > default y
>> > select HAVE_IRQ_EXIT_ON_IRQ_STACK
>> > + select HAVE_SOFTIRQ_ON_OWN_STACK
>>
>> HAVE_IRQ_EXIT_ON_IRQ_STACK is used by softirq.c. Shouldn't that select be
>> introduced in this patch, instead of the previous one?
> This patch depends on the previous one. And the previous one could
> work separately.
Let me try to be clearer: IRQ_STACKS should be introduced in the
previous patch, which adds per-cpu stacks for hardirq. This patch adds
per-cpu stacks for softirq, and the softirq-related selects:

  select HAVE_IRQ_EXIT_ON_IRQ_STACK
  select HAVE_SOFTIRQ_ON_OWN_STACK

Hence, the "HAVE_IRQ_EXIT_ON_IRQ_STACK" select should be part of *this*
patch, not the previous one.
Or am I missing something?
Björn
On Thu, Dec 8, 2022 at 6:11 PM Björn Töpel <[email protected]> wrote:
>
> [email protected] writes:
>
> The RISC-V entry.S is much more palatable after this patch! :-)
>
> Some minor things...
>
> > From: Guo Ren <[email protected]>
> >
> > This patch converts riscv to use the generic entry infrastructure from
> > kernel/entry/*. The generic entry makes maintainers' work easier and
> > the code more elegant. Here are the changes than before:
>
> s/changes than before/changes/
Okay
>
> > - More clear entry.S with handle_exception and ret_from_exception
> > - Get rid of complex custom signal implementation
> > - More readable syscall procedure
>
> Maybe reword this a bit? It's a move from assembly to C (which, is much
> more readable!).
Okay.
>
> > - Little modification on ret_from_fork & ret_from_kernel_thread
>
> What changes?
ENTRY(ret_from_fork)
+ call schedule_tail
+ move a0, sp /* pt_regs */
la ra, ret_from_exception
- tail schedule_tail
+ tail syscall_exit_to_user_mode
ENDPROC(ret_from_fork)
ENTRY(ret_from_kernel_thread)
call schedule_tail
/* Call fn(arg) */
- la ra, ret_from_exception
move a0, s1
- jr s0
+ jalr s0
+ move a0, sp /* pt_regs */
+ la ra, ret_from_exception
+ tail syscall_exit_to_user_mode
ENDPROC(ret_from_kernel_thread)
>
> > - Wrap with irqentry_enter/exit and syscall_enter/exit_from_user_mode
> > - Use the standard preemption code instead of custom
>
> > Suggested-by: Huacai Chen <[email protected]>
> > Tested-by: Yipeng Zou <[email protected]>
> > Tested-by: Jisheng Zhang <[email protected]>
> > Signed-off-by: Guo Ren <[email protected]>
> > Signed-off-by: Guo Ren <[email protected]>
> > Cc: Ben Hutchings <[email protected]>
> > ---
> > arch/riscv/Kconfig | 1 +
> > arch/riscv/include/asm/csr.h | 1 -
> > arch/riscv/include/asm/entry-common.h | 8 +
> > arch/riscv/include/asm/ptrace.h | 10 +-
> > arch/riscv/include/asm/stacktrace.h | 5 +
> > arch/riscv/include/asm/syscall.h | 6 +
> > arch/riscv/include/asm/thread_info.h | 13 +-
> > arch/riscv/kernel/entry.S | 237 ++++----------------------
> > arch/riscv/kernel/irq.c | 15 ++
> > arch/riscv/kernel/ptrace.c | 43 -----
> > arch/riscv/kernel/signal.c | 21 +--
> > arch/riscv/kernel/sys_riscv.c | 29 ++++
> > arch/riscv/kernel/traps.c | 70 ++++++--
> > arch/riscv/mm/fault.c | 16 +-
> > 14 files changed, 175 insertions(+), 300 deletions(-)
> > create mode 100644 arch/riscv/include/asm/entry-common.h
>
> [...]
>
> > diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> > index da44fe2d0d82..69097dfffdc9 100644
> > --- a/arch/riscv/kernel/entry.S
> > +++ b/arch/riscv/kernel/entry.S
> > @@ -14,10 +14,6 @@
> > #include <asm/asm-offsets.h>
> > #include <asm/errata_list.h>
> >
> > -#if !IS_ENABLED(CONFIG_PREEMPTION)
> > -.set resume_kernel, restore_all
> > -#endif
> > -
> > ENTRY(handle_exception)
> > /*
> > * If coming from userspace, preserve the user thread pointer and load
> > @@ -106,19 +102,8 @@ _save_context:
> > .option norelax
> > la gp, __global_pointer$
> > .option pop
> > -
> > -#ifdef CONFIG_TRACE_IRQFLAGS
> > - call __trace_hardirqs_off
> > -#endif
> > -
> > -#ifdef CONFIG_CONTEXT_TRACKING_USER
> > - /* If previous state is in user mode, call user_exit_callable(). */
> > - li a0, SR_PP
> > - and a0, s1, a0
> > - bnez a0, skip_context_tracking
> > - call user_exit_callable
> > -skip_context_tracking:
> > -#endif
> > + move a0, sp /* pt_regs */
> > + la ra, ret_from_exception
>
> Not for this series, but at some point it would be nice to get rid of
> the "move" pseudoinsn in favor of "mv".
>
> [...]
>
> > diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> > index f7fa973558bc..ee9a0ef672e9 100644
> > --- a/arch/riscv/kernel/traps.c
> > +++ b/arch/riscv/kernel/traps.c
> > @@ -17,6 +17,7 @@
> > #include <linux/module.h>
> > #include <linux/irq.h>
> > #include <linux/kexec.h>
> > +#include <linux/entry-common.h>
> >
> > #include <asm/asm-prototypes.h>
> > #include <asm/bug.h>
> > @@ -99,10 +100,19 @@ static void do_trap_error(struct pt_regs *regs, int signo, int code,
> > #else
> > #define __trap_section noinstr
> > #endif
> > -#define DO_ERROR_INFO(name, signo, code, str) \
> > -asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
> > -{ \
> > - do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
> > +#define DO_ERROR_INFO(name, signo, code, str) \
> > +asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
> > +{ \
> > + if (user_mode(regs)) { \
> > + irqentry_enter_from_user_mode(regs); \
> > + do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
> > + irqentry_exit_to_user_mode(regs); \
> > + } else { \
> > + irqentry_state_t irq_state = irqentry_nmi_enter(regs); \
> > + do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
> > + irqentry_nmi_exit(regs, irq_state); \
> > + } \
> > + BUG_ON(!irqs_disabled()); \
> > }
> >
> > DO_ERROR_INFO(do_trap_unknown,
> > @@ -126,18 +136,38 @@ int handle_misaligned_store(struct pt_regs *regs);
> >
> > asmlinkage void __trap_section do_trap_load_misaligned(struct pt_regs *regs)
> > {
> > - if (!handle_misaligned_load(regs))
> > - return;
> > - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> > - "Oops - load address misaligned");
> > + if (user_mode(regs)) {
> > + irqentry_enter_from_user_mode(regs);
> > + if (handle_misaligned_load(regs))
> > + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> > + "Oops - load address misaligned");
> > + irqentry_exit_to_user_mode(regs);
> > + } else {
> > + irqentry_state_t irq_state = irqentry_nmi_enter(regs);
>
> Please add a newline.
okay
>
> > + if (handle_misaligned_load(regs))
> > + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> > + "Oops - load address misaligned");
> > + irqentry_nmi_exit(regs, irq_state);
> > + }
> > + BUG_ON(!irqs_disabled());
> > }
> >
> > asmlinkage void __trap_section do_trap_store_misaligned(struct pt_regs *regs)
> > {
> > - if (!handle_misaligned_store(regs))
> > - return;
> > - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> > - "Oops - store (or AMO) address misaligned");
> > + if (user_mode(regs)) {
> > + irqentry_enter_from_user_mode(regs);
> > + if (handle_misaligned_store(regs))
> > + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> > + "Oops - store (or AMO) address misaligned");
> > + irqentry_exit_to_user_mode(regs);
> > + } else {
> > + irqentry_state_t irq_state = irqentry_nmi_enter(regs);
>
> Please add a newline.
okay
>
> > + if (handle_misaligned_store(regs))
> > + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
> > + "Oops - store (or AMO) address misaligned");
> > + irqentry_nmi_exit(regs, irq_state);
> > + }
> > + BUG_ON(!irqs_disabled());
> > }
> > #endif
> > DO_ERROR_INFO(do_trap_store_fault,
> > @@ -159,7 +189,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
> > return GET_INSN_LENGTH(insn);
> > }
> >
> > -asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
> > +static void __do_trap_break(struct pt_regs *regs)
> > {
> > #ifdef CONFIG_KPROBES
> > if (kprobe_single_step_handler(regs))
> > @@ -189,6 +219,20 @@ asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
> > else
> > die(regs, "Kernel BUG");
> > }
> > +
> > +asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
> > +{
> > + if (user_mode(regs)) {
> > + irqentry_enter_from_user_mode(regs);
> > + __do_trap_break(regs);
> > + irqentry_exit_to_user_mode(regs);
> > + } else {
> > + irqentry_state_t irq_state = irqentry_nmi_enter(regs);
>
> Please add a newline.
okay
>
>
> Björn
--
Best Regards
Guo Ren
On Fri, Dec 9, 2022 at 3:50 PM Björn Töpel <[email protected]> wrote:
>
> Guo Ren <[email protected]> writes:
>
> > On Thu, Dec 8, 2022 at 6:12 PM Björn Töpel <[email protected]> wrote:
> >>
> >> [email protected] writes:
> >>
> >> > From: Guo Ren <[email protected]>
> >> >
> >> > Add the HAVE_SOFTIRQ_ON_OWN_STACK feature for the IRQ_STACKS config. The
> >> > irq and softirq share the same independent per-cpu irq_stack by time
> >> > division multiplexing.
> >> >
> >> > Tested-by: Jisheng Zhang <[email protected]>
> >> > Signed-off-by: Guo Ren <[email protected]>
> >> > Signed-off-by: Guo Ren <[email protected]>
> >> > ---
> >> > arch/riscv/Kconfig | 7 ++++---
> >> > arch/riscv/kernel/irq.c | 33 +++++++++++++++++++++++++++++++++
> >> > 2 files changed, 37 insertions(+), 3 deletions(-)
> >> >
> >> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> >> > index 0a9d4bdc0338..bd4c4ae4cdc9 100644
> >> > --- a/arch/riscv/Kconfig
> >> > +++ b/arch/riscv/Kconfig
> >> > @@ -447,12 +447,13 @@ config FPU
> >> > If you don't know what to do here, say Y.
> >> >
> >> > config IRQ_STACKS
> >> > - bool "Independent irq stacks" if EXPERT
> >> > + bool "Independent irq & softirq stacks" if EXPERT
> >> > default y
> >> > select HAVE_IRQ_EXIT_ON_IRQ_STACK
> >> > + select HAVE_SOFTIRQ_ON_OWN_STACK
> >>
> >> HAVE_IRQ_EXIT_ON_IRQ_STACK is used by softirq.c. Shouldn't that select be
> >> introduced in this patch, instead of the previous one?
> > This patch depends on the previous one. And the previous one could
> > work separately.
>
> Let me try to be clearer: IRQ_STACKS should be introduced in the
> previous patch, which adds per-cpu stacks for hardirq. This patch adds
> per-cpu stacks for softirq, and the softirq-related selects:
>
>   select HAVE_IRQ_EXIT_ON_IRQ_STACK
>   select HAVE_SOFTIRQ_ON_OWN_STACK
>
> Hence, the "HAVE_IRQ_EXIT_ON_IRQ_STACK" select should be part of *this*
> patch, not the previous one.
>
> Or am I missing something?
You are right, HAVE_IRQ_EXIT_ON_IRQ_STACK belongs to softirq:

static inline void invoke_softirq(void)
{
	...
	if (!force_irqthreads() || !__this_cpu_read(ksoftirqd)) {
#ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
		...
		__do_softirq();
#else
		...
		do_softirq_own_stack();
#endif
		...
	}
}
I will fix that in the next version.
>
>
> Björn
--
Best Regards
Guo Ren
Guo Ren <[email protected]> writes:
>> > - Little modification on ret_from_fork & ret_from_kernel_thread
>>
>> What changes?
> ENTRY(ret_from_fork)
> + call schedule_tail
> + move a0, sp /* pt_regs */
> la ra, ret_from_exception
> - tail schedule_tail
> + tail syscall_exit_to_user_mode
> ENDPROC(ret_from_fork)
>
> ENTRY(ret_from_kernel_thread)
> call schedule_tail
> /* Call fn(arg) */
> - la ra, ret_from_exception
> move a0, s1
> - jr s0
> + jalr s0
> + move a0, sp /* pt_regs */
> + la ra, ret_from_exception
> + tail syscall_exit_to_user_mode
> ENDPROC(ret_from_kernel_thread)
Thanks for clearing that up! It's more useful to have descriptive
text than just "these functions were changed". (The why instead of the what.)
Cheers,
Björn