From: Jisheng Zhang <[email protected]>
Similar to other architectures such as arm64 and x86, use offsets relative
to the exception table entry values rather than absolute addresses for both
the exception location and the fixup. And since arm64 and x86 recently
removed their anonymous out-of-line fixups, we want to achieve the same
result here.
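For reference, a minimal sketch of the two entry layouts (the absolute
form is the generic pattern being replaced; both type names here are only
illustrative):

/* absolute entries: two pointers, 16 bytes per entry on rv64 */
struct exception_table_entry_abs {
	unsigned long insn, fixup;
};

/* relative entries: two 32-bit offsets from the entry itself, 8 bytes */
struct exception_table_entry_rel {
	int insn, fixup;
};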
Patch 1 removes an unused macro.
Patch 2 consolidates the __ex_table construction; it is a worthwhile
cleanup even without the relative extable conversion.
Patch 3 switches to relative exception tables.
The remaining patches are inspired by the arm64 version; they remove
the anonymous out-of-line fixups for riscv.
Since v3:
- collect Reviewed-by tags for patch 2 and patch 3
- add patch1 to remove unused macro
- add patches to remove anonymous out-of-line fixups
Since v2:
- directly check R_RISCV_SUB32 in __ex_table instead of adding
addend_riscv_rela()
Since v1:
- fix build error for the NOMMU case, thanks to [email protected]
Jisheng Zhang (12):
riscv: remove unused __cmpxchg_user() macro
riscv: consolidate __ex_table construction
riscv: switch to relative exception tables
riscv: bpf: move rv_bpf_fixup_exception signature to extable.h
riscv: extable: make fixup_exception() return bool
riscv: extable: use `ex` for `exception_table_entry`
riscv: lib: uaccess: fold fixups into body
riscv: extable: consolidate definitions
riscv: extable: add `type` and `data` fields
riscv: add gpr-num.h
riscv: extable: add a dedicated uaccess handler
riscv: vmlinux.lds.S|vmlinux-xip.lds.S: remove `.fixup` section
arch/riscv/include/asm/Kbuild | 1 -
arch/riscv/include/asm/asm-extable.h | 65 +++++++++++
arch/riscv/include/asm/extable.h | 48 ++++++++
arch/riscv/include/asm/futex.h | 30 ++---
arch/riscv/include/asm/gpr-num.h | 77 +++++++++++++
arch/riscv/include/asm/uaccess.h | 162 ++++-----------------------
arch/riscv/kernel/vmlinux-xip.lds.S | 1 -
arch/riscv/kernel/vmlinux.lds.S | 3 +-
arch/riscv/lib/uaccess.S | 28 +++--
arch/riscv/mm/extable.c | 66 ++++++++---
arch/riscv/net/bpf_jit_comp64.c | 9 +-
scripts/mod/modpost.c | 15 +++
scripts/sorttable.c | 4 +-
13 files changed, 308 insertions(+), 201 deletions(-)
create mode 100644 arch/riscv/include/asm/asm-extable.h
create mode 100644 arch/riscv/include/asm/extable.h
create mode 100644 arch/riscv/include/asm/gpr-num.h
--
2.33.0
From: Jisheng Zhang <[email protected]>
The next patch will use gpr-num.h to pass register numbers from inline
assembly routines to the exception fixup handler.
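As a rough illustration (hypothetical code, not part of this patch), the
.L__gpr_num_<name> symbols let inline assembly turn a register name into
its architectural number at assembly time:

#include <asm/gpr-num.h>

static unsigned long a0_gpr_num(void)
{
	unsigned long num;

	/* __DEFINE_ASM_GPR_NUMS emits the .equ list into this asm block */
	asm (__DEFINE_ASM_GPR_NUMS
	     "	li	%0, .L__gpr_num_a0\n"
	     : "=r" (num));

	return num;	/* 10 */
}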
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/include/asm/gpr-num.h | 77 ++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)
create mode 100644 arch/riscv/include/asm/gpr-num.h
diff --git a/arch/riscv/include/asm/gpr-num.h b/arch/riscv/include/asm/gpr-num.h
new file mode 100644
index 000000000000..dfee2829fc7c
--- /dev/null
+++ b/arch/riscv/include/asm/gpr-num.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GPR_NUM_H
+#define __ASM_GPR_NUM_H
+
+#ifdef __ASSEMBLY__
+ .equ .L__gpr_num_zero, 0
+ .equ .L__gpr_num_ra, 1
+ .equ .L__gpr_num_sp, 2
+ .equ .L__gpr_num_gp, 3
+ .equ .L__gpr_num_tp, 4
+ .equ .L__gpr_num_t0, 5
+ .equ .L__gpr_num_t1, 6
+ .equ .L__gpr_num_t2, 7
+ .equ .L__gpr_num_s0, 8
+ .equ .L__gpr_num_s1, 9
+ .equ .L__gpr_num_a0, 10
+ .equ .L__gpr_num_a1, 11
+ .equ .L__gpr_num_a2, 12
+ .equ .L__gpr_num_a3, 13
+ .equ .L__gpr_num_a4, 14
+ .equ .L__gpr_num_a5, 15
+ .equ .L__gpr_num_a6, 16
+ .equ .L__gpr_num_a7, 17
+ .equ .L__gpr_num_s2, 18
+ .equ .L__gpr_num_s3, 19
+ .equ .L__gpr_num_s4, 20
+ .equ .L__gpr_num_s5, 21
+ .equ .L__gpr_num_s6, 22
+ .equ .L__gpr_num_s7, 23
+ .equ .L__gpr_num_s8, 24
+ .equ .L__gpr_num_s9, 25
+ .equ .L__gpr_num_s10, 26
+ .equ .L__gpr_num_s11, 27
+ .equ .L__gpr_num_t3, 28
+ .equ .L__gpr_num_t4, 29
+ .equ .L__gpr_num_t5, 30
+ .equ .L__gpr_num_t6, 31
+
+#else /* __ASSEMBLY__ */
+
+#define __DEFINE_ASM_GPR_NUMS \
+" .equ .L__gpr_num_zero, 0\n" \
+" .equ .L__gpr_num_ra, 1\n" \
+" .equ .L__gpr_num_sp, 2\n" \
+" .equ .L__gpr_num_gp, 3\n" \
+" .equ .L__gpr_num_tp, 4\n" \
+" .equ .L__gpr_num_t0, 5\n" \
+" .equ .L__gpr_num_t1, 6\n" \
+" .equ .L__gpr_num_t2, 7\n" \
+" .equ .L__gpr_num_s0, 8\n" \
+" .equ .L__gpr_num_s1, 9\n" \
+" .equ .L__gpr_num_a0, 10\n" \
+" .equ .L__gpr_num_a1, 11\n" \
+" .equ .L__gpr_num_a2, 12\n" \
+" .equ .L__gpr_num_a3, 13\n" \
+" .equ .L__gpr_num_a4, 14\n" \
+" .equ .L__gpr_num_a5, 15\n" \
+" .equ .L__gpr_num_a6, 16\n" \
+" .equ .L__gpr_num_a7, 17\n" \
+" .equ .L__gpr_num_s2, 18\n" \
+" .equ .L__gpr_num_s3, 19\n" \
+" .equ .L__gpr_num_s4, 20\n" \
+" .equ .L__gpr_num_s5, 21\n" \
+" .equ .L__gpr_num_s6, 22\n" \
+" .equ .L__gpr_num_s7, 23\n" \
+" .equ .L__gpr_num_s8, 24\n" \
+" .equ .L__gpr_num_s9, 25\n" \
+" .equ .L__gpr_num_s10, 26\n" \
+" .equ .L__gpr_num_s11, 27\n" \
+" .equ .L__gpr_num_t3, 28\n" \
+" .equ .L__gpr_num_t4, 29\n" \
+" .equ .L__gpr_num_t5, 30\n" \
+" .equ .L__gpr_num_t6, 31\n"
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_GPR_NUM_H */
--
2.33.0
From: Jisheng Zhang <[email protected]>
With all of the out-of-line fixups removed, nothing is placed in the
.fixup input section any more, so drop it from both linker scripts.
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/kernel/vmlinux-xip.lds.S | 1 -
arch/riscv/kernel/vmlinux.lds.S | 1 -
2 files changed, 2 deletions(-)
diff --git a/arch/riscv/kernel/vmlinux-xip.lds.S b/arch/riscv/kernel/vmlinux-xip.lds.S
index f5ed08262139..75e0fa8a700a 100644
--- a/arch/riscv/kernel/vmlinux-xip.lds.S
+++ b/arch/riscv/kernel/vmlinux-xip.lds.S
@@ -45,7 +45,6 @@ SECTIONS
ENTRY_TEXT
IRQENTRY_TEXT
SOFTIRQENTRY_TEXT
- *(.fixup)
_etext = .;
}
RO_DATA(L1_CACHE_BYTES)
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
index 0e5ae851929e..4e6c88aa4d87 100644
--- a/arch/riscv/kernel/vmlinux.lds.S
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -48,7 +48,6 @@ SECTIONS
ENTRY_TEXT
IRQENTRY_TEXT
SOFTIRQENTRY_TEXT
- *(.fixup)
_etext = .;
}
--
2.33.0
From: Jisheng Zhang <[email protected]>
The return values of fixup_exception() and rv_bpf_fixup_exception()
represent a boolean condition rather than an error code, so it's better
to return `bool` rather than `int`.
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/include/asm/extable.h | 8 ++++----
arch/riscv/mm/extable.c | 6 +++---
arch/riscv/net/bpf_jit_comp64.c | 6 +++---
3 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/riscv/include/asm/extable.h b/arch/riscv/include/asm/extable.h
index c48c020fcf4d..e4374dde02b4 100644
--- a/arch/riscv/include/asm/extable.h
+++ b/arch/riscv/include/asm/extable.h
@@ -21,16 +21,16 @@ struct exception_table_entry {
#define ARCH_HAS_RELATIVE_EXTABLE
-int fixup_exception(struct pt_regs *regs);
+bool fixup_exception(struct pt_regs *regs);
#if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I)
-int rv_bpf_fixup_exception(const struct exception_table_entry *ex, struct pt_regs *regs);
+bool rv_bpf_fixup_exception(const struct exception_table_entry *ex, struct pt_regs *regs);
#else
-static inline int
+static inline bool
rv_bpf_fixup_exception(const struct exception_table_entry *ex,
struct pt_regs *regs)
{
- return 0;
+ return false;
}
#endif
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
index cbb0db11b28f..d41bf38e37e9 100644
--- a/arch/riscv/mm/extable.c
+++ b/arch/riscv/mm/extable.c
@@ -11,17 +11,17 @@
#include <linux/module.h>
#include <linux/uaccess.h>
-int fixup_exception(struct pt_regs *regs)
+bool fixup_exception(struct pt_regs *regs)
{
const struct exception_table_entry *fixup;
fixup = search_exception_tables(regs->epc);
if (!fixup)
- return 0;
+ return false;
if (regs->epc >= BPF_JIT_REGION_START && regs->epc < BPF_JIT_REGION_END)
return rv_bpf_fixup_exception(fixup, regs);
regs->epc = (unsigned long)&fixup->fixup + fixup->fixup;
- return 1;
+ return true;
}
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 2ca345c7b0bf..7714081cbb64 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -459,8 +459,8 @@ static int emit_call(bool fixed, u64 addr, struct rv_jit_context *ctx)
#define BPF_FIXUP_OFFSET_MASK GENMASK(26, 0)
#define BPF_FIXUP_REG_MASK GENMASK(31, 27)
-int rv_bpf_fixup_exception(const struct exception_table_entry *ex,
- struct pt_regs *regs)
+bool rv_bpf_fixup_exception(const struct exception_table_entry *ex,
+ struct pt_regs *regs)
{
off_t offset = FIELD_GET(BPF_FIXUP_OFFSET_MASK, ex->fixup);
int regs_offset = FIELD_GET(BPF_FIXUP_REG_MASK, ex->fixup);
@@ -468,7 +468,7 @@ int rv_bpf_fixup_exception(const struct exception_table_entry *ex,
*(unsigned long *)((void *)regs + pt_regmap[regs_offset]) = 0;
regs->epc = (unsigned long)&ex->fixup - offset;
- return 1;
+ return true;
}
/* For accesses to BTF pointers, add an entry to the exception table */
--
2.33.0
From: Jisheng Zhang <[email protected]>
Similar to other architectures such as arm64 and x86, use offsets relative
to the exception table entry values rather than absolute addresses for both
the exception location and the fixup.
However, a RISC-V label difference will actually produce two relocations,
a pair of R_RISCV_ADD32 and R_RISCV_SUB32. Take the simple code below as
an example:
$ cat test.S
.section .text
1:
nop
.section __ex_table,"a"
.balign 4
.long (1b - .)
.previous
$ riscv64-linux-gnu-gcc -c test.S
$ riscv64-linux-gnu-readelf -r test.o
Relocation section '.rela__ex_table' at offset 0x100 contains 2 entries:
Offset Info Type Sym. Value Sym. Name + Addend
000000000000 000600000023 R_RISCV_ADD32 0000000000000000 .L1^B1 + 0
000000000000 000500000027 R_RISCV_SUB32 0000000000000000 .L0 + 0
modpost will complain about the R_RISCV_SUB32 relocation, so we need to
patch modpost.c to skip this relocation for the .rela__ex_table section.
After this patch, the __ex_table section size of the defconfig vmlinux is
reduced from 7072 bytes to 3536 bytes.
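As a minimal sketch of what the relative encoding means at runtime
(mirroring the mm/extable.c change below), the absolute addresses are
recovered by adding each stored offset to the address of the field that
holds it:

/* sketch: recover absolute addresses from a relative entry */
static inline unsigned long ex_to_insn(const struct exception_table_entry *ex)
{
	return (unsigned long)&ex->insn + ex->insn;
}

static inline unsigned long ex_to_fixup(const struct exception_table_entry *ex)
{
	return (unsigned long)&ex->fixup + ex->fixup;
}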
Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Kefeng Wang <[email protected]>
---
arch/riscv/include/asm/Kbuild | 1 -
arch/riscv/include/asm/extable.h | 25 +++++++++++++++++++++++++
arch/riscv/include/asm/uaccess.h | 4 ++--
arch/riscv/lib/uaccess.S | 4 ++--
arch/riscv/mm/extable.c | 2 +-
scripts/mod/modpost.c | 15 +++++++++++++++
scripts/sorttable.c | 2 +-
7 files changed, 46 insertions(+), 7 deletions(-)
create mode 100644 arch/riscv/include/asm/extable.h
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 445ccc97305a..57b86fd9916c 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -1,6 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
generic-y += early_ioremap.h
-generic-y += extable.h
generic-y += flat.h
generic-y += kvm_para.h
generic-y += user.h
diff --git a/arch/riscv/include/asm/extable.h b/arch/riscv/include/asm/extable.h
new file mode 100644
index 000000000000..84760392fc69
--- /dev/null
+++ b/arch/riscv/include/asm/extable.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_RISCV_EXTABLE_H
+#define _ASM_RISCV_EXTABLE_H
+
+/*
+ * The exception table consists of pairs of relative offsets: the first
+ * is the relative offset to an instruction that is allowed to fault,
+ * and the second is the relative offset at which the program should
+ * continue. No registers are modified, so it is entirely up to the
+ * continuation code to figure out what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path. This means when everything is well,
+ * we don't even have to jump over them. Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry {
+ int insn, fixup;
+};
+
+#define ARCH_HAS_RELATIVE_EXTABLE
+
+int fixup_exception(struct pt_regs *regs);
+#endif
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 714cd311d9f1..0f2c5b9d2e8f 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -12,8 +12,8 @@
#define _ASM_EXTABLE(from, to) \
" .pushsection __ex_table, \"a\"\n" \
- " .balign " RISCV_SZPTR " \n" \
- " " RISCV_PTR "(" #from "), (" #to ")\n" \
+ " .balign 4\n" \
+ " .long (" #from " - .), (" #to " - .)\n" \
" .popsection\n"
/*
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index 63bc691cff91..55f80f84e23f 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -7,8 +7,8 @@
100:
\op \reg, \addr
.section __ex_table,"a"
- .balign RISCV_SZPTR
- RISCV_PTR 100b, \lbl
+ .balign 4
+ .long (100b - .), (\lbl - .)
.previous
.endm
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
index ddb7d3b99e89..d8d239c2c1bd 100644
--- a/arch/riscv/mm/extable.c
+++ b/arch/riscv/mm/extable.c
@@ -28,6 +28,6 @@ int fixup_exception(struct pt_regs *regs)
return rv_bpf_fixup_exception(fixup, regs);
#endif
- regs->epc = fixup->fixup;
+ regs->epc = (unsigned long)&fixup->fixup + fixup->fixup;
return 1;
}
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index cb8ab7d91d30..6bfa33217914 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -1830,6 +1830,14 @@ static int addend_mips_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
return 0;
}
+#ifndef EM_RISCV
+#define EM_RISCV 243
+#endif
+
+#ifndef R_RISCV_SUB32
+#define R_RISCV_SUB32 39
+#endif
+
static void section_rela(const char *modname, struct elf_info *elf,
Elf_Shdr *sechdr)
{
@@ -1866,6 +1874,13 @@ static void section_rela(const char *modname, struct elf_info *elf,
r_sym = ELF_R_SYM(r.r_info);
#endif
r.r_addend = TO_NATIVE(rela->r_addend);
+ switch (elf->hdr->e_machine) {
+ case EM_RISCV:
+ if (!strcmp("__ex_table", fromsec) &&
+ ELF_R_TYPE(r.r_info) == R_RISCV_SUB32)
+ continue;
+ break;
+ }
sym = elf->symtab_start + r_sym;
/* Skip special sections */
if (is_shndx_special(sym->st_shndx))
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index b7c2ad71f9cf..0c031e47a419 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -376,6 +376,7 @@ static int do_file(char const *const fname, void *addr)
case EM_PARISC:
case EM_PPC:
case EM_PPC64:
+ case EM_RISCV:
custom_sort = sort_relative_table;
break;
case EM_ARCOMPACT:
@@ -383,7 +384,6 @@ static int do_file(char const *const fname, void *addr)
case EM_ARM:
case EM_MICROBLAZE:
case EM_MIPS:
- case EM_RISCV:
case EM_XTENSA:
break;
default:
--
2.33.0
From: Jisheng Zhang <[email protected]>
Inspired by commit 2e77a62cb3a6 ("arm64: extable: add a dedicated
uaccess handler"), do the same for riscv: add a dedicated uaccess
exception handler that updates registers in exception context and then
returns to the function which faulted, removing the need for fixups
specialized to each faulting instruction.
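As a rough illustration of the new entry format (the masks match the
EX_DATA_* definitions added below and are repeated here only so the sketch
is self-contained; the register numbers are just examples), the entry's
data word packs the two GPR numbers so the handler can locate them in
pt_regs:

#include <linux/bitfield.h>
#include <linux/bits.h>

#define EX_DATA_REG_ERR		GENMASK(4, 0)
#define EX_DATA_REG_ZERO	GENMASK(9, 5)

/* e.g. err lives in a0 (x10) and the value register to zero is a5 (x15) */
static unsigned int pack_uaccess_data(unsigned int err_gpr, unsigned int zero_gpr)
{
	return FIELD_PREP(EX_DATA_REG_ERR, err_gpr) |
	       FIELD_PREP(EX_DATA_REG_ZERO, zero_gpr);
}

/* pack_uaccess_data(10, 15) == 0x1ea */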
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/include/asm/asm-extable.h | 23 +++++++++
arch/riscv/include/asm/futex.h | 23 +++------
arch/riscv/include/asm/uaccess.h | 74 +++++++++-------------------
arch/riscv/mm/extable.c | 27 ++++++++++
4 files changed, 78 insertions(+), 69 deletions(-)
diff --git a/arch/riscv/include/asm/asm-extable.h b/arch/riscv/include/asm/asm-extable.h
index 1b1f4ffd8d37..14be0673f5b5 100644
--- a/arch/riscv/include/asm/asm-extable.h
+++ b/arch/riscv/include/asm/asm-extable.h
@@ -5,6 +5,7 @@
#define EX_TYPE_NONE 0
#define EX_TYPE_FIXUP 1
#define EX_TYPE_BPF 2
+#define EX_TYPE_UACCESS_ERR_ZERO 3
#ifdef __ASSEMBLY__
@@ -23,7 +24,9 @@
#else /* __ASSEMBLY__ */
+#include <linux/bits.h>
#include <linux/stringify.h>
+#include <asm/gpr-num.h>
#define __ASM_EXTABLE_RAW(insn, fixup, type, data) \
".pushsection __ex_table, \"a\"\n" \
@@ -37,6 +40,26 @@
#define _ASM_EXTABLE(insn, fixup) \
__ASM_EXTABLE_RAW(#insn, #fixup, __stringify(EX_TYPE_FIXUP), "0")
+#define EX_DATA_REG_ERR_SHIFT 0
+#define EX_DATA_REG_ERR GENMASK(4, 0)
+#define EX_DATA_REG_ZERO_SHIFT 5
+#define EX_DATA_REG_ZERO GENMASK(9, 5)
+
+#define EX_DATA_REG(reg, gpr) \
+ "((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"
+
+#define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero) \
+ __DEFINE_ASM_GPR_NUMS \
+ __ASM_EXTABLE_RAW(#insn, #fixup, \
+ __stringify(EX_TYPE_UACCESS_ERR_ZERO), \
+ "(" \
+ EX_DATA_REG(ERR, err) " | " \
+ EX_DATA_REG(ZERO, zero) \
+ ")")
+
+#define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \
+ _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)
+
#endif /* __ASSEMBLY__ */
#endif /* __ASM_ASM_EXTABLE_H */
diff --git a/arch/riscv/include/asm/futex.h b/arch/riscv/include/asm/futex.h
index 2e15e8e89502..fc8130f995c1 100644
--- a/arch/riscv/include/asm/futex.h
+++ b/arch/riscv/include/asm/futex.h
@@ -21,20 +21,14 @@
#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
{ \
- uintptr_t tmp; \
__enable_user_access(); \
__asm__ __volatile__ ( \
"1: " insn " \n" \
"2: \n" \
- " .section .fixup,\"ax\" \n" \
- " .balign 4 \n" \
- "3: li %[r],%[e] \n" \
- " jump 2b,%[t] \n" \
- " .previous \n" \
- _ASM_EXTABLE(1b, 3b) \
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %[r]) \
: [r] "+r" (ret), [ov] "=&r" (oldval), \
- [u] "+m" (*uaddr), [t] "=&r" (tmp) \
- : [op] "Jr" (oparg), [e] "i" (-EFAULT) \
+ [u] "+m" (*uaddr) \
+ : [op] "Jr" (oparg) \
: "memory"); \
__disable_user_access(); \
}
@@ -96,15 +90,10 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
"2: sc.w.aqrl %[t],%z[nv],%[u] \n"
" bnez %[t],1b \n"
"3: \n"
- " .section .fixup,\"ax\" \n"
- " .balign 4 \n"
- "4: li %[r],%[e] \n"
- " jump 3b,%[t] \n"
- " .previous \n"
- _ASM_EXTABLE(1b, 4b) \
- _ASM_EXTABLE(2b, 4b) \
+ _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %[r]) \
+ _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %[r]) \
: [r] "+r" (ret), [v] "=&r" (val), [u] "+m" (*uaddr), [t] "=&r" (tmp)
- : [ov] "Jr" (oldval), [nv] "Jr" (newval), [e] "i" (-EFAULT)
+ : [ov] "Jr" (oldval), [nv] "Jr" (newval)
: "memory");
__disable_user_access();
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 40e6099af488..a4716c026386 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -81,22 +81,14 @@ static inline int __access_ok(unsigned long addr, unsigned long size)
#define __get_user_asm(insn, x, ptr, err) \
do { \
- uintptr_t __tmp; \
__typeof__(x) __x; \
__asm__ __volatile__ ( \
"1:\n" \
- " " insn " %1, %3\n" \
+ " " insn " %1, %2\n" \
"2:\n" \
- " .section .fixup,\"ax\"\n" \
- " .balign 4\n" \
- "3:\n" \
- " li %0, %4\n" \
- " li %1, 0\n" \
- " jump 2b, %2\n" \
- " .previous\n" \
- _ASM_EXTABLE(1b, 3b) \
- : "+r" (err), "=&r" (__x), "=r" (__tmp) \
- : "m" (*(ptr)), "i" (-EFAULT)); \
+ _ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %0, %1) \
+ : "+r" (err), "=&r" (__x) \
+ : "m" (*(ptr))); \
(x) = __x; \
} while (0)
@@ -108,27 +100,18 @@ do { \
do { \
u32 __user *__ptr = (u32 __user *)(ptr); \
u32 __lo, __hi; \
- uintptr_t __tmp; \
__asm__ __volatile__ ( \
"1:\n" \
- " lw %1, %4\n" \
+ " lw %1, %3\n" \
"2:\n" \
- " lw %2, %5\n" \
+ " lw %2, %4\n" \
"3:\n" \
- " .section .fixup,\"ax\"\n" \
- " .balign 4\n" \
- "4:\n" \
- " li %0, %6\n" \
- " li %1, 0\n" \
- " li %2, 0\n" \
- " jump 3b, %3\n" \
- " .previous\n" \
- _ASM_EXTABLE(1b, 4b) \
- _ASM_EXTABLE(2b, 4b) \
- : "+r" (err), "=&r" (__lo), "=r" (__hi), \
- "=r" (__tmp) \
- : "m" (__ptr[__LSW]), "m" (__ptr[__MSW]), \
- "i" (-EFAULT)); \
+ _ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 3b, %0, %1) \
+ _ASM_EXTABLE_UACCESS_ERR_ZERO(2b, 3b, %0, %1) \
+ : "+r" (err), "=&r" (__lo), "=r" (__hi) \
+ : "m" (__ptr[__LSW]), "m" (__ptr[__MSW])) \
+ if (err) \
+ __hi = 0; \
(x) = (__typeof__(x))((__typeof__((x)-(x)))( \
(((u64)__hi << 32) | __lo))); \
} while (0)
@@ -216,21 +199,14 @@ do { \
#define __put_user_asm(insn, x, ptr, err) \
do { \
- uintptr_t __tmp; \
__typeof__(*(ptr)) __x = x; \
__asm__ __volatile__ ( \
"1:\n" \
- " " insn " %z3, %2\n" \
+ " " insn " %z2, %1\n" \
"2:\n" \
- " .section .fixup,\"ax\"\n" \
- " .balign 4\n" \
- "3:\n" \
- " li %0, %4\n" \
- " jump 2b, %1\n" \
- " .previous\n" \
- _ASM_EXTABLE(1b, 3b) \
- : "+r" (err), "=r" (__tmp), "=m" (*(ptr)) \
- : "rJ" (__x), "i" (-EFAULT)); \
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %0) \
+ : "+r" (err), "=m" (*(ptr)) \
+ : "rJ" (__x)); \
} while (0)
#ifdef CONFIG_64BIT
@@ -244,22 +220,16 @@ do { \
uintptr_t __tmp; \
__asm__ __volatile__ ( \
"1:\n" \
- " sw %z4, %2\n" \
+ " sw %z3, %1\n" \
"2:\n" \
- " sw %z5, %3\n" \
+ " sw %z4, %2\n" \
"3:\n" \
- " .section .fixup,\"ax\"\n" \
- " .balign 4\n" \
- "4:\n" \
- " li %0, %6\n" \
- " jump 3b, %1\n" \
- " .previous\n" \
- _ASM_EXTABLE(1b, 4b) \
- _ASM_EXTABLE(2b, 4b) \
- : "+r" (err), "=r" (__tmp), \
+ _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %0) \
+ _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %0) \
+ : "+r" (err), \
"=m" (__ptr[__LSW]), \
"=m" (__ptr[__MSW]) \
- : "rJ" (__x), "rJ" (__x >> 32), "i" (-EFAULT)); \
+ : "rJ" (__x), "rJ" (__x >> 32)); \
} while (0)
#endif /* CONFIG_64BIT */
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
index 91e52c4bb33a..05978f78579f 100644
--- a/arch/riscv/mm/extable.c
+++ b/arch/riscv/mm/extable.c
@@ -7,10 +7,12 @@
*/
+#include <linux/bitfield.h>
#include <linux/extable.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <asm/asm-extable.h>
+#include <asm/ptrace.h>
static inline unsigned long
get_ex_fixup(const struct exception_table_entry *ex)
@@ -25,6 +27,29 @@ static bool ex_handler_fixup(const struct exception_table_entry *ex,
return true;
}
+static inline void regs_set_gpr(struct pt_regs *regs, unsigned int offset,
+ unsigned long val)
+{
+ if (unlikely(offset > MAX_REG_OFFSET))
+ return;
+
+ if (!offset)
+ *(unsigned long *)((unsigned long)regs + offset) = val;
+}
+
+static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
+ struct pt_regs *regs)
+{
+ int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
+ int reg_zero = FIELD_GET(EX_DATA_REG_ZERO, ex->data);
+
+ regs_set_gpr(regs, reg_err, -EFAULT);
+ regs_set_gpr(regs, reg_zero, 0);
+
+ regs->epc = get_ex_fixup(ex);
+ return true;
+}
+
bool fixup_exception(struct pt_regs *regs)
{
const struct exception_table_entry *ex;
@@ -38,6 +63,8 @@ bool fixup_exception(struct pt_regs *regs)
return ex_handler_fixup(ex, regs);
case EX_TYPE_BPF:
return ex_handler_bpf(ex, regs);
+ case EX_TYPE_UACCESS_ERR_ZERO:
+ return ex_handler_uaccess_err_zero(ex, regs);
}
BUG();
--
2.33.0
From: Jisheng Zhang <[email protected]>
Group the riscv extable-related function signatures into a single
header file.
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/include/asm/extable.h | 12 ++++++++++++
arch/riscv/mm/extable.c | 6 ------
arch/riscv/net/bpf_jit_comp64.c | 2 --
3 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/riscv/include/asm/extable.h b/arch/riscv/include/asm/extable.h
index 84760392fc69..c48c020fcf4d 100644
--- a/arch/riscv/include/asm/extable.h
+++ b/arch/riscv/include/asm/extable.h
@@ -22,4 +22,16 @@ struct exception_table_entry {
#define ARCH_HAS_RELATIVE_EXTABLE
int fixup_exception(struct pt_regs *regs);
+
+#if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I)
+int rv_bpf_fixup_exception(const struct exception_table_entry *ex, struct pt_regs *regs);
+#else
+static inline int
+rv_bpf_fixup_exception(const struct exception_table_entry *ex,
+ struct pt_regs *regs)
+{
+ return 0;
+}
+#endif
+
#endif
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
index d8d239c2c1bd..cbb0db11b28f 100644
--- a/arch/riscv/mm/extable.c
+++ b/arch/riscv/mm/extable.c
@@ -11,10 +11,6 @@
#include <linux/module.h>
#include <linux/uaccess.h>
-#if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I)
-int rv_bpf_fixup_exception(const struct exception_table_entry *ex, struct pt_regs *regs);
-#endif
-
int fixup_exception(struct pt_regs *regs)
{
const struct exception_table_entry *fixup;
@@ -23,10 +19,8 @@ int fixup_exception(struct pt_regs *regs)
if (!fixup)
return 0;
-#if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I)
if (regs->epc >= BPF_JIT_REGION_START && regs->epc < BPF_JIT_REGION_END)
return rv_bpf_fixup_exception(fixup, regs);
-#endif
regs->epc = (unsigned long)&fixup->fixup + fixup->fixup;
return 1;
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index f2a779c7e225..2ca345c7b0bf 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -459,8 +459,6 @@ static int emit_call(bool fixed, u64 addr, struct rv_jit_context *ctx)
#define BPF_FIXUP_OFFSET_MASK GENMASK(26, 0)
#define BPF_FIXUP_REG_MASK GENMASK(31, 27)
-int rv_bpf_fixup_exception(const struct exception_table_entry *ex,
- struct pt_regs *regs);
int rv_bpf_fixup_exception(const struct exception_table_entry *ex,
struct pt_regs *regs)
{
--
2.33.0
Hi Jisheng,
The eBPF exception table entries also need to be converted to relative
offsets at the same time. I modified and verified the code as follows:
===================
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -499,7 +499,7 @@ static int add_exception_handler(const struct bpf_insn *insn,
offset = pc - (long)&ex->insn;
if (WARN_ON_ONCE(offset >= 0 || offset < INT_MIN))
return -ERANGE;
- ex->insn = pc;
+ ex->insn = offset;
===================
Thanks.
Reviewed-by: Tong Tiangen <[email protected]>
On 2021/11/18 19:22, Jisheng Zhang wrote:
> From: Jisheng Zhang <[email protected]>
>
> Similar to other architectures such as arm64 and x86, use offsets relative
> to the exception table entry values rather than absolute addresses for both
> the exception location and the fixup.
Hello Jisheng,
Just wanted to inform you that this patch breaks the writev02 test
case in LTP and if it is reverted then the test passes. If we run the
test through strace then we see that the test hangs and following is
the last line printed by strace:
"writev(3, [{iov_base=0x7fff848a6000, iov_len=8192}, {iov_base=NULL,
iov_len=0}]"
Thanks,
Mayuresh.
On Thu, Nov 18, 2021 at 5:05 PM Jisheng Zhang <[email protected]> wrote:
>
> From: Jisheng Zhang <[email protected]>
>
> Inspired by commit 2e77a62cb3a6 ("arm64: extable: add a dedicated
> uaccess handler"), do the same for riscv: add a dedicated uaccess
> exception handler that updates registers in exception context and then
> returns to the function which faulted, removing the need for fixups
> specialized to each faulting instruction.
On Thu, Jan 20, 2022 at 11:45:34PM +0530, Mayuresh Chitale wrote:
> Hello Jisheng,
Hi,
>
> Just wanted to inform you that this patch breaks the writev02 test
> case in LTP and if it is reverted then the test passes. If we run the
> test through strace then we see that the test hangs and following is
> the last line printed by strace:
>
> "writev(3, [{iov_base=0x7fff848a6000, iov_len=8192}, {iov_base=NULL,
> iov_len=0}]"
>
Thanks for the bug report. I will try to fix it.
> Thanks,
> Mayuresh.
On Fri, Jan 21, 2022 at 08:16:51PM +0800, Jisheng Zhang wrote:
> On Thu, Jan 20, 2022 at 11:45:34PM +0530, Mayuresh Chitale wrote:
> > Hello Jisheng,
>
> Hi,
>
> >
> > Just wanted to inform you that this patch breaks the writev02 test
> > case in LTP and if it is reverted then the test passes. If we run the
> > test through strace then we see that the test hangs and following is
> > the last line printed by strace:
> >
> > "writev(3, [{iov_base=0x7fff848a6000, iov_len=8192}, {iov_base=NULL,
> > iov_len=0}]"
> >
>
> Thanks for the bug report. I will try to fix it.
Hi Mayuresh,
I just sent out a fix for this bug. Per my test, the issue is fixed.
Could you please try?
Thanks
Hi Jisheng,
On Sun, Jan 23, 2022 at 2:50 PM Jisheng Zhang <[email protected]> wrote:
>
> On Fri, Jan 21, 2022 at 08:16:51PM +0800, Jisheng Zhang wrote:
> > On Thu, Jan 20, 2022 at 11:45:34PM +0530, Mayuresh Chitale wrote:
> > > Hello Jisheng,
> >
> > Hi,
> >
> > >
> > > Just wanted to inform you that this patch breaks the writev02 test
> > > case in LTP and if it is reverted then the test passes. If we run the
> > > test through strace then we see that the test hangs and following is
> > > the last line printed by strace:
> > >
> > > "writev(3, [{iov_base=0x7fff848a6000, iov_len=8192}, {iov_base=NULL,
> > > iov_len=0}]"
> > >
> >
> > Thanks for the bug report. I will try to fix it.
>
> Hi Mayuresh,
>
> I just sent out a fix for this bug. Per my test, the issue is fixed.
> Could you please try?
>
> Thanks
Your fix works as expected.
Thanks,
Mayuresh.