2023-08-04 02:34:29

by Charlie Jenkins

Subject: [PATCH 00/10] RISC-V: Refactor instructions

There are numerous places in the kernel that rely on directly
modifying, creating, and reading instructions, and many of them have
reimplemented this handling locally. This series consolidates all
instruction handling into insn.h and reg.h. All of the compressed (RVC),
RVI, Zicsr, M, and A instructions are covered, as well as a subset of
the F, D, and Q extensions.

---
This series modifies code that
https://lore.kernel.org/lkml/[email protected]/
also touches.

---
Testing:

This series touches a lot of subsystems and I have not tested every
individual instruction. The opcodes and masks were copied from the
RISC-V spec, so they should be correct, but the construction of every
instruction has not been fully tested.

vector: Compiled and booted

jump_label: Ensured static keys function as expected.

kgdb: Attempted to run the provided tests but they failed even without
my changes

module: Loaded and unloaded modules

patch.c: Ensured kernel booted

kprobes: Used a kprobing module to probe jalr, auipc, and branch
instructions

nommu misaligned addresses: Kernel boots

kvm: Ran KVM selftests

bpf: Kernel boots. Most of these instructions are used exclusively by
BPF, but I am unsure of the best way to test them.

Signed-off-by: Charlie Jenkins <[email protected]>

---
Charlie Jenkins (10):
RISC-V: Expand instruction definitions
RISC-V: vector: Refactor instructions
RISC-V: Refactor jump label instructions
RISC-V: KGDB: Refactor instructions
RISC-V: module: Refactor instructions
RISC-V: Refactor patch instructions
RISC-V: nommu: Refactor instructions
RISC-V: kvm: Refactor instructions
RISC-V: bpf: Refactor instructions
RISC-V: Refactor bug and traps instructions

arch/riscv/include/asm/bug.h | 18 +-
arch/riscv/include/asm/insn.h | 2744 +++++++++++++++++++++++++++---
arch/riscv/include/asm/reg.h | 88 +
arch/riscv/kernel/jump_label.c | 13 +-
arch/riscv/kernel/kgdb.c | 13 +-
arch/riscv/kernel/module.c | 80 +-
arch/riscv/kernel/patch.c | 3 +-
arch/riscv/kernel/probes/kprobes.c | 13 +-
arch/riscv/kernel/probes/simulate-insn.c | 100 +-
arch/riscv/kernel/probes/uprobes.c | 5 +-
arch/riscv/kernel/traps.c | 9 +-
arch/riscv/kernel/traps_misaligned.c | 218 +--
arch/riscv/kernel/vector.c | 5 +-
arch/riscv/kvm/vcpu_insn.c | 281 +--
arch/riscv/net/bpf_jit.h | 707 +-------
15 files changed, 2825 insertions(+), 1472 deletions(-)
---
base-commit: 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4
change-id: 20230801-master-refactor-instructions-v4-433aa040da03
--
- Charlie



2023-08-04 02:34:57

by Charlie Jenkins

Subject: [PATCH 07/10] RISC-V: nommu: Refactor instructions

Use shared instruction definitions in insn.h instead of manually
constructing them.
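
Besides the mask/match predicates, register file accesses change from
the REG_PTR()/GET_RS1()/SET_RD() macro family to index-based accessors.
A minimal sketch of the pattern, assuming the helper signatures used in
the diff below:

  unsigned long val;

  /* read the GPR selected by the rs2 field of insn */
  rv_insn_reg_get_val((unsigned long *)regs,
                      riscv_insn_extract_rs2(insn), &val);

  /* write val into the GPR selected by the rd field of insn */
  rv_insn_reg_set_val((unsigned long *)regs,
                      riscv_insn_extract_rd(insn), val);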

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/kernel/traps_misaligned.c | 218 ++++++++---------------------------
1 file changed, 45 insertions(+), 173 deletions(-)

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 378f5b151443..b72045ce432a 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -12,144 +12,10 @@
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/csr.h>
+#include <asm/insn.h>
+#include <asm/reg.h>

-#define INSN_MATCH_LB 0x3
-#define INSN_MASK_LB 0x707f
-#define INSN_MATCH_LH 0x1003
-#define INSN_MASK_LH 0x707f
-#define INSN_MATCH_LW 0x2003
-#define INSN_MASK_LW 0x707f
-#define INSN_MATCH_LD 0x3003
-#define INSN_MASK_LD 0x707f
-#define INSN_MATCH_LBU 0x4003
-#define INSN_MASK_LBU 0x707f
-#define INSN_MATCH_LHU 0x5003
-#define INSN_MASK_LHU 0x707f
-#define INSN_MATCH_LWU 0x6003
-#define INSN_MASK_LWU 0x707f
-#define INSN_MATCH_SB 0x23
-#define INSN_MASK_SB 0x707f
-#define INSN_MATCH_SH 0x1023
-#define INSN_MASK_SH 0x707f
-#define INSN_MATCH_SW 0x2023
-#define INSN_MASK_SW 0x707f
-#define INSN_MATCH_SD 0x3023
-#define INSN_MASK_SD 0x707f
-
-#define INSN_MATCH_FLW 0x2007
-#define INSN_MASK_FLW 0x707f
-#define INSN_MATCH_FLD 0x3007
-#define INSN_MASK_FLD 0x707f
-#define INSN_MATCH_FLQ 0x4007
-#define INSN_MASK_FLQ 0x707f
-#define INSN_MATCH_FSW 0x2027
-#define INSN_MASK_FSW 0x707f
-#define INSN_MATCH_FSD 0x3027
-#define INSN_MASK_FSD 0x707f
-#define INSN_MATCH_FSQ 0x4027
-#define INSN_MASK_FSQ 0x707f
-
-#define INSN_MATCH_C_LD 0x6000
-#define INSN_MASK_C_LD 0xe003
-#define INSN_MATCH_C_SD 0xe000
-#define INSN_MASK_C_SD 0xe003
-#define INSN_MATCH_C_LW 0x4000
-#define INSN_MASK_C_LW 0xe003
-#define INSN_MATCH_C_SW 0xc000
-#define INSN_MASK_C_SW 0xe003
-#define INSN_MATCH_C_LDSP 0x6002
-#define INSN_MASK_C_LDSP 0xe003
-#define INSN_MATCH_C_SDSP 0xe002
-#define INSN_MASK_C_SDSP 0xe003
-#define INSN_MATCH_C_LWSP 0x4002
-#define INSN_MASK_C_LWSP 0xe003
-#define INSN_MATCH_C_SWSP 0xc002
-#define INSN_MASK_C_SWSP 0xe003
-
-#define INSN_MATCH_C_FLD 0x2000
-#define INSN_MASK_C_FLD 0xe003
-#define INSN_MATCH_C_FLW 0x6000
-#define INSN_MASK_C_FLW 0xe003
-#define INSN_MATCH_C_FSD 0xa000
-#define INSN_MASK_C_FSD 0xe003
-#define INSN_MATCH_C_FSW 0xe000
-#define INSN_MASK_C_FSW 0xe003
-#define INSN_MATCH_C_FLDSP 0x2002
-#define INSN_MASK_C_FLDSP 0xe003
-#define INSN_MATCH_C_FSDSP 0xa002
-#define INSN_MASK_C_FSDSP 0xe003
-#define INSN_MATCH_C_FLWSP 0x6002
-#define INSN_MASK_C_FLWSP 0xe003
-#define INSN_MATCH_C_FSWSP 0xe002
-#define INSN_MASK_C_FSWSP 0xe003
-
-#define INSN_LEN(insn) ((((insn) & 0x3) < 0x3) ? 2 : 4)
-
-#if defined(CONFIG_64BIT)
-#define LOG_REGBYTES 3
-#define XLEN 64
-#else
-#define LOG_REGBYTES 2
-#define XLEN 32
-#endif
-#define REGBYTES (1 << LOG_REGBYTES)
-#define XLEN_MINUS_16 ((XLEN) - 16)
-
-#define SH_RD 7
-#define SH_RS1 15
-#define SH_RS2 20
-#define SH_RS2C 2
-
-#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1))
-#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \
- (RV_X(x, 10, 3) << 3) | \
- (RV_X(x, 5, 1) << 6))
-#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \
- (RV_X(x, 5, 2) << 6))
-#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \
- (RV_X(x, 12, 1) << 5) | \
- (RV_X(x, 2, 2) << 6))
-#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \
- (RV_X(x, 12, 1) << 5) | \
- (RV_X(x, 2, 3) << 6))
-#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \
- (RV_X(x, 7, 2) << 6))
-#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \
- (RV_X(x, 7, 3) << 6))
-#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3))
-#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3))
-#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5)
-
-#define SHIFT_RIGHT(x, y) \
- ((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
-
-#define REG_MASK \
- ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
-
-#define REG_OFFSET(insn, pos) \
- (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
-
-#define REG_PTR(insn, pos, regs) \
- (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos))
-
-#define GET_RM(insn) (((insn) >> 12) & 7)
-
-#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs))
-#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs))
-#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs))
-#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs))
-#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs))
-#define GET_SP(regs) (*REG_PTR(2, 0, regs))
-#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val))
-#define IMM_I(insn) ((s32)(insn) >> 20)
-#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \
- (s32)(((insn) >> 7) & 0x1f))
-#define MASK_FUNCT3 0x7000
-
-#define GET_PRECISION(insn) (((insn) >> 25) & 3)
-#define GET_RM(insn) (((insn) >> 12) & 7)
-#define PRECISION_S 0
-#define PRECISION_D 1
+#define XLEN_MINUS_16 ((__riscv_xlen) - 16)

#define DECLARE_UNPRIVILEGED_LOAD_FUNCTION(type, insn) \
static inline type load_##type(const type *addr) \
@@ -245,58 +111,56 @@ int handle_misaligned_load(struct pt_regs *regs)

regs->epc = 0;

- if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) {
+ if (riscv_insn_is_lw(insn)) {
len = 4;
shift = 8 * (sizeof(unsigned long) - len);
#if defined(CONFIG_64BIT)
- } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) {
+ } else if (riscv_insn_is_ld(insn)) {
len = 8;
shift = 8 * (sizeof(unsigned long) - len);
- } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) {
+ } else if (riscv_insn_is_lwu(insn)) {
len = 4;
#endif
- } else if ((insn & INSN_MASK_FLD) == INSN_MATCH_FLD) {
+ } else if (riscv_insn_is_fld(insn)) {
fp = 1;
len = 8;
- } else if ((insn & INSN_MASK_FLW) == INSN_MATCH_FLW) {
+ } else if (riscv_insn_is_flw(insn)) {
fp = 1;
len = 4;
- } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) {
+ } else if (riscv_insn_is_lh(insn)) {
len = 2;
shift = 8 * (sizeof(unsigned long) - len);
- } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) {
+ } else if (riscv_insn_is_lhu(insn)) {
len = 2;
#if defined(CONFIG_64BIT)
- } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) {
+ } else if (riscv_insn_is_c_ld(insn)) {
len = 8;
shift = 8 * (sizeof(unsigned long) - len);
- insn = RVC_RS2S(insn) << SH_RD;
- } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ insn = riscv_insn_extract_csca_rs2(insn);
+ } else if (riscv_insn_is_c_ldsp(insn) && (RVC_RD_CI(insn))) {
len = 8;
shift = 8 * (sizeof(unsigned long) - len);
#endif
- } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) {
+ } else if (riscv_insn_is_c_lw(insn)) {
len = 4;
shift = 8 * (sizeof(unsigned long) - len);
- insn = RVC_RS2S(insn) << SH_RD;
- } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ insn = riscv_insn_extract_csca_rs2(insn);
+ } else if (riscv_insn_is_c_lwsp(insn) && (RVC_RD_CI(insn))) {
len = 4;
shift = 8 * (sizeof(unsigned long) - len);
- } else if ((insn & INSN_MASK_C_FLD) == INSN_MATCH_C_FLD) {
+ } else if (riscv_insn_is_c_fld(insn)) {
fp = 1;
len = 8;
- insn = RVC_RS2S(insn) << SH_RD;
- } else if ((insn & INSN_MASK_C_FLDSP) == INSN_MATCH_C_FLDSP) {
+ insn = riscv_insn_extract_csca_rs2(insn);
+ } else if (riscv_insn_is_c_fldsp(insn)) {
fp = 1;
len = 8;
#if defined(CONFIG_32BIT)
- } else if ((insn & INSN_MASK_C_FLW) == INSN_MATCH_C_FLW) {
+ } else if (riscv_insn_is_c_flw(insn)) {
fp = 1;
len = 4;
- insn = RVC_RS2S(insn) << SH_RD;
- } else if ((insn & INSN_MASK_C_FLWSP) == INSN_MATCH_C_FLWSP) {
+ insn = riscv_insn_extract_csca_rs2(insn);
+ } else if (riscv_insn_is_c_flwsp(insn)) {
fp = 1;
len = 4;
#endif
@@ -311,7 +175,8 @@ int handle_misaligned_load(struct pt_regs *regs)

if (fp)
return -1;
- SET_RD(insn, regs, val.data_ulong << shift >> shift);
+ rv_insn_reg_set_val((unsigned long *)regs, RV_EXTRACT_RD_REG(insn),
+ val.data_ulong << shift >> shift);

regs->epc = epc + INSN_LEN(insn);

@@ -328,32 +193,39 @@ int handle_misaligned_store(struct pt_regs *regs)

regs->epc = 0;

- val.data_ulong = GET_RS2(insn, regs);
+ rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(insn),
+ &val.data_ulong);

- if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) {
+ if (riscv_insn_is_sw(insn)) {
len = 4;
#if defined(CONFIG_64BIT)
- } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) {
+ } else if (riscv_insn_is_sd(insn)) {
len = 8;
#endif
- } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) {
+ } else if (riscv_insn_is_sh(insn)) {
len = 2;
#if defined(CONFIG_64BIT)
- } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
+ } else if (riscv_insn_is_c_sd(insn)) {
len = 8;
- val.data_ulong = GET_RS2S(insn, regs);
- } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ rv_insn_reg_get_val((unsigned long *)regs,
+ riscv_insn_extract_cr_rs2(insn),
+ &val.data_ulong);
+ } else if (riscv_insn_is_c_sdsp(insn)) {
len = 8;
- val.data_ulong = GET_RS2C(insn, regs);
+ rv_insn_reg_get_val((unsigned long *)regs,
+ riscv_insn_extract_csca_rs2(insn),
+ &val.data_ulong);
#endif
- } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) {
+ } else if (riscv_insn_is_c_sw(insn)) {
len = 4;
- val.data_ulong = GET_RS2S(insn, regs);
- } else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ rv_insn_reg_get_val((unsigned long *)regs,
+ riscv_insn_extract_cr_rs2(insn),
+ &val.data_ulong);
+ } else if (riscv_insn_is_c_swsp(insn)) {
len = 4;
- val.data_ulong = GET_RS2C(insn, regs);
+ rv_insn_reg_get_val((unsigned long *)regs,
+ riscv_insn_extract_csca_rs2(insn),
+ &val.data_ulong);
} else {
regs->epc = epc;
return -1;

--
2.34.1


2023-08-04 03:05:23

by Charlie Jenkins

Subject: [PATCH 10/10] RISC-V: Refactor bug and traps instructions

Use shared instruction definitions in insn.h instead of manually
constructing them.

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/include/asm/bug.h | 18 +++++-------------
arch/riscv/kernel/traps.c | 9 +++++----
2 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
index 1aaea81fb141..6d9002d93f85 100644
--- a/arch/riscv/include/asm/bug.h
+++ b/arch/riscv/include/asm/bug.h
@@ -11,21 +11,13 @@
#include <linux/types.h>

#include <asm/asm.h>
+#include <asm/insn.h>

-#define __INSN_LENGTH_MASK _UL(0x3)
-#define __INSN_LENGTH_32 _UL(0x3)
-#define __COMPRESSED_INSN_MASK _UL(0xffff)
+#define __IS_BUG_INSN_32(insn) riscv_insn_is_ebreak(insn)
+#define __IS_BUG_INSN_16(insn) riscv_insn_is_c_ebreak(insn)

-#define __BUG_INSN_32 _UL(0x00100073) /* ebreak */
-#define __BUG_INSN_16 _UL(0x9002) /* c.ebreak */
-
-#define GET_INSN_LENGTH(insn) \
-({ \
- unsigned long __len; \
- __len = ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ? \
- 4UL : 2UL; \
- __len; \
-})
+#define __BUG_INSN_32 RVG_MATCH_EBREAK
+#define __BUG_INSN_16 RVC_MATCH_C_EBREAK

typedef u32 bug_insn_t;

diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index f910dfccbf5d..970b118d36b5 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -22,6 +22,7 @@
#include <asm/asm-prototypes.h>
#include <asm/bug.h>
#include <asm/csr.h>
+#include <asm/insn.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/syscall.h>
@@ -243,7 +244,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
if (get_kernel_nofault(insn, (bug_insn_t *)pc))
return 0;

- return GET_INSN_LENGTH(insn);
+ return INSN_LEN(insn);
}

void handle_break(struct pt_regs *regs)
@@ -389,10 +390,10 @@ int is_valid_bugaddr(unsigned long pc)
return 0;
if (get_kernel_nofault(insn, (bug_insn_t *)pc))
return 0;
- if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
- return (insn == __BUG_INSN_32);
+ if (INSN_IS_C(insn))
+ return __IS_BUG_INSN_16(insn);
else
- return ((insn & __COMPRESSED_INSN_MASK) == __BUG_INSN_16);
+ return __IS_BUG_INSN_32(insn);
}
#endif /* CONFIG_GENERIC_BUG */


--
2.34.1


2023-08-04 03:05:53

by Charlie Jenkins

Subject: [PATCH 05/10] RISC-V: module: Refactor instructions

Use shared instruction definitions in insn.h instead of manually
constructing them.

Additionally, apply_r_riscv_lo12_s_rela did extra work to recompute the
lo12 value from hi20 before inserting it. Since hi20 is always a
multiple of 0x1000, (v - hi20) and v have identical low 12 bits, so that
recomputation had no effect on the encoded immediate.
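
To make that concrete, a small user-space check (not part of the patch;
the value of v is arbitrary):

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
          int32_t v = 0x12345abc;                  /* arbitrary value */
          int32_t hi20 = (v + 0x800) & 0xfffff000; /* 0x1000-aligned  */
          int32_t lo12 = v - hi20;

          /* only the low 12 bits are encoded, and they are identical */
          assert((lo12 & 0xfff) == (v & 0xfff));
          return 0;
  }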

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/kernel/module.c | 80 +++++++++++-----------------------------------
1 file changed, 18 insertions(+), 62 deletions(-)

diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
index 7c651d55fcbd..950783e5b5ae 100644
--- a/arch/riscv/kernel/module.c
+++ b/arch/riscv/kernel/module.c
@@ -12,8 +12,11 @@
#include <linux/sizes.h>
#include <linux/pgtable.h>
#include <asm/alternative.h>
+#include <asm/insn.h>
#include <asm/sections.h>

+#define HI20_OFFSET 0x800
+
/*
* The auipc+jalr instruction pair can reach any PC-relative offset
* in the range [-2^31 - 2^11, 2^31 - 2^11)
@@ -48,12 +51,8 @@ static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- u32 imm12 = (offset & 0x1000) << (31 - 12);
- u32 imm11 = (offset & 0x800) >> (11 - 7);
- u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
- u32 imm4_1 = (offset & 0x1e) << (11 - 4);

- *location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+ riscv_insn_insert_btype_imm(location, ((s32)offset));
return 0;
}

@@ -61,12 +60,8 @@ static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- u32 imm20 = (offset & 0x100000) << (31 - 20);
- u32 imm19_12 = (offset & 0xff000);
- u32 imm11 = (offset & 0x800) << (20 - 11);
- u32 imm10_1 = (offset & 0x7fe) << (30 - 10);

- *location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+ riscv_insn_insert_jtype_imm(location, ((s32)offset));
return 0;
}

@@ -74,14 +69,8 @@ static int apply_r_riscv_rvc_branch_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- u16 imm8 = (offset & 0x100) << (12 - 8);
- u16 imm7_6 = (offset & 0xc0) >> (6 - 5);
- u16 imm5 = (offset & 0x20) >> (5 - 2);
- u16 imm4_3 = (offset & 0x18) << (12 - 5);
- u16 imm2_1 = (offset & 0x6) << (12 - 10);
-
- *(u16 *)location = (*(u16 *)location & 0xe383) |
- imm8 | imm7_6 | imm5 | imm4_3 | imm2_1;
+
+ riscv_insn_insert_cbztype_imm(location, (s32)offset);
return 0;
}

@@ -89,17 +78,8 @@ static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- u16 imm11 = (offset & 0x800) << (12 - 11);
- u16 imm10 = (offset & 0x400) >> (10 - 8);
- u16 imm9_8 = (offset & 0x300) << (12 - 11);
- u16 imm7 = (offset & 0x80) >> (7 - 6);
- u16 imm6 = (offset & 0x40) << (12 - 11);
- u16 imm5 = (offset & 0x20) >> (5 - 2);
- u16 imm4 = (offset & 0x10) << (12 - 5);
- u16 imm3_1 = (offset & 0xe) << (12 - 10);
-
- *(u16 *)location = (*(u16 *)location & 0xe003) |
- imm11 | imm10 | imm9_8 | imm7 | imm6 | imm5 | imm4 | imm3_1;
+
+ riscv_insn_insert_cjtype_imm(location, (s32)offset);
return 0;
}

@@ -107,7 +87,6 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- s32 hi20;

if (!riscv_insn_valid_32bit_offset(offset)) {
pr_err(
@@ -116,8 +95,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = (offset + 0x800) & 0xfffff000;
- *location = (*location & 0xfff) | hi20;
+ riscv_insn_insert_utype_imm(location, (offset + HI20_OFFSET));
return 0;
}

@@ -128,7 +106,7 @@ static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
* v is the lo12 value to fill. It is calculated before calling this
* handler.
*/
- *location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+ riscv_insn_insert_itype_imm(location, ((s32)v));
return 0;
}

@@ -139,18 +117,13 @@ static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
* v is the lo12 value to fill. It is calculated before calling this
* handler.
*/
- u32 imm11_5 = (v & 0xfe0) << (31 - 11);
- u32 imm4_0 = (v & 0x1f) << (11 - 4);
-
- *location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+ riscv_insn_insert_stype_imm(location, ((s32)v));
return 0;
}

static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
Elf_Addr v)
{
- s32 hi20;
-
if (IS_ENABLED(CONFIG_CMODEL_MEDLOW)) {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
@@ -158,8 +131,7 @@ static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = ((s32)v + 0x800) & 0xfffff000;
- *location = (*location & 0xfff) | hi20;
+ riscv_insn_insert_utype_imm(location, ((s32)v + HI20_OFFSET));
return 0;
}

@@ -167,9 +139,7 @@ static int apply_r_riscv_lo12_i_rela(struct module *me, u32 *location,
Elf_Addr v)
{
/* Skip medlow checking because of filtering by HI20 already */
- s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
- s32 lo12 = ((s32)v - hi20);
- *location = (*location & 0xfffff) | ((lo12 & 0xfff) << 20);
+ riscv_insn_insert_itype_imm(location, (s32)v);
return 0;
}

@@ -177,11 +147,7 @@ static int apply_r_riscv_lo12_s_rela(struct module *me, u32 *location,
Elf_Addr v)
{
/* Skip medlow checking because of filtering by HI20 already */
- s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
- s32 lo12 = ((s32)v - hi20);
- u32 imm11_5 = (lo12 & 0xfe0) << (31 - 11);
- u32 imm4_0 = (lo12 & 0x1f) << (11 - 4);
- *location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+ riscv_insn_insert_stype_imm(location, (s32)v);
return 0;
}

@@ -189,7 +155,6 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- s32 hi20;

/* Always emit the got entry */
if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
@@ -202,8 +167,7 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = (offset + 0x800) & 0xfffff000;
- *location = (*location & 0xfff) | hi20;
+ riscv_insn_insert_utype_imm(location, (s32)(offset + HI20_OFFSET));
return 0;
}

@@ -211,7 +175,6 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- u32 hi20, lo12;

if (!riscv_insn_valid_32bit_offset(offset)) {
/* Only emit the plt entry if offset over 32-bit range */
@@ -226,10 +189,7 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
}
}

- hi20 = (offset + 0x800) & 0xfffff000;
- lo12 = (offset - hi20) & 0xfff;
- *location = (*location & 0xfff) | hi20;
- *(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+ riscv_insn_insert_utype_itype_imm(location, location + 1, (s32)offset);
return 0;
}

@@ -237,7 +197,6 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - (void *)location;
- u32 hi20, lo12;

if (!riscv_insn_valid_32bit_offset(offset)) {
pr_err(
@@ -246,10 +205,7 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = (offset + 0x800) & 0xfffff000;
- lo12 = (offset - hi20) & 0xfff;
- *location = (*location & 0xfff) | hi20;
- *(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+ riscv_insn_insert_utype_itype_imm(location, location + 1, (s32)offset);
return 0;
}


--
2.34.1


2023-08-04 03:27:03

by Charlie Jenkins

Subject: [PATCH 06/10] RISC-V: Refactor patch instructions

Use shared instruction definitions in insn.h.
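
One detail worth noting: the simulate_*() helpers previously assembled
the branch/jump immediates by hand and sign-extended them explicitly
(e.g. sign_extend32(imm, 20) for jal). With this change the immediates
are used directly, which assumes the shared extract helpers return
already sign-extended values. A user-space sketch of the expected
arithmetic, with a local stand-in extractor (not the one from this
series):

  #include <assert.h>
  #include <stdint.h>

  /* stand-in: assemble the J-type immediate, sign extend from bit 20 */
  static int32_t jtype_imm(uint32_t insn)
  {
          uint32_t imm = (((insn >> 21) & 0x3ff) << 1)  |  /* imm[10:1]  */
                         (((insn >> 20) & 0x1)   << 11) |  /* imm[11]    */
                         (((insn >> 12) & 0xff)  << 12) |  /* imm[19:12] */
                         (((insn >> 31) & 0x1)   << 20);   /* imm[20]    */

          return ((int32_t)(imm << 11)) >> 11;
  }

  int main(void)
  {
          /* "jal x0, -8" encodes as 0xff9ff06f; the decoded offset is -8 */
          assert(jtype_imm(0xff9ff06fu) == -8);
          return 0;
  }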

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/kernel/patch.c | 3 +-
arch/riscv/kernel/probes/kprobes.c | 13 +++----
arch/riscv/kernel/probes/simulate-insn.c | 61 +++++++-------------------------
arch/riscv/kernel/probes/uprobes.c | 5 +--
4 files changed, 25 insertions(+), 57 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 575e71d6c8ae..df51f5155673 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -12,6 +12,7 @@
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
#include <asm/ftrace.h>
+#include <asm/insn.h>
#include <asm/patch.h>

struct patch_insn {
@@ -118,7 +119,7 @@ static int patch_text_cb(void *data)

if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
for (i = 0; ret == 0 && i < patch->ninsns; i++) {
- len = GET_INSN_LENGTH(patch->insns[i]);
+ len = INSN_LEN(patch->insns[i]);
ret = patch_text_nosync(patch->addr + i * len,
&patch->insns[i], len);
}
diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
index 2f08c14a933d..501c6ae4d803 100644
--- a/arch/riscv/kernel/probes/kprobes.c
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -12,6 +12,7 @@
#include <asm/cacheflush.h>
#include <asm/bug.h>
#include <asm/patch.h>
+#include <asm/insn.h>

#include "decode-insn.h"

@@ -24,7 +25,7 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
{
u32 insn = __BUG_INSN_32;
- unsigned long offset = GET_INSN_LENGTH(p->opcode);
+ unsigned long offset = INSN_LEN(p->opcode);

p->ainsn.api.restore = (unsigned long)p->addr + offset;

@@ -58,7 +59,7 @@ static bool __kprobes arch_check_kprobe(struct kprobe *p)
if (tmp == addr)
return true;

- tmp += GET_INSN_LENGTH(*(u16 *)tmp);
+ tmp += INSN_LEN(*(u16 *)tmp);
}

return false;
@@ -76,7 +77,7 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)

/* copy instruction */
p->opcode = (kprobe_opcode_t)(*insn++);
- if (GET_INSN_LENGTH(p->opcode) == 4)
+ if (INSN_LEN(p->opcode) == 4)
p->opcode |= (kprobe_opcode_t)(*insn) << 16;

/* decode instruction */
@@ -117,8 +118,8 @@ void *alloc_insn_page(void)
/* install breakpoint in text */
void __kprobes arch_arm_kprobe(struct kprobe *p)
{
- u32 insn = (p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32 ?
- __BUG_INSN_32 : __BUG_INSN_16;
+ u32 insn = INSN_IS_C(p->opcode) ?
+ __BUG_INSN_16 : __BUG_INSN_32;

patch_text(p->addr, &insn, 1);
}
@@ -344,7 +345,7 @@ kprobe_single_step_handler(struct pt_regs *regs)
struct kprobe *cur = kprobe_running();

if (cur && (kcb->kprobe_status & (KPROBE_HIT_SS | KPROBE_REENTER)) &&
- ((unsigned long)&cur->ainsn.api.insn[0] + GET_INSN_LENGTH(cur->opcode) == addr)) {
+ ((unsigned long)&cur->ainsn.api.insn[0] + INSN_LEN(cur->opcode) == addr)) {
kprobes_restore_local_irqflag(kcb, regs);
post_kprobe_handler(cur, kcb, regs);
return true;
diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
index 994edb4bd16a..f9671bb864a3 100644
--- a/arch/riscv/kernel/probes/simulate-insn.c
+++ b/arch/riscv/kernel/probes/simulate-insn.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0+

+#include <asm/insn.h>
#include <asm/reg.h>
#include <linux/bitops.h>
#include <linux/kernel.h>
@@ -16,19 +17,16 @@ bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs
* 1 10 1 8 5 JAL/J
*/
bool ret;
- u32 imm;
- u32 index = (opcode >> 7) & 0x1f;
+ s32 imm;
+ u32 index = riscv_insn_extract_rd(opcode);

ret = rv_insn_reg_set_val((unsigned long *)regs, index, addr + 4);
if (!ret)
return ret;

- imm = ((opcode >> 21) & 0x3ff) << 1;
- imm |= ((opcode >> 20) & 0x1) << 11;
- imm |= ((opcode >> 12) & 0xff) << 12;
- imm |= ((opcode >> 31) & 0x1) << 20;
+ imm = riscv_insn_extract_jtype_imm(opcode);

- instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
+ instruction_pointer_set(regs, addr + imm);

return ret;
}
@@ -42,9 +40,9 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg
*/
bool ret;
unsigned long base_addr;
- u32 imm = (opcode >> 20) & 0xfff;
- u32 rd_index = (opcode >> 7) & 0x1f;
- u32 rs1_index = (opcode >> 15) & 0x1f;
+ s32 imm = riscv_insn_extract_itype_imm(opcode);
+ u32 rd_index = riscv_insn_extract_rd(opcode);
+ u32 rs1_index = riscv_insn_extract_rs1(opcode);

ret = rv_insn_reg_get_val((unsigned long *)regs, rs1_index, &base_addr);
if (!ret)
@@ -54,25 +52,11 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg
if (!ret)
return ret;

- instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11))&~1);
+ instruction_pointer_set(regs, (base_addr + imm) & ~1);

return ret;
}

-#define auipc_rd_idx(opcode) \
- ((opcode >> 7) & 0x1f)
-
-#define auipc_imm(opcode) \
- ((((opcode) >> 12) & 0xfffff) << 12)
-
-#if __riscv_xlen == 64
-#define auipc_offset(opcode) sign_extend64(auipc_imm(opcode), 31)
-#elif __riscv_xlen == 32
-#define auipc_offset(opcode) auipc_imm(opcode)
-#else
-#error "Unexpected __riscv_xlen"
-#endif
-
bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *regs)
{
/*
@@ -82,35 +66,16 @@ bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *re
* 20 5 7
*/

- u32 rd_idx = auipc_rd_idx(opcode);
- unsigned long rd_val = addr + auipc_offset(opcode);
+ u32 rd_idx = riscv_insn_extract_rd(opcode);
+ unsigned long rd_val = addr + riscv_insn_extract_utype_imm(opcode);

if (!rv_insn_reg_set_val((unsigned long *)regs, rd_idx, rd_val))
return false;

instruction_pointer_set(regs, addr + 4);
-
return true;
}

-#define branch_rs1_idx(opcode) \
- (((opcode) >> 15) & 0x1f)
-
-#define branch_rs2_idx(opcode) \
- (((opcode) >> 20) & 0x1f)
-
-#define branch_funct3(opcode) \
- (((opcode) >> 12) & 0x7)
-
-#define branch_imm(opcode) \
- (((((opcode) >> 8) & 0xf ) << 1) | \
- ((((opcode) >> 25) & 0x3f) << 5) | \
- ((((opcode) >> 7) & 0x1 ) << 11) | \
- ((((opcode) >> 31) & 0x1 ) << 12))
-
-#define branch_offset(opcode) \
- sign_extend32((branch_imm(opcode)), 12)
-
bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs)
{
/*
@@ -135,8 +100,8 @@ bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *r
!rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(opcode), &rs2_val))
return false;

- offset_tmp = branch_offset(opcode);
- switch (branch_funct3(opcode)) {
+ offset_tmp = riscv_insn_extract_btype_imm(opcode);
+ switch (riscv_insn_extract_funct3(opcode)) {
case RVG_FUNCT3_BEQ:
offset = (rs1_val == rs2_val) ? offset_tmp : 4;
break;
diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
index 194f166b2cc4..f2511cbaf931 100644
--- a/arch/riscv/kernel/probes/uprobes.c
+++ b/arch/riscv/kernel/probes/uprobes.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only

+#include <asm/insn.h>
#include <linux/highmem.h>
#include <linux/ptrace.h>
#include <linux/uprobes.h>
@@ -29,7 +30,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,

opcode = *(probe_opcode_t *)(&auprobe->insn[0]);

- auprobe->insn_size = GET_INSN_LENGTH(opcode);
+ auprobe->insn_size = INSN_LEN(opcode);

switch (riscv_probe_decode_insn(&opcode, &auprobe->api)) {
case INSN_REJECTED:
@@ -166,7 +167,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,

/* Add ebreak behind opcode to simulate singlestep */
if (vaddr) {
- dst += GET_INSN_LENGTH(*(probe_opcode_t *)src);
+ dst += INSN_LEN(*(probe_opcode_t *)src);
*(uprobe_opcode_t *)dst = __BUG_INSN_32;
}


--
2.34.1


2023-08-04 03:28:53

by Charlie Jenkins

Subject: [PATCH 09/10] RISC-V: bpf: Refactor instructions

Use shared instruction definitions in insn.h instead of manually
constructing them.

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/net/bpf_jit.h | 707 +----------------------------------------------
1 file changed, 2 insertions(+), 705 deletions(-)

diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
index 2717f5490428..3f79c938166d 100644
--- a/arch/riscv/net/bpf_jit.h
+++ b/arch/riscv/net/bpf_jit.h
@@ -12,58 +12,8 @@
#include <linux/bpf.h>
#include <linux/filter.h>
#include <asm/cacheflush.h>
-
-static inline bool rvc_enabled(void)
-{
- return IS_ENABLED(CONFIG_RISCV_ISA_C);
-}
-
-enum {
- RV_REG_ZERO = 0, /* The constant value 0 */
- RV_REG_RA = 1, /* Return address */
- RV_REG_SP = 2, /* Stack pointer */
- RV_REG_GP = 3, /* Global pointer */
- RV_REG_TP = 4, /* Thread pointer */
- RV_REG_T0 = 5, /* Temporaries */
- RV_REG_T1 = 6,
- RV_REG_T2 = 7,
- RV_REG_FP = 8, /* Saved register/frame pointer */
- RV_REG_S1 = 9, /* Saved register */
- RV_REG_A0 = 10, /* Function argument/return values */
- RV_REG_A1 = 11, /* Function arguments */
- RV_REG_A2 = 12,
- RV_REG_A3 = 13,
- RV_REG_A4 = 14,
- RV_REG_A5 = 15,
- RV_REG_A6 = 16,
- RV_REG_A7 = 17,
- RV_REG_S2 = 18, /* Saved registers */
- RV_REG_S3 = 19,
- RV_REG_S4 = 20,
- RV_REG_S5 = 21,
- RV_REG_S6 = 22,
- RV_REG_S7 = 23,
- RV_REG_S8 = 24,
- RV_REG_S9 = 25,
- RV_REG_S10 = 26,
- RV_REG_S11 = 27,
- RV_REG_T3 = 28, /* Temporaries */
- RV_REG_T4 = 29,
- RV_REG_T5 = 30,
- RV_REG_T6 = 31,
-};
-
-static inline bool is_creg(u8 reg)
-{
- return (1 << reg) & (BIT(RV_REG_FP) |
- BIT(RV_REG_S1) |
- BIT(RV_REG_A0) |
- BIT(RV_REG_A1) |
- BIT(RV_REG_A2) |
- BIT(RV_REG_A3) |
- BIT(RV_REG_A4) |
- BIT(RV_REG_A5));
-}
+#include <asm/reg.h>
+#include <asm/insn.h>

struct rv_jit_context {
struct bpf_prog *prog;
@@ -221,659 +171,6 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx)
return ninsns_rvoff(to - from);
}

-/* Instruction formats. */
-
-static inline u32 rv_r_insn(u8 funct7, u8 rs2, u8 rs1, u8 funct3, u8 rd,
- u8 opcode)
-{
- return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) |
- (rd << 7) | opcode;
-}
-
-static inline u32 rv_i_insn(u16 imm11_0, u8 rs1, u8 funct3, u8 rd, u8 opcode)
-{
- return (imm11_0 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) |
- opcode;
-}
-
-static inline u32 rv_s_insn(u16 imm11_0, u8 rs2, u8 rs1, u8 funct3, u8 opcode)
-{
- u8 imm11_5 = imm11_0 >> 5, imm4_0 = imm11_0 & 0x1f;
-
- return (imm11_5 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) |
- (imm4_0 << 7) | opcode;
-}
-
-static inline u32 rv_b_insn(u16 imm12_1, u8 rs2, u8 rs1, u8 funct3, u8 opcode)
-{
- u8 imm12 = ((imm12_1 & 0x800) >> 5) | ((imm12_1 & 0x3f0) >> 4);
- u8 imm4_1 = ((imm12_1 & 0xf) << 1) | ((imm12_1 & 0x400) >> 10);
-
- return (imm12 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) |
- (imm4_1 << 7) | opcode;
-}
-
-static inline u32 rv_u_insn(u32 imm31_12, u8 rd, u8 opcode)
-{
- return (imm31_12 << 12) | (rd << 7) | opcode;
-}
-
-static inline u32 rv_j_insn(u32 imm20_1, u8 rd, u8 opcode)
-{
- u32 imm;
-
- imm = (imm20_1 & 0x80000) | ((imm20_1 & 0x3ff) << 9) |
- ((imm20_1 & 0x400) >> 2) | ((imm20_1 & 0x7f800) >> 11);
-
- return (imm << 12) | (rd << 7) | opcode;
-}
-
-static inline u32 rv_amo_insn(u8 funct5, u8 aq, u8 rl, u8 rs2, u8 rs1,
- u8 funct3, u8 rd, u8 opcode)
-{
- u8 funct7 = (funct5 << 2) | (aq << 1) | rl;
-
- return rv_r_insn(funct7, rs2, rs1, funct3, rd, opcode);
-}
-
-/* RISC-V compressed instruction formats. */
-
-static inline u16 rv_cr_insn(u8 funct4, u8 rd, u8 rs2, u8 op)
-{
- return (funct4 << 12) | (rd << 7) | (rs2 << 2) | op;
-}
-
-static inline u16 rv_ci_insn(u8 funct3, u32 imm6, u8 rd, u8 op)
-{
- u32 imm;
-
- imm = ((imm6 & 0x20) << 7) | ((imm6 & 0x1f) << 2);
- return (funct3 << 13) | (rd << 7) | op | imm;
-}
-
-static inline u16 rv_css_insn(u8 funct3, u32 uimm, u8 rs2, u8 op)
-{
- return (funct3 << 13) | (uimm << 7) | (rs2 << 2) | op;
-}
-
-static inline u16 rv_ciw_insn(u8 funct3, u32 uimm, u8 rd, u8 op)
-{
- return (funct3 << 13) | (uimm << 5) | ((rd & 0x7) << 2) | op;
-}
-
-static inline u16 rv_cl_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rd,
- u8 op)
-{
- return (funct3 << 13) | (imm_hi << 10) | ((rs1 & 0x7) << 7) |
- (imm_lo << 5) | ((rd & 0x7) << 2) | op;
-}
-
-static inline u16 rv_cs_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rs2,
- u8 op)
-{
- return (funct3 << 13) | (imm_hi << 10) | ((rs1 & 0x7) << 7) |
- (imm_lo << 5) | ((rs2 & 0x7) << 2) | op;
-}
-
-static inline u16 rv_ca_insn(u8 funct6, u8 rd, u8 funct2, u8 rs2, u8 op)
-{
- return (funct6 << 10) | ((rd & 0x7) << 7) | (funct2 << 5) |
- ((rs2 & 0x7) << 2) | op;
-}
-
-static inline u16 rv_cb_insn(u8 funct3, u32 imm6, u8 funct2, u8 rd, u8 op)
-{
- u32 imm;
-
- imm = ((imm6 & 0x20) << 7) | ((imm6 & 0x1f) << 2);
- return (funct3 << 13) | (funct2 << 10) | ((rd & 0x7) << 7) | op | imm;
-}
-
-/* Instructions shared by both RV32 and RV64. */
-
-static inline u32 rv_addi(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 0, rd, 0x13);
-}
-
-static inline u32 rv_andi(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 7, rd, 0x13);
-}
-
-static inline u32 rv_ori(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 6, rd, 0x13);
-}
-
-static inline u32 rv_xori(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 4, rd, 0x13);
-}
-
-static inline u32 rv_slli(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 1, rd, 0x13);
-}
-
-static inline u32 rv_srli(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 5, rd, 0x13);
-}
-
-static inline u32 rv_srai(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(0x400 | imm11_0, rs1, 5, rd, 0x13);
-}
-
-static inline u32 rv_lui(u8 rd, u32 imm31_12)
-{
- return rv_u_insn(imm31_12, rd, 0x37);
-}
-
-static inline u32 rv_auipc(u8 rd, u32 imm31_12)
-{
- return rv_u_insn(imm31_12, rd, 0x17);
-}
-
-static inline u32 rv_add(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 0, rd, 0x33);
-}
-
-static inline u32 rv_sub(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0x20, rs2, rs1, 0, rd, 0x33);
-}
-
-static inline u32 rv_sltu(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 3, rd, 0x33);
-}
-
-static inline u32 rv_and(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 7, rd, 0x33);
-}
-
-static inline u32 rv_or(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 6, rd, 0x33);
-}
-
-static inline u32 rv_xor(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 4, rd, 0x33);
-}
-
-static inline u32 rv_sll(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 1, rd, 0x33);
-}
-
-static inline u32 rv_srl(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 5, rd, 0x33);
-}
-
-static inline u32 rv_sra(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0x20, rs2, rs1, 5, rd, 0x33);
-}
-
-static inline u32 rv_mul(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 0, rd, 0x33);
-}
-
-static inline u32 rv_mulhu(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 3, rd, 0x33);
-}
-
-static inline u32 rv_divu(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 5, rd, 0x33);
-}
-
-static inline u32 rv_remu(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 7, rd, 0x33);
-}
-
-static inline u32 rv_jal(u8 rd, u32 imm20_1)
-{
- return rv_j_insn(imm20_1, rd, 0x6f);
-}
-
-static inline u32 rv_jalr(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 0, rd, 0x67);
-}
-
-static inline u32 rv_beq(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_b_insn(imm12_1, rs2, rs1, 0, 0x63);
-}
-
-static inline u32 rv_bne(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_b_insn(imm12_1, rs2, rs1, 1, 0x63);
-}
-
-static inline u32 rv_bltu(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_b_insn(imm12_1, rs2, rs1, 6, 0x63);
-}
-
-static inline u32 rv_bgtu(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_bltu(rs2, rs1, imm12_1);
-}
-
-static inline u32 rv_bgeu(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_b_insn(imm12_1, rs2, rs1, 7, 0x63);
-}
-
-static inline u32 rv_bleu(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_bgeu(rs2, rs1, imm12_1);
-}
-
-static inline u32 rv_blt(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_b_insn(imm12_1, rs2, rs1, 4, 0x63);
-}
-
-static inline u32 rv_bgt(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_blt(rs2, rs1, imm12_1);
-}
-
-static inline u32 rv_bge(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_b_insn(imm12_1, rs2, rs1, 5, 0x63);
-}
-
-static inline u32 rv_ble(u8 rs1, u8 rs2, u16 imm12_1)
-{
- return rv_bge(rs2, rs1, imm12_1);
-}
-
-static inline u32 rv_lw(u8 rd, u16 imm11_0, u8 rs1)
-{
- return rv_i_insn(imm11_0, rs1, 2, rd, 0x03);
-}
-
-static inline u32 rv_lbu(u8 rd, u16 imm11_0, u8 rs1)
-{
- return rv_i_insn(imm11_0, rs1, 4, rd, 0x03);
-}
-
-static inline u32 rv_lhu(u8 rd, u16 imm11_0, u8 rs1)
-{
- return rv_i_insn(imm11_0, rs1, 5, rd, 0x03);
-}
-
-static inline u32 rv_sb(u8 rs1, u16 imm11_0, u8 rs2)
-{
- return rv_s_insn(imm11_0, rs2, rs1, 0, 0x23);
-}
-
-static inline u32 rv_sh(u8 rs1, u16 imm11_0, u8 rs2)
-{
- return rv_s_insn(imm11_0, rs2, rs1, 1, 0x23);
-}
-
-static inline u32 rv_sw(u8 rs1, u16 imm11_0, u8 rs2)
-{
- return rv_s_insn(imm11_0, rs2, rs1, 2, 0x23);
-}
-
-static inline u32 rv_amoadd_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_amoand_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0xc, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_amoor_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x8, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_amoxor_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x4, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_amoswap_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x1, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_lr_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x2, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_sc_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x3, aq, rl, rs2, rs1, 2, rd, 0x2f);
-}
-
-static inline u32 rv_fence(u8 pred, u8 succ)
-{
- u16 imm11_0 = pred << 4 | succ;
-
- return rv_i_insn(imm11_0, 0, 0, 0, 0xf);
-}
-
-static inline u32 rv_nop(void)
-{
- return rv_i_insn(0, 0, 0, 0, 0x13);
-}
-
-/* RVC instrutions. */
-
-static inline u16 rvc_addi4spn(u8 rd, u32 imm10)
-{
- u32 imm;
-
- imm = ((imm10 & 0x30) << 2) | ((imm10 & 0x3c0) >> 4) |
- ((imm10 & 0x4) >> 1) | ((imm10 & 0x8) >> 3);
- return rv_ciw_insn(0x0, imm, rd, 0x0);
-}
-
-static inline u16 rvc_lw(u8 rd, u32 imm7, u8 rs1)
-{
- u32 imm_hi, imm_lo;
-
- imm_hi = (imm7 & 0x38) >> 3;
- imm_lo = ((imm7 & 0x4) >> 1) | ((imm7 & 0x40) >> 6);
- return rv_cl_insn(0x2, imm_hi, rs1, imm_lo, rd, 0x0);
-}
-
-static inline u16 rvc_sw(u8 rs1, u32 imm7, u8 rs2)
-{
- u32 imm_hi, imm_lo;
-
- imm_hi = (imm7 & 0x38) >> 3;
- imm_lo = ((imm7 & 0x4) >> 1) | ((imm7 & 0x40) >> 6);
- return rv_cs_insn(0x6, imm_hi, rs1, imm_lo, rs2, 0x0);
-}
-
-static inline u16 rvc_addi(u8 rd, u32 imm6)
-{
- return rv_ci_insn(0, imm6, rd, 0x1);
-}
-
-static inline u16 rvc_li(u8 rd, u32 imm6)
-{
- return rv_ci_insn(0x2, imm6, rd, 0x1);
-}
-
-static inline u16 rvc_addi16sp(u32 imm10)
-{
- u32 imm;
-
- imm = ((imm10 & 0x200) >> 4) | (imm10 & 0x10) | ((imm10 & 0x40) >> 3) |
- ((imm10 & 0x180) >> 6) | ((imm10 & 0x20) >> 5);
- return rv_ci_insn(0x3, imm, RV_REG_SP, 0x1);
-}
-
-static inline u16 rvc_lui(u8 rd, u32 imm6)
-{
- return rv_ci_insn(0x3, imm6, rd, 0x1);
-}
-
-static inline u16 rvc_srli(u8 rd, u32 imm6)
-{
- return rv_cb_insn(0x4, imm6, 0, rd, 0x1);
-}
-
-static inline u16 rvc_srai(u8 rd, u32 imm6)
-{
- return rv_cb_insn(0x4, imm6, 0x1, rd, 0x1);
-}
-
-static inline u16 rvc_andi(u8 rd, u32 imm6)
-{
- return rv_cb_insn(0x4, imm6, 0x2, rd, 0x1);
-}
-
-static inline u16 rvc_sub(u8 rd, u8 rs)
-{
- return rv_ca_insn(0x23, rd, 0, rs, 0x1);
-}
-
-static inline u16 rvc_xor(u8 rd, u8 rs)
-{
- return rv_ca_insn(0x23, rd, 0x1, rs, 0x1);
-}
-
-static inline u16 rvc_or(u8 rd, u8 rs)
-{
- return rv_ca_insn(0x23, rd, 0x2, rs, 0x1);
-}
-
-static inline u16 rvc_and(u8 rd, u8 rs)
-{
- return rv_ca_insn(0x23, rd, 0x3, rs, 0x1);
-}
-
-static inline u16 rvc_slli(u8 rd, u32 imm6)
-{
- return rv_ci_insn(0, imm6, rd, 0x2);
-}
-
-static inline u16 rvc_lwsp(u8 rd, u32 imm8)
-{
- u32 imm;
-
- imm = ((imm8 & 0xc0) >> 6) | (imm8 & 0x3c);
- return rv_ci_insn(0x2, imm, rd, 0x2);
-}
-
-static inline u16 rvc_jr(u8 rs1)
-{
- return rv_cr_insn(0x8, rs1, RV_REG_ZERO, 0x2);
-}
-
-static inline u16 rvc_mv(u8 rd, u8 rs)
-{
- return rv_cr_insn(0x8, rd, rs, 0x2);
-}
-
-static inline u16 rvc_jalr(u8 rs1)
-{
- return rv_cr_insn(0x9, rs1, RV_REG_ZERO, 0x2);
-}
-
-static inline u16 rvc_add(u8 rd, u8 rs)
-{
- return rv_cr_insn(0x9, rd, rs, 0x2);
-}
-
-static inline u16 rvc_swsp(u32 imm8, u8 rs2)
-{
- u32 imm;
-
- imm = (imm8 & 0x3c) | ((imm8 & 0xc0) >> 6);
- return rv_css_insn(0x6, imm, rs2, 0x2);
-}
-
-/*
- * RV64-only instructions.
- *
- * These instructions are not available on RV32. Wrap them below a #if to
- * ensure that the RV32 JIT doesn't emit any of these instructions.
- */
-
-#if __riscv_xlen == 64
-
-static inline u32 rv_addiw(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 0, rd, 0x1b);
-}
-
-static inline u32 rv_slliw(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 1, rd, 0x1b);
-}
-
-static inline u32 rv_srliw(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(imm11_0, rs1, 5, rd, 0x1b);
-}
-
-static inline u32 rv_sraiw(u8 rd, u8 rs1, u16 imm11_0)
-{
- return rv_i_insn(0x400 | imm11_0, rs1, 5, rd, 0x1b);
-}
-
-static inline u32 rv_addw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 0, rd, 0x3b);
-}
-
-static inline u32 rv_subw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0x20, rs2, rs1, 0, rd, 0x3b);
-}
-
-static inline u32 rv_sllw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 1, rd, 0x3b);
-}
-
-static inline u32 rv_srlw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0, rs2, rs1, 5, rd, 0x3b);
-}
-
-static inline u32 rv_sraw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(0x20, rs2, rs1, 5, rd, 0x3b);
-}
-
-static inline u32 rv_mulw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 0, rd, 0x3b);
-}
-
-static inline u32 rv_divuw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 5, rd, 0x3b);
-}
-
-static inline u32 rv_remuw(u8 rd, u8 rs1, u8 rs2)
-{
- return rv_r_insn(1, rs2, rs1, 7, rd, 0x3b);
-}
-
-static inline u32 rv_ld(u8 rd, u16 imm11_0, u8 rs1)
-{
- return rv_i_insn(imm11_0, rs1, 3, rd, 0x03);
-}
-
-static inline u32 rv_lwu(u8 rd, u16 imm11_0, u8 rs1)
-{
- return rv_i_insn(imm11_0, rs1, 6, rd, 0x03);
-}
-
-static inline u32 rv_sd(u8 rs1, u16 imm11_0, u8 rs2)
-{
- return rv_s_insn(imm11_0, rs2, rs1, 3, 0x23);
-}
-
-static inline u32 rv_amoadd_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-static inline u32 rv_amoand_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0xc, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-static inline u32 rv_amoor_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x8, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-static inline u32 rv_amoxor_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x4, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-static inline u32 rv_amoswap_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x1, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-static inline u32 rv_lr_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x2, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-static inline u32 rv_sc_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl)
-{
- return rv_amo_insn(0x3, aq, rl, rs2, rs1, 3, rd, 0x2f);
-}
-
-/* RV64-only RVC instructions. */
-
-static inline u16 rvc_ld(u8 rd, u32 imm8, u8 rs1)
-{
- u32 imm_hi, imm_lo;
-
- imm_hi = (imm8 & 0x38) >> 3;
- imm_lo = (imm8 & 0xc0) >> 6;
- return rv_cl_insn(0x3, imm_hi, rs1, imm_lo, rd, 0x0);
-}
-
-static inline u16 rvc_sd(u8 rs1, u32 imm8, u8 rs2)
-{
- u32 imm_hi, imm_lo;
-
- imm_hi = (imm8 & 0x38) >> 3;
- imm_lo = (imm8 & 0xc0) >> 6;
- return rv_cs_insn(0x7, imm_hi, rs1, imm_lo, rs2, 0x0);
-}
-
-static inline u16 rvc_subw(u8 rd, u8 rs)
-{
- return rv_ca_insn(0x27, rd, 0, rs, 0x1);
-}
-
-static inline u16 rvc_addiw(u8 rd, u32 imm6)
-{
- return rv_ci_insn(0x1, imm6, rd, 0x1);
-}
-
-static inline u16 rvc_ldsp(u8 rd, u32 imm9)
-{
- u32 imm;
-
- imm = ((imm9 & 0x1c0) >> 6) | (imm9 & 0x38);
- return rv_ci_insn(0x3, imm, rd, 0x2);
-}
-
-static inline u16 rvc_sdsp(u32 imm9, u8 rs2)
-{
- u32 imm;
-
- imm = (imm9 & 0x38) | ((imm9 & 0x1c0) >> 6);
- return rv_css_insn(0x7, imm, rs2, 0x2);
-}
-
-#endif /* __riscv_xlen == 64 */
-
/* Helper functions that emit RVC instructions when possible. */

static inline void emit_jalr(u8 rd, u8 rs, s32 imm, struct rv_jit_context *ctx)

--
2.34.1


2023-08-04 03:53:56

by Charlie Jenkins

Subject: [PATCH 08/10] RISC-V: kvm: Refactor instructions

Use shared instruction definitions in insn.h instead of manually
constructing them.

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/kvm/vcpu_insn.c | 281 ++++++++++++++-------------------------------
1 file changed, 86 insertions(+), 195 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 7a6abed41bc1..73c7d21b496e 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -6,130 +6,7 @@

#include <linux/bitops.h>
#include <linux/kvm_host.h>
-
-#define INSN_OPCODE_MASK 0x007c
-#define INSN_OPCODE_SHIFT 2
-#define INSN_OPCODE_SYSTEM 28
-
-#define INSN_MASK_WFI 0xffffffff
-#define INSN_MATCH_WFI 0x10500073
-
-#define INSN_MATCH_CSRRW 0x1073
-#define INSN_MASK_CSRRW 0x707f
-#define INSN_MATCH_CSRRS 0x2073
-#define INSN_MASK_CSRRS 0x707f
-#define INSN_MATCH_CSRRC 0x3073
-#define INSN_MASK_CSRRC 0x707f
-#define INSN_MATCH_CSRRWI 0x5073
-#define INSN_MASK_CSRRWI 0x707f
-#define INSN_MATCH_CSRRSI 0x6073
-#define INSN_MASK_CSRRSI 0x707f
-#define INSN_MATCH_CSRRCI 0x7073
-#define INSN_MASK_CSRRCI 0x707f
-
-#define INSN_MATCH_LB 0x3
-#define INSN_MASK_LB 0x707f
-#define INSN_MATCH_LH 0x1003
-#define INSN_MASK_LH 0x707f
-#define INSN_MATCH_LW 0x2003
-#define INSN_MASK_LW 0x707f
-#define INSN_MATCH_LD 0x3003
-#define INSN_MASK_LD 0x707f
-#define INSN_MATCH_LBU 0x4003
-#define INSN_MASK_LBU 0x707f
-#define INSN_MATCH_LHU 0x5003
-#define INSN_MASK_LHU 0x707f
-#define INSN_MATCH_LWU 0x6003
-#define INSN_MASK_LWU 0x707f
-#define INSN_MATCH_SB 0x23
-#define INSN_MASK_SB 0x707f
-#define INSN_MATCH_SH 0x1023
-#define INSN_MASK_SH 0x707f
-#define INSN_MATCH_SW 0x2023
-#define INSN_MASK_SW 0x707f
-#define INSN_MATCH_SD 0x3023
-#define INSN_MASK_SD 0x707f
-
-#define INSN_MATCH_C_LD 0x6000
-#define INSN_MASK_C_LD 0xe003
-#define INSN_MATCH_C_SD 0xe000
-#define INSN_MASK_C_SD 0xe003
-#define INSN_MATCH_C_LW 0x4000
-#define INSN_MASK_C_LW 0xe003
-#define INSN_MATCH_C_SW 0xc000
-#define INSN_MASK_C_SW 0xe003
-#define INSN_MATCH_C_LDSP 0x6002
-#define INSN_MASK_C_LDSP 0xe003
-#define INSN_MATCH_C_SDSP 0xe002
-#define INSN_MASK_C_SDSP 0xe003
-#define INSN_MATCH_C_LWSP 0x4002
-#define INSN_MASK_C_LWSP 0xe003
-#define INSN_MATCH_C_SWSP 0xc002
-#define INSN_MASK_C_SWSP 0xe003
-
-#define INSN_16BIT_MASK 0x3
-
-#define INSN_IS_16BIT(insn) (((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK)
-
-#define INSN_LEN(insn) (INSN_IS_16BIT(insn) ? 2 : 4)
-
-#ifdef CONFIG_64BIT
-#define LOG_REGBYTES 3
-#else
-#define LOG_REGBYTES 2
-#endif
-#define REGBYTES (1 << LOG_REGBYTES)
-
-#define SH_RD 7
-#define SH_RS1 15
-#define SH_RS2 20
-#define SH_RS2C 2
-#define MASK_RX 0x1f
-
-#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1))
-#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \
- (RV_X(x, 10, 3) << 3) | \
- (RV_X(x, 5, 1) << 6))
-#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \
- (RV_X(x, 5, 2) << 6))
-#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \
- (RV_X(x, 12, 1) << 5) | \
- (RV_X(x, 2, 2) << 6))
-#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \
- (RV_X(x, 12, 1) << 5) | \
- (RV_X(x, 2, 3) << 6))
-#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \
- (RV_X(x, 7, 2) << 6))
-#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \
- (RV_X(x, 7, 3) << 6))
-#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3))
-#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3))
-#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5)
-
-#define SHIFT_RIGHT(x, y) \
- ((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
-
-#define REG_MASK \
- ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
-
-#define REG_OFFSET(insn, pos) \
- (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
-
-#define REG_PTR(insn, pos, regs) \
- ((ulong *)((ulong)(regs) + REG_OFFSET(insn, pos)))
-
-#define GET_FUNCT3(insn) (((insn) >> 12) & 7)
-
-#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs))
-#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs))
-#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs))
-#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs))
-#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs))
-#define GET_SP(regs) (*REG_PTR(2, 0, regs))
-#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val))
-#define IMM_I(insn) ((s32)(insn) >> 20)
-#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \
- (s32)(((insn) >> 7) & 0x1f))
+#include <asm/insn.h>

struct insn_func {
unsigned long mask;
@@ -230,6 +107,7 @@ static const struct csr_func csr_funcs[] = {
int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
ulong insn;
+ u32 index;

if (vcpu->arch.csr_decode.return_handled)
return 0;
@@ -237,9 +115,10 @@ int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run)

/* Update destination register for CSR reads */
insn = vcpu->arch.csr_decode.insn;
- if ((insn >> SH_RD) & MASK_RX)
- SET_RD(insn, &vcpu->arch.guest_context,
- run->riscv_csr.ret_value);
+ index = riscv_insn_extract_rd(insn);
+ if (index)
+ rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context,
+ index, run->riscv_csr.ret_value);

/* Move to next instruction */
vcpu->arch.guest_context.sepc += INSN_LEN(insn);
@@ -249,36 +128,39 @@ int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run)

static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
{
+ ulong rs1_val;
int i, rc = KVM_INSN_ILLEGAL_TRAP;
- unsigned int csr_num = insn >> SH_RS2;
- unsigned int rs1_num = (insn >> SH_RS1) & MASK_RX;
- ulong rs1_val = GET_RS1(insn, &vcpu->arch.guest_context);
+ unsigned int csr_num = insn >> RV_I_IMM_11_0_OPOFF;
+ unsigned int rs1_num = riscv_insn_extract_rs1(insn);
const struct csr_func *tcfn, *cfn = NULL;
ulong val = 0, wr_mask = 0, new_val = 0;

+ rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context,
+ riscv_insn_extract_rs1(insn), &rs1_val);
+
/* Decode the CSR instruction */
- switch (GET_FUNCT3(insn)) {
- case GET_FUNCT3(INSN_MATCH_CSRRW):
+ switch (riscv_insn_extract_funct3(insn)) {
+ case RVG_FUNCT3_CSRRW:
wr_mask = -1UL;
new_val = rs1_val;
break;
- case GET_FUNCT3(INSN_MATCH_CSRRS):
+ case RVG_FUNCT3_CSRRS:
wr_mask = rs1_val;
new_val = -1UL;
break;
- case GET_FUNCT3(INSN_MATCH_CSRRC):
+ case RVG_FUNCT3_CSRRC:
wr_mask = rs1_val;
new_val = 0;
break;
- case GET_FUNCT3(INSN_MATCH_CSRRWI):
+ case RVG_FUNCT3_CSRRWI:
wr_mask = -1UL;
new_val = rs1_num;
break;
- case GET_FUNCT3(INSN_MATCH_CSRRSI):
+ case RVG_FUNCT3_CSRRSI:
wr_mask = rs1_num;
new_val = -1UL;
break;
- case GET_FUNCT3(INSN_MATCH_CSRRCI):
+ case RVG_FUNCT3_CSRRCI:
wr_mask = rs1_num;
new_val = 0;
break;
@@ -331,38 +213,38 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)

static const struct insn_func system_opcode_funcs[] = {
{
- .mask = INSN_MASK_CSRRW,
- .match = INSN_MATCH_CSRRW,
+ .mask = RVG_MASK_CSRRW,
+ .match = RVG_MATCH_CSRRW,
.func = csr_insn,
},
{
- .mask = INSN_MASK_CSRRS,
- .match = INSN_MATCH_CSRRS,
+ .mask = RVG_MASK_CSRRS,
+ .match = RVG_MATCH_CSRRS,
.func = csr_insn,
},
{
- .mask = INSN_MASK_CSRRC,
- .match = INSN_MATCH_CSRRC,
+ .mask = RVG_MASK_CSRRC,
+ .match = RVG_MATCH_CSRRC,
.func = csr_insn,
},
{
- .mask = INSN_MASK_CSRRWI,
- .match = INSN_MATCH_CSRRWI,
+ .mask = RVG_MASK_CSRRWI,
+ .match = RVG_MATCH_CSRRWI,
.func = csr_insn,
},
{
- .mask = INSN_MASK_CSRRSI,
- .match = INSN_MATCH_CSRRSI,
+ .mask = RVG_MASK_CSRRSI,
+ .match = RVG_MATCH_CSRRSI,
.func = csr_insn,
},
{
- .mask = INSN_MASK_CSRRCI,
- .match = INSN_MATCH_CSRRCI,
+ .mask = RVG_MASK_CSRRCI,
+ .match = RVG_MATCH_CSRRCI,
.func = csr_insn,
},
{
- .mask = INSN_MASK_WFI,
- .match = INSN_MATCH_WFI,
+ .mask = RV_MASK_WFI,
+ .match = RV_MATCH_WFI,
.func = wfi_insn,
},
};
@@ -414,7 +296,7 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_cpu_trap utrap = { 0 };
struct kvm_cpu_context *ct;

- if (unlikely(INSN_IS_16BIT(insn))) {
+ if (unlikely(INSN_IS_C(insn))) {
if (insn == 0) {
ct = &vcpu->arch.guest_context;
insn = kvm_riscv_vcpu_unpriv_read(vcpu, true,
@@ -426,12 +308,12 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
return 1;
}
}
- if (INSN_IS_16BIT(insn))
+ if (INSN_IS_C(insn))
return truly_illegal_insn(vcpu, run, insn);
}

- switch ((insn & INSN_OPCODE_MASK) >> INSN_OPCODE_SHIFT) {
- case INSN_OPCODE_SYSTEM:
+ switch (insn & RV_INSN_OPCODE_MASK) {
+ case RVG_OPCODE_SYSTEM:
return system_opcode_insn(vcpu, run, insn);
default:
return truly_illegal_insn(vcpu, run, insn);
@@ -466,7 +348,7 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run,
* Bit[0] == 1 implies trapped instruction value is
* transformed instruction or custom instruction.
*/
- insn = htinst | INSN_16BIT_MASK;
+ insn = htinst | INSN_C_MASK;
insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2;
} else {
/*
@@ -485,43 +367,43 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run,
}

/* Decode length of MMIO and shift */
- if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) {
+ if (riscv_insn_is_lw(insn)) {
len = 4;
shift = 8 * (sizeof(ulong) - len);
- } else if ((insn & INSN_MASK_LB) == INSN_MATCH_LB) {
+ } else if (riscv_insn_is_lb(insn)) {
len = 1;
shift = 8 * (sizeof(ulong) - len);
- } else if ((insn & INSN_MASK_LBU) == INSN_MATCH_LBU) {
+ } else if (riscv_insn_is_lbu(insn)) {
len = 1;
shift = 8 * (sizeof(ulong) - len);
#ifdef CONFIG_64BIT
- } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) {
+ } else if (riscv_insn_is_ld(insn)) {
len = 8;
shift = 8 * (sizeof(ulong) - len);
- } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) {
+ } else if (riscv_insn_is_lwu(insn)) {
len = 4;
#endif
- } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) {
+ } else if (riscv_insn_is_lh(insn)) {
len = 2;
shift = 8 * (sizeof(ulong) - len);
- } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) {
+ } else if (riscv_insn_is_lhu(insn)) {
len = 2;
#ifdef CONFIG_64BIT
- } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) {
+ } else if (riscv_insn_is_c_ld(insn)) {
len = 8;
shift = 8 * (sizeof(ulong) - len);
- insn = RVC_RS2S(insn) << SH_RD;
- } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ insn = riscv_insn_extract_csca_rs2(insn) << RVG_RD_OPOFF;
+ } else if (riscv_insn_is_c_ldsp(insn) &&
+ riscv_insn_extract_rd(insn)) {
len = 8;
shift = 8 * (sizeof(ulong) - len);
#endif
- } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) {
+ } else if (riscv_insn_is_c_lw(insn)) {
len = 4;
shift = 8 * (sizeof(ulong) - len);
- insn = RVC_RS2S(insn) << SH_RD;
- } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ insn = riscv_insn_extract_csca_rs2(insn) << RVG_RD_OPOFF;
+ } else if (riscv_insn_is_c_lwsp(insn) &&
+ riscv_insn_extract_rd(insn)) {
len = 4;
shift = 8 * (sizeof(ulong) - len);
} else {
@@ -592,7 +474,7 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
* Bit[0] == 1 implies trapped instruction value is
* transformed instruction or custom instruction.
*/
- insn = htinst | INSN_16BIT_MASK;
+ insn = htinst | INSN_C_MASK;
insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2;
} else {
/*
@@ -610,35 +492,42 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
insn_len = INSN_LEN(insn);
}

- data = GET_RS2(insn, &vcpu->arch.guest_context);
+ rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context,
+			    riscv_insn_extract_rs2(insn), &data);
data8 = data16 = data32 = data64 = data;

- if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) {
+ if (riscv_insn_is_sw(insn)) {
len = 4;
- } else if ((insn & INSN_MASK_SB) == INSN_MATCH_SB) {
+ } else if (riscv_insn_is_sb(insn)) {
len = 1;
#ifdef CONFIG_64BIT
- } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) {
+ } else if (riscv_insn_is_sd(insn)) {
len = 8;
#endif
- } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) {
+ } else if (riscv_insn_is_sh(insn)) {
len = 2;
#ifdef CONFIG_64BIT
- } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
+ } else if (riscv_insn_is_c_sd(insn)) {
len = 8;
- data64 = GET_RS2S(insn, &vcpu->arch.guest_context);
- } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context,
+ riscv_insn_extract_rs2(insn),
+ (unsigned long *)&data64);
+ } else if (riscv_insn_is_c_sdsp(insn) && riscv_insn_extract_rd(insn)) {
len = 8;
- data64 = GET_RS2C(insn, &vcpu->arch.guest_context);
+ rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context,
+ riscv_insn_extract_csca_rs2(insn),
+ (unsigned long *)&data64);
#endif
- } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) {
+ } else if (riscv_insn_is_c_sw(insn)) {
len = 4;
- data32 = GET_RS2S(insn, &vcpu->arch.guest_context);
- } else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP &&
- ((insn >> SH_RD) & 0x1f)) {
+ rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context,
+ riscv_insn_extract_rs2(insn),
+ (unsigned long *)&data32);
+ } else if (riscv_insn_is_c_swsp(insn) && riscv_insn_extract_rd(insn)) {
len = 4;
- data32 = GET_RS2C(insn, &vcpu->arch.guest_context);
+ rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context,
+ riscv_insn_extract_csca_rs2(insn),
+ (unsigned long *)&data32);
} else {
return -EOPNOTSUPP;
}
@@ -707,6 +596,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
u32 data32;
u64 data64;
ulong insn;
+ u32 index;
int len, shift;

if (vcpu->arch.mmio_decode.return_handled)
@@ -720,27 +610,28 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)

len = vcpu->arch.mmio_decode.len;
shift = vcpu->arch.mmio_decode.shift;
+ index = riscv_insn_extract_rd(insn);

switch (len) {
case 1:
data8 = *((u8 *)run->mmio.data);
- SET_RD(insn, &vcpu->arch.guest_context,
- (ulong)data8 << shift >> shift);
+ rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context,
+ index, (ulong)data8 << shift >> shift);
break;
case 2:
data16 = *((u16 *)run->mmio.data);
- SET_RD(insn, &vcpu->arch.guest_context,
- (ulong)data16 << shift >> shift);
+ rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context,
+ index, (ulong)data16 << shift >> shift);
break;
case 4:
data32 = *((u32 *)run->mmio.data);
- SET_RD(insn, &vcpu->arch.guest_context,
- (ulong)data32 << shift >> shift);
+ rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context,
+ index, (ulong)data32 << shift >> shift);
break;
case 8:
data64 = *((u64 *)run->mmio.data);
- SET_RD(insn, &vcpu->arch.guest_context,
- (ulong)data64 << shift >> shift);
+ rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context,
+ index, (ulong)data64 << shift >> shift);
break;
default:
return -EOPNOTSUPP;

--
2.34.1


2023-08-04 03:59:03

by Charlie Jenkins

[permalink] [raw]
Subject: [PATCH 03/10] RISC-V: Refactor jump label instructions

Use shared instruction definitions in insn.h instead of manually
constructing them.

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/include/asm/insn.h | 2 +-
arch/riscv/kernel/jump_label.c | 13 ++++---------
2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/riscv/include/asm/insn.h b/arch/riscv/include/asm/insn.h
index 04f7649e1add..124ab02973a7 100644
--- a/arch/riscv/include/asm/insn.h
+++ b/arch/riscv/include/asm/insn.h
@@ -1984,7 +1984,7 @@ static __always_inline bool riscv_insn_is_branch(u32 code)
<< RVC_J_IMM_10_OFF) | \
(RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); })

-#define RVC_EXTRACT_BTYPE_IMM(x) \
+#define RVC_EXTRACT_BZ_IMM(x) \
({typeof(x) x_ = (x); \
(RVC_X(x_, RVC_BZ_IMM_2_1_OPOFF, RVC_BZ_IMM_2_1_MASK) \
<< RVC_BZ_IMM_2_1_OFF) | \
diff --git a/arch/riscv/kernel/jump_label.c b/arch/riscv/kernel/jump_label.c
index e6694759dbd0..fdaac2a13eac 100644
--- a/arch/riscv/kernel/jump_label.c
+++ b/arch/riscv/kernel/jump_label.c
@@ -9,11 +9,9 @@
#include <linux/memory.h>
#include <linux/mutex.h>
#include <asm/bug.h>
+#include <asm/insn.h>
#include <asm/patch.h>

-#define RISCV_INSN_NOP 0x00000013U
-#define RISCV_INSN_JAL 0x0000006fU
-
void arch_jump_label_transform(struct jump_entry *entry,
enum jump_label_type type)
{
@@ -26,13 +24,10 @@ void arch_jump_label_transform(struct jump_entry *entry,
if (WARN_ON(offset & 1 || offset < -524288 || offset >= 524288))
return;

- insn = RISCV_INSN_JAL |
- (((u32)offset & GENMASK(19, 12)) << (12 - 12)) |
- (((u32)offset & GENMASK(11, 11)) << (20 - 11)) |
- (((u32)offset & GENMASK(10, 1)) << (21 - 1)) |
- (((u32)offset & GENMASK(20, 20)) << (31 - 20));
+ insn = RVG_OPCODE_JAL;
+ riscv_insn_insert_jtype_imm(&insn, (s32)offset);
} else {
- insn = RISCV_INSN_NOP;
+ insn = RVG_OPCODE_NOP;
}

mutex_lock(&text_mutex);

--
2.34.1


2023-08-04 08:26:03

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 10/10] RISC-V: Refactor bug and traps instructions

Hi Charlie,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4]

url: https://github.com/intel-lab-lkp/linux/commits/Charlie-Jenkins/RISC-V-Expand-instruction-definitions/20230804-101437
base: 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4
patch link: https://lore.kernel.org/r/20230803-master-refactor-instructions-v4-v1-10-2128e61fa4ff%40rivosinc.com
patch subject: [PATCH 10/10] RISC-V: Refactor bug and traps instructions
config: riscv-allyesconfig (https://download.01.org/0day-ci/archive/20230804/[email protected]/config)
compiler: riscv64-linux-gcc (GCC) 12.3.0
reproduce: (https://download.01.org/0day-ci/archive/20230804/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> arch/riscv/kernel/elf_kexec.c:326: warning: "RV_X" redefined
326 | #define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1))
|
In file included from arch/riscv/include/asm/bug.h:14,
from include/linux/bug.h:5,
from include/linux/thread_info.h:13,
from include/asm-generic/preempt.h:5,
from ./arch/riscv/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:79,
from include/linux/spinlock.h:56,
from include/linux/ipc.h:5,
from include/uapi/linux/sem.h:5,
from include/linux/sem.h:5,
from include/linux/compat.h:14,
from arch/riscv/include/asm/elf.h:12,
from include/linux/elf.h:6,
from arch/riscv/kernel/elf_kexec.c:15:
arch/riscv/include/asm/insn.h:1915: note: this is the location of the previous definition
1915 | #define RV_X(X, s, mask) (((X) >> (s)) & (mask))
|


vim +/RV_X +326 arch/riscv/kernel/elf_kexec.c

6261586e0c91db1 Liao Chang 2022-04-08 325
838b3e28488f702 Li Zhengyu 2022-04-08 @326 #define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1))
838b3e28488f702 Li Zhengyu 2022-04-08 327 #define RISCV_IMM_BITS 12
838b3e28488f702 Li Zhengyu 2022-04-08 328 #define RISCV_IMM_REACH (1LL << RISCV_IMM_BITS)
838b3e28488f702 Li Zhengyu 2022-04-08 329 #define RISCV_CONST_HIGH_PART(x) \
838b3e28488f702 Li Zhengyu 2022-04-08 330 (((x) + (RISCV_IMM_REACH >> 1)) & ~(RISCV_IMM_REACH - 1))
838b3e28488f702 Li Zhengyu 2022-04-08 331 #define RISCV_CONST_LOW_PART(x) ((x) - RISCV_CONST_HIGH_PART(x))
838b3e28488f702 Li Zhengyu 2022-04-08 332
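
One way to avoid the clash (a sketch only, not a tested fix): the local
helper takes a bit count while the shared RV_X in insn.h takes a mask,
so the two are not interchangeable; the local copy could be renamed, or
the shared definition dropped before redefining it:

	/*
	 * Sketch: elf_kexec.c needs the bit-count form, so undefine the
	 * mask-based RV_X pulled in indirectly via asm/bug.h before
	 * redefining it.
	 */
	#ifdef RV_X
	#undef RV_X
	#endif
	#define RV_X(x, s, n)	(((x) >> (s)) & ((1 << (n)) - 1))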

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-08-04 10:08:02

by Andrew Jones

[permalink] [raw]
Subject: Re: [PATCH 00/10] RISC-V: Refactor instructions

On Thu, Aug 03, 2023 at 07:10:25PM -0700, Charlie Jenkins wrote:
> There are numerous systems in the kernel that rely on directly
> modifying, creating, and reading instructions. Many of these systems
> have rewritten code to do this. This patch will delegate all instruction
> handling into insn.h and reg.h. All of the compressed instructions, RVI,
> Zicsr, M, A instructions are included, as well as a subset of the F,D,Q
> extensions.
>
> ---
> This is modifying code that https://lore.kernel.org/lkml/[email protected]/
> is also touching.
>
> ---
> Testing:
>
> There are a lot of subsystems touched and I have not tested every
> individual instruction. I did a lot of copy-pasting from the RISC-V spec
> so opcodes and such should be correct

How about we create macros which generate each of the functions an
instruction needs, e.g. riscv_insn_is_*(), etc. based on the output of
[1]. I know basically nothing about that project, but it looks like it
creates most of the defines this series is creating from what we [hope] to
be an authoritative source. I also assume that if we don't like the
current output format, then we could probably post patches to the project
to get the format we want. For example, we could maybe propose an "lc"
format for "Linux C".
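Roughly, I'm imagining something along these lines (the helper macro
below and the exact define names are only illustrative, not what the
generator currently emits):

	/*
	 * Illustrative sketch: stamp out a riscv_insn_is_<name>() helper
	 * from a generated MATCH/MASK pair (kernel types assumed).
	 */
	#define RISCV_INSN_IS_HELPER(name, match, mask)			\
	static __always_inline bool riscv_insn_is_##name(u32 insn)	\
	{								\
		return (insn & (mask)) == (match);			\
	}

	RISCV_INSN_IS_HELPER(csrrw, RVG_MATCH_CSRRW, RVG_MASK_CSRRW)
	RISCV_INSN_IS_HELPER(wfi, RV_MATCH_WFI, RV_MASK_WFI)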

I'd also recommend only importing the generated defines and generating
the functions that will actually have immediate consumers or are part of
a set of defines that have immediate consumers. Each consumer of new
instructions will be responsible for generating and importing the defines
and adding the respective macro invocations to generate the functions.
This series can also take that approach, i.e. convert one set of
instructions at a time, each in a separate patch.

[1] https://github.com/riscv/riscv-opcodes

Thanks,
drew


> , but the construction of every
> instruction is not fully tested.
>
> vector: Compiled and booted
>
> jump_label: Ensured static keys function as expected.
>
> kgdb: Attempted to run the provided tests but they failed even without
> my changes
>
> module: Loaded and unloaded modules
>
> patch.c: Ensured kernel booted
>
> kprobes: Used a kprobing module to probe jalr, auipc, and branch
> instructions
>
> nommu misaligned addresses: Kernel boots
>
> kvm: Ran KVM selftests
>
> bpf: Kernel boots. Most of the instructions are exclusively used by BPF
> but I am unsure of the best way of testing BPF.
>
> Signed-off-by: Charlie Jenkins <[email protected]>
>
> ---
> Charlie Jenkins (10):
> RISC-V: Expand instruction definitions
> RISC-V: vector: Refactor instructions
> RISC-V: Refactor jump label instructions
> RISC-V: KGDB: Refactor instructions
> RISC-V: module: Refactor instructions
> RISC-V: Refactor patch instructions
> RISC-V: nommu: Refactor instructions
> RISC-V: kvm: Refactor instructions
> RISC-V: bpf: Refactor instructions
> RISC-V: Refactor bug and traps instructions
>
> arch/riscv/include/asm/bug.h | 18 +-
> arch/riscv/include/asm/insn.h | 2744 +++++++++++++++++++++++++++---
> arch/riscv/include/asm/reg.h | 88 +
> arch/riscv/kernel/jump_label.c | 13 +-
> arch/riscv/kernel/kgdb.c | 13 +-
> arch/riscv/kernel/module.c | 80 +-
> arch/riscv/kernel/patch.c | 3 +-
> arch/riscv/kernel/probes/kprobes.c | 13 +-
> arch/riscv/kernel/probes/simulate-insn.c | 100 +-
> arch/riscv/kernel/probes/uprobes.c | 5 +-
> arch/riscv/kernel/traps.c | 9 +-
> arch/riscv/kernel/traps_misaligned.c | 218 +--
> arch/riscv/kernel/vector.c | 5 +-
> arch/riscv/kvm/vcpu_insn.c | 281 +--
> arch/riscv/net/bpf_jit.h | 707 +-------
> 15 files changed, 2825 insertions(+), 1472 deletions(-)
> ---
> base-commit: 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4
> change-id: 20230801-master-refactor-instructions-v4-433aa040da03
> --
> - Charlie
>
>
> --
> kvm-riscv mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/kvm-riscv

2023-08-04 18:26:50

by Charlie Jenkins

[permalink] [raw]
Subject: Re: [PATCH 00/10] RISC-V: Refactor instructions

On Fri, Aug 04, 2023 at 12:28:28PM +0300, Andrew Jones wrote:
> On Thu, Aug 03, 2023 at 07:10:25PM -0700, Charlie Jenkins wrote:
> > There are numerous systems in the kernel that rely on directly
> > modifying, creating, and reading instructions. Many of these systems
> > have rewritten code to do this. This patch will delegate all instruction
> > handling into insn.h and reg.h. All of the compressed instructions, RVI,
> > Zicsr, M, A instructions are included, as well as a subset of the F,D,Q
> > extensions.
> >
> > ---
> > This is modifying code that https://lore.kernel.org/lkml/[email protected]/
> > is also touching.
> >
> > ---
> > Testing:
> >
> > There are a lot of subsystems touched and I have not tested every
> > individual instruction. I did a lot of copy-pasting from the RISC-V spec
> > so opcodes and such should be correct
>
> How about we create macros which generate each of the functions an
> instruction needs, e.g. riscv_insn_is_*(), etc. based on the output of
> [1]. I know basically nothing about that project, but it looks like it
> creates most of the defines this series is creating from what we [hope] to
> be an authoritative source. I also assume that if we don't like the
> current output format, then we could probably post patches to the project
> to get the format we want. For example, we could maybe propose an "lc"
> format for "Linux C".
That's a great idea, I didn't realize that existed!
>
> I'd also recommend only importing the generated defines and generating
> the functions that will actually have immediate consumers or are part of
> a set of defines that have immediate consumers. Each consumer of new
> instructions will be responsible for generating and importing the defines
> and adding the respective macro invocations to generate the functions.
> This series can also take that approach, i.e. convert one set of
> instructions at a time, each in a separate patch.
Since I was hand-writing everything and copying, it wasn't too much
effort to just copy all of the instructions from a group. However, from
a testing standpoint it makes sense to exclude instructions not yet in
use.
>
> [1] https://github.com/riscv/riscv-opcodes
>
> Thanks,
> drew
>
>
> > , but the construction of every
> > instruction is not fully tested.
> >
> > vector: Compiled and booted
> >
> > jump_label: Ensured static keys function as expected.
> >
> > kgdb: Attempted to run the provided tests but they failed even without
> > my changes
> >
> > module: Loaded and unloaded modules
> >
> > patch.c: Ensured kernel booted
> >
> > kprobes: Used a kprobing module to probe jalr, auipc, and branch
> > instructions
> >
> > nommu misaligned addresses: Kernel boots
> >
> > kvm: Ran KVM selftests
> >
> > bpf: Kernel boots. Most of the instructions are exclusively used by BPF
> > but I am unsure of the best way of testing BPF.
> >
> > Signed-off-by: Charlie Jenkins <[email protected]>
> >
> > ---
> > Charlie Jenkins (10):
> > RISC-V: Expand instruction definitions
> > RISC-V: vector: Refactor instructions
> > RISC-V: Refactor jump label instructions
> > RISC-V: KGDB: Refactor instructions
> > RISC-V: module: Refactor instructions
> > RISC-V: Refactor patch instructions
> > RISC-V: nommu: Refactor instructions
> > RISC-V: kvm: Refactor instructions
> > RISC-V: bpf: Refactor instructions
> > RISC-V: Refactor bug and traps instructions
> >
> > arch/riscv/include/asm/bug.h | 18 +-
> > arch/riscv/include/asm/insn.h | 2744 +++++++++++++++++++++++++++---
> > arch/riscv/include/asm/reg.h | 88 +
> > arch/riscv/kernel/jump_label.c | 13 +-
> > arch/riscv/kernel/kgdb.c | 13 +-
> > arch/riscv/kernel/module.c | 80 +-
> > arch/riscv/kernel/patch.c | 3 +-
> > arch/riscv/kernel/probes/kprobes.c | 13 +-
> > arch/riscv/kernel/probes/simulate-insn.c | 100 +-
> > arch/riscv/kernel/probes/uprobes.c | 5 +-
> > arch/riscv/kernel/traps.c | 9 +-
> > arch/riscv/kernel/traps_misaligned.c | 218 +--
> > arch/riscv/kernel/vector.c | 5 +-
> > arch/riscv/kvm/vcpu_insn.c | 281 +--
> > arch/riscv/net/bpf_jit.h | 707 +-------
> > 15 files changed, 2825 insertions(+), 1472 deletions(-)
> > ---
> > base-commit: 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4
> > change-id: 20230801-master-refactor-instructions-v4-433aa040da03
> > --
> > - Charlie
> >
> >
> > --
> > kvm-riscv mailing list
> > [email protected]
> > http://lists.infradead.org/mailman/listinfo/kvm-riscv

2023-08-19 18:04:09

by Charlie Jenkins

[permalink] [raw]
Subject: Re: [PATCH 00/10] RISC-V: Refactor instructions

On Fri, Aug 04, 2023 at 10:24:33AM -0700, Charlie Jenkins wrote:
> On Fri, Aug 04, 2023 at 12:28:28PM +0300, Andrew Jones wrote:
> > On Thu, Aug 03, 2023 at 07:10:25PM -0700, Charlie Jenkins wrote:
> > > There are numerous systems in the kernel that rely on directly
> > > modifying, creating, and reading instructions. Many of these systems
> > > have rewritten code to do this. This patch will delegate all instruction
> > > handling into insn.h and reg.h. All of the compressed instructions, RVI,
> > > Zicsr, M, A instructions are included, as well as a subset of the F,D,Q
> > > extensions.
> > >
> > > ---
> > > This is modifying code that https://lore.kernel.org/lkml/[email protected]/
> > > is also touching.
> > >
> > > ---
> > > Testing:
> > >
> > > There are a lot of subsystems touched and I have not tested every
> > > individual instruction. I did a lot of copy-pasting from the RISC-V spec
> > > so opcodes and such should be correct
> >
> > How about we create macros which generate each of the functions an
> > instruction needs, e.g. riscv_insn_is_*(), etc. based on the output of
> > [1]. I know basically nothing about that project, but it looks like it
> > creates most of the defines this series is creating from what we [hope] to
> > be an authoritative source. I also assume that if we don't like the
> > current output format, then we could probably post patches to the project
> > to get the format we want. For example, we could maybe propose an "lc"
> > format for "Linux C".
> That's a great idea, I didn't realize that existed!
I have discovered that the riscv-opcodes repository is not in a state
that makes it helpful. If it were workable, it would make it easy to
include a "Linux C" format. I have had a pull request open on the repo
for two weeks now and the person who maintains the repo has not
interacted. At minimum, in order for it to be useful it would need an ability to
describe the bit order of immediates in an instruction and include script
arguments to select which instructions should be included. There is a
"C" format, but it is actually just a Spike format. Nonetheless, it
seems like it is prohibitive to use it.
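To make that concrete: for something like JAL, the generator would need
to know which immediate bits land in which instruction bits, roughly
what the hand-written jump_label code was doing (the helper name below
is just illustrative):

	/* Illustrative: scatter a J-type immediate into its instruction bits. */
	static inline u32 rv_jal_imm_bits(u32 offset)
	{
		return ((offset & GENMASK(19, 12)) << (12 - 12)) | /* imm[19:12] -> insn[19:12] */
		       ((offset & GENMASK(11, 11)) << (20 - 11)) | /* imm[11]    -> insn[20]    */
		       ((offset & GENMASK(10, 1))  << (21 - 1))  | /* imm[10:1]  -> insn[30:21] */
		       ((offset & GENMASK(20, 20)) << (31 - 20));  /* imm[20]    -> insn[31]    */
	}

That mapping is the kind of information I would need the generator to
be able to express.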
> >
> > I'd also recommend only importing the generated defines and generating
> > the functions that will actually have immediate consumers or are part of
> > a set of defines that have immediate consumers. Each consumer of new
> > instructions will be responsible for generating and importing the defines
> > and adding the respective macro invocations to generate the functions.
> > This series can also take that approach, i.e. convert one set of
> > instructions at a time, each in a separate patch.
> Since I was hand-writing everything and copying, it wasn't too much
> effort to just copy all of the instructions from a group. However, from
> a testing standpoint it makes sense to exclude instructions not yet in
> use.
> >
> > [1] https://github.com/riscv/riscv-opcodes
> >
> > Thanks,
> > drew
> >
> >
> > > , but the construction of every
> > > instruction is not fully tested.
> > >
> > > vector: Compiled and booted
> > >
> > > jump_label: Ensured static keys function as expected.
> > >
> > > kgdb: Attempted to run the provided tests but they failed even without
> > > my changes
> > >
> > > module: Loaded and unloaded modules
> > >
> > > patch.c: Ensured kernel booted
> > >
> > > kprobes: Used a kprobing module to probe jalr, auipc, and branch
> > > instructions
> > >
> > > nommu misaligned addresses: Kernel boots
> > >
> > > kvm: Ran KVM selftests
> > >
> > > bpf: Kernel boots. Most of the instructions are exclusively used by BPF
> > > but I am unsure of the best way of testing BPF.
> > >
> > > Signed-off-by: Charlie Jenkins <[email protected]>
> > >
> > > ---
> > > Charlie Jenkins (10):
> > > RISC-V: Expand instruction definitions
> > > RISC-V: vector: Refactor instructions
> > > RISC-V: Refactor jump label instructions
> > > RISC-V: KGDB: Refactor instructions
> > > RISC-V: module: Refactor instructions
> > > RISC-V: Refactor patch instructions
> > > RISC-V: nommu: Refactor instructions
> > > RISC-V: kvm: Refactor instructions
> > > RISC-V: bpf: Refactor instructions
> > > RISC-V: Refactor bug and traps instructions
> > >
> > > arch/riscv/include/asm/bug.h | 18 +-
> > > arch/riscv/include/asm/insn.h | 2744 +++++++++++++++++++++++++++---
> > > arch/riscv/include/asm/reg.h | 88 +
> > > arch/riscv/kernel/jump_label.c | 13 +-
> > > arch/riscv/kernel/kgdb.c | 13 +-
> > > arch/riscv/kernel/module.c | 80 +-
> > > arch/riscv/kernel/patch.c | 3 +-
> > > arch/riscv/kernel/probes/kprobes.c | 13 +-
> > > arch/riscv/kernel/probes/simulate-insn.c | 100 +-
> > > arch/riscv/kernel/probes/uprobes.c | 5 +-
> > > arch/riscv/kernel/traps.c | 9 +-
> > > arch/riscv/kernel/traps_misaligned.c | 218 +--
> > > arch/riscv/kernel/vector.c | 5 +-
> > > arch/riscv/kvm/vcpu_insn.c | 281 +--
> > > arch/riscv/net/bpf_jit.h | 707 +-------
> > > 15 files changed, 2825 insertions(+), 1472 deletions(-)
> > > ---
> > > base-commit: 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4
> > > change-id: 20230801-master-refactor-instructions-v4-433aa040da03
> > > --
> > > - Charlie
> > >
> > >
> > > --
> > > kvm-riscv mailing list
> > > [email protected]
> > > http://lists.infradead.org/mailman/listinfo/kvm-riscv

2023-08-20 11:10:41

by Jessica Clarke

[permalink] [raw]
Subject: Re: [PATCH 00/10] RISC-V: Refactor instructions

On 17 Aug 2023, at 01:31, Charlie Jenkins <[email protected]> wrote:
>
> On Fri, Aug 04, 2023 at 10:24:33AM -0700, Charlie Jenkins wrote:
>> On Fri, Aug 04, 2023 at 12:28:28PM +0300, Andrew Jones wrote:
>>> On Thu, Aug 03, 2023 at 07:10:25PM -0700, Charlie Jenkins wrote:
>>>> There are numerous systems in the kernel that rely on directly
>>>> modifying, creating, and reading instructions. Many of these systems
>>>> have rewritten code to do this. This patch will delegate all instruction
>>>> handling into insn.h and reg.h. All of the compressed instructions, RVI,
>>>> Zicsr, M, A instructions are included, as well as a subset of the F,D,Q
>>>> extensions.
>>>>
>>>> ---
>>>> This is modifying code that https://lore.kernel.org/lkml/[email protected]/
>>>> is also touching.
>>>>
>>>> ---
>>>> Testing:
>>>>
>>>> There are a lot of subsystems touched and I have not tested every
>>>> individual instruction. I did a lot of copy-pasting from the RISC-V spec
>>>> so opcodes and such should be correct
>>>
>>> How about we create macros which generate each of the functions an
>>> instruction needs, e.g. riscv_insn_is_*(), etc. based on the output of
>>> [1]. I know basically nothing about that project, but it looks like it
>>> creates most of the defines this series is creating from what we [hope] to
>>> be an authoritative source. I also assume that if we don't like the
>>> current output format, then we could probably post patches to the project
>>> to get the format we want. For example, we could maybe propose an "lc"
>>> format for "Linux C".
>> That's a great idea, I didn't realize that existed!
> I have discovered that the riscv-opcodes repository is not in a state
> that makes it helpful. If it were workable, it would make it easy to
> include a "Linux C" format. I have had a pull request open on the repo
> for two weeks now and the person who maintains the repo has not
> interacted.

Huh? Andrew has replied to you twice on your PR, and was the last one to
comment. That’s hardly “has not interacted”.

> At minimum, in order for it to be useful it would need an ability to
> describe the bit order of immediates in an instruction and include script
> arguments to select which instructions should be included. There is a
> "C" format, but it is actually just a Spike format.

So extend it? Or do something with QEMU’s equivalent that expresses it.

Jess

> Nonetheless, it
> seems like it is prohibitive to use it.
>>>
>>> I'd also recommend only importing the generated defines and generating
>>> the functions that will actually have immediate consumers or are part of
>>> a set of defines that have immediate consumers. Each consumer of new
>>> instructions will be responsible for generating and importing the defines
>>> and adding the respective macro invocations to generate the functions.
>>> This series can also take that approach, i.e. convert one set of
>>> instructions at a time, each in a separate patch.
>> Since I was hand-writing everything and copying, it wasn't too much
>> effort to just copy all of the instructions from a group. However, from
>> a testing standpoint it makes sense to exclude instructions not yet in
>> use.
>>>
>>> [1] https://github.com/riscv/riscv-opcodes
>>>
>>> Thanks,
>>> drew
>>>
>>>
>>>> , but the construction of every
>>>> instruction is not fully tested.
>>>>
>>>> vector: Compiled and booted
>>>>
>>>> jump_label: Ensured static keys function as expected.
>>>>
>>>> kgdb: Attempted to run the provided tests but they failed even without
>>>> my changes
>>>>
>>>> module: Loaded and unloaded modules
>>>>
>>>> patch.c: Ensured kernel booted
>>>>
>>>> kprobes: Used a kprobing module to probe jalr, auipc, and branch
>>>> instructions
>>>>
>>>> nommu misaligned addresses: Kernel boots
>>>>
>>>> kvm: Ran KVM selftests
>>>>
>>>> bpf: Kernel boots. Most of the instructions are exclusively used by BPF
>>>> but I am unsure of the best way of testing BPF.
>>>>
>>>> Signed-off-by: Charlie Jenkins <[email protected]>
>>>>
>>>> ---
>>>> Charlie Jenkins (10):
>>>> RISC-V: Expand instruction definitions
>>>> RISC-V: vector: Refactor instructions
>>>> RISC-V: Refactor jump label instructions
>>>> RISC-V: KGDB: Refactor instructions
>>>> RISC-V: module: Refactor instructions
>>>> RISC-V: Refactor patch instructions
>>>> RISC-V: nommu: Refactor instructions
>>>> RISC-V: kvm: Refactor instructions
>>>> RISC-V: bpf: Refactor instructions
>>>> RISC-V: Refactor bug and traps instructions
>>>>
>>>> arch/riscv/include/asm/bug.h | 18 +-
>>>> arch/riscv/include/asm/insn.h | 2744 +++++++++++++++++++++++++++---
>>>> arch/riscv/include/asm/reg.h | 88 +
>>>> arch/riscv/kernel/jump_label.c | 13 +-
>>>> arch/riscv/kernel/kgdb.c | 13 +-
>>>> arch/riscv/kernel/module.c | 80 +-
>>>> arch/riscv/kernel/patch.c | 3 +-
>>>> arch/riscv/kernel/probes/kprobes.c | 13 +-
>>>> arch/riscv/kernel/probes/simulate-insn.c | 100 +-
>>>> arch/riscv/kernel/probes/uprobes.c | 5 +-
>>>> arch/riscv/kernel/traps.c | 9 +-
>>>> arch/riscv/kernel/traps_misaligned.c | 218 +--
>>>> arch/riscv/kernel/vector.c | 5 +-
>>>> arch/riscv/kvm/vcpu_insn.c | 281 +--
>>>> arch/riscv/net/bpf_jit.h | 707 +-------
>>>> 15 files changed, 2825 insertions(+), 1472 deletions(-)
>>>> ---
>>>> base-commit: 5d0c230f1de8c7515b6567d9afba1f196fb4e2f4
>>>> change-id: 20230801-master-refactor-instructions-v4-433aa040da03
>>>> --
>>>> - Charlie
>>>>
>>>>
>>>> --
>>>> kvm-riscv mailing list
>>>> [email protected]
>>>> http://lists.infradead.org/mailman/listinfo/kvm-riscv
>
> _______________________________________________
> linux-riscv mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/linux-riscv