2022-02-01 20:48:07

by Emil Renner Berthing

Subject: [PATCH v2 0/7] Module relocation fixes and asm/insn.h header

Apologies! I messed up v1. Please consider this patch set only.

The first patch removes a bunch of code from asm/module.h, which is
included in almost all drivers through linux/module.h. Next are two
patches that fix unaligned access when doing module relocations and add
proper range checks for auipc+jalr offsets.

I'm a little less confident about the following patches, so consider
this more of an RFC for those. The idea is to consolidate the RISC-V
instruction generation and manipulation similar to arm64's asm/insn.h
header.

/Emil

Emil Renner Berthing (7):
riscv: Remove unneeded definitions from asm/module.h
riscv: Avoid unaligned access when relocating modules
riscv: Fix auipc+jalr relocation range checks
riscv: Add asm/insn.h header
riscv: Use asm/insn.h for module relocations
riscv: Use asm/insn.h to generate plt entries
riscv: Use asm/insn.h for jump labels

arch/riscv/include/asm/insn.h | 121 ++++++++++++++
arch/riscv/include/asm/module.h | 87 ----------
arch/riscv/kernel/jump_label.c | 12 +-
arch/riscv/kernel/module-sections.c | 71 +++++++++
arch/riscv/kernel/module.c | 237 +++++++++++++---------------
5 files changed, 306 insertions(+), 222 deletions(-)
create mode 100644 arch/riscv/include/asm/insn.h

--
2.35.1


2022-02-01 20:48:27

by Emil Renner Berthing

Subject: [PATCH v2 6/7] riscv: Use asm/insn.h to generate plt entries

This converts kernel/module-sections.c to use asm/insn.h to generate
the instructions in the plt entries.
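
For reference (not part of the patch), here is a rough sketch of the
immediate helpers this relies on from asm/insn.h as added in patch 4/7.
The names match the calls in the diff below, but the bodies are only my
guess from the encodings; the real header may differ:

    /* Hypothetical shape of the asm/insn.h immediate helpers */
    static inline u32 riscv_insn_u_imm(u32 imm)
    {
        /* U-type: imm[31:12] are placed in instruction bits 31:12 */
        return imm & 0xfffff000;
    }

    static inline u32 riscv_insn_i_imm(u32 imm)
    {
        /* I-type: imm[11:0] are placed in instruction bits 31:20 */
        return (imm & 0xfff) << 20;
    }

The RISCV_INSN_RD_* and RISCV_INSN_RS1_* values used below are then just
the corresponding register numbers shifted into the rd/rs1 fields.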

Signed-off-by: Emil Renner Berthing <[email protected]>
---
arch/riscv/kernel/module-sections.c | 27 ++++-----------------------
1 file changed, 4 insertions(+), 23 deletions(-)

diff --git a/arch/riscv/kernel/module-sections.c b/arch/riscv/kernel/module-sections.c
index 39d4ac681c2a..cb73399c3603 100644
--- a/arch/riscv/kernel/module-sections.c
+++ b/arch/riscv/kernel/module-sections.c
@@ -9,6 +9,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleloader.h>
+#include <asm/insn.h>

struct got_entry {
unsigned long symbol_addr; /* the real variable address */
@@ -61,36 +62,16 @@ struct plt_entry {
u32 insn_jr; /* jr t1 */
};

-#define OPC_AUIPC 0x0017
-#define OPC_LD 0x3003
-#define OPC_JALR 0x0067
-#define REG_T0 0x5
-#define REG_T1 0x6
-
static struct plt_entry emit_plt_entry(unsigned long val,
unsigned long plt,
unsigned long got_plt)
{
- /*
- * U-Type encoding:
- * +------------+----------+----------+
- * | imm[31:12] | rd[11:7] | opc[6:0] |
- * +------------+----------+----------+
- *
- * I-Type encoding:
- * +------------+------------+--------+----------+----------+
- * | imm[31:20] | rs1[19:15] | funct3 | rd[11:7] | opc[6:0] |
- * +------------+------------+--------+----------+----------+
- *
- */
unsigned long offset = got_plt - plt;
- u32 hi20 = (offset + 0x800) & 0xfffff000;
- u32 lo12 = (offset - hi20);

return (struct plt_entry) {
- OPC_AUIPC | (REG_T0 << 7) | hi20,
- OPC_LD | (lo12 << 20) | (REG_T0 << 15) | (REG_T1 << 7),
- OPC_JALR | (REG_T1 << 15)
+ RISCV_INSN_AUIPC | RISCV_INSN_RD_T0 | riscv_insn_u_imm(offset + 0x800),
+ RISCV_INSN_LD | RISCV_INSN_RD_T1 | RISCV_INSN_RS1_T0 | riscv_insn_i_imm(offset),
+ RISCV_INSN_JALR | RISCV_INSN_RS1_T1,
};
}

--
2.35.1

2022-02-01 20:48:27

by Emil Renner Berthing

Subject: [PATCH v2 5/7] riscv: Use asm/insn.h for module relocations

This converts the module relocations in kernel/module.c to use
asm/insn.h for instruction manipulation.

Also, RISC-V has a number of instruction pairs that generate 32-bit
immediates or jump/call offsets, e.g.:

lui rd, hi20
addi rd, rd, lo12

..where hi20 is the upper 20 bits to load into register rd and lo12 is
the lower 12 bits. However, both immediates are interpreted as two's
complement signed values, so the old code calculates hi20 and lo12
for a 32-bit immediate imm like this:

hi20 = (imm + 0x800) & 0xfffff000;
lo12 = (imm - hi20) & 0xfff;

This patch simplifies it to:

hi20 = (imm + 0x800) & 0xfffff000;
lo12 = imm & 0xfff;

..which amounts to the same: imm - hi20 may become negative/underflow,
but that doesn't change the lower 12 bits.
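
Not part of the patch, but a small user-space check of that claim
(illustration only):

    #include <assert.h>
    #include <stdint.h>

    static void check(int32_t imm)
    {
        uint32_t hi20 = ((uint32_t)imm + 0x800) & 0xfffff000;
        uint32_t old_lo12 = ((uint32_t)imm - hi20) & 0xfff;
        uint32_t new_lo12 = (uint32_t)imm & 0xfff;

        /* hi20 is a multiple of 0x1000, so subtracting it
         * cannot change bits 11:0 of imm */
        assert(old_lo12 == new_lo12);
    }

    int main(void)
    {
        check(0x12345678);
        check(-0x12345678);
        check(0x7ff);
        check(0x800);  /* hi20 rounds up to 0x1000, lo12 = 0x800 (-2048) */
        return 0;
    }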

Signed-off-by: Emil Renner Berthing <[email protected]>
---
arch/riscv/kernel/module.c | 138 +++++++++++++++----------------------
1 file changed, 56 insertions(+), 82 deletions(-)

diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
index a75ccf3a6ce8..2212d88776e0 100644
--- a/arch/riscv/kernel/module.c
+++ b/arch/riscv/kernel/module.c
@@ -2,6 +2,7 @@
/*
*
* Copyright (C) 2017 Zihao Yu
+ * Copyright (C) 2020 Emil Renner Berthing
*/

#include <linux/elf.h>
@@ -11,38 +12,27 @@
#include <linux/vmalloc.h>
#include <linux/sizes.h>
#include <linux/pgtable.h>
+#include <asm/insn.h>
#include <asm/sections.h>

-static inline bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
-{
- if (IS_ENABLED(CONFIG_32BIT))
- return true;
-
- /*
- * auipc+jalr can reach any PC-relative offset in the range
- * [-2^31 - 2^11, 2^31 - 2^11)
- */
- return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
-}
-
-static int riscv_insn_rmw(void *location, u32 keep, u32 set)
+static int riscv_insn_rmw(void *location, u32 mask, u32 value)
{
u16 *parcel = location;
u32 insn = (u32)parcel[0] | (u32)parcel[1] << 16;

- insn &= keep;
- insn |= set;
+ insn &= ~mask;
+ insn |= value;

parcel[0] = insn;
parcel[1] = insn >> 16;
return 0;
}

-static int riscv_insn_rvc_rmw(void *location, u16 keep, u16 set)
+static int riscv_insn_rvc_rmw(void *location, u16 mask, u16 value)
{
u16 *parcel = location;

- *parcel = (*parcel & keep) | set;
+ *parcel = (*parcel & ~mask) | value;
return 0;
}

@@ -67,55 +57,40 @@ static int apply_r_riscv_branch_rela(struct module *me, void *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - location;
- u32 imm12 = (offset & 0x1000) << (31 - 12);
- u32 imm11 = (offset & 0x800) >> (11 - 7);
- u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
- u32 imm4_1 = (offset & 0x1e) << (11 - 4);

- return riscv_insn_rmw(location, 0x1fff07f, imm12 | imm11 | imm10_5 | imm4_1);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_B_IMM_MASK,
+ riscv_insn_b_imm(offset));
}

static int apply_r_riscv_jal_rela(struct module *me, void *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - location;
- u32 imm20 = (offset & 0x100000) << (31 - 20);
- u32 imm19_12 = (offset & 0xff000);
- u32 imm11 = (offset & 0x800) << (20 - 11);
- u32 imm10_1 = (offset & 0x7fe) << (30 - 10);

- return riscv_insn_rmw(location, 0xfff, imm20 | imm19_12 | imm11 | imm10_1);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_J_IMM_MASK,
+ riscv_insn_j_imm(offset));
}

static int apply_r_riscv_rvc_branch_rela(struct module *me, void *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - location;
- u16 imm8 = (offset & 0x100) << (12 - 8);
- u16 imm7_6 = (offset & 0xc0) >> (6 - 5);
- u16 imm5 = (offset & 0x20) >> (5 - 2);
- u16 imm4_3 = (offset & 0x18) << (12 - 5);
- u16 imm2_1 = (offset & 0x6) << (12 - 10);
-
- return riscv_insn_rvc_rmw(location, 0xe383,
- imm8 | imm7_6 | imm5 | imm4_3 | imm2_1);
+
+ return riscv_insn_rvc_rmw(location,
+ RISCV_INSN_CB_IMM_MASK,
+ riscv_insn_rvc_branch_imm(offset));
}

static int apply_r_riscv_rvc_jump_rela(struct module *me, void *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - location;
- u16 imm11 = (offset & 0x800) << (12 - 11);
- u16 imm10 = (offset & 0x400) >> (10 - 8);
- u16 imm9_8 = (offset & 0x300) << (12 - 11);
- u16 imm7 = (offset & 0x80) >> (7 - 6);
- u16 imm6 = (offset & 0x40) << (12 - 11);
- u16 imm5 = (offset & 0x20) >> (5 - 2);
- u16 imm4 = (offset & 0x10) << (12 - 5);
- u16 imm3_1 = (offset & 0xe) << (12 - 10);
-
- return riscv_insn_rvc_rmw(location, 0xe003,
- imm11 | imm10 | imm9_8 | imm7 | imm6 | imm5 | imm4 | imm3_1);
+
+ return riscv_insn_rvc_rmw(location,
+ RISCV_INSN_CJ_IMM_MASK,
+ riscv_insn_rvc_jump_imm(offset));
}

static int apply_r_riscv_pcrel_hi20_rela(struct module *me, void *location,
@@ -130,30 +105,27 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, void *location,
return -EINVAL;
}

- return riscv_insn_rmw(location, 0xfff, (offset + 0x800) & 0xfffff000);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_U_IMM_MASK,
+ riscv_insn_u_imm(offset + 0x800));
}

static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, void *location,
Elf_Addr v)
{
- /*
- * v is the lo12 value to fill. It is calculated before calling this
- * handler.
- */
- return riscv_insn_rmw(location, 0xfffff, (v & 0xfff) << 20);
+ /* v is already the relative offset */
+ return riscv_insn_rmw(location,
+ RISCV_INSN_I_IMM_MASK,
+ riscv_insn_i_imm(v));
}

static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, void *location,
Elf_Addr v)
{
- /*
- * v is the lo12 value to fill. It is calculated before calling this
- * handler.
- */
- u32 imm11_5 = (v & 0xfe0) << (31 - 11);
- u32 imm4_0 = (v & 0x1f) << (11 - 4);
-
- return riscv_insn_rmw(location, 0x1fff07f, imm11_5 | imm4_0);
+ /* v is already the relative offset */
+ return riscv_insn_rmw(location,
+ RISCV_INSN_S_IMM_MASK,
+ riscv_insn_s_imm(v));
}

static int apply_r_riscv_hi20_rela(struct module *me, void *location,
@@ -166,29 +138,27 @@ static int apply_r_riscv_hi20_rela(struct module *me, void *location,
return -EINVAL;
}

- return riscv_insn_rmw(location, 0xfff, ((s32)v + 0x800) & 0xfffff000);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_U_IMM_MASK,
+ riscv_insn_u_imm(v + 0x800));
}

static int apply_r_riscv_lo12_i_rela(struct module *me, void *location,
Elf_Addr v)
{
/* Skip medlow checking because of filtering by HI20 already */
- s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
- s32 lo12 = ((s32)v - hi20);
-
- return riscv_insn_rmw(location, 0xfffff, (lo12 & 0xfff) << 20);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_I_IMM_MASK,
+ riscv_insn_i_imm(v));
}

static int apply_r_riscv_lo12_s_rela(struct module *me, void *location,
Elf_Addr v)
{
/* Skip medlow checking because of filtering by HI20 already */
- s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
- s32 lo12 = ((s32)v - hi20);
- u32 imm11_5 = (lo12 & 0xfe0) << (31 - 11);
- u32 imm4_0 = (lo12 & 0x1f) << (11 - 4);
-
- return riscv_insn_rmw(location, 0x1fff07f, imm11_5 | imm4_0);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_S_IMM_MASK,
+ riscv_insn_s_imm(v));
}

static int apply_r_riscv_got_hi20_rela(struct module *me, void *location,
@@ -206,14 +176,15 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, void *location,
return -EINVAL;
}

- return riscv_insn_rmw(location, 0xfff, (offset + 0x800) & 0xfffff000);
+ return riscv_insn_rmw(location,
+ RISCV_INSN_U_IMM_MASK,
+ riscv_insn_u_imm(offset + 0x800));
}

static int apply_r_riscv_call_plt_rela(struct module *me, void *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - location;
- u32 hi20, lo12;

if (!riscv_insn_valid_32bit_offset(offset)) {
/* Only emit the plt entry if offset over 32-bit range */
@@ -227,17 +198,18 @@ static int apply_r_riscv_call_plt_rela(struct module *me, void *location,
}
}

- hi20 = (offset + 0x800) & 0xfffff000;
- lo12 = (offset - hi20) & 0xfff;
- riscv_insn_rmw(location, 0xfff, hi20);
- return riscv_insn_rmw(location + 4, 0xfffff, lo12 << 20);
+ riscv_insn_rmw(location,
+ RISCV_INSN_U_IMM_MASK,
+ riscv_insn_u_imm(offset + 0x800));
+ return riscv_insn_rmw(location + 4,
+ RISCV_INSN_I_IMM_MASK,
+ riscv_insn_i_imm(offset));
}

static int apply_r_riscv_call_rela(struct module *me, void *location,
Elf_Addr v)
{
ptrdiff_t offset = (void *)v - location;
- u32 hi20, lo12;

if (!riscv_insn_valid_32bit_offset(offset)) {
pr_err(
@@ -246,10 +218,12 @@ static int apply_r_riscv_call_rela(struct module *me, void *location,
return -EINVAL;
}

- hi20 = (offset + 0x800) & 0xfffff000;
- lo12 = (offset - hi20) & 0xfff;
- riscv_insn_rmw(location, 0xfff, hi20);
- return riscv_insn_rmw(location + 4, 0xfffff, lo12 << 20);
+ riscv_insn_rmw(location,
+ RISCV_INSN_U_IMM_MASK,
+ riscv_insn_u_imm(offset + 0x800));
+ return riscv_insn_rmw(location + 4,
+ RISCV_INSN_I_IMM_MASK,
+ riscv_insn_i_imm(offset));
}

static int apply_r_riscv_relax_rela(struct module *me, void *location,
--
2.35.1

2022-02-01 20:50:02

by Emil Renner Berthing

Subject: [PATCH v2 2/7] riscv: Avoid unaligned access when relocating modules

With the C extension, regular 32-bit instructions are not necessarily
aligned on 4-byte boundaries. RISC-V instructions are in fact an
ordered list of 16-bit native-endian "parcels", so access the
instruction as such.

This should also make the code work in case someone builds
a big-endian RISC-V machine.
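
For example (hypothetical layout, not taken from the patch), a module's
text may contain a compressed instruction followed by a 32-bit
instruction that needs relocating:

    addr + 0x0:  c.nop              (2 bytes)
    addr + 0x2:  auipc t0, 0        (4 bytes, R_RISCV_PCREL_HI20 target)

The auipc starts on a 2-byte boundary, so patching it through a u32
pointer is an unaligned access. Reading and writing the two 16-bit
parcels individually, as the helpers below do, is always naturally
aligned.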

Fix rcv -> rvc typo while we're at it.

Signed-off-by: Emil Renner Berthing <[email protected]>
---
arch/riscv/kernel/module.c | 151 +++++++++++++++++++------------------
1 file changed, 76 insertions(+), 75 deletions(-)

diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
index 68a9e3d1fe16..3d33442226e7 100644
--- a/arch/riscv/kernel/module.c
+++ b/arch/riscv/kernel/module.c
@@ -13,68 +13,86 @@
#include <linux/pgtable.h>
#include <asm/sections.h>

-static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
+static int riscv_insn_rmw(void *location, u32 keep, u32 set)
+{
+ u16 *parcel = location;
+ u32 insn = (u32)parcel[0] | (u32)parcel[1] << 16;
+
+ insn &= keep;
+ insn |= set;
+
+ parcel[0] = insn;
+ parcel[1] = insn >> 16;
+ return 0;
+}
+
+static int riscv_insn_rvc_rmw(void *location, u16 keep, u16 set)
+{
+ u16 *parcel = location;
+
+ *parcel = (*parcel & keep) | set;
+ return 0;
+}
+
+static int apply_r_riscv_32_rela(struct module *me, void *location, Elf_Addr v)
{
if (v != (u32)v) {
pr_err("%s: value %016llx out of range for 32-bit field\n",
me->name, (long long)v);
return -EINVAL;
}
- *location = v;
+ *(u32 *)location = v;
return 0;
}

-static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
+static int apply_r_riscv_64_rela(struct module *me, void *location, Elf_Addr v)
{
*(u64 *)location = v;
return 0;
}

-static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
+static int apply_r_riscv_branch_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
+ ptrdiff_t offset = (void *)v - location;
u32 imm12 = (offset & 0x1000) << (31 - 12);
u32 imm11 = (offset & 0x800) >> (11 - 7);
u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
u32 imm4_1 = (offset & 0x1e) << (11 - 4);

- *location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
- return 0;
+ return riscv_insn_rmw(location, 0x1fff07f, imm12 | imm11 | imm10_5 | imm4_1);
}

-static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
+static int apply_r_riscv_jal_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
+ ptrdiff_t offset = (void *)v - location;
u32 imm20 = (offset & 0x100000) << (31 - 20);
u32 imm19_12 = (offset & 0xff000);
u32 imm11 = (offset & 0x800) << (20 - 11);
u32 imm10_1 = (offset & 0x7fe) << (30 - 10);

- *location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
- return 0;
+ return riscv_insn_rmw(location, 0xfff, imm20 | imm19_12 | imm11 | imm10_1);
}

-static int apply_r_riscv_rcv_branch_rela(struct module *me, u32 *location,
+static int apply_r_riscv_rvc_branch_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
+ ptrdiff_t offset = (void *)v - location;
u16 imm8 = (offset & 0x100) << (12 - 8);
u16 imm7_6 = (offset & 0xc0) >> (6 - 5);
u16 imm5 = (offset & 0x20) >> (5 - 2);
u16 imm4_3 = (offset & 0x18) << (12 - 5);
u16 imm2_1 = (offset & 0x6) << (12 - 10);

- *(u16 *)location = (*(u16 *)location & 0xe383) |
- imm8 | imm7_6 | imm5 | imm4_3 | imm2_1;
- return 0;
+ return riscv_insn_rvc_rmw(location, 0xe383,
+ imm8 | imm7_6 | imm5 | imm4_3 | imm2_1);
}

-static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location,
+static int apply_r_riscv_rvc_jump_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
+ ptrdiff_t offset = (void *)v - location;
u16 imm11 = (offset & 0x800) << (12 - 11);
u16 imm10 = (offset & 0x400) >> (10 - 8);
u16 imm9_8 = (offset & 0x300) << (12 - 11);
@@ -84,16 +102,14 @@ static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location,
u16 imm4 = (offset & 0x10) << (12 - 5);
u16 imm3_1 = (offset & 0xe) << (12 - 10);

- *(u16 *)location = (*(u16 *)location & 0xe003) |
- imm11 | imm10 | imm9_8 | imm7 | imm6 | imm5 | imm4 | imm3_1;
- return 0;
+ return riscv_insn_rvc_rmw(location, 0xe003,
+ imm11 | imm10 | imm9_8 | imm7 | imm6 | imm5 | imm4 | imm3_1);
}

-static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
- s32 hi20;
+ ptrdiff_t offset = (void *)v - location;

if (offset != (s32)offset) {
pr_err(
@@ -102,23 +118,20 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = (offset + 0x800) & 0xfffff000;
- *location = (*location & 0xfff) | hi20;
- return 0;
+ return riscv_insn_rmw(location, 0xfff, (offset + 0x800) & 0xfffff000);
}

-static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, void *location,
Elf_Addr v)
{
/*
* v is the lo12 value to fill. It is calculated before calling this
* handler.
*/
- *location = (*location & 0xfffff) | ((v & 0xfff) << 20);
- return 0;
+ return riscv_insn_rmw(location, 0xfffff, (v & 0xfff) << 20);
}

-static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, void *location,
Elf_Addr v)
{
/*
@@ -128,15 +141,12 @@ static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
u32 imm11_5 = (v & 0xfe0) << (31 - 11);
u32 imm4_0 = (v & 0x1f) << (11 - 4);

- *location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
- return 0;
+ return riscv_insn_rmw(location, 0x1fff07f, imm11_5 | imm4_0);
}

-static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
+static int apply_r_riscv_hi20_rela(struct module *me, void *location,
Elf_Addr v)
{
- s32 hi20;
-
if (IS_ENABLED(CONFIG_CMODEL_MEDLOW)) {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
@@ -144,22 +154,20 @@ static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = ((s32)v + 0x800) & 0xfffff000;
- *location = (*location & 0xfff) | hi20;
- return 0;
+ return riscv_insn_rmw(location, 0xfff, ((s32)v + 0x800) & 0xfffff000);
}

-static int apply_r_riscv_lo12_i_rela(struct module *me, u32 *location,
+static int apply_r_riscv_lo12_i_rela(struct module *me, void *location,
Elf_Addr v)
{
/* Skip medlow checking because of filtering by HI20 already */
s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
s32 lo12 = ((s32)v - hi20);
- *location = (*location & 0xfffff) | ((lo12 & 0xfff) << 20);
- return 0;
+
+ return riscv_insn_rmw(location, 0xfffff, (lo12 & 0xfff) << 20);
}

-static int apply_r_riscv_lo12_s_rela(struct module *me, u32 *location,
+static int apply_r_riscv_lo12_s_rela(struct module *me, void *location,
Elf_Addr v)
{
/* Skip medlow checking because of filtering by HI20 already */
@@ -167,20 +175,18 @@ static int apply_r_riscv_lo12_s_rela(struct module *me, u32 *location,
s32 lo12 = ((s32)v - hi20);
u32 imm11_5 = (lo12 & 0xfe0) << (31 - 11);
u32 imm4_0 = (lo12 & 0x1f) << (11 - 4);
- *location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
- return 0;
+
+ return riscv_insn_rmw(location, 0x1fff07f, imm11_5 | imm4_0);
}

-static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
+static int apply_r_riscv_got_hi20_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
- s32 hi20;
+ ptrdiff_t offset = (void *)v - location;

/* Always emit the got entry */
if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
- offset = module_emit_got_entry(me, v);
- offset = (void *)offset - (void *)location;
+ offset = (void *)module_emit_got_entry(me, v) - location;
} else {
pr_err(
"%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n",
@@ -188,23 +194,20 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
return -EINVAL;
}

- hi20 = (offset + 0x800) & 0xfffff000;
- *location = (*location & 0xfff) | hi20;
- return 0;
+ return riscv_insn_rmw(location, 0xfff, (offset + 0x800) & 0xfffff000);
}

-static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+static int apply_r_riscv_call_plt_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
+ ptrdiff_t offset = (void *)v - location;
s32 fill_v = offset;
u32 hi20, lo12;

if (offset != fill_v) {
/* Only emit the plt entry if offset over 32-bit range */
if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
- offset = module_emit_plt_entry(me, v);
- offset = (void *)offset - (void *)location;
+ offset = (void *)module_emit_plt_entry(me, v) - location;
} else {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
@@ -215,15 +218,14 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,

hi20 = (offset + 0x800) & 0xfffff000;
lo12 = (offset - hi20) & 0xfff;
- *location = (*location & 0xfff) | hi20;
- *(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
- return 0;
+ riscv_insn_rmw(location, 0xfff, hi20);
+ return riscv_insn_rmw(location + 4, 0xfffff, lo12 << 20);
}

-static int apply_r_riscv_call_rela(struct module *me, u32 *location,
+static int apply_r_riscv_call_rela(struct module *me, void *location,
Elf_Addr v)
{
- ptrdiff_t offset = (void *)v - (void *)location;
+ ptrdiff_t offset = (void *)v - location;
s32 fill_v = offset;
u32 hi20, lo12;

@@ -236,18 +238,17 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,

hi20 = (offset + 0x800) & 0xfffff000;
lo12 = (offset - hi20) & 0xfff;
- *location = (*location & 0xfff) | hi20;
- *(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
- return 0;
+ riscv_insn_rmw(location, 0xfff, hi20);
+ return riscv_insn_rmw(location + 4, 0xfffff, lo12 << 20);
}

-static int apply_r_riscv_relax_rela(struct module *me, u32 *location,
+static int apply_r_riscv_relax_rela(struct module *me, void *location,
Elf_Addr v)
{
return 0;
}

-static int apply_r_riscv_align_rela(struct module *me, u32 *location,
+static int apply_r_riscv_align_rela(struct module *me, void *location,
Elf_Addr v)
{
pr_err(
@@ -256,41 +257,41 @@ static int apply_r_riscv_align_rela(struct module *me, u32 *location,
return -EINVAL;
}

-static int apply_r_riscv_add32_rela(struct module *me, u32 *location,
+static int apply_r_riscv_add32_rela(struct module *me, void *location,
Elf_Addr v)
{
*(u32 *)location += (u32)v;
return 0;
}

-static int apply_r_riscv_add64_rela(struct module *me, u32 *location,
+static int apply_r_riscv_add64_rela(struct module *me, void *location,
Elf_Addr v)
{
*(u64 *)location += (u64)v;
return 0;
}

-static int apply_r_riscv_sub32_rela(struct module *me, u32 *location,
+static int apply_r_riscv_sub32_rela(struct module *me, void *location,
Elf_Addr v)
{
*(u32 *)location -= (u32)v;
return 0;
}

-static int apply_r_riscv_sub64_rela(struct module *me, u32 *location,
+static int apply_r_riscv_sub64_rela(struct module *me, void *location,
Elf_Addr v)
{
*(u64 *)location -= (u64)v;
return 0;
}

-static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+static int (*reloc_handlers_rela[]) (struct module *me, void *location,
Elf_Addr v) = {
[R_RISCV_32] = apply_r_riscv_32_rela,
[R_RISCV_64] = apply_r_riscv_64_rela,
[R_RISCV_BRANCH] = apply_r_riscv_branch_rela,
[R_RISCV_JAL] = apply_r_riscv_jal_rela,
- [R_RISCV_RVC_BRANCH] = apply_r_riscv_rcv_branch_rela,
+ [R_RISCV_RVC_BRANCH] = apply_r_riscv_rvc_branch_rela,
[R_RISCV_RVC_JUMP] = apply_r_riscv_rvc_jump_rela,
[R_RISCV_PCREL_HI20] = apply_r_riscv_pcrel_hi20_rela,
[R_RISCV_PCREL_LO12_I] = apply_r_riscv_pcrel_lo12_i_rela,
@@ -314,9 +315,9 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
struct module *me)
{
Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
- int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+ int (*handler)(struct module *me, void *location, Elf_Addr v);
Elf_Sym *sym;
- u32 *location;
+ void *location;
unsigned int i, type;
Elf_Addr v;
int res;
--
2.35.1

2022-02-23 02:38:49

by Palmer Dabbelt

Subject: Re: [PATCH v2 0/7] Module relocation fixes and asm/insn.h header

On Mon, 31 Jan 2022 10:27:13 PST (-0800), [email protected] wrote:
> Apologies! I messed up v1. Please consider this patch set only.
>
> The first patch removes a bunch of code from the asm/module.h which is
> included in almost all drivers through linux/module.h. Next are two
> patches to fix unaligned access when doing module relocations and do
> proper range checks for auipc+jalr offsets.
>
> I'm a little less confident about the following patches, so consider
> this more of an RFC for those. The idea is to consolidate the RISC-V
> instruction generation and manipulation similar to arm64's asm/insn.h
> header.
>
> /Emil
>
> Emil Renner Berthing (7):
> riscv: Remove unneeded definitions from asm/module.h
> riscv: Avoid unaligned access when relocating modules
> riscv: Fix auipc+jalr relocation range checks
> riscv: Add asm/insn.h header
> riscv: Use asm/insn.h for module relocations
> riscv: Use asm/insn.h to generate plt entries
> riscv: Use asm/insn.h for jump labels
>
> arch/riscv/include/asm/insn.h | 121 ++++++++++++++
> arch/riscv/include/asm/module.h | 87 ----------
> arch/riscv/kernel/jump_label.c | 12 +-
> arch/riscv/kernel/module-sections.c | 71 +++++++++
> arch/riscv/kernel/module.c | 237 +++++++++++++---------------
> 5 files changed, 306 insertions(+), 222 deletions(-)
> create mode 100644 arch/riscv/include/asm/insn.h

These generally look good to me, though there's a lot of bit-field
twiddling so I'll take another look before merging it. There's a
handful of minor issues:

* There's a fix in here, mixed into the cleanups. It's generally best
to split those out.
* There's another copy of the insn patterns in our BPF JIT, it'd be nice
to clean that up too. That can be a follow-on, though.
* It's 2022, but there are some 2020 copyrights. If this really is old
stuff that's OK, I just wanted to check.

I'm usually OK just re-ordering patches myself, but I figured I'd have
to ask about the copyright dates anyway. LMK if you want to send a v2
with the fix pulled to the front, and what you want me to do about the
copyright dates (if you're going to send a v2 then just fix them, but if
you're not then just telling me is OK).

Thanks!

2022-02-23 18:25:45

by Emil Renner Berthing

Subject: Re: [PATCH v2 0/7] Module relocation fixes and asm/insn.h header

On Wed, 23 Feb 2022 at 00:15, Palmer Dabbelt <[email protected]> wrote:
> On Mon, 31 Jan 2022 10:27:13 PST (-0800), [email protected] wrote:
> > Apologies! I messed up v1. Please consider this patch set only.
> >
> > The first patch removes a bunch of code from the asm/module.h which is
> > included in almost all drivers through linux/module.h. Next are two
> > patches to fix unaligned access when doing module relocations and do
> > proper range checks for auipc+jalr offsets.
> >
> > I'm a little less confident about the following patches, so consider
> > this more of an RFC for those. The idea is to consolidate the RISC-V
> > instruction generation and manipulation similar to arm64's asm/insn.h
> > header.
> >
> > /Emil
> >
> > Emil Renner Berthing (7):
> > riscv: Remove unneeded definitions from asm/module.h
> > riscv: Avoid unaligned access when relocating modules
> > riscv: Fix auipc+jalr relocation range checks
> > riscv: Add asm/insn.h header
> > riscv: Use asm/insn.h for module relocations
> > riscv: Use asm/insn.h to generate plt entries
> > riscv: Use asm/insn.h for jump labels
> >
> > arch/riscv/include/asm/insn.h | 121 ++++++++++++++
> > arch/riscv/include/asm/module.h | 87 ----------
> > arch/riscv/kernel/jump_label.c | 12 +-
> > arch/riscv/kernel/module-sections.c | 71 +++++++++
> > arch/riscv/kernel/module.c | 237 +++++++++++++---------------
> > 5 files changed, 306 insertions(+), 222 deletions(-)
> > create mode 100644 arch/riscv/include/asm/insn.h
>
> These generally look good to me, though there's a lot of bit-field
> twiddling so I'll take another look before merging it. There's a
> handful of minor issues:
>
> * There's a fix in here, mixed into the cleanups. It's generally best
> to split those out.

There are two fixes: the 32-bit range check on rv64 and the unaligned
32-bit access. The code has been like that for years, so I was unsure
if they were worth splitting out and applying early. Since you only
mention one, I guess that's the range check. I'll send that separately.

> * There's another copy of the insn patterns in our BPF JIT, it'd be nice
> to clean that up too. That can be a follow-on, though.
> * It's 2022, but there's some 2020 copyrights. If this really is old
> stuff that's OK, I just wanted to check.

Nice catch, but the year is actually correct. These patches have been
well aged in my local repo. The reason is exactly that I never got
around to doing the BPF conversion, so now I decided to just send them
and see if they're worth finishing.

> I'm usually OK just re-ordering patches myself, but I figured I'd have
> to ask about the copyright dates anyway. LMK if you want to send a v2
> with the fix pulled to the front, and what you want me to do about the
> copyright dates (if you're going to send a v2 then just fix them, but if
> you're not then just telling me is OK).

Thank you. I'll send the range check separately and a v2 converting
the "if (IS_ENABLED(CONFIG_32BIT))" to an #ifdef to avoid the warning
the kernel test robot found.
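
For the record, a minimal sketch of what that conversion might look
like, based on the helper removed in patch 5/7 (the actual v2 may
differ):

    static inline bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
    {
    #ifdef CONFIG_32BIT
        /* on rv32 every PC-relative offset is reachable by auipc+jalr */
        return true;
    #else
        /*
         * auipc+jalr can reach any PC-relative offset in the range
         * [-2^31 - 2^11, 2^31 - 2^11)
         */
        return (-(1L << 31) - (1L << 11)) <= val &&
               val < ((1L << 31) - (1L << 11));
    #endif
    }

The point of the #ifdef is that the 1L << 31 shifts are no longer
compiled at all on 32-bit, which avoids the warning.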