2023-01-26 17:06:26

by Guo Ren

Subject: [PATCH -next V3 0/2] riscv: jump_label: Fixup & Optimization

From: Guo Ren <[email protected]>

Patch 1 is a fixup patch that should be merged into the stable tree.
Patch 2 is a further optimization for jump_label patch_text
atomicity.

Changes in V3:
- Correct the typo C.JAL -> C.J (Thx Jessica)
- Fixup compile error when CONFIG_RISCV_ISA_C=n
- Rebase on riscv for-next (20230127)

Changes in V2:
https://lore.kernel.org/linux-riscv/[email protected]/

Changes in V1:
https://lore.kernel.org/linux-riscv/[email protected]/

Andy Chiu (1):
riscv: jump_label: Fixup unaligned arch_static_branch function

Guo Ren (1):
riscv: jump_label: Optimize the code size with compressed instruction

arch/riscv/include/asm/jump_label.h | 14 ++++++++++++--
arch/riscv/kernel/jump_label.c | 30 +++++++++++++++++++++++++++--
2 files changed, 40 insertions(+), 4 deletions(-)

--
2.36.1



2023-01-26 17:06:31

by Guo Ren

Subject: [PATCH -next V3 1/2] riscv: jump_label: Fixup unaligned arch_static_branch function

From: Andy Chiu <[email protected]>

Runtime code patching must be done at a naturally aligned address, or we
may execute on a partial instruction.

We encountered problems traced back to static jump functions during
testing. We switched the tracer randomly every 1~5 seconds on a
dual-core QEMU setup and found the kernel stuck at a static branch
where it jumps to itself.

The reason is that the static branch was 2-byte aligned but not 4-byte
aligned. The kernel then patches the instruction, either J or NOP, with
two half-word stores if the machine does not have efficient unaligned
accesses. Thus, there are moments where one half of the NOP is mixed
with the other half of the J while the branch is being transitioned. In
our particular case, on a little-endian machine, the upper half of the
NOP was mixed with the lower half of the J when enabling the branch,
resulting in a jump that jumped to itself. Conversely, disabling the
branch would produce a HINT instruction, which might not be observable.

ARM64 does not have this problem since all instructions must be 4-byte
aligned.
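
As an illustration of the race described above, here is a minimal
user-space sketch (my own, not part of the patch) of what happens on a
little-endian machine when a 4-byte slot that is only 2-byte aligned is
rewritten with two half-word stores; the instruction words are the ones
used by the RISC-V jump_label code, everything else is hypothetical:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-alone demo, not kernel code. */
#define RISCV_INSN_NOP 0x00000013U	/* addi x0, x0, 0 */
#define RISCV_INSN_JAL 0x0000006fU	/* jal zero, <offset>, offset 0 here */

int main(void)
{
	uint16_t slot[2];	/* a 4-byte instruction slot, only 2-byte aligned */
	uint32_t old = RISCV_INSN_NOP, new = RISCV_INSN_JAL, mixed;

	memcpy(slot, &old, sizeof(old));	/* branch currently disabled: NOP */

	/* The first half-word store of the J has landed, the second has not. */
	slot[0] = (uint16_t)(new & 0xffff);

	memcpy(&mixed, slot, sizeof(mixed));
	/* The lower half of the J (0x006f for a small forward offset) plus
	 * the upper half of the NOP (0x0000) gives 0x0000006f, which decodes
	 * as "jal zero, 0": a jump to itself. */
	printf("transient instruction: 0x%08x\n", mixed);
	return 0;
}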

Fixes: ebc00dde8a97 ("riscv: Add jump-label implementation")
Link: https://lore.kernel.org/linux-riscv/[email protected]/
Signed-off-by: Andy Chiu <[email protected]>
Reviewed-by: Greentime Hu <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/include/asm/jump_label.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/riscv/include/asm/jump_label.h b/arch/riscv/include/asm/jump_label.h
index 6d58bbb5da46..14a5ea8d8ef0 100644
--- a/arch/riscv/include/asm/jump_label.h
+++ b/arch/riscv/include/asm/jump_label.h
@@ -18,6 +18,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key,
const bool branch)
{
asm_volatile_goto(
+ " .align 2 \n\t"
" .option push \n\t"
" .option norelax \n\t"
" .option norvc \n\t"
@@ -39,6 +40,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
const bool branch)
{
asm_volatile_goto(
+ " .align 2 \n\t"
" .option push \n\t"
" .option norelax \n\t"
" .option norvc \n\t"
--
2.36.1


2023-01-26 17:06:34

by Guo Ren

Subject: [PATCH -next V3 2/2] riscv: jump_label: Optimize the code size with compressed instruction

From: Guo Ren <[email protected]>

Reduce the size of the static branch instruction and avoid the atomic
patching problem when CONFIG_RISCV_ISA_C=y. This also reduces the jump
range from 1MB (±512KB) to 4KB (±2KB), but 4KB is enough for the
current riscv requirement.
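
As a sanity check of the immediate scrambling in the diff below, here is
a small stand-alone sketch (my own illustration, not part of the patch)
that builds a C.J encoding for one in-range offset using the same bit
shuffling; the helper name and the open-coded GENMASK() are assumptions
made so it can run outside the kernel:

#include <stdint.h>
#include <stdio.h>

/* Open-coded GENMASK(h, l) for a user-space test. */
#define GENMASK(h, l)	((((1UL << (h)) << 1) - 1) & ~((1UL << (l)) - 1))

#define RISCV_INSN_C_J	0xa001UL	/* funct3=101, opcode=01, imm bits clear */

/* Hypothetical stand-alone helper mirroring the patch's math:
 * pack a signed offset into instr[12:2] = imm[11|4|9:8|10|6|7|3:1|5]. */
static uint16_t riscv_c_j(long offset)
{
	unsigned long off = (uint16_t)offset;

	return RISCV_INSN_C_J |
	       ((off & GENMASK(5, 5)) >> (5 - 2)) |
	       ((off & GENMASK(3, 1)) << (3 - 1)) |
	       ((off & GENMASK(7, 7)) >> (7 - 6)) |
	       ((off & GENMASK(6, 6)) << (7 - 6)) |
	       ((off & GENMASK(10, 10)) >> (10 - 8)) |
	       ((off & GENMASK(9, 8)) << (9 - 8)) |
	       ((off & GENMASK(4, 4)) << (11 - 4)) |
	       ((off & GENMASK(11, 11)) << (12 - 11));
}

int main(void)
{
	/* Offset +64 only sets imm[6], which lands in instruction bit 7,
	 * so the expected encoding is 0xa001 | 0x0080 = 0xa081. */
	printf("c.j 64 -> 0x%04x\n", (unsigned)riscv_c_j(64));
	return 0;
}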

Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/include/asm/jump_label.h | 16 +++++++++++----
arch/riscv/kernel/jump_label.c | 30 +++++++++++++++++++++++++++--
2 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/include/asm/jump_label.h b/arch/riscv/include/asm/jump_label.h
index 14a5ea8d8ef0..afc58c31d02b 100644
--- a/arch/riscv/include/asm/jump_label.h
+++ b/arch/riscv/include/asm/jump_label.h
@@ -12,17 +12,23 @@
#include <linux/types.h>
#include <asm/asm.h>

+#ifdef CONFIG_RISCV_ISA_C
+#define JUMP_LABEL_NOP_SIZE 2
+#else
#define JUMP_LABEL_NOP_SIZE 4
+#endif

static __always_inline bool arch_static_branch(struct static_key * const key,
const bool branch)
{
asm_volatile_goto(
- " .align 2 \n\t"
" .option push \n\t"
" .option norelax \n\t"
- " .option norvc \n\t"
+#ifdef CONFIG_RISCV_ISA_C
+ "1: c.nop \n\t"
+#else
"1: nop \n\t"
+#endif
" .option pop \n\t"
" .pushsection __jump_table, \"aw\" \n\t"
" .align " RISCV_LGPTR " \n\t"
@@ -40,11 +46,13 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
const bool branch)
{
asm_volatile_goto(
- " .align 2 \n\t"
" .option push \n\t"
" .option norelax \n\t"
- " .option norvc \n\t"
+#ifdef CONFIG_RISCV_ISA_C
+ "1: c.j %l[label] \n\t"
+#else
"1: jal zero, %l[label] \n\t"
+#endif
" .option pop \n\t"
" .pushsection __jump_table, \"aw\" \n\t"
" .align " RISCV_LGPTR " \n\t"
diff --git a/arch/riscv/kernel/jump_label.c b/arch/riscv/kernel/jump_label.c
index e6694759dbd0..08f42c49e3a0 100644
--- a/arch/riscv/kernel/jump_label.c
+++ b/arch/riscv/kernel/jump_label.c
@@ -11,26 +11,52 @@
#include <asm/bug.h>
#include <asm/patch.h>

+#ifdef CONFIG_RISCV_ISA_C
+#define RISCV_INSN_NOP 0x0001U
+#define RISCV_INSN_C_J 0xa001U
+#else
#define RISCV_INSN_NOP 0x00000013U
#define RISCV_INSN_JAL 0x0000006fU
+#endif

void arch_jump_label_transform(struct jump_entry *entry,
enum jump_label_type type)
{
void *addr = (void *)jump_entry_code(entry);
+#ifdef CONFIG_RISCV_ISA_C
+ u16 insn;
+#else
u32 insn;
+#endif

if (type == JUMP_LABEL_JMP) {
long offset = jump_entry_target(entry) - jump_entry_code(entry);
-
- if (WARN_ON(offset & 1 || offset < -524288 || offset >= 524288))
+ if (WARN_ON(offset & 1 || offset < -2048 || offset >= 2048))
return;

+#ifdef CONFIG_RISCV_ISA_C
+ /*
+ * 001 | imm[11|4|9:8|10|6|7|3:1|5] 01 - C.J
+ */
+ insn = RISCV_INSN_C_J |
+ (((u16)offset & GENMASK(5, 5)) >> (5 - 2)) |
+ (((u16)offset & GENMASK(3, 1)) << (3 - 1)) |
+ (((u16)offset & GENMASK(7, 7)) >> (7 - 6)) |
+ (((u16)offset & GENMASK(6, 6)) << (7 - 6)) |
+ (((u16)offset & GENMASK(10, 10)) >> (10 - 8)) |
+ (((u16)offset & GENMASK(9, 8)) << (9 - 8)) |
+ (((u16)offset & GENMASK(4, 4)) << (11 - 4)) |
+ (((u16)offset & GENMASK(11, 11)) << (12 - 11));
+#else
+ /*
+ * imm[20|10:1|11|19:12] | rd | 1101111 - JAL
+ */
insn = RISCV_INSN_JAL |
(((u32)offset & GENMASK(19, 12)) << (12 - 12)) |
(((u32)offset & GENMASK(11, 11)) << (20 - 11)) |
(((u32)offset & GENMASK(10, 1)) << (21 - 1)) |
(((u32)offset & GENMASK(20, 20)) << (31 - 20));
+#endif
} else {
insn = RISCV_INSN_NOP;
}
--
2.36.1


2023-01-30 11:57:55

by Björn Töpel

Subject: Re: [PATCH -next V3 1/2] riscv: jump_label: Fixup unaligned arch_static_branch function

[email protected] writes:

> From: Andy Chiu <[email protected]>
>
> Runtime code patching must be done at a naturally aligned address, or we
> may execute on a partial instruction.
>
> We encountered problems traced back to static jump functions during
> testing. We switched the tracer randomly every 1~5 seconds on a
> dual-core QEMU setup and found the kernel stuck at a static branch
> where it jumps to itself.
>
> The reason is that the static branch was 2-byte aligned but not 4-byte
> aligned. The kernel then patches the instruction, either J or NOP, with
> two half-word stores if the machine does not have efficient unaligned
> accesses. Thus, there are moments where one half of the NOP is mixed
> with the other half of the J while the branch is being transitioned. In
> our particular case, on a little-endian machine, the upper half of the
> NOP was mixed with the lower half of the J when enabling the branch,
> resulting in a jump that jumped to itself. Conversely, disabling the
> branch would produce a HINT instruction, which might not be observable.
>
> ARM64 does not have this problem since all instructions must be 4-byte
> aligned.

Reviewed-by: Björn Töpel <[email protected]>

Nice catch! And I guess this is an issue for kprobes as well, no?
I.e. in general, replacing 32b insns with an ebreak is only valid
for naturally aligned 32b insns?

@Guo I don't see the point of doing a series for this, and asking the
maintainers to "pick this patch to stable, and the other for
next". Isn't that just more work for the maintainers/reviewers?


Björn

2023-01-31 13:36:21

by Guo Ren

Subject: Re: [PATCH -next V3 1/2] riscv: jump_label: Fixup unaligned arch_static_branch function

On Mon, Jan 30, 2023 at 7:57 PM Björn Töpel <[email protected]> wrote:
>
> [email protected] writes:
>
> > From: Andy Chiu <[email protected]>
> >
> > Runtime code patching must be done at a naturally aligned address, or we
> > may execute on a partial instruction.
> >
> > We encountered problems traced back to static jump functions during
> > testing. We switched the tracer randomly every 1~5 seconds on a
> > dual-core QEMU setup and found the kernel stuck at a static branch
> > where it jumps to itself.
> >
> > The reason is that the static branch was 2-byte aligned but not 4-byte
> > aligned. The kernel then patches the instruction, either J or NOP, with
> > two half-word stores if the machine does not have efficient unaligned
> > accesses. Thus, there are moments where one half of the NOP is mixed
> > with the other half of the J while the branch is being transitioned. In
> > our particular case, on a little-endian machine, the upper half of the
> > NOP was mixed with the lower half of the J when enabling the branch,
> > resulting in a jump that jumped to itself. Conversely, disabling the
> > branch would produce a HINT instruction, which might not be observable.
> >
> > ARM64 does not have this problem since all instructions must be 4-byte
> > aligned.
>
> Reviewed-by: Björn Töpel <[email protected]>
>
> Nice catch! And I guess this is an issue for kprobes as well, no?
> I.e. in general, replacing 32b insns with an ebreak is only valid
> for naturally aligned 32b insns?
>
> @Guo I don't see the point of doing a series for this, and asking the
> maintainers to "pick this patch to stable, and the other for
> next". Isn't that just more work for the maintainers/reviewers?
If these two patches were separated, they would both fix that issue and
compete with each other. Making my patch a pure optimization means it
must depend on the fixup. That's why I put them in one series.

>
>
> Björn



--
Best Regards
Guo Ren

Subject: Re: [PATCH -next V3 0/2] riscv: jump_label: Fixup & Optimization

Hello:

This series was applied to riscv/linux.git (for-next)
by Palmer Dabbelt <[email protected]>:

On Thu, 26 Jan 2023 12:06:05 -0500 you wrote:
> From: Guo Ren <[email protected]>
>
> Patch 1 is a fixup patch that should be merged into the stable tree.
> Patch 2 is a further optimization for jump_label patch_text
> atomicity.
>
> Changes in V3:
> - Correct the typo C.JAL -> C.J (Thx Jessica)
> - Fixup compile error when CONFIG_RISCV_ISA_C=n
> - Rebase on riscv for-next (20230127)
>
> [...]

Here is the summary with links:
- [-next,V3,1/2] riscv: jump_label: Fixup unaligned arch_static_branch function
https://git.kernel.org/riscv/c/9ddfc3cd8060
- [-next,V3,2/2] riscv: jump_label: Optimize the code size with compressed instruction
(no matching commit)

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html