2023-12-25 04:54:58

by Jisheng Zhang

Subject: [PATCH v4 0/2] riscv: enable EFFICIENT_UNALIGNED_ACCESS and DCACHE_WORD_ACCESS

Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
support efficient unaligned access, so for performance reasons we want
to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To
avoid performance regressions on platforms without efficient unaligned
access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected globally.

To solve this problem, runtime code patching based on the detected
unaligned access speed would be a good solution. But that's not easy:
it involves lots of work to modify various subsystems such as net, mm,
lib and so on. This can be done step by step.

So let's take an easier solution: add support for efficient unaligned
access and hide it behind NONPORTABLE.

patch1 introduces RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends on
NONPORTABLE. If users know at config time that the kernel will only
run on HW platforms with efficient unaligned access, they can enable
it. Obviously, a generic unified kernel Image shouldn't enable it.

patch2 adds support for DCACHE_WORD_ACCESS when both MMU and
RISCV_EFFICIENT_UNALIGNED_ACCESS are enabled.
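
To see why patch2 helps the benchmark below: with DCACHE_WORD_ACCESS the
vfs hashes and compares path components one word at a time instead of
one byte at a time, using unaligned word loads plus the usual zero-byte
detection trick. The userspace sketch below only illustrates the idea
(it is not the kernel's word-at-a-time API; the 8-byte memcpy stands in
for the unaligned load the kernel issues directly):

#include <stdio.h>
#include <string.h>

/* 0x0101...01 and 0x8080...80 for the native word size. */
#define ONE_BYTES  (~0UL / 0xff)
#define HIGH_BITS  (ONE_BYTES * 0x80)

/* Classic trick: nonzero iff 'word' contains a zero byte. */
static unsigned long has_zero_byte(unsigned long word)
{
        return (word - ONE_BYTES) & ~word & HIGH_BITS;
}

int main(void)
{
        /* Zero-padded buffer so the word-sized reads stay in bounds. */
        char name[40] = "123456781234567812345678123456781";
        unsigned long word;
        size_t off;

        /* Walk the name one word per iteration instead of one byte. */
        for (off = 0; ; off += sizeof(word)) {
                memcpy(&word, name + off, sizeof(word));
                if (has_zero_byte(word))
                        break;
                /* ...the kernel mixes 'word' into the name hash here... */
        }
        printf("terminator found in the word at offset %zu\n", off);
        return 0;
}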

The test program and steps below show how much performance can be improved:

$ cat tt.c
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#define ITERATIONS 1000000

#define PATH "123456781234567812345678123456781"

int main(void)
{
        unsigned long i;
        struct stat buf;

        for (i = 0; i < ITERATIONS; i++)
                stat(PATH, &buf);

        return 0;
}

$ gcc -O2 tt.c
$ touch 123456781234567812345678123456781
$ time ./a.out

In my tests on a T-HEAD C910 platform, the performance of the above
test improves by about 7.5%.

Since v3:
- adopt Eric's suggestions, such as a better Kconfig help message

Since v2:
- Don't set "-mstrict-align" CFLAGS if HAVE_EFFICIENT_UNALIGNED_ACCESS
- collect Reviewed-by tag

Since v1:
- fix typo in commit msg
- fix build error if NOMMU


Jisheng Zhang (2):
riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS
riscv: select DCACHE_WORD_ACCESS for efficient unaligned access HW

arch/riscv/Kconfig | 14 +++++++++++
arch/riscv/Makefile | 2 ++
arch/riscv/include/asm/asm-extable.h | 15 ++++++++++++
arch/riscv/include/asm/word-at-a-time.h | 27 +++++++++++++++++++++
arch/riscv/mm/extable.c | 31 +++++++++++++++++++++++++
5 files changed, 89 insertions(+)

--
2.40.0



2023-12-25 04:55:10

by Jisheng Zhang

Subject: [PATCH v4 1/2] riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS

Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
support efficient unaligned access, so for performance reasons we want
to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To
avoid performance regressions on other platforms without efficient
unaligned access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected
globally.

To solve this problem, runtime code patching based on the detected
unaligned access speed would be a good solution. But that's not easy:
it involves lots of work to modify various subsystems such as net, mm,
lib and so on. This can be done step by step.

So let's take an easier solution: add support for efficient unaligned
access and hide it behind NONPORTABLE.

Now let's introduce RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends on
NONPORTABLE. If users know at config time that the kernel will only
run on HW platforms with efficient unaligned access, they can enable
it. Obviously, a generic unified kernel Image shouldn't enable it.

Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Charlie Jenkins <[email protected]>
---
arch/riscv/Kconfig | 13 +++++++++++++
arch/riscv/Makefile | 2 ++
2 files changed, 15 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 24c1799e2ec4..afcc5fdc16f7 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -651,6 +651,19 @@ config RISCV_MISALIGNED
load/store for both kernel and userspace. When disable, misaligned
accesses will generate SIGBUS in userspace and panic in kernel.

+config RISCV_EFFICIENT_UNALIGNED_ACCESS
+ bool "Assume the CPU supports fast unaligned memory accesses"
+ depends on NONPORTABLE
+ select HAVE_EFFICIENT_UNALIGNED_ACCESS
+ help
+ Say Y here if you want the kernel to assume that the CPU supports
+ efficient unaligned memory accesses. When enabled, this option
+ improves the performance of the kernel on such CPUs. However, the
+ kernel will run much more slowly, or will not be able to run at all,
+ on CPUs that do not support efficient unaligned memory accesses.
+
+ If unsure what to do here, say N.
+
endmenu # "Platform type"

menu "Kernel features"
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index a74be78678eb..ebbe02628a27 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -108,7 +108,9 @@ KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
# unaligned accesses. While unaligned accesses are explicitly allowed in the
# RISC-V ISA, they're emulated by machine mode traps on all extant
# architectures. It's faster to have GCC emit only aligned accesses.
+ifneq ($(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS),y)
KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+endif

ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y)
prepare: stack_protector_prepare
--
2.40.0


2023-12-25 04:55:25

by Jisheng Zhang

Subject: [PATCH v4 2/2] riscv: select DCACHE_WORD_ACCESS for efficient unaligned access HW

DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string
comparisons in the vfs layer.

This patch implements support for load_unaligned_zeropad in much the
same way as has been done for arm64.
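
The fixup works like arm64's: the access is issued as a normal (possibly
unaligned) load, and only if that load faults on a page boundary does
the exception handler redo it as an aligned load of the word containing
the address and shift the valid bytes down, so the caller sees zeroes in
the non-existing part. Roughly, the handler computes the equivalent of
the illustrative snippet below (not the kernel code itself, which is
ex_handler_load_unaligned_zeropad() in this patch):

#include <stdio.h>
#include <string.h>

/* What the fixup computes for a faulting load_unaligned_zeropad(addr)
 * on little-endian RV64: the aligned word containing 'addr' cannot
 * cross the page boundary, so load it and shift the wanted bytes down. */
static unsigned long zeropad_fixup(unsigned long addr)
{
        unsigned long offset = addr & 0x7UL;    /* byte offset in the word */
        unsigned long aligned = addr & ~0x7UL;  /* containing aligned word */
        unsigned long word = *(unsigned long *)aligned;

        return word >> (offset * 8);            /* valid bytes low, zeroes above */
}

int main(void)
{
        /* Pretend everything past the "abcde" bytes is an unmapped page. */
        unsigned long buf[2] = { 0 };
        memcpy(buf, "abcde", 5);

        /* A faulting load at offset 3 yields "de" in the low bytes,
         * zero-padded above: prints 6564. */
        printf("%lx\n", zeropad_fixup((unsigned long)buf + 3));
        return 0;
}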

Here are the test program and test steps:

$ cat tt.c
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#define ITERATIONS 1000000

#define PATH "123456781234567812345678123456781"

int main(void)
{
        unsigned long i;
        struct stat buf;

        for (i = 0; i < ITERATIONS; i++)
                stat(PATH, &buf);

        return 0;
}

$ gcc -O2 tt.c
$ touch 123456781234567812345678123456781
$ time ./a.out

In my tests on a T-HEAD C910 platform, the performance of the above
test improves by about 7.5%.

Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/asm-extable.h | 15 ++++++++++++
arch/riscv/include/asm/word-at-a-time.h | 27 +++++++++++++++++++++
arch/riscv/mm/extable.c | 31 +++++++++++++++++++++++++
4 files changed, 74 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index afcc5fdc16f7..e34863c5a8ed 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -654,6 +654,7 @@ config RISCV_MISALIGNED
config RISCV_EFFICIENT_UNALIGNED_ACCESS
bool "Assume the CPU supports fast unaligned memory accesses"
depends on NONPORTABLE
+ select DCACHE_WORD_ACCESS if MMU
select HAVE_EFFICIENT_UNALIGNED_ACCESS
help
Say Y here if you want the kernel to assume that the CPU supports
diff --git a/arch/riscv/include/asm/asm-extable.h b/arch/riscv/include/asm/asm-extable.h
index 00a96e7a9664..0c8bfd54fc4e 100644
--- a/arch/riscv/include/asm/asm-extable.h
+++ b/arch/riscv/include/asm/asm-extable.h
@@ -6,6 +6,7 @@
#define EX_TYPE_FIXUP 1
#define EX_TYPE_BPF 2
#define EX_TYPE_UACCESS_ERR_ZERO 3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4

#ifdef CONFIG_MMU

@@ -47,6 +48,11 @@
#define EX_DATA_REG_ZERO_SHIFT 5
#define EX_DATA_REG_ZERO GENMASK(9, 5)

+#define EX_DATA_REG_DATA_SHIFT 0
+#define EX_DATA_REG_DATA GENMASK(4, 0)
+#define EX_DATA_REG_ADDR_SHIFT 5
+#define EX_DATA_REG_ADDR GENMASK(9, 5)
+
#define EX_DATA_REG(reg, gpr) \
"((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"

@@ -62,6 +68,15 @@
#define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \
_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)

+#define _ASM_EXTABLE_LOAD_UNALIGNED_ZEROPAD(insn, fixup, data, addr) \
+ __DEFINE_ASM_GPR_NUMS \
+ __ASM_EXTABLE_RAW(#insn, #fixup, \
+ __stringify(EX_TYPE_LOAD_UNALIGNED_ZEROPAD), \
+ "(" \
+ EX_DATA_REG(DATA, data) " | " \
+ EX_DATA_REG(ADDR, addr) \
+ ")")
+
#endif /* __ASSEMBLY__ */

#else /* CONFIG_MMU */
diff --git a/arch/riscv/include/asm/word-at-a-time.h b/arch/riscv/include/asm/word-at-a-time.h
index 7c086ac6ecd4..f3f031e34191 100644
--- a/arch/riscv/include/asm/word-at-a-time.h
+++ b/arch/riscv/include/asm/word-at-a-time.h
@@ -9,6 +9,7 @@
#define _ASM_RISCV_WORD_AT_A_TIME_H


+#include <asm/asm-extable.h>
#include <linux/kernel.h>

struct word_at_a_time {
@@ -45,4 +46,30 @@ static inline unsigned long find_zero(unsigned long mask)
/* The mask we created is directly usable as a bytemask */
#define zero_bytemask(mask) (mask)

+#ifdef CONFIG_DCACHE_WORD_ACCESS
+
+/*
+ * Load an unaligned word from kernel space.
+ *
+ * In the (very unlikely) case of the word being a page-crosser
+ * and the next page not being mapped, take the exception and
+ * return zeroes in the non-existing part.
+ */
+static inline unsigned long load_unaligned_zeropad(const void *addr)
+{
+ unsigned long ret;
+
+ /* Load word from unaligned pointer addr */
+ asm(
+ "1: " REG_L " %0, %2\n"
+ "2:\n"
+ _ASM_EXTABLE_LOAD_UNALIGNED_ZEROPAD(1b, 2b, %0, %1)
+ : "=&r" (ret)
+ : "r" (addr), "m" (*(unsigned long *)addr));
+
+ return ret;
+}
+
+#endif /* CONFIG_DCACHE_WORD_ACCESS */
+
#endif /* _ASM_RISCV_WORD_AT_A_TIME_H */
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
index 35484d830fd6..dd1530af3ef1 100644
--- a/arch/riscv/mm/extable.c
+++ b/arch/riscv/mm/extable.c
@@ -27,6 +27,14 @@ static bool ex_handler_fixup(const struct exception_table_entry *ex,
return true;
}

+static inline unsigned long regs_get_gpr(struct pt_regs *regs, unsigned int offset)
+{
+ if (unlikely(!offset || offset > MAX_REG_OFFSET))
+ return 0;
+
+ return *(unsigned long *)((unsigned long)regs + offset);
+}
+
static inline void regs_set_gpr(struct pt_regs *regs, unsigned int offset,
unsigned long val)
{
@@ -50,6 +58,27 @@ static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
return true;
}

+static bool
+ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
+ struct pt_regs *regs)
+{
+ int reg_data = FIELD_GET(EX_DATA_REG_DATA, ex->data);
+ int reg_addr = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
+ unsigned long data, addr, offset;
+
+ addr = regs_get_gpr(regs, reg_addr * sizeof(unsigned long));
+
+ offset = addr & 0x7UL;
+ addr &= ~0x7UL;
+
+ data = *(unsigned long *)addr >> (offset * 8);
+
+ regs_set_gpr(regs, reg_data * sizeof(unsigned long), data);
+
+ regs->epc = get_ex_fixup(ex);
+ return true;
+}
+
bool fixup_exception(struct pt_regs *regs)
{
const struct exception_table_entry *ex;
@@ -65,6 +94,8 @@ bool fixup_exception(struct pt_regs *regs)
return ex_handler_bpf(ex, regs);
case EX_TYPE_UACCESS_ERR_ZERO:
return ex_handler_uaccess_err_zero(ex, regs);
+ case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
+ return ex_handler_load_unaligned_zeropad(ex, regs);
}

BUG();
--
2.40.0


2023-12-27 04:22:59

by Eric Biggers

Subject: Re: [PATCH v4 1/2] riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS

On Mon, Dec 25, 2023 at 12:42:06PM +0800, Jisheng Zhang wrote:
> Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
> support efficient unaligned access, so for performance reasons we want
> to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To
> avoid performance regressions on other platforms without efficient
> unaligned access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected
> globally.
>
> To solve this problem, runtime code patching based on the detected
> unaligned access speed would be a good solution. But that's not easy:
> it involves lots of work to modify various subsystems such as net, mm,
> lib and so on. This can be done step by step.
>
> So let's take an easier solution: add support for efficient unaligned
> access and hide it behind NONPORTABLE.
>
> Now let's introduce RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends on
> NONPORTABLE. If users know at config time that the kernel will only
> run on HW platforms with efficient unaligned access, they can enable
> it. Obviously, a generic unified kernel Image shouldn't enable it.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> Reviewed-by: Charlie Jenkins <[email protected]>
> ---
> arch/riscv/Kconfig | 13 +++++++++++++
> arch/riscv/Makefile | 2 ++
> 2 files changed, 15 insertions(+)

Reviewed-by: Eric Biggers <[email protected]>

- Eric

2023-12-27 10:42:31

by David Laight

Subject: RE: [PATCH v4 1/2] riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS

From: Jisheng Zhang
> Sent: 25 December 2023 04:42
>
> Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
> support efficient unaligned access, so for performance reasons we want
> to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To
> avoid performance regressions on other platforms without efficient
> unaligned access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected
> globally.

How efficient are these "efficient unaligned accesses"?

For single word accesses it doesn't matter much (since they don't fault).
But for memcpy() (and similar), if misaligned accesses are only slightly
slow (e.g. the same cost as two aligned accesses) it is likely still
worth doing misaligned transfers for both ends and aligned transfers
for the middle.
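
Something along these lines, say - a hand-wavy 64-bit sketch, not a real
implementation: the helper names are made up, the buffers are assumed
not to overlap, n >= 8, and only the stores in the middle loop are
aligned (the loads may still be misaligned):

#include <stdint.h>
#include <string.h>

/* 8-byte load/store through memcpy; compiles to a single (possibly
 * misaligned) load/store where the target allows it. */
static uint64_t load64(const void *p) { uint64_t v; memcpy(&v, p, 8); return v; }
static void store64(void *p, uint64_t v) { memcpy(p, &v, 8); }

static void *copy_misaligned_ends(void *dst, const void *src, size_t n)
{
        unsigned char *d = dst;
        const unsigned char *s = src;
        unsigned char *dend = d + n;

        /* Possibly misaligned head covering d[0..7]. */
        store64(d, load64(s));

        /* Advance to the next 8-byte boundary of the destination. */
        size_t skip = 8 - ((uintptr_t)d & 7);
        d += skip;
        s += skip;

        /* Aligned middle: leave the last (up to 8) bytes to the tail. */
        while ((size_t)(dend - d) > 8) {
                store64(d, load64(s));
                d += 8;
                s += 8;
        }

        /* Possibly misaligned tail covering the last 8 bytes; it may
         * rewrite bytes already stored above, which is harmless. */
        store64(dend - 8, load64((const unsigned char *)src + n - 8));

        return dst;
}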

For example, on modern x86 it really isn't worth worrying about
misaligned transfers of 64-bit registers.
AFAICT accesses within a cacheline just use byte enables - so are zero
cost. Accesses that cross cache line boundaries do get split - but
out-of-order execution, the store buffer and the ability to do two reads
per clock cycle make the overall cost only just measurable.

Not sure how the various RISC-V CPUs compare, though.
You might get an extra clock of delay a lot more often.
So, while mostly you 'don't care' about the alignment, there may
still be a few places where it does matter.

David



Subject: Re: [PATCH v4 0/2] riscv: enable EFFICIENT_UNALIGNED_ACCESS and DCACHE_WORD_ACCESS

Hello:

This series was applied to riscv/linux.git (for-next)
by Palmer Dabbelt <[email protected]>:

On Mon, 25 Dec 2023 12:42:05 +0800 you wrote:
> Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
> support efficient unaligned access, so for performance reasons we want
> to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To
> avoid performance regressions on platforms without efficient unaligned
> access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected globally.
>
> To solve this problem, runtime code patching based on the detected
> unaligned access speed would be a good solution. But that's not easy:
> it involves lots of work to modify various subsystems such as net, mm,
> lib and so on. This can be done step by step.
>
> [...]

Here is the summary with links:
- [v4,1/2] riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS
https://git.kernel.org/riscv/c/b6da6cbe13eb
- [v4,2/2] riscv: select DCACHE_WORD_ACCESS for efficient unaligned access HW
https://git.kernel.org/riscv/c/d0fdc20b0429

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html