2024-03-07 19:06:12

by Charlie Jenkins

Subject: [PATCH v8 0/4] riscv: Use Kconfig to set unaligned access speed

If the hardware unaligned access speed is known at compile time, it is
possible to avoid running the unaligned access speed probe to speed up
boot time.

Signed-off-by: Charlie Jenkins <[email protected]>
---
Changes in v8:
- Minor commit message changes (Conor)
- Clean up hwprobe_misaligned() (Conor)
- Link to v7: https://lore.kernel.org/r/20240306-disable_misaligned_probe_config-v7-0-6c90419e7a96@rivosinc.com

Changes in v7:
- Fix check_unaligned_access_emulated_all_cpus to return false when any
cpu does not have emulated accesses
- Fix wording in Kconfig (Conor)
- Link to v6: https://lore.kernel.org/r/20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com

Changes in v6:
- Consolidate Kconfig into 4 options (probe, emulated, slow,
efficient)
- Change the behavior of "emulated" to allow hwprobe to return "slow" if
unaligned accesses are not emulated by the kernel
- With this consolidation, check_unaligned_access_emulated is able to be
moved back into the original file (traps_misaligned.c)
- Link to v5: https://lore.kernel.org/r/20240227-disable_misaligned_probe_config-v5-0-b6853846e27a@rivosinc.com

Changes in v5:
- Clarify Kconfig options from Conor's feedback
- Use "unaligned" instead of "misaligned" in introduced file/function.
This is a bit hard to standardize because the riscv manual says
"misaligned" but the existing Linux configs say "unaligned".
- Link to v4: https://lore.kernel.org/r/20240216-disable_misaligned_probe_config-v4-0-dc01e581c0ac@rivosinc.com

Changes in v4:
- Add additional Kconfig options for the other unaligned access speeds
- Link to v3: https://lore.kernel.org/r/20240202-disable_misaligned_probe_config-v3-0-c44f91f03bb6@rivosinc.com

Changes in v3:
- Revert change to csum (Eric)
- Change ifndefs for ifdefs (Eric)
- Change config in Makefile (Elliot/Eric)
- Link to v2: https://lore.kernel.org/r/20240201-disable_misaligned_probe_config-v2-0-77c368bed7b2@rivosinc.com

Changes in v2:
- Move around definitions to reduce ifdefs (Clément)
- Make RISCV_MISALIGNED depend on !HAVE_EFFICIENT_UNALIGNED_ACCESS
(Clément)
- Link to v1: https://lore.kernel.org/r/20240131-disable_misaligned_probe_config-v1-0-98d155e9cda8@rivosinc.com

---
Charlie Jenkins (4):
riscv: lib: Introduce has_fast_unaligned_accesses()
riscv: Only check online cpus for emulated accesses
riscv: Decouple emulated unaligned accesses from access speed
riscv: Set unaligned access speed at compile time

arch/riscv/Kconfig | 60 ++++--
arch/riscv/include/asm/cpufeature.h | 31 ++--
arch/riscv/kernel/Makefile | 4 +-
arch/riscv/kernel/cpufeature.c | 255 --------------------------
arch/riscv/kernel/sys_hwprobe.c | 13 ++
arch/riscv/kernel/traps_misaligned.c | 17 +-
arch/riscv/kernel/unaligned_access_speed.c | 282 +++++++++++++++++++++++++++++
arch/riscv/lib/csum.c | 7 +-
8 files changed, 376 insertions(+), 293 deletions(-)
---
base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
change-id: 20240131-disable_misaligned_probe_config-043aea375f93
--
- Charlie



2024-03-07 19:06:19

by Charlie Jenkins

Subject: [PATCH v8 1/4] riscv: lib: Introduce has_fast_unaligned_accesses()

Create has_fast_unaligned_accesses() to avoid needing to explicitly check
the fast_misaligned_access_speed_key static key.

Signed-off-by: Charlie Jenkins <[email protected]>
Reviewed-by: Evan Green <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
---
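Just to spell out the call-site pattern this introduces (the csum.c
hunk below is the real conversion; do_fast() and do_careful() are
made-up placeholders, not functions from this series):

  /* before: test the static key directly */
  if (static_branch_likely(&fast_misaligned_access_speed_key))
          do_fast();      /* placeholder */
  else
          do_careful();   /* placeholder */

  /* after: use the wrapper */
  if (has_fast_unaligned_accesses())
          do_fast();
  else
          do_careful();
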
arch/riscv/include/asm/cpufeature.h | 11 ++++++++---
arch/riscv/kernel/cpufeature.c | 6 +++---
arch/riscv/lib/csum.c | 7 ++-----
3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 5a626ed2c47a..466e1f591919 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright 2022-2023 Rivos, Inc
+ * Copyright 2022-2024 Rivos, Inc
*/

#ifndef _ASM_CPUFEATURE_H
@@ -53,6 +53,13 @@ static inline bool check_unaligned_access_emulated(int cpu)
static inline void unaligned_emulation_finish(void) {}
#endif

+DECLARE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
+
+static __always_inline bool has_fast_unaligned_accesses(void)
+{
+ return static_branch_likely(&fast_unaligned_access_speed_key);
+}
+
unsigned long riscv_get_elf_hwcap(void);

struct riscv_isa_ext_data {
@@ -135,6 +142,4 @@ static __always_inline bool riscv_cpu_has_extension_unlikely(int cpu, const unsi
return __riscv_isa_extension_available(hart_isa[cpu].isa, ext);
}

-DECLARE_STATIC_KEY_FALSE(fast_misaligned_access_speed_key);
-
#endif
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 89920f84d0a3..7878cddccc0d 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -810,14 +810,14 @@ static void check_unaligned_access_nonboot_cpu(void *param)
check_unaligned_access(pages[cpu]);
}

-DEFINE_STATIC_KEY_FALSE(fast_misaligned_access_speed_key);
+DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);

static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
{
if (cpumask_weight(mask) == weight)
- static_branch_enable_cpuslocked(&fast_misaligned_access_speed_key);
+ static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
else
- static_branch_disable_cpuslocked(&fast_misaligned_access_speed_key);
+ static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
}

static void set_unaligned_access_static_branches_except_cpu(int cpu)
diff --git a/arch/riscv/lib/csum.c b/arch/riscv/lib/csum.c
index af3df5274ccb..7178e0acfa22 100644
--- a/arch/riscv/lib/csum.c
+++ b/arch/riscv/lib/csum.c
@@ -3,7 +3,7 @@
* Checksum library
*
* Influenced by arch/arm64/lib/csum.c
- * Copyright (C) 2023 Rivos Inc.
+ * Copyright (C) 2023-2024 Rivos Inc.
*/
#include <linux/bitops.h>
#include <linux/compiler.h>
@@ -318,10 +318,7 @@ unsigned int do_csum(const unsigned char *buff, int len)
* branches. The largest chunk of overlap was delegated into the
* do_csum_common function.
*/
- if (static_branch_likely(&fast_misaligned_access_speed_key))
- return do_csum_no_alignment(buff, len);
-
- if (((unsigned long)buff & OFFSET_MASK) == 0)
+ if (has_fast_unaligned_accesses() || (((unsigned long)buff & OFFSET_MASK) == 0))
return do_csum_no_alignment(buff, len);

return do_csum_with_alignment(buff, len);

--
2.43.2


2024-03-07 19:06:27

by Charlie Jenkins

Subject: [PATCH v8 2/4] riscv: Only check online cpus for emulated accesses

The unaligned access checker only sets valid values for online cpus.
Check for these values on online cpus rather than on present cpus.

Signed-off-by: Charlie Jenkins <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
Fixes: 71c54b3d169d ("riscv: report misaligned accesses emulation to hwprobe")
---
arch/riscv/kernel/traps_misaligned.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 8ded225e8c5b..c2ed4e689bf9 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -632,7 +632,7 @@ void unaligned_emulation_finish(void)
* accesses emulated since tasks requesting such control can run on any
* CPU.
*/
- for_each_present_cpu(cpu) {
+ for_each_online_cpu(cpu) {
if (per_cpu(misaligned_access_speed, cpu) !=
RISCV_HWPROBE_MISALIGNED_EMULATED) {
return;

--
2.43.2


2024-03-07 19:06:46

by Charlie Jenkins

Subject: [PATCH v8 3/4] riscv: Decouple emulated unaligned accesses from access speed

Detecting if a system traps into the kernel on an unaligned access
can be performed separately from checking the speed of unaligned
accesses. This decoupling will make it possible to selectively enable
or disable each of these checks.

Signed-off-by: Charlie Jenkins <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
---
arch/riscv/include/asm/cpufeature.h | 2 +-
arch/riscv/kernel/cpufeature.c | 25 +++++++++++++++++++++----
arch/riscv/kernel/traps_misaligned.c | 15 +++++++--------
3 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 466e1f591919..6fec91845aa0 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -37,7 +37,7 @@ void riscv_user_isa_enable(void);

#ifdef CONFIG_RISCV_MISALIGNED
bool unaligned_ctl_available(void);
-bool check_unaligned_access_emulated(int cpu);
+bool check_unaligned_access_emulated_all_cpus(void);
void unaligned_emulation_finish(void);
#else
static inline bool unaligned_ctl_available(void)
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 7878cddccc0d..abb3a2f53106 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -719,7 +719,8 @@ static int check_unaligned_access(void *param)
void *src;
long speed = RISCV_HWPROBE_MISALIGNED_SLOW;

- if (check_unaligned_access_emulated(cpu))
+ if (IS_ENABLED(CONFIG_RISCV_MISALIGNED) &&
+ per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
return 0;

/* Make an unaligned destination buffer. */
@@ -896,8 +897,8 @@ static int riscv_offline_cpu(unsigned int cpu)
return 0;
}

-/* Measure unaligned access on all CPUs present at boot in parallel. */
-static int check_unaligned_access_all_cpus(void)
+/* Measure unaligned access speed on all CPUs present at boot in parallel. */
+static int check_unaligned_access_speed_all_cpus(void)
{
unsigned int cpu;
unsigned int cpu_count = num_possible_cpus();
@@ -935,7 +936,6 @@ static int check_unaligned_access_all_cpus(void)
riscv_online_cpu, riscv_offline_cpu);

out:
- unaligned_emulation_finish();
for_each_cpu(cpu, cpu_online_mask) {
if (bufs[cpu])
__free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
@@ -945,6 +945,23 @@ static int check_unaligned_access_all_cpus(void)
return 0;
}

+#ifdef CONFIG_RISCV_MISALIGNED
+static int check_unaligned_access_all_cpus(void)
+{
+ bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
+
+ if (!all_cpus_emulated)
+ return check_unaligned_access_speed_all_cpus();
+
+ return 0;
+}
+#else
+static int check_unaligned_access_all_cpus(void)
+{
+ return check_unaligned_access_speed_all_cpus();
+}
+#endif
+
arch_initcall(check_unaligned_access_all_cpus);

void riscv_user_isa_enable(void)
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index c2ed4e689bf9..e55718179f42 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -596,7 +596,7 @@ int handle_misaligned_store(struct pt_regs *regs)
return 0;
}

-bool check_unaligned_access_emulated(int cpu)
+static bool check_unaligned_access_emulated(int cpu)
{
long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu);
unsigned long tmp_var, tmp_val;
@@ -623,7 +623,7 @@ bool check_unaligned_access_emulated(int cpu)
return misaligned_emu_detected;
}

-void unaligned_emulation_finish(void)
+bool check_unaligned_access_emulated_all_cpus(void)
{
int cpu;

@@ -632,13 +632,12 @@ void unaligned_emulation_finish(void)
* accesses emulated since tasks requesting such control can run on any
* CPU.
*/
- for_each_online_cpu(cpu) {
- if (per_cpu(misaligned_access_speed, cpu) !=
- RISCV_HWPROBE_MISALIGNED_EMULATED) {
- return;
- }
- }
+ for_each_online_cpu(cpu)
+ if (!check_unaligned_access_emulated(cpu))
+ return false;
+
unaligned_ctl = true;
+ return true;
}

bool unaligned_ctl_available(void)

--
2.43.2


2024-03-07 19:08:03

by Charlie Jenkins

Subject: [PATCH v8 4/4] riscv: Set unaligned access speed at compile time

Introduce Kconfig options to set the kernel unaligned access support.
These options provide a non-portable alternative to the runtime
unaligned access probe.

To support this, the unaligned access probing code is moved into its
own file and gated behind a new RISCV_PROBE_UNALIGNED_ACCESS
option.

Signed-off-by: Charlie Jenkins <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
---
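Not part of the patch itself, but for anyone who wants to sanity-check
the result from userspace: a rough, untested sketch of querying the
value that hwprobe_misaligned() ends up reporting, whether it was
probed at boot or fixed by the new Kconfig options. It assumes kernel
headers that already ship <asm/hwprobe.h> and __NR_riscv_hwprobe.

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <asm/hwprobe.h>

  int main(void)
  {
      struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

      /* cpusetsize == 0 / cpus == NULL: value valid across all online cpus */
      if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
          return 1;

      switch (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) {
      case RISCV_HWPROBE_MISALIGNED_FAST:
          puts("misaligned accesses: fast");
          break;
      case RISCV_HWPROBE_MISALIGNED_EMULATED:
          puts("misaligned accesses: emulated");
          break;
      case RISCV_HWPROBE_MISALIGNED_SLOW:
          puts("misaligned accesses: slow");
          break;
      default:
          puts("misaligned accesses: unknown/unsupported");
          break;
      }

      return 0;
  }
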
arch/riscv/Kconfig | 60 ++++--
arch/riscv/include/asm/cpufeature.h | 24 +--
arch/riscv/kernel/Makefile | 4 +-
arch/riscv/kernel/cpufeature.c | 272 ----------------------------
arch/riscv/kernel/sys_hwprobe.c | 13 ++
arch/riscv/kernel/traps_misaligned.c | 2 +
arch/riscv/kernel/unaligned_access_speed.c | 282 +++++++++++++++++++++++++++++
7 files changed, 361 insertions(+), 296 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index bffbd869a068..28c1e75ea88a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -688,27 +688,63 @@ config THREAD_SIZE_ORDER
affects irq stack size, which is equal to thread stack size.

config RISCV_MISALIGNED
- bool "Support misaligned load/store traps for kernel and userspace"
+ bool
select SYSCTL_ARCH_UNALIGN_ALLOW
- default y
help
- Say Y here if you want the kernel to embed support for misaligned
- load/store for both kernel and userspace. When disable, misaligned
- accesses will generate SIGBUS in userspace and panic in kernel.
+ Embed support for misaligned load/store for both kernel and userspace.
+ When disabled, misaligned accesses will generate SIGBUS in userspace
+ and panic in the kernel.
+
+choice
+ prompt "Unaligned Accesses Support"
+ default RISCV_PROBE_UNALIGNED_ACCESS
+ help
+ This determines the level of support for unaligned accesses. This
+ information is used by the kernel to perform optimizations. It is also
+ exposed to user space via the hwprobe syscall. The hardware will be
+ probed at boot by default.
+
+config RISCV_PROBE_UNALIGNED_ACCESS
+ bool "Probe for hardware unaligned access support"
+ select RISCV_MISALIGNED
+ help
+ During boot, the kernel will run a series of tests to determine the
+ speed of unaligned accesses. This probing will dynamically determine
+ the speed of unaligned accesses on the underlying system. If unaligned
+ memory accesses trap into the kernel as they are not supported by the
+ system, the kernel will emulate the unaligned accesses to preserve the
+ UABI.
+
+config RISCV_EMULATED_UNALIGNED_ACCESS
+ bool "Emulate unaligned access where system support is missing"
+ select RISCV_MISALIGNED
+ help
+ If unaligned memory accesses trap into the kernel as they are not
+ supported by the system, the kernel will emulate the unaligned
+ accesses to preserve the UABI. When the underlying system does support
+ unaligned accesses, the unaligned accesses are assumed to be slow.
+
+config RISCV_SLOW_UNALIGNED_ACCESS
+ bool "Assume the system supports slow unaligned memory accesses"
+ depends on NONPORTABLE
+ help
+ Assume that the system supports slow unaligned memory accesses. The
+ kernel and userspace programs may not be able to run at all on systems
+ that do not support unaligned memory accesses.

config RISCV_EFFICIENT_UNALIGNED_ACCESS
- bool "Assume the CPU supports fast unaligned memory accesses"
+ bool "Assume the system supports fast unaligned memory accesses"
depends on NONPORTABLE
select DCACHE_WORD_ACCESS if MMU
select HAVE_EFFICIENT_UNALIGNED_ACCESS
help
- Say Y here if you want the kernel to assume that the CPU supports
- efficient unaligned memory accesses. When enabled, this option
- improves the performance of the kernel on such CPUs. However, the
- kernel will run much more slowly, or will not be able to run at all,
- on CPUs that do not support efficient unaligned memory accesses.
+ Assume that the system supports fast unaligned memory accesses. When
+ enabled, this option improves the performance of the kernel on such
+ systems. However, the kernel and userspace programs will run much more
+ slowly, or will not be able to run at all, on systems that do not
+ support efficient unaligned memory accesses.

- If unsure what to do here, say N.
+endchoice

endmenu # "Platform type"

diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 6fec91845aa0..46061f5e9764 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -28,37 +28,39 @@ struct riscv_isainfo {

DECLARE_PER_CPU(struct riscv_cpuinfo, riscv_cpuinfo);

-DECLARE_PER_CPU(long, misaligned_access_speed);
-
/* Per-cpu ISA extensions. */
extern struct riscv_isainfo hart_isa[NR_CPUS];

void riscv_user_isa_enable(void);

-#ifdef CONFIG_RISCV_MISALIGNED
-bool unaligned_ctl_available(void);
+#if defined(CONFIG_RISCV_MISALIGNED)
bool check_unaligned_access_emulated_all_cpus(void);
void unaligned_emulation_finish(void);
+bool unaligned_ctl_available(void);
+DECLARE_PER_CPU(long, misaligned_access_speed);
#else
static inline bool unaligned_ctl_available(void)
{
return false;
}
-
-static inline bool check_unaligned_access_emulated(int cpu)
-{
- return false;
-}
-
-static inline void unaligned_emulation_finish(void) {}
#endif

+#if defined(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)
DECLARE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);

static __always_inline bool has_fast_unaligned_accesses(void)
{
return static_branch_likely(&fast_unaligned_access_speed_key);
}
+#else
+static __always_inline bool has_fast_unaligned_accesses(void)
+{
+ if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+ return true;
+ else
+ return false;
+}
+#endif

unsigned long riscv_get_elf_hwcap(void);

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index f71910718053..c8085126a6f9 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -38,7 +38,6 @@ extra-y += vmlinux.lds
obj-y += head.o
obj-y += soc.o
obj-$(CONFIG_RISCV_ALTERNATIVE) += alternative.o
-obj-y += copy-unaligned.o
obj-y += cpu.o
obj-y += cpufeature.o
obj-y += entry.o
@@ -62,6 +61,9 @@ obj-y += tests/
obj-$(CONFIG_MMU) += vdso.o vdso/

obj-$(CONFIG_RISCV_MISALIGNED) += traps_misaligned.o
+obj-$(CONFIG_RISCV_MISALIGNED) += unaligned_access_speed.o
+obj-$(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS) += copy-unaligned.o
+
obj-$(CONFIG_FPU) += fpu.o
obj-$(CONFIG_RISCV_ISA_V) += vector.o
obj-$(CONFIG_RISCV_ISA_V) += kernel_mode_vector.o
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index abb3a2f53106..319670af5704 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -11,7 +11,6 @@
#include <linux/cpu.h>
#include <linux/cpuhotplug.h>
#include <linux/ctype.h>
-#include <linux/jump_label.h>
#include <linux/log2.h>
#include <linux/memory.h>
#include <linux/module.h>
@@ -21,20 +20,12 @@
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
#include <asm/hwcap.h>
-#include <asm/hwprobe.h>
#include <asm/patch.h>
#include <asm/processor.h>
#include <asm/vector.h>

-#include "copy-unaligned.h"
-
#define NUM_ALPHA_EXTS ('z' - 'a' + 1)

-#define MISALIGNED_ACCESS_JIFFIES_LG2 1
-#define MISALIGNED_BUFFER_SIZE 0x4000
-#define MISALIGNED_BUFFER_ORDER get_order(MISALIGNED_BUFFER_SIZE)
-#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
-
unsigned long elf_hwcap __read_mostly;

/* Host ISA bitmap */
@@ -43,11 +34,6 @@ static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
/* Per-cpu ISA extensions. */
struct riscv_isainfo hart_isa[NR_CPUS];

-/* Performance information */
-DEFINE_PER_CPU(long, misaligned_access_speed);
-
-static cpumask_t fast_misaligned_access;
-
/**
* riscv_isa_extension_base() - Get base extension word
*
@@ -706,264 +692,6 @@ unsigned long riscv_get_elf_hwcap(void)
return hwcap;
}

-static int check_unaligned_access(void *param)
-{
- int cpu = smp_processor_id();
- u64 start_cycles, end_cycles;
- u64 word_cycles;
- u64 byte_cycles;
- int ratio;
- unsigned long start_jiffies, now;
- struct page *page = param;
- void *dst;
- void *src;
- long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
-
- if (IS_ENABLED(CONFIG_RISCV_MISALIGNED) &&
- per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
- return 0;
-
- /* Make an unaligned destination buffer. */
- dst = (void *)((unsigned long)page_address(page) | 0x1);
- /* Unalign src as well, but differently (off by 1 + 2 = 3). */
- src = dst + (MISALIGNED_BUFFER_SIZE / 2);
- src += 2;
- word_cycles = -1ULL;
- /* Do a warmup. */
- __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
- preempt_disable();
- start_jiffies = jiffies;
- while ((now = jiffies) == start_jiffies)
- cpu_relax();
-
- /*
- * For a fixed amount of time, repeatedly try the function, and take
- * the best time in cycles as the measurement.
- */
- while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
- start_cycles = get_cycles64();
- /* Ensure the CSR read can't reorder WRT to the copy. */
- mb();
- __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
- /* Ensure the copy ends before the end time is snapped. */
- mb();
- end_cycles = get_cycles64();
- if ((end_cycles - start_cycles) < word_cycles)
- word_cycles = end_cycles - start_cycles;
- }
-
- byte_cycles = -1ULL;
- __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
- start_jiffies = jiffies;
- while ((now = jiffies) == start_jiffies)
- cpu_relax();
-
- while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
- start_cycles = get_cycles64();
- mb();
- __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
- mb();
- end_cycles = get_cycles64();
- if ((end_cycles - start_cycles) < byte_cycles)
- byte_cycles = end_cycles - start_cycles;
- }
-
- preempt_enable();
-
- /* Don't divide by zero. */
- if (!word_cycles || !byte_cycles) {
- pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
- cpu);
-
- return 0;
- }
-
- if (word_cycles < byte_cycles)
- speed = RISCV_HWPROBE_MISALIGNED_FAST;
-
- ratio = div_u64((byte_cycles * 100), word_cycles);
- pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
- cpu,
- ratio / 100,
- ratio % 100,
- (speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
-
- per_cpu(misaligned_access_speed, cpu) = speed;
-
- /*
- * Set the value of fast_misaligned_access of a CPU. These operations
- * are atomic to avoid race conditions.
- */
- if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
- cpumask_set_cpu(cpu, &fast_misaligned_access);
- else
- cpumask_clear_cpu(cpu, &fast_misaligned_access);
-
- return 0;
-}
-
-static void check_unaligned_access_nonboot_cpu(void *param)
-{
- unsigned int cpu = smp_processor_id();
- struct page **pages = param;
-
- if (smp_processor_id() != 0)
- check_unaligned_access(pages[cpu]);
-}
-
-DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
-
-static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
-{
- if (cpumask_weight(mask) == weight)
- static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
- else
- static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
-}
-
-static void set_unaligned_access_static_branches_except_cpu(int cpu)
-{
- /*
- * Same as set_unaligned_access_static_branches, except excludes the
- * given CPU from the result. When a CPU is hotplugged into an offline
- * state, this function is called before the CPU is set to offline in
- * the cpumask, and thus the CPU needs to be explicitly excluded.
- */
-
- cpumask_t fast_except_me;
-
- cpumask_and(&fast_except_me, &fast_misaligned_access, cpu_online_mask);
- cpumask_clear_cpu(cpu, &fast_except_me);
-
- modify_unaligned_access_branches(&fast_except_me, num_online_cpus() - 1);
-}
-
-static void set_unaligned_access_static_branches(void)
-{
- /*
- * This will be called after check_unaligned_access_all_cpus so the
- * result of unaligned access speed for all CPUs will be available.
- *
- * To avoid the number of online cpus changing between reading
- * cpu_online_mask and calling num_online_cpus, cpus_read_lock must be
- * held before calling this function.
- */
-
- cpumask_t fast_and_online;
-
- cpumask_and(&fast_and_online, &fast_misaligned_access, cpu_online_mask);
-
- modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
-}
-
-static int lock_and_set_unaligned_access_static_branch(void)
-{
- cpus_read_lock();
- set_unaligned_access_static_branches();
- cpus_read_unlock();
-
- return 0;
-}
-
-arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
-
-static int riscv_online_cpu(unsigned int cpu)
-{
- static struct page *buf;
-
- /* We are already set since the last check */
- if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
- goto exit;
-
- buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
- if (!buf) {
- pr_warn("Allocation failure, not measuring misaligned performance\n");
- return -ENOMEM;
- }
-
- check_unaligned_access(buf);
- __free_pages(buf, MISALIGNED_BUFFER_ORDER);
-
-exit:
- set_unaligned_access_static_branches();
-
- return 0;
-}
-
-static int riscv_offline_cpu(unsigned int cpu)
-{
- set_unaligned_access_static_branches_except_cpu(cpu);
-
- return 0;
-}
-
-/* Measure unaligned access speed on all CPUs present at boot in parallel. */
-static int check_unaligned_access_speed_all_cpus(void)
-{
- unsigned int cpu;
- unsigned int cpu_count = num_possible_cpus();
- struct page **bufs = kzalloc(cpu_count * sizeof(struct page *),
- GFP_KERNEL);
-
- if (!bufs) {
- pr_warn("Allocation failure, not measuring misaligned performance\n");
- return 0;
- }
-
- /*
- * Allocate separate buffers for each CPU so there's no fighting over
- * cache lines.
- */
- for_each_cpu(cpu, cpu_online_mask) {
- bufs[cpu] = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
- if (!bufs[cpu]) {
- pr_warn("Allocation failure, not measuring misaligned performance\n");
- goto out;
- }
- }
-
- /* Check everybody except 0, who stays behind to tend jiffies. */
- on_each_cpu(check_unaligned_access_nonboot_cpu, bufs, 1);
-
- /* Check core 0. */
- smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
-
- /*
- * Setup hotplug callbacks for any new CPUs that come online or go
- * offline.
- */
- cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
- riscv_online_cpu, riscv_offline_cpu);
-
-out:
- for_each_cpu(cpu, cpu_online_mask) {
- if (bufs[cpu])
- __free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
- }
-
- kfree(bufs);
- return 0;
-}
-
-#ifdef CONFIG_RISCV_MISALIGNED
-static int check_unaligned_access_all_cpus(void)
-{
- bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
-
- if (!all_cpus_emulated)
- return check_unaligned_access_speed_all_cpus();
-
- return 0;
-}
-#else
-static int check_unaligned_access_all_cpus(void)
-{
- return check_unaligned_access_speed_all_cpus();
-}
-#endif
-
-arch_initcall(check_unaligned_access_all_cpus);
-
void riscv_user_isa_enable(void)
{
if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_ZICBOZ))
diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index a7c56b41efd2..8cae41a502dd 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -147,6 +147,7 @@ static bool hwprobe_ext0_has(const struct cpumask *cpus, unsigned long ext)
return (pair.value & ext);
}

+#if defined(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)
static u64 hwprobe_misaligned(const struct cpumask *cpus)
{
int cpu;
@@ -169,6 +170,18 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)

return perf;
}
+#else
+static u64 hwprobe_misaligned(const struct cpumask *cpus)
+{
+ if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS))
+ return RISCV_HWPROBE_MISALIGNED_FAST;
+
+ if (IS_ENABLED(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS) && unaligned_ctl_available())
+ return RISCV_HWPROBE_MISALIGNED_EMULATED;
+
+ return RISCV_HWPROBE_MISALIGNED_SLOW;
+}
+#endif

static void hwprobe_one_pair(struct riscv_hwprobe *pair,
const struct cpumask *cpus)
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index e55718179f42..2adb7c3e4dd5 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -413,7 +413,9 @@ int handle_misaligned_load(struct pt_regs *regs)

perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);

+#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED;
+#endif

if (!unaligned_enabled)
return -1;
diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
new file mode 100644
index 000000000000..52264ea4f0bd
--- /dev/null
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -0,0 +1,282 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2024 Rivos Inc.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/jump_label.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/types.h>
+#include <asm/cpufeature.h>
+#include <asm/hwprobe.h>
+
+#include "copy-unaligned.h"
+
+#define MISALIGNED_ACCESS_JIFFIES_LG2 1
+#define MISALIGNED_BUFFER_SIZE 0x4000
+#define MISALIGNED_BUFFER_ORDER get_order(MISALIGNED_BUFFER_SIZE)
+#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
+
+DEFINE_PER_CPU(long, misaligned_access_speed);
+
+#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
+static cpumask_t fast_misaligned_access;
+static int check_unaligned_access(void *param)
+{
+ int cpu = smp_processor_id();
+ u64 start_cycles, end_cycles;
+ u64 word_cycles;
+ u64 byte_cycles;
+ int ratio;
+ unsigned long start_jiffies, now;
+ struct page *page = param;
+ void *dst;
+ void *src;
+ long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
+
+ if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+ return 0;
+
+ /* Make an unaligned destination buffer. */
+ dst = (void *)((unsigned long)page_address(page) | 0x1);
+ /* Unalign src as well, but differently (off by 1 + 2 = 3). */
+ src = dst + (MISALIGNED_BUFFER_SIZE / 2);
+ src += 2;
+ word_cycles = -1ULL;
+ /* Do a warmup. */
+ __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+ preempt_disable();
+ start_jiffies = jiffies;
+ while ((now = jiffies) == start_jiffies)
+ cpu_relax();
+
+ /*
+ * For a fixed amount of time, repeatedly try the function, and take
+ * the best time in cycles as the measurement.
+ */
+ while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+ start_cycles = get_cycles64();
+ /* Ensure the CSR read can't reorder WRT to the copy. */
+ mb();
+ __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+ /* Ensure the copy ends before the end time is snapped. */
+ mb();
+ end_cycles = get_cycles64();
+ if ((end_cycles - start_cycles) < word_cycles)
+ word_cycles = end_cycles - start_cycles;
+ }
+
+ byte_cycles = -1ULL;
+ __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+ start_jiffies = jiffies;
+ while ((now = jiffies) == start_jiffies)
+ cpu_relax();
+
+ while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+ start_cycles = get_cycles64();
+ mb();
+ __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+ mb();
+ end_cycles = get_cycles64();
+ if ((end_cycles - start_cycles) < byte_cycles)
+ byte_cycles = end_cycles - start_cycles;
+ }
+
+ preempt_enable();
+
+ /* Don't divide by zero. */
+ if (!word_cycles || !byte_cycles) {
+ pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
+ cpu);
+
+ return 0;
+ }
+
+ if (word_cycles < byte_cycles)
+ speed = RISCV_HWPROBE_MISALIGNED_FAST;
+
+ ratio = div_u64((byte_cycles * 100), word_cycles);
+ pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
+ cpu,
+ ratio / 100,
+ ratio % 100,
+ (speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
+
+ per_cpu(misaligned_access_speed, cpu) = speed;
+
+ /*
+ * Set the value of fast_misaligned_access of a CPU. These operations
+ * are atomic to avoid race conditions.
+ */
+ if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
+ cpumask_set_cpu(cpu, &fast_misaligned_access);
+ else
+ cpumask_clear_cpu(cpu, &fast_misaligned_access);
+
+ return 0;
+}
+
+static void check_unaligned_access_nonboot_cpu(void *param)
+{
+ unsigned int cpu = smp_processor_id();
+ struct page **pages = param;
+
+ if (smp_processor_id() != 0)
+ check_unaligned_access(pages[cpu]);
+}
+
+DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
+
+static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
+{
+ if (cpumask_weight(mask) == weight)
+ static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
+ else
+ static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
+}
+
+static void set_unaligned_access_static_branches_except_cpu(int cpu)
+{
+ /*
+ * Same as set_unaligned_access_static_branches, except excludes the
+ * given CPU from the result. When a CPU is hotplugged into an offline
+ * state, this function is called before the CPU is set to offline in
+ * the cpumask, and thus the CPU needs to be explicitly excluded.
+ */
+
+ cpumask_t fast_except_me;
+
+ cpumask_and(&fast_except_me, &fast_misaligned_access, cpu_online_mask);
+ cpumask_clear_cpu(cpu, &fast_except_me);
+
+ modify_unaligned_access_branches(&fast_except_me, num_online_cpus() - 1);
+}
+
+static void set_unaligned_access_static_branches(void)
+{
+ /*
+ * This will be called after check_unaligned_access_all_cpus so the
+ * result of unaligned access speed for all CPUs will be available.
+ *
+ * To avoid the number of online cpus changing between reading
+ * cpu_online_mask and calling num_online_cpus, cpus_read_lock must be
+ * held before calling this function.
+ */
+
+ cpumask_t fast_and_online;
+
+ cpumask_and(&fast_and_online, &fast_misaligned_access, cpu_online_mask);
+
+ modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
+}
+
+static int lock_and_set_unaligned_access_static_branch(void)
+{
+ cpus_read_lock();
+ set_unaligned_access_static_branches();
+ cpus_read_unlock();
+
+ return 0;
+}
+
+arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
+
+static int riscv_online_cpu(unsigned int cpu)
+{
+ static struct page *buf;
+
+ /* We are already set since the last check */
+ if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+ goto exit;
+
+ buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+ if (!buf) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+ return -ENOMEM;
+ }
+
+ check_unaligned_access(buf);
+ __free_pages(buf, MISALIGNED_BUFFER_ORDER);
+
+exit:
+ set_unaligned_access_static_branches();
+
+ return 0;
+}
+
+static int riscv_offline_cpu(unsigned int cpu)
+{
+ set_unaligned_access_static_branches_except_cpu(cpu);
+
+ return 0;
+}
+
+/* Measure unaligned access speed on all CPUs present at boot in parallel. */
+static int check_unaligned_access_speed_all_cpus(void)
+{
+ unsigned int cpu;
+ unsigned int cpu_count = num_possible_cpus();
+ struct page **bufs = kzalloc(cpu_count * sizeof(struct page *),
+ GFP_KERNEL);
+
+ if (!bufs) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+ return 0;
+ }
+
+ /*
+ * Allocate separate buffers for each CPU so there's no fighting over
+ * cache lines.
+ */
+ for_each_cpu(cpu, cpu_online_mask) {
+ bufs[cpu] = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+ if (!bufs[cpu]) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+ goto out;
+ }
+ }
+
+ /* Check everybody except 0, who stays behind to tend jiffies. */
+ on_each_cpu(check_unaligned_access_nonboot_cpu, bufs, 1);
+
+ /* Check core 0. */
+ smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
+
+ /*
+ * Setup hotplug callbacks for any new CPUs that come online or go
+ * offline.
+ */
+ cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
+ riscv_online_cpu, riscv_offline_cpu);
+
+out:
+ for_each_cpu(cpu, cpu_online_mask) {
+ if (bufs[cpu])
+ __free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
+ }
+
+ kfree(bufs);
+ return 0;
+}
+
+static int check_unaligned_access_all_cpus(void)
+{
+ bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
+
+ if (!all_cpus_emulated)
+ return check_unaligned_access_speed_all_cpus();
+
+ return 0;
+}
+#else /* CONFIG_RISCV_PROBE_UNALIGNED_ACCESS */
+static int check_unaligned_access_all_cpus(void)
+{
+ check_unaligned_access_emulated_all_cpus();
+
+ return 0;
+}
+#endif
+
+arch_initcall(check_unaligned_access_all_cpus);

--
2.43.2


2024-03-08 09:52:50

by Emil Renner Berthing

Subject: Re: [PATCH v8 4/4] riscv: Set unaligned access speed at compile time

Charlie Jenkins wrote:
> Introduce Kconfig options to set the kernel unaligned access support.
> These options provide a non-portable alternative to the runtime
> unaligned access probe.
>
> To support this, the unaligned access probing code is moved into its
> own file and gated behind a new RISCV_PROBE_UNALIGNED_ACCESS
> option.
>
> Signed-off-by: Charlie Jenkins <[email protected]>
> Reviewed-by: Conor Dooley <[email protected]>
> ---
> arch/riscv/Kconfig | 60 ++++--
> arch/riscv/include/asm/cpufeature.h | 24 +--
> arch/riscv/kernel/Makefile | 4 +-
> arch/riscv/kernel/cpufeature.c | 272 ----------------------------
> arch/riscv/kernel/sys_hwprobe.c | 13 ++
> arch/riscv/kernel/traps_misaligned.c | 2 +
> arch/riscv/kernel/unaligned_access_speed.c | 282 +++++++++++++++++++++++++++++
> 7 files changed, 361 insertions(+), 296 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index bffbd869a068..28c1e75ea88a 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -688,27 +688,63 @@ config THREAD_SIZE_ORDER
> affects irq stack size, which is equal to thread stack size.
>
> config RISCV_MISALIGNED
> - bool "Support misaligned load/store traps for kernel and userspace"
> + bool
> select SYSCTL_ARCH_UNALIGN_ALLOW
> - default y
> help
> - Say Y here if you want the kernel to embed support for misaligned
> - load/store for both kernel and userspace. When disable, misaligned
> - accesses will generate SIGBUS in userspace and panic in kernel.
> + Embed support for misaligned load/store for both kernel and userspace.
> + When disabled, misaligned accesses will generate SIGBUS in userspace
> + and panic in the kernel.

Hmm.. this *may* generate SIGBUS in userspace and panic the kernel. The CPU
could support unaligned access natively or there might be a handler in M-mode,
right?

> +
> +choice
> + prompt "Unaligned Accesses Support"
> + default RISCV_PROBE_UNALIGNED_ACCESS
> + help
> + This determines the level of support for unaligned accesses. This
> + information is used by the kernel to perform optimizations. It is also
> + exposed to user space via the hwprobe syscall. The hardware will be
> + probed at boot by default.
> +
> +config RISCV_PROBE_UNALIGNED_ACCESS
> + bool "Probe for hardware unaligned access support"
> + select RISCV_MISALIGNED
> + help
> + During boot, the kernel will run a series of tests to determine the
> + speed of unaligned accesses. This probing will dynamically determine
> + the speed of unaligned accesses on the underlying system. If unaligned
> + memory accesses trap into the kernel as they are not supported by the
> + system, the kernel will emulate the unaligned accesses to preserve the
> + UABI.
> +
> +config RISCV_EMULATED_UNALIGNED_ACCESS
> + bool "Emulate unaligned access where system support is missing"
> + select RISCV_MISALIGNED
> + help
> + If unaligned memory accesses trap into the kernel as they are not
> + supported by the system, the kernel will emulate the unaligned
> + accesses to preserve the UABI. When the underlying system does support
> + unaligned accesses, the unaligned accesses are assumed to be slow.

It's still not quite clear to me when you'd want to choose this over probing
above. Assuming the probe measures correctly this can only result in a kernel
that behaves the same or slower than with the option above, right?

> +
> +config RISCV_SLOW_UNALIGNED_ACCESS
> + bool "Assume the system supports slow unaligned memory accesses"
> + depends on NONPORTABLE
> + help
> + Assume that the system supports slow unaligned memory accesses. The
> + kernel and userspace programs may not be able to run at all on systems
> + that do not support unaligned memory accesses.

Again you're just explicitly saying no to the optimizations the kernel might do
if it detects fast unaligned access, only here you'll also crash if they're not
handled by the CPU or M-mode. Why would you want that?

I'm probably missing something, but the only reason I can think of is if you
want build a really small kernel and save the few bytes for the handler and
probing code.

/Emil

> config RISCV_EFFICIENT_UNALIGNED_ACCESS
> - bool "Assume the CPU supports fast unaligned memory accesses"
> + bool "Assume the system supports fast unaligned memory accesses"
> depends on NONPORTABLE
> select DCACHE_WORD_ACCESS if MMU
> select HAVE_EFFICIENT_UNALIGNED_ACCESS
> help
> - Say Y here if you want the kernel to assume that the CPU supports
> - efficient unaligned memory accesses. When enabled, this option
> - improves the performance of the kernel on such CPUs. However, the
> - kernel will run much more slowly, or will not be able to run at all,
> - on CPUs that do not support efficient unaligned memory accesses.
> + Assume that the system supports fast unaligned memory accesses. When
> + enabled, this option improves the performance of the kernel on such
> + systems. However, the kernel and userspace programs will run much more
> + slowly, or will not be able to run at all, on systems that do not
> + support efficient unaligned memory accesses.
>
> - If unsure what to do here, say N.
> +endchoice
>
> endmenu # "Platform type"
>
> diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
> index 6fec91845aa0..46061f5e9764 100644
> --- a/arch/riscv/include/asm/cpufeature.h
> +++ b/arch/riscv/include/asm/cpufeature.h
> @@ -28,37 +28,39 @@ struct riscv_isainfo {
>
> DECLARE_PER_CPU(struct riscv_cpuinfo, riscv_cpuinfo);
>
> -DECLARE_PER_CPU(long, misaligned_access_speed);
> -
> /* Per-cpu ISA extensions. */
> extern struct riscv_isainfo hart_isa[NR_CPUS];
>
> void riscv_user_isa_enable(void);
>
> -#ifdef CONFIG_RISCV_MISALIGNED
> -bool unaligned_ctl_available(void);
> +#if defined(CONFIG_RISCV_MISALIGNED)
> bool check_unaligned_access_emulated_all_cpus(void);
> void unaligned_emulation_finish(void);
> +bool unaligned_ctl_available(void);
> +DECLARE_PER_CPU(long, misaligned_access_speed);
> #else
> static inline bool unaligned_ctl_available(void)
> {
> return false;
> }
> -
> -static inline bool check_unaligned_access_emulated(int cpu)
> -{
> - return false;
> -}
> -
> -static inline void unaligned_emulation_finish(void) {}
> #endif
>
> +#if defined(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)
> DECLARE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
>
> static __always_inline bool has_fast_unaligned_accesses(void)
> {
> return static_branch_likely(&fast_unaligned_access_speed_key);
> }
> +#else
> +static __always_inline bool has_fast_unaligned_accesses(void)
> +{
> + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
> + return true;
> + else
> + return false;
> +}
> +#endif
>
> unsigned long riscv_get_elf_hwcap(void);
>
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> index f71910718053..c8085126a6f9 100644
> --- a/arch/riscv/kernel/Makefile
> +++ b/arch/riscv/kernel/Makefile
> @@ -38,7 +38,6 @@ extra-y += vmlinux.lds
> obj-y += head.o
> obj-y += soc.o
> obj-$(CONFIG_RISCV_ALTERNATIVE) += alternative.o
> -obj-y += copy-unaligned.o
> obj-y += cpu.o
> obj-y += cpufeature.o
> obj-y += entry.o
> @@ -62,6 +61,9 @@ obj-y += tests/
> obj-$(CONFIG_MMU) += vdso.o vdso/
>
> obj-$(CONFIG_RISCV_MISALIGNED) += traps_misaligned.o
> +obj-$(CONFIG_RISCV_MISALIGNED) += unaligned_access_speed.o
> +obj-$(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS) += copy-unaligned.o
> +
> obj-$(CONFIG_FPU) += fpu.o
> obj-$(CONFIG_RISCV_ISA_V) += vector.o
> obj-$(CONFIG_RISCV_ISA_V) += kernel_mode_vector.o
> diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> index abb3a2f53106..319670af5704 100644
> --- a/arch/riscv/kernel/cpufeature.c
> +++ b/arch/riscv/kernel/cpufeature.c
> @@ -11,7 +11,6 @@
> #include <linux/cpu.h>
> #include <linux/cpuhotplug.h>
> #include <linux/ctype.h>
> -#include <linux/jump_label.h>
> #include <linux/log2.h>
> #include <linux/memory.h>
> #include <linux/module.h>
> @@ -21,20 +20,12 @@
> #include <asm/cacheflush.h>
> #include <asm/cpufeature.h>
> #include <asm/hwcap.h>
> -#include <asm/hwprobe.h>
> #include <asm/patch.h>
> #include <asm/processor.h>
> #include <asm/vector.h>
>
> -#include "copy-unaligned.h"
> -
> #define NUM_ALPHA_EXTS ('z' - 'a' + 1)
>
> -#define MISALIGNED_ACCESS_JIFFIES_LG2 1
> -#define MISALIGNED_BUFFER_SIZE 0x4000
> -#define MISALIGNED_BUFFER_ORDER get_order(MISALIGNED_BUFFER_SIZE)
> -#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
> -
> unsigned long elf_hwcap __read_mostly;
>
> /* Host ISA bitmap */
> @@ -43,11 +34,6 @@ static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
> /* Per-cpu ISA extensions. */
> struct riscv_isainfo hart_isa[NR_CPUS];
>
> -/* Performance information */
> -DEFINE_PER_CPU(long, misaligned_access_speed);
> -
> -static cpumask_t fast_misaligned_access;
> -
> /**
> * riscv_isa_extension_base() - Get base extension word
> *
> @@ -706,264 +692,6 @@ unsigned long riscv_get_elf_hwcap(void)
> return hwcap;
> }
>
> -static int check_unaligned_access(void *param)
> -{
> - int cpu = smp_processor_id();
> - u64 start_cycles, end_cycles;
> - u64 word_cycles;
> - u64 byte_cycles;
> - int ratio;
> - unsigned long start_jiffies, now;
> - struct page *page = param;
> - void *dst;
> - void *src;
> - long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
> -
> - if (IS_ENABLED(CONFIG_RISCV_MISALIGNED) &&
> - per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
> - return 0;
> -
> - /* Make an unaligned destination buffer. */
> - dst = (void *)((unsigned long)page_address(page) | 0x1);
> - /* Unalign src as well, but differently (off by 1 + 2 = 3). */
> - src = dst + (MISALIGNED_BUFFER_SIZE / 2);
> - src += 2;
> - word_cycles = -1ULL;
> - /* Do a warmup. */
> - __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> - preempt_disable();
> - start_jiffies = jiffies;
> - while ((now = jiffies) == start_jiffies)
> - cpu_relax();
> -
> - /*
> - * For a fixed amount of time, repeatedly try the function, and take
> - * the best time in cycles as the measurement.
> - */
> - while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
> - start_cycles = get_cycles64();
> - /* Ensure the CSR read can't reorder WRT to the copy. */
> - mb();
> - __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> - /* Ensure the copy ends before the end time is snapped. */
> - mb();
> - end_cycles = get_cycles64();
> - if ((end_cycles - start_cycles) < word_cycles)
> - word_cycles = end_cycles - start_cycles;
> - }
> -
> - byte_cycles = -1ULL;
> - __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> - start_jiffies = jiffies;
> - while ((now = jiffies) == start_jiffies)
> - cpu_relax();
> -
> - while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
> - start_cycles = get_cycles64();
> - mb();
> - __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> - mb();
> - end_cycles = get_cycles64();
> - if ((end_cycles - start_cycles) < byte_cycles)
> - byte_cycles = end_cycles - start_cycles;
> - }
> -
> - preempt_enable();
> -
> - /* Don't divide by zero. */
> - if (!word_cycles || !byte_cycles) {
> - pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
> - cpu);
> -
> - return 0;
> - }
> -
> - if (word_cycles < byte_cycles)
> - speed = RISCV_HWPROBE_MISALIGNED_FAST;
> -
> - ratio = div_u64((byte_cycles * 100), word_cycles);
> - pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
> - cpu,
> - ratio / 100,
> - ratio % 100,
> - (speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
> -
> - per_cpu(misaligned_access_speed, cpu) = speed;
> -
> - /*
> - * Set the value of fast_misaligned_access of a CPU. These operations
> - * are atomic to avoid race conditions.
> - */
> - if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
> - cpumask_set_cpu(cpu, &fast_misaligned_access);
> - else
> - cpumask_clear_cpu(cpu, &fast_misaligned_access);
> -
> - return 0;
> -}
> -
> -static void check_unaligned_access_nonboot_cpu(void *param)
> -{
> - unsigned int cpu = smp_processor_id();
> - struct page **pages = param;
> -
> - if (smp_processor_id() != 0)
> - check_unaligned_access(pages[cpu]);
> -}
> -
> -DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
> -
> -static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
> -{
> - if (cpumask_weight(mask) == weight)
> - static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
> - else
> - static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
> -}
> -
> -static void set_unaligned_access_static_branches_except_cpu(int cpu)
> -{
> - /*
> - * Same as set_unaligned_access_static_branches, except excludes the
> - * given CPU from the result. When a CPU is hotplugged into an offline
> - * state, this function is called before the CPU is set to offline in
> - * the cpumask, and thus the CPU needs to be explicitly excluded.
> - */
> -
> - cpumask_t fast_except_me;
> -
> - cpumask_and(&fast_except_me, &fast_misaligned_access, cpu_online_mask);
> - cpumask_clear_cpu(cpu, &fast_except_me);
> -
> - modify_unaligned_access_branches(&fast_except_me, num_online_cpus() - 1);
> -}
> -
> -static void set_unaligned_access_static_branches(void)
> -{
> - /*
> - * This will be called after check_unaligned_access_all_cpus so the
> - * result of unaligned access speed for all CPUs will be available.
> - *
> - * To avoid the number of online cpus changing between reading
> - * cpu_online_mask and calling num_online_cpus, cpus_read_lock must be
> - * held before calling this function.
> - */
> -
> - cpumask_t fast_and_online;
> -
> - cpumask_and(&fast_and_online, &fast_misaligned_access, cpu_online_mask);
> -
> - modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
> -}
> -
> -static int lock_and_set_unaligned_access_static_branch(void)
> -{
> - cpus_read_lock();
> - set_unaligned_access_static_branches();
> - cpus_read_unlock();
> -
> - return 0;
> -}
> -
> -arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
> -
> -static int riscv_online_cpu(unsigned int cpu)
> -{
> - static struct page *buf;
> -
> - /* We are already set since the last check */
> - if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
> - goto exit;
> -
> - buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
> - if (!buf) {
> - pr_warn("Allocation failure, not measuring misaligned performance\n");
> - return -ENOMEM;
> - }
> -
> - check_unaligned_access(buf);
> - __free_pages(buf, MISALIGNED_BUFFER_ORDER);
> -
> -exit:
> - set_unaligned_access_static_branches();
> -
> - return 0;
> -}
> -
> -static int riscv_offline_cpu(unsigned int cpu)
> -{
> - set_unaligned_access_static_branches_except_cpu(cpu);
> -
> - return 0;
> -}
> -
> -/* Measure unaligned access speed on all CPUs present at boot in parallel. */
> -static int check_unaligned_access_speed_all_cpus(void)
> -{
> - unsigned int cpu;
> - unsigned int cpu_count = num_possible_cpus();
> - struct page **bufs = kzalloc(cpu_count * sizeof(struct page *),
> - GFP_KERNEL);
> -
> - if (!bufs) {
> - pr_warn("Allocation failure, not measuring misaligned performance\n");
> - return 0;
> - }
> -
> - /*
> - * Allocate separate buffers for each CPU so there's no fighting over
> - * cache lines.
> - */
> - for_each_cpu(cpu, cpu_online_mask) {
> - bufs[cpu] = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
> - if (!bufs[cpu]) {
> - pr_warn("Allocation failure, not measuring misaligned performance\n");
> - goto out;
> - }
> - }
> -
> - /* Check everybody except 0, who stays behind to tend jiffies. */
> - on_each_cpu(check_unaligned_access_nonboot_cpu, bufs, 1);
> -
> - /* Check core 0. */
> - smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
> -
> - /*
> - * Setup hotplug callbacks for any new CPUs that come online or go
> - * offline.
> - */
> - cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
> - riscv_online_cpu, riscv_offline_cpu);
> -
> -out:
> - for_each_cpu(cpu, cpu_online_mask) {
> - if (bufs[cpu])
> - __free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
> - }
> -
> - kfree(bufs);
> - return 0;
> -}
> -
> -#ifdef CONFIG_RISCV_MISALIGNED
> -static int check_unaligned_access_all_cpus(void)
> -{
> - bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
> -
> - if (!all_cpus_emulated)
> - return check_unaligned_access_speed_all_cpus();
> -
> - return 0;
> -}
> -#else
> -static int check_unaligned_access_all_cpus(void)
> -{
> - return check_unaligned_access_speed_all_cpus();
> -}
> -#endif
> -
> -arch_initcall(check_unaligned_access_all_cpus);
> -
> void riscv_user_isa_enable(void)
> {
> if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_ZICBOZ))
> diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
> index a7c56b41efd2..8cae41a502dd 100644
> --- a/arch/riscv/kernel/sys_hwprobe.c
> +++ b/arch/riscv/kernel/sys_hwprobe.c
> @@ -147,6 +147,7 @@ static bool hwprobe_ext0_has(const struct cpumask *cpus, unsigned long ext)
> return (pair.value & ext);
> }
>
> +#if defined(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)
> static u64 hwprobe_misaligned(const struct cpumask *cpus)
> {
> int cpu;
> @@ -169,6 +170,18 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)
>
> return perf;
> }
> +#else
> +static u64 hwprobe_misaligned(const struct cpumask *cpus)
> +{
> + if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS))
> + return RISCV_HWPROBE_MISALIGNED_FAST;
> +
> + if (IS_ENABLED(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS) && unaligned_ctl_available())
> + return RISCV_HWPROBE_MISALIGNED_EMULATED;
> +
> + return RISCV_HWPROBE_MISALIGNED_SLOW;
> +}
> +#endif
>
> static void hwprobe_one_pair(struct riscv_hwprobe *pair,
> const struct cpumask *cpus)
> diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
> index e55718179f42..2adb7c3e4dd5 100644
> --- a/arch/riscv/kernel/traps_misaligned.c
> +++ b/arch/riscv/kernel/traps_misaligned.c
> @@ -413,7 +413,9 @@ int handle_misaligned_load(struct pt_regs *regs)
>
> perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
>
> +#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
> *this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED;
> +#endif
>
> if (!unaligned_enabled)
> return -1;
> diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
> new file mode 100644
> index 000000000000..52264ea4f0bd
> --- /dev/null
> +++ b/arch/riscv/kernel/unaligned_access_speed.c
> @@ -0,0 +1,282 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright 2024 Rivos Inc.
> + */
> +
> +#include <linux/cpu.h>
> +#include <linux/cpumask.h>
> +#include <linux/jump_label.h>
> +#include <linux/mm.h>
> +#include <linux/smp.h>
> +#include <linux/types.h>
> +#include <asm/cpufeature.h>
> +#include <asm/hwprobe.h>
> +
> +#include "copy-unaligned.h"
> +
> +#define MISALIGNED_ACCESS_JIFFIES_LG2 1
> +#define MISALIGNED_BUFFER_SIZE 0x4000
> +#define MISALIGNED_BUFFER_ORDER get_order(MISALIGNED_BUFFER_SIZE)
> +#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
> +
> +DEFINE_PER_CPU(long, misaligned_access_speed);
> +
> +#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
> +static cpumask_t fast_misaligned_access;
> +static int check_unaligned_access(void *param)
> +{
> + int cpu = smp_processor_id();
> + u64 start_cycles, end_cycles;
> + u64 word_cycles;
> + u64 byte_cycles;
> + int ratio;
> + unsigned long start_jiffies, now;
> + struct page *page = param;
> + void *dst;
> + void *src;
> + long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
> +
> + if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
> + return 0;
> +
> + /* Make an unaligned destination buffer. */
> + dst = (void *)((unsigned long)page_address(page) | 0x1);
> + /* Unalign src as well, but differently (off by 1 + 2 = 3). */
> + src = dst + (MISALIGNED_BUFFER_SIZE / 2);
> + src += 2;
> + word_cycles = -1ULL;
> + /* Do a warmup. */
> + __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> + preempt_disable();
> + start_jiffies = jiffies;
> + while ((now = jiffies) == start_jiffies)
> + cpu_relax();
> +
> + /*
> + * For a fixed amount of time, repeatedly try the function, and take
> + * the best time in cycles as the measurement.
> + */
> + while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
> + start_cycles = get_cycles64();
> + /* Ensure the CSR read can't reorder WRT to the copy. */
> + mb();
> + __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> + /* Ensure the copy ends before the end time is snapped. */
> + mb();
> + end_cycles = get_cycles64();
> + if ((end_cycles - start_cycles) < word_cycles)
> + word_cycles = end_cycles - start_cycles;
> + }
> +
> + byte_cycles = -1ULL;
> + __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> + start_jiffies = jiffies;
> + while ((now = jiffies) == start_jiffies)
> + cpu_relax();
> +
> + while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
> + start_cycles = get_cycles64();
> + mb();
> + __riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
> + mb();
> + end_cycles = get_cycles64();
> + if ((end_cycles - start_cycles) < byte_cycles)
> + byte_cycles = end_cycles - start_cycles;
> + }
> +
> + preempt_enable();
> +
> + /* Don't divide by zero. */
> + if (!word_cycles || !byte_cycles) {
> + pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
> + cpu);
> +
> + return 0;
> + }
> +
> + if (word_cycles < byte_cycles)
> + speed = RISCV_HWPROBE_MISALIGNED_FAST;
> +
> + ratio = div_u64((byte_cycles * 100), word_cycles);
> + pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
> + cpu,
> + ratio / 100,
> + ratio % 100,
> + (speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
> +
> + per_cpu(misaligned_access_speed, cpu) = speed;
> +
> + /*
> + * Set the value of fast_misaligned_access of a CPU. These operations
> + * are atomic to avoid race conditions.
> + */
> + if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
> + cpumask_set_cpu(cpu, &fast_misaligned_access);
> + else
> + cpumask_clear_cpu(cpu, &fast_misaligned_access);
> +
> + return 0;
> +}
> +
> +static void check_unaligned_access_nonboot_cpu(void *param)
> +{
> + unsigned int cpu = smp_processor_id();
> + struct page **pages = param;
> +
> + if (smp_processor_id() != 0)
> + check_unaligned_access(pages[cpu]);
> +}
> +
> +DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
> +
> +static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
> +{
> + if (cpumask_weight(mask) == weight)
> + static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
> + else
> + static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
> +}
> +
> +static void set_unaligned_access_static_branches_except_cpu(int cpu)
> +{
> + /*
> + * Same as set_unaligned_access_static_branches, except excludes the
> + * given CPU from the result. When a CPU is hotplugged into an offline
> + * state, this function is called before the CPU is set to offline in
> + * the cpumask, and thus the CPU needs to be explicitly excluded.
> + */
> +
> + cpumask_t fast_except_me;
> +
> + cpumask_and(&fast_except_me, &fast_misaligned_access, cpu_online_mask);
> + cpumask_clear_cpu(cpu, &fast_except_me);
> +
> + modify_unaligned_access_branches(&fast_except_me, num_online_cpus() - 1);
> +}
> +
> +static void set_unaligned_access_static_branches(void)
> +{
> + /*
> + * This will be called after check_unaligned_access_all_cpus so the
> + * result of unaligned access speed for all CPUs will be available.
> + *
> + * To avoid the number of online cpus changing between reading
> + * cpu_online_mask and calling num_online_cpus, cpus_read_lock must be
> + * held before calling this function.
> + */
> +
> + cpumask_t fast_and_online;
> +
> + cpumask_and(&fast_and_online, &fast_misaligned_access, cpu_online_mask);
> +
> + modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
> +}
> +
> +static int lock_and_set_unaligned_access_static_branch(void)
> +{
> + cpus_read_lock();
> + set_unaligned_access_static_branches();
> + cpus_read_unlock();
> +
> + return 0;
> +}
> +
> +arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
> +
> +static int riscv_online_cpu(unsigned int cpu)
> +{
> + static struct page *buf;
> +
> + /* We are already set since the last check */
> + if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
> + goto exit;
> +
> + buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
> + if (!buf) {
> + pr_warn("Allocation failure, not measuring misaligned performance\n");
> + return -ENOMEM;
> + }
> +
> + check_unaligned_access(buf);
> + __free_pages(buf, MISALIGNED_BUFFER_ORDER);
> +
> +exit:
> + set_unaligned_access_static_branches();
> +
> + return 0;
> +}
> +
> +static int riscv_offline_cpu(unsigned int cpu)
> +{
> + set_unaligned_access_static_branches_except_cpu(cpu);
> +
> + return 0;
> +}
> +
> +/* Measure unaligned access speed on all CPUs present at boot in parallel. */
> +static int check_unaligned_access_speed_all_cpus(void)
> +{
> + unsigned int cpu;
> + unsigned int cpu_count = num_possible_cpus();
> + struct page **bufs = kzalloc(cpu_count * sizeof(struct page *),
> + GFP_KERNEL);
> +
> + if (!bufs) {
> + pr_warn("Allocation failure, not measuring misaligned performance\n");
> + return 0;
> + }
> +
> + /*
> + * Allocate separate buffers for each CPU so there's no fighting over
> + * cache lines.
> + */
> + for_each_cpu(cpu, cpu_online_mask) {
> + bufs[cpu] = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
> + if (!bufs[cpu]) {
> + pr_warn("Allocation failure, not measuring misaligned performance\n");
> + goto out;
> + }
> + }
> +
> + /* Check everybody except 0, who stays behind to tend jiffies. */
> + on_each_cpu(check_unaligned_access_nonboot_cpu, bufs, 1);
> +
> + /* Check core 0. */
> + smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
> +
> + /*
> + * Setup hotplug callbacks for any new CPUs that come online or go
> + * offline.
> + */
> + cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
> + riscv_online_cpu, riscv_offline_cpu);
> +
> +out:
> + for_each_cpu(cpu, cpu_online_mask) {
> + if (bufs[cpu])
> + __free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
> + }
> +
> + kfree(bufs);
> + return 0;
> +}
> +
> +static int check_unaligned_access_all_cpus(void)
> +{
> + bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
> +
> + if (!all_cpus_emulated)
> + return check_unaligned_access_speed_all_cpus();
> +
> + return 0;
> +}
> +#else /* CONFIG_RISCV_PROBE_UNALIGNED_ACCESS */
> +static int check_unaligned_access_all_cpus(void)
> +{
> + check_unaligned_access_emulated_all_cpus();
> +
> + return 0;
> +}
> +#endif
> +
> +arch_initcall(check_unaligned_access_all_cpus);
>
> --
> 2.43.2
>
>
> _______________________________________________
> linux-riscv mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/linux-riscv
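
For reference, the value computed by hwprobe_misaligned() above is what
userspace sees through the hwprobe syscall. A minimal query might look
roughly like the sketch below (not part of the patch; it assumes kernel
headers that provide __NR_riscv_hwprobe and the existing hwprobe UAPI
names, and it only checks the syscall return value):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>	/* struct riscv_hwprobe, RISCV_HWPROBE_* */

int main(void)
{
	struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

	/* cpusetsize == 0 and cpus == NULL queries all online harts. */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
		return 1;

	switch (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) {
	case RISCV_HWPROBE_MISALIGNED_FAST:
		puts("unaligned accesses: fast");
		break;
	case RISCV_HWPROBE_MISALIGNED_EMULATED:
		puts("unaligned accesses: emulated");
		break;
	case RISCV_HWPROBE_MISALIGNED_SLOW:
		puts("unaligned accesses: slow");
		break;
	default:
		puts("unaligned accesses: unknown");
	}

	return 0;
}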

2024-03-08 11:12:37

by Conor Dooley

[permalink] [raw]
Subject: Re: [PATCH v8 4/4] riscv: Set unaligned access speed at compile time

On Fri, Mar 08, 2024 at 01:52:24AM -0800, Emil Renner Berthing wrote:
> Charlie Jenkins wrote:

> > config RISCV_MISALIGNED
> > - bool "Support misaligned load/store traps for kernel and userspace"
> > + bool
> > select SYSCTL_ARCH_UNALIGN_ALLOW
> > - default y
> > help
> > - Say Y here if you want the kernel to embed support for misaligned
> > - load/store for both kernel and userspace. When disable, misaligned
> > - accesses will generate SIGBUS in userspace and panic in kernel.
> > + Embed support for misaligned load/store for both kernel and userspace.
> > + When disabled, misaligned accesses will generate SIGBUS in userspace
> > + and panic in the kernel.
>
> Hmm.. this *may* generate SIGBUS in userspace and panic the kernel. The CPU
> could support unaligned access natively or there might be a handler in M-mode,
> right?

Correct. The last sentence could become "When disabled, and there is no
support in hardware or firmware, misaligned accesses will...". That said,
this option is no longer user visible, so we could really simplify the
hell out of this option to just mention that it controls building the
in-kernel emulator.
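
Something along these lines, purely as a sketch of that simplification
(the wording here is illustrative, not a concrete proposal):

config RISCV_MISALIGNED
	bool
	select SYSCTL_ARCH_UNALIGN_ALLOW
	help
	  Build the in-kernel emulator for misaligned loads and stores. This
	  symbol is not user selectable; it is selected by the unaligned
	  access options that rely on the emulator.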

> > +choice
> > + prompt "Unaligned Accesses Support"
> > + default RISCV_PROBE_UNALIGNED_ACCESS
> > + help
> > + This determines the level of support for unaligned accesses. This
> > + information is used by the kernel to perform optimizations. It is also
> > + exposed to user space via the hwprobe syscall. The hardware will be
> > + probed at boot by default.
> > +
> > +config RISCV_PROBE_UNALIGNED_ACCESS
> > + bool "Probe for hardware unaligned access support"
> > + select RISCV_MISALIGNED
> > + help
> > + During boot, the kernel will run a series of tests to determine the
> > + speed of unaligned accesses. This probing will dynamically determine
> > + the speed of unaligned accesses on the underlying system. If unaligned
> > + memory accesses trap into the kernel as they are not supported by the
> > + system, the kernel will emulate the unaligned accesses to preserve the
> > + UABI.
> > +
> > +config RISCV_EMULATED_UNALIGNED_ACCESS
> > + bool "Emulate unaligned access where system support is missing"
> > + select RISCV_MISALIGNED
> > + help
> > + If unaligned memory accesses trap into the kernel as they are not
> > + supported by the system, the kernel will emulate the unaligned
> > + accesses to preserve the UABI. When the underlying system does support
> > + unaligned accesses, the unaligned accesses are assumed to be slow.
>
> It's still not quite clear to me when you'd want to choose this over probing
> above. Assuming the probe measures correctly this can only result in a kernel
> that behaves the same or slower than with the option above, right?

Aye, mostly the same - some people don't want the boot-time overhead
of actually running the profiling code, so this option is for them.
Maybe that's not such a big deal anymore with the improvements to do it
in parallel, but given how bad performance on some of the systems is
when firmware does the emulation, it is definitely still noticeable.
I know we definitely have customers that care about their boot time very
strongly, so I can imagine they'd be turning this off.

> > +
> > +config RISCV_SLOW_UNALIGNED_ACCESS
> > + bool "Assume the system supports slow unaligned memory accesses"
> > + depends on NONPORTABLE
> > + help
> > + Assume that the system supports slow unaligned memory accesses. The
> > + kernel and userspace programs may not be able to run at all on systems
> > + that do not support unaligned memory accesses.
>
> Again you're just explicitly saying no to the optimizations the kernel might do
> if it detects fast unaligned access, only here you'll also crash if they're not
> handled by the CPU or M-mode. Why would you want that?
>
> I'm probably missing something, but the only reason I can think of is if you
> want to build a really small kernel and save the few bytes for the handler and
> probing code.

Aye, just to allow you to disable the in-kernel emulator. That's
currently a choice that is presented to people, so this option preserves
that. IMO this is by far the least useful option and is locked behind
NONPORTABLE anyway. Maybe we could delete it, and if someone really wants
it, it would not be all that much of a hassle to add back in the future?



2024-03-08 13:42:56

by Emil Renner Berthing

[permalink] [raw]
Subject: Re: [PATCH v8 4/4] riscv: Set unaligned access speed at compile time

Conor Dooley wrote:
> On Fri, Mar 08, 2024 at 01:52:24AM -0800, Emil Renner Berthing wrote:
> > Charlie Jenkins wrote:
>
> > > config RISCV_MISALIGNED
> > > - bool "Support misaligned load/store traps for kernel and userspace"
> > > + bool
> > > select SYSCTL_ARCH_UNALIGN_ALLOW
> > > - default y
> > > help
> > > - Say Y here if you want the kernel to embed support for misaligned
> > > - load/store for both kernel and userspace. When disable, misaligned
> > > - accesses will generate SIGBUS in userspace and panic in kernel.
> > > + Embed support for misaligned load/store for both kernel and userspace.
> > > + When disabled, misaligned accesses will generate SIGBUS in userspace
> > > + and panic in the kernel.
> >
> > Hmm.. this *may* generate SIGBUS in userspace and panic the kernel. The CPU
> > could support unaligned access natively or there might be a handler in M-mode,
> > right?
>
> Correct. The last sentence could become "When disabled, and there is no
> support in hardware or firmware, misaligned accesses will...". That said,
> this option is no longer user visible, so we could really simplify the
> hell out of this option to just mention that it controls building the
> in-kernel emulator.
>
> > > +choice
> > > + prompt "Unaligned Accesses Support"
> > > + default RISCV_PROBE_UNALIGNED_ACCESS
> > > + help
> > > + This determines the level of support for unaligned accesses. This
> > > + information is used by the kernel to perform optimizations. It is also
> > > + exposed to user space via the hwprobe syscall. The hardware will be
> > > + probed at boot by default.
> > > +
> > > +config RISCV_PROBE_UNALIGNED_ACCESS
> > > + bool "Probe for hardware unaligned access support"
> > > + select RISCV_MISALIGNED
> > > + help
> > > + During boot, the kernel will run a series of tests to determine the
> > > + speed of unaligned accesses. This probing will dynamically determine
> > > + the speed of unaligned accesses on the underlying system. If unaligned
> > > + memory accesses trap into the kernel as they are not supported by the
> > > + system, the kernel will emulate the unaligned accesses to preserve the
> > > + UABI.
> > > +
> > > +config RISCV_EMULATED_UNALIGNED_ACCESS
> > > + bool "Emulate unaligned access where system support is missing"
> > > + select RISCV_MISALIGNED
> > > + help
> > > + If unaligned memory accesses trap into the kernel as they are not
> > > + supported by the system, the kernel will emulate the unaligned
> > > + accesses to preserve the UABI. When the underlying system does support
> > > + unaligned accesses, the unaligned accesses are assumed to be slow.
> >
> > It's still not quite clear to me when you'd want to choose this over probing
> > above. Assuming the probe measures correctly this can only result in a kernel
> > that behaves the same or slower than with the option above, right?
>
> Aye, mostly the same - some people don't want the boot-time overhead
> of actually running the profiling code, so this option is for them.
> Maybe that's not such a big deal anymore with the improvements to do it
> in parallel, but given how bad performance on some of the systems is
> when firmware does the emulation, it is definitely still noticeable.
> I know we definitely have customers that care about their boot time very
> strongly, so I can imagine they'd be turning this off.

Ah, that makes sense. So maybe a help text more along the lines of "Disable
probing and optimizations for CPUs with fast unaligned memory access" would be
a better description of this choice?

> > > +
> > > +config RISCV_SLOW_UNALIGNED_ACCESS
> > > + bool "Assume the system supports slow unaligned memory accesses"
> > > + depends on NONPORTABLE
> > > + help
> > > + Assume that the system supports slow unaligned memory accesses. The
> > > + kernel and userspace programs may not be able to run at all on systems
> > > + that do not support unaligned memory accesses.
> >
> > Again you're just explicitly saying no to the optimizations the kernel might do
> > if it detects fast unaligned access, only here you'll also crash if they're not
> > handled by the CPU or M-mode. Why would you want that?
> >
> > I'm probably missing something, but the only reason I can think of is if you
> > want to build a really small kernel and save the few bytes for the handler and
> > probing code.
>
> Aye, just to allow you to disable the in-kernel emulator. That's
> currently a choice that is presented to people, so this option preserves
> that. IMO this is by far the least useful option and is locked behind
> NONPORTABLE anyway. Maybe we could delete it, and if someone really wants
> it, it would not be all that much of a hassle to add back in the future?

Yeah, if no one really needs this, fewer config options is better, but I don't
feel strongly about this option either way.

/Emil

2024-03-08 18:05:17

by Charlie Jenkins

[permalink] [raw]
Subject: Re: [PATCH v8 4/4] riscv: Set unaligned access speed at compile time

On Fri, Mar 08, 2024 at 05:35:03AM -0800, Emil Renner Berthing wrote:
> Conor Dooley wrote:
> > On Fri, Mar 08, 2024 at 01:52:24AM -0800, Emil Renner Berthing wrote:
> > > Charlie Jenkins wrote:
> >
> > > > config RISCV_MISALIGNED
> > > > - bool "Support misaligned load/store traps for kernel and userspace"
> > > > + bool
> > > > select SYSCTL_ARCH_UNALIGN_ALLOW
> > > > - default y
> > > > help
> > > > - Say Y here if you want the kernel to embed support for misaligned
> > > > - load/store for both kernel and userspace. When disable, misaligned
> > > > - accesses will generate SIGBUS in userspace and panic in kernel.
> > > > + Embed support for misaligned load/store for both kernel and userspace.
> > > > + When disabled, misaligned accesses will generate SIGBUS in userspace
> > > > + and panic in the kernel.
> > >
> > > Hmm.. this *may* generate SIGBUS in userspace and panic the kernel. The CPU
> > > could support unaligned access natively or there might be a handler in M-mode,
> > > right?
> >
> > Correct. The last sentence could become "When disabled, and there is no
> > support in hardware or firmware, misaligned accesses will...". That said,
> > this option is no longer user visible, so we could really simplify the
> > hell out of this option to just mention that it controls building the
> > in-kernel emulator.
> >
> > > > +choice
> > > > + prompt "Unaligned Accesses Support"
> > > > + default RISCV_PROBE_UNALIGNED_ACCESS
> > > > + help
> > > > + This determines the level of support for unaligned accesses. This
> > > > + information is used by the kernel to perform optimizations. It is also
> > > > + exposed to user space via the hwprobe syscall. The hardware will be
> > > > + probed at boot by default.
> > > > +
> > > > +config RISCV_PROBE_UNALIGNED_ACCESS
> > > > + bool "Probe for hardware unaligned access support"
> > > > + select RISCV_MISALIGNED
> > > > + help
> > > > + During boot, the kernel will run a series of tests to determine the
> > > > + speed of unaligned accesses. This probing will dynamically determine
> > > > + the speed of unaligned accesses on the underlying system. If unaligned
> > > > + memory accesses trap into the kernel as they are not supported by the
> > > > + system, the kernel will emulate the unaligned accesses to preserve the
> > > > + UABI.
> > > > +
> > > > +config RISCV_EMULATED_UNALIGNED_ACCESS
> > > > + bool "Emulate unaligned access where system support is missing"
> > > > + select RISCV_MISALIGNED
> > > > + help
> > > > + If unaligned memory accesses trap into the kernel as they are not
> > > > + supported by the system, the kernel will emulate the unaligned
> > > > + accesses to preserve the UABI. When the underlying system does support
> > > > + unaligned accesses, the unaligned accesses are assumed to be slow.
> > >
> > > It's still not quite clear to me when you'd want to choose this over probing
> > > above. Assuming the probe measures correctly this can only result in a kernel
> > > that behaves the same or slower than with the option above, right?
> >
> > Aye, mostly the same - some people don't want the boot-time overhead
> > of actually running the profiling code, so this option is for them.
> > Maybe that's not such a big deal anymore with the improvements to do it
> > in parallel, but given how bad performance on some of the systems is
> > when firmware does the emulation, it is definitely still noticeable.
> > I know we definitely have customers that care about their boot time very
> > strongly, so I can imagine they'd be turning this off.
>
> Ah, that makes sense. So maybe a help text more along the lines of "Disable
> probing and optimizations for CPUs with fast unaligned memory access" would be
> a better description of this choice?

It does cause probing/optimizations to not be enabled, but it does not
"disable" them. For maximal optimizations for fast unaligned accesses,
the user must select RISCV_EFFICIENT_UNALIGNED_ACCESS before and after
this change. For probing, the user must select
RISCV_PROBE_UNALIGNED_ACCESS.
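
As a rough sketch of how the choice folds together at compile time (the
helper name is made up for illustration; the static key is the one set
up by the probing code in this series):

#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
static __always_inline bool unaligned_access_is_fast(void)
{
	/* Decided at boot by the probe and cached in a static key. */
	return static_branch_likely(&fast_unaligned_access_speed_key);
}
#else
static __always_inline bool unaligned_access_is_fast(void)
{
	/* Decided purely by Kconfig; no probe, no static key. */
	return IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS);
}
#endif

With RISCV_EMULATED_UNALIGNED_ACCESS or RISCV_SLOW_UNALIGNED_ACCESS such
a helper is a constant false, so no fast-path optimizations are compiled
in; with RISCV_EFFICIENT_UNALIGNED_ACCESS it is a constant true.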

- Charlie

>
> > > > +
> > > > +config RISCV_SLOW_UNALIGNED_ACCESS
> > > > + bool "Assume the system supports slow unaligned memory accesses"
> > > > + depends on NONPORTABLE
> > > > + help
> > > > + Assume that the system supports slow unaligned memory accesses. The
> > > > + kernel and userspace programs may not be able to run at all on systems
> > > > + that do not support unaligned memory accesses.
> > >
> > > Again you're just explicitly saying no to the optimizations the kernel might do
> > > if it detects fast unaligned access, only here you'll also crash if they're not
> > > handled by the CPU or M-mode. Why would you want that?
> > >
> > > I'm probably missing something, but the only reason I can think of is if you
> > > want to build a really small kernel and save the few bytes for the handler and
> > > probing code.
> >
> > Aye, just to allow you to disable the in-kernel emulator. That's
> > currently a choice that is presented to people, so this option preserves
> > that. IMO this is by far the least useful option and is locked behind
> > NONPORTABLE anyway. Maybe we could delete it, and if someone really wants
> > it, it would not be all that much of a hassle to add back in the future?
>
> Yeah, if no one really needs this, fewer config options is better, but I don't
> feel strongly about this option either way.
>
> /Emil