2024-02-12 23:36:45

by Charlie Jenkins

Subject: [PATCH v11 0/4] riscv: Create and document PR_RISCV_SET_ICACHE_FLUSH_CTX prctl

Improve the performance of icache flushing by creating a new prctl flag
PR_RISCV_SET_ICACHE_FLUSH_CTX. The interface is left generic to allow
for future expansions such as with the proposed J extension [1].

Documentation is also provided to explain the use case.

Patch sent to add PR_RISCV_SET_ICACHE_FLUSH_CTX to man-pages [2].

[1] https://github.com/riscv/riscv-j-extension
[2] https://lore.kernel.org/linux-man/[email protected]

Signed-off-by: Charlie Jenkins <[email protected]>
---
Changes in v11:
- Add back PR_RISCV_CTX_SW_FENCEI_OFF (Samuel)
- Fix under nosmp (Samuel)
- Change set_prev_cpu (Samuel)
- Fixup example testcase in docs
- Change wording of documentation slightly (Alejandro Colomar)
- Link to v10: https://lore.kernel.org/r/[email protected]

Changes in v10:
- Fix fence.i condition to properly only flush on migration (Alex)
- Fix documentation wording (Alex)
- Link to v9: https://lore.kernel.org/r/[email protected]

Changes in v9:
- Remove prev_cpu from mm (Alex)
- Link to v8: https://lore.kernel.org/r/[email protected]

Changes in v8:
- Only flush icache if migrated to different cpu (Alex)
- Move flushing to switch_to to catch per-thread flushing properly
- Link to v7: https://lore.kernel.org/r/[email protected]

Changes in v7:
- Change "per_thread" parameter to "scope" and provide constants for the
parameter.
- Link to v6: https://lore.kernel.org/r/[email protected]

Changes in v6:
- Fixup documentation formatting
- Link to v5: https://lore.kernel.org/r/[email protected]

Changes in v5:
- Minor documentation changes (Randy)
- Link to v4: https://lore.kernel.org/r/[email protected]

Changes in v4:
- Add OFF flag to disallow fence.i in userspace (Atish)
- Fix documentation issues (Atish)
- Link to v3: https://lore.kernel.org/r/[email protected]

Changes in v3:
- Check whether force_icache_flush is set on the thread, rather than
checking the mm twice (Clément)
- Link to v2: https://lore.kernel.org/r/[email protected]

Changes in v2:
- Fix kernel-doc comment (Conor)
- Link to v1: https://lore.kernel.org/r/[email protected]

---
Charlie Jenkins (4):
riscv: Remove unnecessary irqflags processor.h include
riscv: Include riscv_set_icache_flush_ctx prctl
documentation: Document PR_RISCV_SET_ICACHE_FLUSH_CTX prctl
cpumask: Add assign cpu

Documentation/arch/riscv/cmodx.rst | 98 ++++++++++++++++++++++++++++++++++
Documentation/arch/riscv/index.rst | 1 +
arch/riscv/include/asm/irqflags.h | 1 -
arch/riscv/include/asm/mmu.h | 2 +
arch/riscv/include/asm/processor.h | 12 +++++
arch/riscv/include/asm/switch_to.h | 23 ++++++++
arch/riscv/mm/cacheflush.c | 105 +++++++++++++++++++++++++++++++++++++
arch/riscv/mm/context.c | 18 +++++--
include/linux/cpumask.h | 16 ++++++
include/uapi/linux/prctl.h | 6 +++
kernel/sys.c | 6 +++
11 files changed, 282 insertions(+), 6 deletions(-)
---
base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
change-id: 20231117-fencei-f9f60d784fa0
--
- Charlie



2024-02-12 23:36:51

by Charlie Jenkins

Subject: [PATCH v11 1/4] riscv: Remove unnecessary irqflags processor.h include

This include is not used. Remove it to avoid a circular dependency in
the next patch in the series.

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/include/asm/irqflags.h | 1 -
1 file changed, 1 deletion(-)

diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
index 08d4d6a5b7e9..6fd8cbfcfcc7 100644
--- a/arch/riscv/include/asm/irqflags.h
+++ b/arch/riscv/include/asm/irqflags.h
@@ -7,7 +7,6 @@
#ifndef _ASM_RISCV_IRQFLAGS_H
#define _ASM_RISCV_IRQFLAGS_H

-#include <asm/processor.h>
#include <asm/csr.h>

/* read interrupt enabled status */

--
2.43.0


2024-02-12 23:37:11

by Charlie Jenkins

Subject: [PATCH v11 3/4] documentation: Document PR_RISCV_SET_ICACHE_FLUSH_CTX prctl

Provide documentation that explains how to properly do CMODX in riscv.

Signed-off-by: Charlie Jenkins <[email protected]>
Reviewed-by: Atish Patra <[email protected]>
Reviewed-by: Alexandre Ghiti <[email protected]>
---
Documentation/arch/riscv/cmodx.rst | 98 ++++++++++++++++++++++++++++++++++++++
Documentation/arch/riscv/index.rst | 1 +
2 files changed, 99 insertions(+)

diff --git a/Documentation/arch/riscv/cmodx.rst b/Documentation/arch/riscv/cmodx.rst
new file mode 100644
index 000000000000..1c0ca06b6c97
--- /dev/null
+++ b/Documentation/arch/riscv/cmodx.rst
@@ -0,0 +1,98 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==============================================================================
+Concurrent Modification and Execution of Instructions (CMODX) for RISC-V Linux
+==============================================================================
+
+CMODX is a programming technique where a program executes instructions that were
+modified by the program itself. Instruction storage and the instruction cache
+(icache) are not guaranteed to be synchronized on RISC-V hardware. Therefore, the
+program must enforce its own synchronization with the unprivileged fence.i
+instruction.
+
+However, the default Linux ABI prohibits the use of fence.i in userspace
+applications. At any point the scheduler may migrate a task onto a new hart. If
+migration occurs after the userspace synchronized the icache and instruction
+storage with fence.i, the icache on the new hart will no longer be clean. This
+is due to the behavior of fence.i only affecting the hart that it is called on.
+Thus, the hart that the task has been migrated to may not have synchronized
+instruction storage and icache.
+
+There are two ways to solve this problem: use the riscv_flush_icache() syscall,
+or use the ``PR_RISCV_SET_ICACHE_FLUSH_CTX`` prctl() and emit fence.i in
+userspace. The syscall performs a one-off icache flushing operation. The prctl
+changes the Linux ABI to allow userspace to emit icache flushing operations.
+
+As an aside, "deferred" icache flushes can sometimes be triggered in the kernel.
+At the time of writing, this only occurs during the riscv_flush_icache() syscall
+and when the kernel uses copy_to_user_page(). These deferred flushes happen only
+when the memory map being used by a hart changes. If the prctl() context caused
+an icache flush, this deferred icache flush will be skipped as it is redundant.
+Therefore, there will be no additional flush when using the riscv_flush_icache()
+syscall inside of the prctl() context.
+
+prctl() Interface
+---------------------
+
+Call prctl() with ``PR_RISCV_SET_ICACHE_FLUSH_CTX`` as the first argument. The
+remaining arguments will be delegated to the riscv_set_icache_flush_ctx
+function detailed below.
+
+.. kernel-doc:: arch/riscv/mm/cacheflush.c
+ :identifiers: riscv_set_icache_flush_ctx
+
+Example usage:
+
+The following files are meant to be compiled and linked with each other. The
+modify_instruction() function replaces an add-immediate of zero with an
+add-immediate of one, causing the instruction sequence in get_value() to
+change from returning zero to returning one.
+
+cmodx.c::
+
+ #include <stdio.h>
+ #include <sys/prctl.h>
+
+ extern int get_value();
+ extern void modify_instruction();
+
+ int main()
+ {
+ int value = get_value();
+ printf("Value before cmodx: %d\n", value);
+
+ // Call prctl before first fence.i is called inside modify_instruction
+ prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON, PR_RISCV_SCOPE_PER_PROCESS);
+ modify_instruction();
+ // Call prctl after final fence.i is called in process
+ prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_OFF, PR_RISCV_SCOPE_PER_PROCESS);
+
+ value = get_value();
+ printf("Value after cmodx: %d\n", value);
+ return 0;
+ }
+
+cmodx.S::
+
+ .option norvc
+
+ .text
+ .global modify_instruction
+ modify_instruction:
+ lw a0, new_insn
+ lui a5,%hi(old_insn)
+ sw a0,%lo(old_insn)(a5)
+ fence.i
+ ret
+
+ .section modifiable, "awx"
+ .global get_value
+ get_value:
+ li a0, 0
+ old_insn:
+ addi a0, a0, 0
+ ret
+
+ .data
+ new_insn:
+ addi a0, a0, 1
diff --git a/Documentation/arch/riscv/index.rst b/Documentation/arch/riscv/index.rst
index 4dab0cb4b900..eecf347ce849 100644
--- a/Documentation/arch/riscv/index.rst
+++ b/Documentation/arch/riscv/index.rst
@@ -13,6 +13,7 @@ RISC-V architecture
patch-acceptance
uabi
vector
+ cmodx

features


--
2.43.0


2024-02-12 23:37:23

by Charlie Jenkins

Subject: [PATCH v11 4/4] cpumask: Add assign cpu

Add cpumask_assign_cpu() and __cpumask_assign_cpu() helpers, mirroring the
existing set/clear accessors, so callers can assign a cpu's bit from a
boolean in a single call.

Signed-off-by: Charlie Jenkins <[email protected]>
---
arch/riscv/mm/cacheflush.c | 2 +-
include/linux/cpumask.h | 16 ++++++++++++++++
2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index 6513a0ab8655..d10c2cba8aff 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -234,7 +234,7 @@ int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope)
stale_cpu = cpumask_test_cpu(smp_processor_id(), mask);

cpumask_setall(mask);
- assign_bit(cpumask_check(smp_processor_id()), cpumask_bits(mask), stale_cpu);
+ cpumask_assign_cpu(smp_processor_id(), mask, stale_cpu);
break;
case PR_RISCV_SCOPE_PER_THREAD:
current->thread.force_icache_flush = false;
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index cfb545841a2c..1b85e09c4ba5 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -492,6 +492,22 @@ static __always_inline void __cpumask_clear_cpu(int cpu, struct cpumask *dstp)
__clear_bit(cpumask_check(cpu), cpumask_bits(dstp));
}

+/**
+ * cpumask_assign_cpu - assign a cpu in a cpumask
+ * @cpu: cpu number (< nr_cpu_ids)
+ * @dstp: the cpumask pointer
+ * @value: the value to assign
+ */
+static __always_inline void cpumask_assign_cpu(int cpu, struct cpumask *dstp, bool value)
+{
+ assign_bit(cpumask_check(cpu), cpumask_bits(dstp), value);
+}
+
+static __always_inline void __cpumask_assign_cpu(int cpu, struct cpumask *dstp, bool value)
+{
+ __assign_bit(cpumask_check(cpu), cpumask_bits(dstp), value);
+}
+
/**
* cpumask_test_cpu - test for a cpu in a cpumask
* @cpu: cpu number (< nr_cpu_ids)

--
2.43.0


2024-02-12 23:37:31

by Charlie Jenkins

Subject: [PATCH v11 2/4] riscv: Include riscv_set_icache_flush_ctx prctl

Support new prctl with key PR_RISCV_SET_ICACHE_FLUSH_CTX to enable
optimization of cross-modifying code. This prctl enables userspace code
to use icache flushing instructions such as fence.i with the guarantee
that the icache will continue to be clean after thread migration.

Signed-off-by: Charlie Jenkins <[email protected]>
Reviewed-by: Atish Patra <[email protected]>
Reviewed-by: Alexandre Ghiti <[email protected]>
---
arch/riscv/include/asm/mmu.h | 2 +
arch/riscv/include/asm/processor.h | 12 +++++
arch/riscv/include/asm/switch_to.h | 23 ++++++++
arch/riscv/mm/cacheflush.c | 105 +++++++++++++++++++++++++++++++++++++
arch/riscv/mm/context.c | 18 +++++--
include/uapi/linux/prctl.h | 6 +++
kernel/sys.c | 6 +++
7 files changed, 167 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 355504b37f8e..60be458e94da 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -19,6 +19,8 @@ typedef struct {
#ifdef CONFIG_SMP
/* A local icache flush is needed before user execution can resume. */
cpumask_t icache_stale_mask;
+ /* Force local icache flush on all migrations. */
+ bool force_icache_flush;
#endif
#ifdef CONFIG_BINFMT_ELF_FDPIC
unsigned long exec_fdpic_loadmap;
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index a8509cc31ab2..46c5c3b91165 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -69,6 +69,7 @@
#endif

#ifndef __ASSEMBLY__
+#include <linux/cpumask.h>

struct task_struct;
struct pt_regs;
@@ -123,6 +124,14 @@ struct thread_struct {
struct __riscv_v_ext_state vstate;
unsigned long align_ctl;
struct __riscv_v_ext_state kernel_vstate;
+#ifdef CONFIG_SMP
+ /* A local icache flush is needed before user execution can resume on one of these cpus. */
+ cpumask_t icache_stale_mask;
+ /* Regardless of the icache_stale_mask, flush the icache on migration */
+ bool force_icache_flush;
+ /* A forced icache flush is not needed if migrating to the previous cpu. */
+ unsigned int prev_cpu;
+#endif
};

/* Whitelist the fstate from the task_struct for hardened usercopy */
@@ -184,6 +193,9 @@ extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);
#define GET_UNALIGN_CTL(tsk, addr) get_unalign_ctl((tsk), (addr))
#define SET_UNALIGN_CTL(tsk, val) set_unalign_ctl((tsk), (val))

+#define RISCV_SET_ICACHE_FLUSH_CTX(arg1, arg2) riscv_set_icache_flush_ctx(arg1, arg2)
+extern int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long per_thread);
+
#endif /* __ASSEMBLY__ */

#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index 7efdb0584d47..7594df37cc9f 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -8,6 +8,7 @@

#include <linux/jump_label.h>
#include <linux/sched/task_stack.h>
+#include <linux/mm_types.h>
#include <asm/vector.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>
@@ -72,14 +73,36 @@ static __always_inline bool has_fpu(void) { return false; }
extern struct task_struct *__switch_to(struct task_struct *,
struct task_struct *);

+static inline bool switch_to_should_flush_icache(struct task_struct *task)
+{
+#ifdef CONFIG_SMP
+ bool stale_mm = task->mm && task->mm->context.force_icache_flush;
+ bool stale_thread = task->thread.force_icache_flush;
+ bool thread_migrated = smp_processor_id() != task->thread.prev_cpu;
+
+ return thread_migrated && (stale_mm || stale_thread);
+#else
+ return false;
+#endif
+}
+
+#ifdef CONFIG_SMP
+#define __set_prev_cpu(thread) ((thread).prev_cpu = smp_processor_id())
+#else
+#define __set_prev_cpu(thread)
+#endif
+
#define switch_to(prev, next, last) \
do { \
struct task_struct *__prev = (prev); \
struct task_struct *__next = (next); \
+ __set_prev_cpu(__prev->thread); \
if (has_fpu()) \
__switch_to_fpu(__prev, __next); \
if (has_vector()) \
__switch_to_vector(__prev, __next); \
+ if (switch_to_should_flush_icache(__next)) \
+ local_flush_icache_all(); \
((last) = __switch_to(__prev, __next)); \
} while (0)

diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index 55a34f2020a8..6513a0ab8655 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -5,6 +5,7 @@

#include <linux/acpi.h>
#include <linux/of.h>
+#include <linux/prctl.h>
#include <asm/acpi.h>
#include <asm/cacheflush.h>

@@ -152,3 +153,107 @@ void __init riscv_init_cbo_blocksizes(void)
if (cboz_block_size)
riscv_cboz_block_size = cboz_block_size;
}
+
+/**
+ * riscv_set_icache_flush_ctx() - Enable/disable icache flushing instructions in
+ * userspace.
+ * @ctx: Set the type of icache flushing instructions permitted/prohibited in
+ * userspace. Supported values described below.
+ *
+ * Supported values for ctx:
+ *
+ * * %PR_RISCV_CTX_SW_FENCEI_ON: Allow fence.i in user space.
+ *
+ * * %PR_RISCV_CTX_SW_FENCEI_OFF: Disallow fence.i in user space. All threads in
+ * a process will be affected when ``scope == PR_RISCV_SCOPE_PER_PROCESS``.
+ * Therefore, caution must be taken; use this flag only when you can guarantee
+ * that no thread in the process will emit fence.i from this point onward.
+ *
+ * @scope: Set scope of where icache flushing instructions are allowed to be
+ * emitted. Supported values described below.
+ *
+ * Supported values for scope:
+ *
+ * * %PR_RISCV_SCOPE_PER_PROCESS: Ensure the icache of any thread in this process
+ * is coherent with instruction storage upon
+ * migration.
+ *
+ * * %PR_RISCV_SCOPE_PER_THREAD: Ensure the icache of the current thread is
+ * coherent with instruction storage upon
+ * migration.
+ *
+ * When ``scope == PR_RISCV_SCOPE_PER_PROCESS``, all threads in the process are
+ * permitted to emit icache flushing instructions. Whenever any thread in the
+ * process is migrated, the corresponding hart's icache will be guaranteed to be
+ * consistent with instruction storage. This does not enforce any guarantees
+ * outside of migration. If a thread modifies an instruction that another thread
+ * may attempt to execute, the other thread must still emit an icache flushing
+ * instruction before attempting to execute the potentially modified
+ * instruction. This must be performed by the user-space program.
+ *
+ * In per-thread context (i.e. ``scope == PR_RISCV_SCOPE_PER_THREAD``) only the
+ * thread calling this function is permitted to emit icache flushing
+ * instructions. When the thread is migrated, the corresponding hart's icache
+ * will be guaranteed to be consistent with instruction storage.
+ *
+ * On kernels configured without SMP, this function is a nop as migrations
+ * across harts will not occur.
+ */
+int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope)
+{
+#ifdef CONFIG_SMP
+ switch (ctx) {
+ case PR_RISCV_CTX_SW_FENCEI_ON:
+ switch (scope) {
+ case PR_RISCV_SCOPE_PER_PROCESS:
+ current->mm->context.force_icache_flush = true;
+ break;
+ case PR_RISCV_SCOPE_PER_THREAD:
+ current->thread.force_icache_flush = true;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case PR_RISCV_CTX_SW_FENCEI_OFF:
+ cpumask_t *mask;
+
+ switch (scope) {
+ case PR_RISCV_SCOPE_PER_PROCESS:
+ bool stale_cpu;
+
+ current->mm->context.force_icache_flush = false;
+
+ /*
+ * Mark every other hart's icache as needing a flush for
+ * this MM. Maintain the previous value of the current
+ * cpu to handle the case when this function is called
+ * concurrently on different harts.
+ */
+ mask = &current->mm->context.icache_stale_mask;
+ stale_cpu = cpumask_test_cpu(smp_processor_id(), mask);
+
+ cpumask_setall(mask);
+ assign_bit(cpumask_check(smp_processor_id()), cpumask_bits(mask), stale_cpu);
+ break;
+ case PR_RISCV_SCOPE_PER_THREAD:
+ current->thread.force_icache_flush = false;
+
+ /*
+ * Mark every other hart's icache as needing a flush for
+ * this thread.
+ */
+ mask = &current->thread.icache_stale_mask;
+ cpumask_setall(mask);
+ cpumask_clear_cpu(smp_processor_id(), mask);
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+#endif
+ return 0;
+}
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 217fd4de6134..2eb13b89cced 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -15,6 +15,7 @@
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
+#include <asm/switch_to.h>

#ifdef CONFIG_MMU

@@ -297,21 +298,28 @@ static inline void set_mm(struct mm_struct *prev,
*
* The "cpu" argument must be the current local CPU number.
*/
-static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu)
+static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu,
+ struct task_struct *task)
{
#ifdef CONFIG_SMP
cpumask_t *mask = &mm->context.icache_stale_mask;

- if (cpumask_test_cpu(cpu, mask)) {
+ if (cpumask_test_and_clear_cpu(cpu, mask) ||
+ (task && cpumask_test_and_clear_cpu(cpu, &task->thread.icache_stale_mask))) {
cpumask_clear_cpu(cpu, mask);
+
/*
* Ensure the remote hart's writes are visible to this hart.
* This pairs with a barrier in flush_icache_mm.
*/
smp_mb();
- local_flush_icache_all();
- }

+ /*
+ * If cache will be flushed in switch_to, no need to flush here.
+ */
+ if (!(task && switch_to_should_flush_icache(task)))
+ local_flush_icache_all();
+ }
#endif
}

@@ -332,5 +340,5 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,

set_mm(prev, next, cpu);

- flush_icache_deferred(next, cpu);
+ flush_icache_deferred(next, cpu, task);
}
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 370ed14b1ae0..524d546d697b 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -306,4 +306,10 @@ struct prctl_mm_map {
# define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc
# define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f

+#define PR_RISCV_SET_ICACHE_FLUSH_CTX 71
+# define PR_RISCV_CTX_SW_FENCEI_ON 0
+# define PR_RISCV_CTX_SW_FENCEI_OFF 1
+# define PR_RISCV_SCOPE_PER_PROCESS 0
+# define PR_RISCV_SCOPE_PER_THREAD 1
+
#endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index e219fcfa112d..69afdd8b430f 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -146,6 +146,9 @@
#ifndef RISCV_V_GET_CONTROL
# define RISCV_V_GET_CONTROL() (-EINVAL)
#endif
+#ifndef RISCV_SET_ICACHE_FLUSH_CTX
+# define RISCV_SET_ICACHE_FLUSH_CTX(a, b) (-EINVAL)
+#endif

/*
* this is where the system-wide overflow UID and GID are defined, for
@@ -2743,6 +2746,9 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
case PR_RISCV_V_GET_CONTROL:
error = RISCV_V_GET_CONTROL();
break;
+ case PR_RISCV_SET_ICACHE_FLUSH_CTX:
+ error = RISCV_SET_ICACHE_FLUSH_CTX(arg2, arg3);
+ break;
default:
error = -EINVAL;
break;

--
2.43.0


2024-03-05 23:22:38

by Charlie Jenkins

[permalink] [raw]
Subject: Re: [PATCH v11 0/4] riscv: Create and document PR_RISCV_SET_ICACHE_FLUSH_CTX prctl

On Mon, Feb 12, 2024 at 03:36:25PM -0800, Charlie Jenkins wrote:
> [...]

Copy Samuel Holland on this patch.

- Charlie


2024-03-12 18:00:56

by Samuel Holland

Subject: Re: [PATCH v11 2/4] riscv: Include riscv_set_icache_flush_ctx prctl

On 2024-02-12 5:36 PM, Charlie Jenkins wrote:
> Support new prctl with key PR_RISCV_SET_ICACHE_FLUSH_CTX to enable
> optimization of cross modifying code. This prctl enables userspace code
> to use icache flushing instructions such as fence.i with the guarantee
> that the icache will continue to be clean after thread migration.
>
> Signed-off-by: Charlie Jenkins <[email protected]>
> Reviewed-by: Atish Patra <[email protected]>
> Reviewed-by: Alexandre Ghiti <[email protected]>
> [...]
> +int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope)
> +{
> +#ifdef CONFIG_SMP
> + switch (ctx) {
> + case PR_RISCV_CTX_SW_FENCEI_ON:
> + switch (scope) {
> + case PR_RISCV_SCOPE_PER_PROCESS:
> + current->mm->context.force_icache_flush = true;
> + break;
> + case PR_RISCV_SCOPE_PER_THREAD:
> + current->thread.force_icache_flush = true;
> + break;
> + default:
> + return -EINVAL;
> + }
> + break;
> + case PR_RISCV_CTX_SW_FENCEI_OFF: ;
> + cpumask_t *mask;
> +
> + switch (scope) {
> + case PR_RISCV_SCOPE_PER_PROCESS: ;
> + bool stale_cpu;
> +
> + current->mm->context.force_icache_flush = false;
> +
> + /*
> + * Mark every other hart's icache as needing a flush for
> + * this MM. Maintain the previous value of the current
> + * cpu to handle the case when this function is called
> + * concurrently on different harts.
> + */
> + mask = &current->mm->context.icache_stale_mask;
> + stale_cpu = cpumask_test_cpu(smp_processor_id(), mask);
> +
> + cpumask_setall(mask);
> + assign_bit(cpumask_check(smp_processor_id()), cpumask_bits(mask), stale_cpu);
> + break;
> + case PR_RISCV_SCOPE_PER_THREAD:
> + current->thread.force_icache_flush = false;
> +
> + /*
> + * Mark every other hart's icache as needing a flush for
> + * this thread.
> + */
> + mask = &current->thread.icache_stale_mask;
> + cpumask_setall(mask);
> + cpumask_clear_cpu(smp_processor_id(), mask);

I don't think we need to maintain another cpumask for every thread (and check it
at every mm switch) just to optimize this prctl() call.
PR_RISCV_CTX_SW_FENCEI_OFF is unlikely to be in anything's hot path. It should
be sufficient for both scopes to just do:

cpumask_setall(&current->mm->context.icache_stale_mask);

and accept the extra one-time icache flushes.

Otherwise this looks good to me.

Regards,
Samuel

> + break;
> + default:
> + return -EINVAL;
> + }
> + break;
> + default:
> + return -EINVAL;
> + }
> +#endif
> + return 0;
> +}
> diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> index 217fd4de6134..2eb13b89cced 100644
> --- a/arch/riscv/mm/context.c
> +++ b/arch/riscv/mm/context.c
> @@ -15,6 +15,7 @@
> #include <asm/tlbflush.h>
> #include <asm/cacheflush.h>
> #include <asm/mmu_context.h>
> +#include <asm/switch_to.h>
>
> #ifdef CONFIG_MMU
>
> @@ -297,21 +298,28 @@ static inline void set_mm(struct mm_struct *prev,
> *
> * The "cpu" argument must be the current local CPU number.
> */
> -static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu)
> +static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu,
> + struct task_struct *task)
> {
> #ifdef CONFIG_SMP
> cpumask_t *mask = &mm->context.icache_stale_mask;
>
> - if (cpumask_test_cpu(cpu, mask)) {
> + if (cpumask_test_and_clear_cpu(cpu, mask) ||
> + (task && cpumask_test_and_clear_cpu(cpu, &task->thread.icache_stale_mask))) {
> cpumask_clear_cpu(cpu, mask);
> +
> /*
> * Ensure the remote hart's writes are visible to this hart.
> * This pairs with a barrier in flush_icache_mm.
> */
> smp_mb();
> - local_flush_icache_all();
> - }
>
> + /*
> + * If cache will be flushed in switch_to, no need to flush here.
> + */
> + if (!(task && switch_to_should_flush_icache(task)))
> + local_flush_icache_all();
> + }
> #endif
> }
>
> @@ -332,5 +340,5 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>
> set_mm(prev, next, cpu);
>
> - flush_icache_deferred(next, cpu);
> + flush_icache_deferred(next, cpu, task);
> }
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 370ed14b1ae0..524d546d697b 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -306,4 +306,10 @@ struct prctl_mm_map {
> # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc
> # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f
>
> +#define PR_RISCV_SET_ICACHE_FLUSH_CTX 71
> +# define PR_RISCV_CTX_SW_FENCEI_ON 0
> +# define PR_RISCV_CTX_SW_FENCEI_OFF 1
> +# define PR_RISCV_SCOPE_PER_PROCESS 0
> +# define PR_RISCV_SCOPE_PER_THREAD 1
> +
> #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index e219fcfa112d..69afdd8b430f 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -146,6 +146,9 @@
> #ifndef RISCV_V_GET_CONTROL
> # define RISCV_V_GET_CONTROL() (-EINVAL)
> #endif
> +#ifndef RISCV_SET_ICACHE_FLUSH_CTX
> +# define RISCV_SET_ICACHE_FLUSH_CTX(a, b) (-EINVAL)
> +#endif
>
> /*
> * this is where the system-wide overflow UID and GID are defined, for
> @@ -2743,6 +2746,9 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
> case PR_RISCV_V_GET_CONTROL:
> error = RISCV_V_GET_CONTROL();
> break;
> + case PR_RISCV_SET_ICACHE_FLUSH_CTX:
> + error = RISCV_SET_ICACHE_FLUSH_CTX(arg2, arg3);
> + break;
> default:
> error = -EINVAL;
> break;
>


2024-03-12 18:02:46

by Samuel Holland

Subject: Re: [PATCH v11 1/4] riscv: Remove unnecessary irqflags processor.h include

On 2024-02-12 5:36 PM, Charlie Jenkins wrote:
> This include is not used. Remove it to avoid a circular dependency in
> the next patch in the series.
>
> Signed-off-by: Charlie Jenkins <[email protected]>
> ---
> arch/riscv/include/asm/irqflags.h | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
> index 08d4d6a5b7e9..6fd8cbfcfcc7 100644
> --- a/arch/riscv/include/asm/irqflags.h
> +++ b/arch/riscv/include/asm/irqflags.h
> @@ -7,7 +7,6 @@
> #ifndef _ASM_RISCV_IRQFLAGS_H
> #define _ASM_RISCV_IRQFLAGS_H
>
> -#include <asm/processor.h>
> #include <asm/csr.h>
>
> /* read interrupt enabled status */
>

Reviewed-by: Samuel Holland <[email protected]>


2024-03-12 19:08:03

by Charlie Jenkins

Subject: Re: [PATCH v11 2/4] riscv: Include riscv_set_icache_flush_ctx prctl

On Tue, Mar 12, 2024 at 01:00:43PM -0500, Samuel Holland wrote:
> On 2024-02-12 5:36 PM, Charlie Jenkins wrote:
> > Support new prctl with key PR_RISCV_SET_ICACHE_FLUSH_CTX to enable
> > optimization of cross modifying code. This prctl enables userspace code
> > to use icache flushing instructions such as fence.i with the guarantee
> > that the icache will continue to be clean after thread migration.
> >
> > Signed-off-by: Charlie Jenkins <[email protected]>
> > Reviewed-by: Atish Patra <[email protected]>
> > Reviewed-by: Alexandre Ghiti <[email protected]>
> > ---
> > arch/riscv/include/asm/mmu.h | 2 +
> > arch/riscv/include/asm/processor.h | 12 +++++
> > arch/riscv/include/asm/switch_to.h | 23 ++++++++
> > arch/riscv/mm/cacheflush.c | 105 +++++++++++++++++++++++++++++++++++++
> > arch/riscv/mm/context.c | 18 +++++--
> > include/uapi/linux/prctl.h | 6 +++
> > kernel/sys.c | 6 +++
> > 7 files changed, 167 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
> > index 355504b37f8e..60be458e94da 100644
> > --- a/arch/riscv/include/asm/mmu.h
> > +++ b/arch/riscv/include/asm/mmu.h
> > @@ -19,6 +19,8 @@ typedef struct {
> > #ifdef CONFIG_SMP
> > /* A local icache flush is needed before user execution can resume. */
> > cpumask_t icache_stale_mask;
> > + /* Force local icache flush on all migrations. */
> > + bool force_icache_flush;
> > #endif
> > #ifdef CONFIG_BINFMT_ELF_FDPIC
> > unsigned long exec_fdpic_loadmap;
> > diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
> > index a8509cc31ab2..46c5c3b91165 100644
> > --- a/arch/riscv/include/asm/processor.h
> > +++ b/arch/riscv/include/asm/processor.h
> > @@ -69,6 +69,7 @@
> > #endif
> >
> > #ifndef __ASSEMBLY__
> > +#include <linux/cpumask.h>
> >
> > struct task_struct;
> > struct pt_regs;
> > @@ -123,6 +124,14 @@ struct thread_struct {
> > struct __riscv_v_ext_state vstate;
> > unsigned long align_ctl;
> > struct __riscv_v_ext_state kernel_vstate;
> > +#ifdef CONFIG_SMP
> > + /* A local icache flush is needed before user execution can resume on one of these cpus. */
> > + cpumask_t icache_stale_mask;
> > + /* Regardless of the icache_stale_mask, flush the icache on migration */
> > + bool force_icache_flush;
> > + /* A forced icache flush is not needed if migrating to the previous cpu. */
> > + unsigned int prev_cpu;
> > +#endif
> > };
> >
> > /* Whitelist the fstate from the task_struct for hardened usercopy */
> > @@ -184,6 +193,9 @@ extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);
> > #define GET_UNALIGN_CTL(tsk, addr) get_unalign_ctl((tsk), (addr))
> > #define SET_UNALIGN_CTL(tsk, val) set_unalign_ctl((tsk), (val))
> >
> > +#define RISCV_SET_ICACHE_FLUSH_CTX(arg1, arg2) riscv_set_icache_flush_ctx(arg1, arg2)
> > +extern int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long per_thread);
> > +
> > #endif /* __ASSEMBLY__ */
> >
> > #endif /* _ASM_RISCV_PROCESSOR_H */
> > diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
> > index 7efdb0584d47..7594df37cc9f 100644
> > --- a/arch/riscv/include/asm/switch_to.h
> > +++ b/arch/riscv/include/asm/switch_to.h
> > @@ -8,6 +8,7 @@
> >
> > #include <linux/jump_label.h>
> > #include <linux/sched/task_stack.h>
> > +#include <linux/mm_types.h>
> > #include <asm/vector.h>
> > #include <asm/cpufeature.h>
> > #include <asm/processor.h>
> > @@ -72,14 +73,36 @@ static __always_inline bool has_fpu(void) { return false; }
> > extern struct task_struct *__switch_to(struct task_struct *,
> > struct task_struct *);
> >
> > +static inline bool switch_to_should_flush_icache(struct task_struct *task)
> > +{
> > +#ifdef CONFIG_SMP
> > + bool stale_mm = task->mm && task->mm->context.force_icache_flush;
> > + bool stale_thread = task->thread.force_icache_flush;
> > + bool thread_migrated = smp_processor_id() != task->thread.prev_cpu;
> > +
> > + return thread_migrated && (stale_mm || stale_thread);
> > +#else
> > + return false;
> > +#endif
> > +}
> > +
> > +#ifdef CONFIG_SMP
> > +#define __set_prev_cpu(thread) ((thread).prev_cpu = smp_processor_id())
> > +#else
> > +#define __set_prev_cpu(thread)
> > +#endif
> > +
> > #define switch_to(prev, next, last) \
> > do { \
> > struct task_struct *__prev = (prev); \
> > struct task_struct *__next = (next); \
> > + __set_prev_cpu(__prev->thread); \
> > if (has_fpu()) \
> > __switch_to_fpu(__prev, __next); \
> > if (has_vector()) \
> > __switch_to_vector(__prev, __next); \
> > + if (switch_to_should_flush_icache(__next)) \
> > + local_flush_icache_all(); \
> > ((last) = __switch_to(__prev, __next)); \
> > } while (0)
> >
> > diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
> > index 55a34f2020a8..6513a0ab8655 100644
> > --- a/arch/riscv/mm/cacheflush.c
> > +++ b/arch/riscv/mm/cacheflush.c
> > @@ -5,6 +5,7 @@
> >
> > #include <linux/acpi.h>
> > #include <linux/of.h>
> > +#include <linux/prctl.h>
> > #include <asm/acpi.h>
> > #include <asm/cacheflush.h>
> >
> > @@ -152,3 +153,107 @@ void __init riscv_init_cbo_blocksizes(void)
> > if (cboz_block_size)
> > riscv_cboz_block_size = cboz_block_size;
> > }
> > +
> > +/**
> > + * riscv_set_icache_flush_ctx() - Enable/disable icache flushing instructions in
> > + * userspace.
> > + * @ctx: Set the type of icache flushing instructions permitted/prohibited in
> > + * userspace. Supported values described below.
> > + *
> > + * Supported values for ctx:
> > + *
> > + * * %PR_RISCV_CTX_SW_FENCEI_ON: Allow fence.i in user space.
> > + *
> > + * * %PR_RISCV_CTX_SW_FENCEI_OFF: Disallow fence.i in user space. All threads in
> > + * a process will be affected when ``scope == PR_RISCV_SCOPE_PER_PROCESS``.
> > + * Therefore, caution must be taken; use this flag only when you can guarantee
> > + * that no thread in the process will emit fence.i from this point onward.
> > + *
> > + * @scope: Set scope of where icache flushing instructions are allowed to be
> > + * emitted. Supported values described below.
> > + *
> > + * Supported values for scope:
> > + *
> > + * * %PR_RISCV_SCOPE_PER_PROCESS: Ensure the icache of any thread in this process
> > + * is coherent with instruction storage upon
> > + * migration.
> > + *
> > + * * %PR_RISCV_SCOPE_PER_THREAD: Ensure the icache of the current thread is
> > + * coherent with instruction storage upon
> > + * migration.
> > + *
> > + * When ``scope == PR_RISCV_SCOPE_PER_PROCESS``, all threads in the process are
> > + * permitted to emit icache flushing instructions. Whenever any thread in the
> > + * process is migrated, the corresponding hart's icache will be guaranteed to be
> > + * consistent with instruction storage. This does not enforce any guarantees
> > + * outside of migration. If a thread modifies an instruction that another thread
> > + * may attempt to execute, the other thread must still emit an icache flushing
> > + * instruction before attempting to execute the potentially modified
> > + * instruction. This must be performed by the user-space program.
> > + *
> > + * In per-thread context (i.e. ``scope == PR_RISCV_SCOPE_PER_THREAD``) only the
> > + * thread calling this function is permitted to emit icache flushing
> > + * instructions. When the thread is migrated, the corresponding hart's icache
> > + * will be guaranteed to be consistent with instruction storage.
> > + *
> > + * On kernels configured without SMP, this function is a nop as migrations
> > + * across harts will not occur.
> > + */
> > +int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope)
> > +{
> > +#ifdef CONFIG_SMP
> > + switch (ctx) {
> > + case PR_RISCV_CTX_SW_FENCEI_ON:
> > + switch (scope) {
> > + case PR_RISCV_SCOPE_PER_PROCESS:
> > + current->mm->context.force_icache_flush = true;
> > + break;
> > + case PR_RISCV_SCOPE_PER_THREAD:
> > + current->thread.force_icache_flush = true;
> > + break;
> > + default:
> > + return -EINVAL;
> > + }
> > + break;
> > + case PR_RISCV_CTX_SW_FENCEI_OFF: ;
> > + cpumask_t *mask;
> > +
> > + switch (scope) {
> > + case PR_RISCV_SCOPE_PER_PROCESS: ;
> > + bool stale_cpu;
> > +
> > + current->mm->context.force_icache_flush = false;
> > +
> > + /*
> > + * Mark every other hart's icache as needing a flush for
> > + * this MM. Maintain the previous value of the current
> > + * cpu to handle the case when this function is called
> > + * concurrently on different harts.
> > + */
> > + mask = &current->mm->context.icache_stale_mask;
> > + stale_cpu = cpumask_test_cpu(smp_processor_id(), mask);
> > +
> > + cpumask_setall(mask);
> > + assign_bit(cpumask_check(smp_processor_id()), cpumask_bits(mask), stale_cpu);
> > + break;
> > + case PR_RISCV_SCOPE_PER_THREAD:
> > + current->thread.force_icache_flush = false;
> > +
> > + /*
> > + * Mark every other hart's icache as needing a flush for
> > + * this thread.
> > + */
> > + mask = &current->thread.icache_stale_mask;
> > + cpumask_setall(mask);
> > + cpumask_clear_cpu(smp_processor_id(), mask);
>
> I don't think we need to maintain another cpumask for every thread (and check it
> at every mm switch) just to optimize this prctl() call.
> PR_RISCV_CTX_SW_FENCEI_OFF is unlikely to be in anything's hot path. It should
> be sufficient for both scopes to just do:
>
> cpumask_setall(&current->mm->context.icache_stale_mask);
>
> and accept the extra one-time icache flushes.
>
> Otherwise this looks good to me.
>
> Regards,
> Samuel

Yes, that is reasonable. I will collapse the switch statement to execute
the PR_RISCV_SCOPE_PER_PROCESS code for PR_RISCV_SCOPE_PER_THREAD as
well. However, there is still value in doing the setall + assign_bit
instead of just a setall, since it eliminates the redundant flush when
migrating back onto the same hart.

- Charlie
