Hi all,
This patch series keeps softirqs enabled while touching FPSIMD/SVE state.
For more details on the impact see patch #3.
This patch series has been benchmarked on Linux 5.1-rc4 with defconfig.
On Juno2:
* hackbench 100 process 1000 (10 times)
* 0.7% quicker
On ThunderX 2:
* hackbench 1000 process 1000 (20 times)
* 3.4% quicker
Note that while the benchmark has been done on 5.1-rc4, the patch series is
based on kvm-arm/next as it has a few conflicts with the
SVE KVM series.
Cheers,
Julien Grall (3):
arm64/fpsimd: Remove the prototype for sve_flush_cpu_state()
arch/arm64: fpsimd: Introduce fpsimd_save_and_flush_cpu_state() and
use it
arm64/fpsimd: Don't disable softirq when touching FPSIMD/SVE state
arch/arm64/include/asm/fpsimd.h | 6 +-
arch/arm64/include/asm/simd.h | 10 +--
arch/arm64/kernel/fpsimd.c | 135 +++++++++++++++++++++++++++-------------
arch/arm64/kvm/fpsimd.c | 4 +-
4 files changed, 100 insertions(+), 55 deletions(-)
--
2.11.0
The function sve_flush_cpu_state() has been removed in commit 21cdd7fd76e3
("KVM: arm64: Remove eager host SVE state saving").
So remove the associated prototype in asm/fpsimd.h.
Signed-off-by: Julien Grall <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
---
Changes in v3:
- Add Dave's reviewed-by
- Fix checkpatch style error when mentioning a commit
Changes in v2:
- Patch added
---
arch/arm64/include/asm/fpsimd.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index df62bbd33a9a..b73d12fcc7f9 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -64,7 +64,6 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
extern void fpsimd_flush_task_state(struct task_struct *target);
extern void fpsimd_flush_cpu_state(void);
-extern void sve_flush_cpu_state(void);
/* Maximum VL that SVE VL-agnostic software can transparently support */
#define SVE_VL_ARCH_MAX 0x100
--
2.11.0
The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is
the KVM FPSIMD code.
A subsequent patch will introduce a mechanism to acquire ownership of the
FPSIMD/SVE context for performing context management operations. Rather
than having to export the new helpers to get/put the context, we can just
introduce a new function to combine fpsimd_save() and
fpsimd_flush_cpu_state().
This also has the advantage of removing any external callers of fpsimd_save()
and fpsimd_flush_cpu_state(), so they can be turned static.
Lastly, the new function can also be used in the PM notifier.
Signed-off-by: Julien Grall <[email protected]>
---
kernel_neon_begin() does not use fpsimd_save_and_flush_cpu_state()
because the next patch will modify the function to also grab the
FPSIMD/SVE context.
Changes in v3:
- Rework the commit message
- Move the prototype of fpsimd_save_and_flush_cpu_state()
further down in the header
- Remove comment in kvm_arch_vcpu_put_fp()
Changes in v2:
- Patch added
---
arch/arm64/include/asm/fpsimd.h | 5 ++---
arch/arm64/kernel/fpsimd.c | 17 +++++++++++++----
arch/arm64/kvm/fpsimd.c | 4 +---
3 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index b73d12fcc7f9..c311d36ecffe 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -48,8 +48,6 @@ struct task_struct;
extern void fpsimd_save_state(struct user_fpsimd_state *state);
extern void fpsimd_load_state(struct user_fpsimd_state *state);
-extern void fpsimd_save(void);
-
extern void fpsimd_thread_switch(struct task_struct *next);
extern void fpsimd_flush_thread(void);
@@ -63,7 +61,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
void *sve_state, unsigned int sve_vl);
extern void fpsimd_flush_task_state(struct task_struct *target);
-extern void fpsimd_flush_cpu_state(void);
+
+extern void fpsimd_save_and_flush_cpu_state(void);
/* Maximum VL that SVE VL-agnostic software can transparently support */
#define SVE_VL_ARCH_MAX 0x100
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 56afa40263d9..5313aa257be6 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -246,7 +246,7 @@ static void task_fpsimd_load(void)
*
* Softirqs (and preemption) must be disabled.
*/
-void fpsimd_save(void)
+static void fpsimd_save(void)
{
struct fpsimd_last_state_struct const *last =
this_cpu_ptr(&fpsimd_last_state);
@@ -1122,12 +1122,22 @@ void fpsimd_flush_task_state(struct task_struct *t)
* Invalidate any task's FPSIMD state that is present on this cpu.
* This function must be called with softirqs disabled.
*/
-void fpsimd_flush_cpu_state(void)
+static void fpsimd_flush_cpu_state(void)
{
__this_cpu_write(fpsimd_last_state.st, NULL);
set_thread_flag(TIF_FOREIGN_FPSTATE);
}
+/*
+ * Save the FPSIMD state to memory and invalidate cpu view.
+ * This function must be called with softirqs (and preemption) disabled.
+ */
+void fpsimd_save_and_flush_cpu_state(void)
+{
+ fpsimd_save();
+ fpsimd_flush_cpu_state();
+}
+
#ifdef CONFIG_KERNEL_MODE_NEON
DEFINE_PER_CPU(bool, kernel_neon_busy);
@@ -1284,8 +1294,7 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
{
switch (cmd) {
case CPU_PM_ENTER:
- fpsimd_save();
- fpsimd_flush_cpu_state();
+ fpsimd_save_and_flush_cpu_state();
break;
case CPU_PM_EXIT:
break;
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 6e3c9c8b2df9..525010504f9d 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -112,9 +112,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
u64 *guest_zcr = &vcpu->arch.ctxt.sys_regs[ZCR_EL1];
- /* Clean guest FP state to memory and invalidate cpu view */
- fpsimd_save();
- fpsimd_flush_cpu_state();
+ fpsimd_save_and_flush_cpu_state();
if (guest_has_sve)
*guest_zcr = read_sysreg_s(SYS_ZCR_EL12);
--
2.11.0
When the kernel is compiled with CONFIG_KERNEL_MODE_NEON, some parts of
the kernel may be able to use FPSIMD/SVE. This is for instance the case
for crypto code.
Any use of FPSIMD/SVE in the kernel is clearly marked by using the
functions kernel_neon_{begin, end}. Furthermore, these can only be used
when may_use_simd() returns true.
The current implementation of may_use_simd() allows a softirq to use
FPSIMD/SVE unless it is currently in use (i.e. kernel_neon_busy is true).
When it is in use, softirqs usually fall back to a software method.
At the moment, as a softirq may use FPSIMD/SVE, softirqs are disabled
when touching the FPSIMD/SVE context. This has the drawback of disabling
all softirqs, even those that are not using FPSIMD/SVE.
Since a softirq is supposed to check may_use_simd() anyway before
attempting to use FPSIMD/SVE, there is limited reason to keep softirqs
disabled when touching the FPSIMD/SVE context. Instead, we can simply
disable preemption and mark the FPSIMD/SVE context as in use by setting
the CPU's kernel_neon_busy flag.
Introduce two new helpers, {get, put}_cpu_fpsimd_context, to mark the
regions using the FPSIMD/SVE context and use them in place of
local_bh_{disable, enable}. The functions kernel_neon_{begin, end} are
also re-implemented to use the new helpers.
For use in fpsimd_thread_switch(), which is a critical path where
preemption is already disabled, double-underscored versions of the
helpers are provided to avoid disabling preemption again.
The change has been benchmarked on Linux 5.1-rc4 with defconfig.
On Juno2:
* hackbench 100 process 1000 (10 times)
* 0.7% quicker
On ThunderX 2:
* hackbench 1000 process 1000 (20 times)
* 3.4% quicker
Signed-off-by: Julien Grall <[email protected]>
---
Changes in v3:
- Fix typos in the commit message
- Rework a bit the commit message
- Use imperative mood
- Rename kernel_neon_busy to fpsimd_context_busy
- Remove debug code
- Update comments
- Don't require preemption when calling fpsimd_save_and_flush_cpu_state()
Changes in v2:
- Remove spurious call to kernel_neon_enable in kernel_neon_begin.
- Rename kernel_neon_{enable, disable} to {get, put}_cpu_fpsimd_context
- Introduce a double-underscore version of the helpers for case
where preemption is already disabled
- Introduce have_cpu_fpsimd_context() and use it in WARN_ON(...)
- Surround more places in the code with the new helpers
- Rework the comments
- Update the commit message with the benchmark result
---
arch/arm64/include/asm/simd.h | 10 ++--
arch/arm64/kernel/fpsimd.c | 126 ++++++++++++++++++++++++++++--------------
2 files changed, 88 insertions(+), 48 deletions(-)
diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index 6495cc51246f..a6307e43b8c2 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -15,9 +15,9 @@
#include <linux/preempt.h>
#include <linux/types.h>
-#ifdef CONFIG_KERNEL_MODE_NEON
+DECLARE_PER_CPU(bool, fpsimd_context_busy);
-DECLARE_PER_CPU(bool, kernel_neon_busy);
+#ifdef CONFIG_KERNEL_MODE_NEON
/*
* may_use_simd - whether it is allowable at this time to issue SIMD
@@ -29,15 +29,15 @@ DECLARE_PER_CPU(bool, kernel_neon_busy);
static __must_check inline bool may_use_simd(void)
{
/*
- * kernel_neon_busy is only set while preemption is disabled,
+ * fpsimd_context_busy is only set while preemption is disabled,
* and is clear whenever preemption is enabled. Since
- * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
+ * this_cpu_read() is atomic w.r.t. preemption, fpsimd_context_busy
* cannot change under our feet -- if it's set we cannot be
* migrated, and if it's clear we cannot be migrated to a CPU
* where it is set.
*/
return !in_irq() && !irqs_disabled() && !in_nmi() &&
- !this_cpu_read(kernel_neon_busy);
+ !this_cpu_read(fpsimd_context_busy);
}
#else /* ! CONFIG_KERNEL_MODE_NEON */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 5313aa257be6..6168d06bbd20 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -92,7 +92,8 @@
* To prevent this from racing with the manipulation of the task's FPSIMD state
* from task context and thereby corrupting the state, it is necessary to
* protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
- * flag with local_bh_disable() unless softirqs are already masked.
+ * flag with {, __}get_cpu_fpsimd_context(). This will still allow softirqs to
+ * run but prevent them from using FPSIMD.
*
* For a certain task, the sequence may look something like this:
* - the task gets scheduled in; if both the task's fpsimd_cpu field
@@ -155,6 +156,56 @@ extern void __percpu *efi_sve_state;
#endif /* ! CONFIG_ARM64_SVE */
+DEFINE_PER_CPU(bool, fpsimd_context_busy);
+EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);
+
+static void __get_cpu_fpsimd_context(void)
+{
+ bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
+
+ WARN_ON(busy);
+}
+
+/*
+ * Claim ownership of the CPU FPSIMD context for use by the calling context.
+ *
+ * The caller may freely modify FPSIMD context until *put_cpu_fpsimd_context()
+ * is called.
+ *
+ * The double-underscore version must only be called if you know the task
+ * can't be preempted.
+ */
+static void get_cpu_fpsimd_context(void)
+{
+ preempt_disable();
+ __get_cpu_fpsimd_context();
+}
+
+static void __put_cpu_fpsimd_context(void)
+{
+ bool busy = __this_cpu_xchg(fpsimd_context_busy, false);
+
+ WARN_ON(!busy); /* No matching get_cpu_fpsimd_context()? */
+}
+
+/*
+ * Release the CPU FPSIMD context.
+ *
+ * Must be called from a context in which get_cpu_fpsimd_context() was
+ * previously called, with no call to put_cpu_fpsimd_context() in the
+ * meantime.
+ */
+static void put_cpu_fpsimd_context(void)
+{
+ __put_cpu_fpsimd_context();
+ preempt_enable();
+}
+
+static bool have_cpu_fpsimd_context(void)
+{
+ return !preemptible() && __this_cpu_read(fpsimd_context_busy);
+}
+
/*
* Call __sve_free() directly only if you know task can't be scheduled
* or preempted.
@@ -225,12 +276,10 @@ static void sve_free(struct task_struct *task)
* This function should be called only when the FPSIMD/SVE state in
* thread_struct is known to be up to date, when preparing to enter
* userspace.
- *
- * Softirqs (and preemption) must be disabled.
*/
static void task_fpsimd_load(void)
{
- WARN_ON(!in_softirq() && !irqs_disabled());
+ WARN_ON(!have_cpu_fpsimd_context());
if (system_supports_sve() && test_thread_flag(TIF_SVE))
sve_load_state(sve_pffr(&current->thread),
@@ -243,8 +292,6 @@ static void task_fpsimd_load(void)
/*
* Ensure FPSIMD/SVE storage in memory for the loaded context is up to
* date with respect to the CPU registers.
- *
- * Softirqs (and preemption) must be disabled.
*/
static void fpsimd_save(void)
{
@@ -252,7 +299,7 @@ static void fpsimd_save(void)
this_cpu_ptr(&fpsimd_last_state);
/* set by fpsimd_bind_task_to_cpu() or fpsimd_bind_state_to_cpu() */
- WARN_ON(!in_softirq() && !irqs_disabled());
+ WARN_ON(!have_cpu_fpsimd_context());
if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
@@ -357,7 +404,8 @@ static int __init sve_sysctl_init(void) { return 0; }
* task->thread.sve_state.
*
* Task can be a non-runnable task, or current. In the latter case,
- * softirqs (and preemption) must be disabled.
+ * the caller must have ownership of the cpu FPSIMD context before calling
+ * this function.
* task->thread.sve_state must point to at least sve_state_size(task)
* bytes of allocated kernel memory.
* task->thread.uw.fpsimd_state must be up to date before calling this
@@ -384,7 +432,8 @@ static void fpsimd_to_sve(struct task_struct *task)
* task->thread.uw.fpsimd_state.
*
* Task can be a non-runnable task, or current. In the latter case,
- * softirqs (and preemption) must be disabled.
+ * the caller must have ownership of the cpu FPSIMD context before calling
+ * this function.
* task->thread.sve_state must point to at least sve_state_size(task)
* bytes of allocated kernel memory.
* task->thread.sve_state must be up to date before calling this function.
@@ -544,7 +593,7 @@ int sve_set_vector_length(struct task_struct *task,
* non-SVE thread.
*/
if (task == current) {
- local_bh_disable();
+ get_cpu_fpsimd_context();
fpsimd_save();
}
@@ -554,7 +603,7 @@ int sve_set_vector_length(struct task_struct *task,
sve_to_fpsimd(task);
if (task == current)
- local_bh_enable();
+ put_cpu_fpsimd_context();
/*
* Force reallocation of task SVE state to the correct size
@@ -867,7 +916,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
sve_alloc(current);
- local_bh_disable();
+ get_cpu_fpsimd_context();
fpsimd_save();
@@ -878,7 +927,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
if (test_and_set_thread_flag(TIF_SVE))
WARN_ON(1); /* SVE access shouldn't have trapped */
- local_bh_enable();
+ put_cpu_fpsimd_context();
}
/*
@@ -922,6 +971,8 @@ void fpsimd_thread_switch(struct task_struct *next)
if (!system_supports_fpsimd())
return;
+ __get_cpu_fpsimd_context();
+
/* Save unsaved fpsimd state, if any: */
fpsimd_save();
@@ -936,6 +987,8 @@ void fpsimd_thread_switch(struct task_struct *next)
update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
wrong_task || wrong_cpu);
+
+ __put_cpu_fpsimd_context();
}
void fpsimd_flush_thread(void)
@@ -945,7 +998,7 @@ void fpsimd_flush_thread(void)
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ get_cpu_fpsimd_context();
fpsimd_flush_task_state(current);
memset(&current->thread.uw.fpsimd_state, 0,
@@ -986,7 +1039,7 @@ void fpsimd_flush_thread(void)
current->thread.sve_vl_onexec = 0;
}
- local_bh_enable();
+ put_cpu_fpsimd_context();
}
/*
@@ -998,9 +1051,9 @@ void fpsimd_preserve_current_state(void)
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ get_cpu_fpsimd_context();
fpsimd_save();
- local_bh_enable();
+ put_cpu_fpsimd_context();
}
/*
@@ -1017,7 +1070,8 @@ void fpsimd_signal_preserve_current_state(void)
/*
* Associate current's FPSIMD context with this cpu
- * Preemption must be disabled when calling this function.
+ * The caller must have ownership of the cpu FPSIMD context before calling
+ * this function.
*/
void fpsimd_bind_task_to_cpu(void)
{
@@ -1063,14 +1117,14 @@ void fpsimd_restore_current_state(void)
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ get_cpu_fpsimd_context();
if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
task_fpsimd_load();
fpsimd_bind_task_to_cpu();
}
- local_bh_enable();
+ put_cpu_fpsimd_context();
}
/*
@@ -1083,7 +1137,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ get_cpu_fpsimd_context();
current->thread.uw.fpsimd_state = *state;
if (system_supports_sve() && test_thread_flag(TIF_SVE))
@@ -1094,7 +1148,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
clear_thread_flag(TIF_FOREIGN_FPSTATE);
- local_bh_enable();
+ put_cpu_fpsimd_context();
}
/*
@@ -1120,7 +1174,8 @@ void fpsimd_flush_task_state(struct task_struct *t)
/*
* Invalidate any task's FPSIMD state that is present on this cpu.
- * This function must be called with softirqs disabled.
+ * The FPSIMD context should be acquired with get_cpu_fpsimd_context()
+ * before calling this function.
*/
static void fpsimd_flush_cpu_state(void)
{
@@ -1128,21 +1183,17 @@ static void fpsimd_flush_cpu_state(void)
set_thread_flag(TIF_FOREIGN_FPSTATE);
}
-/*
- * Save the FPSIMD state to memory and invalidate cpu view.
- * This function must be called with softirqs (and preemption) disabled.
- */
+/* Save the FPSIMD state to memory and invalidate cpu view. */
void fpsimd_save_and_flush_cpu_state(void)
{
+ get_cpu_fpsimd_context();
fpsimd_save();
fpsimd_flush_cpu_state();
+ put_cpu_fpsimd_context();
}
#ifdef CONFIG_KERNEL_MODE_NEON
-DEFINE_PER_CPU(bool, kernel_neon_busy);
-EXPORT_PER_CPU_SYMBOL(kernel_neon_busy);
-
/*
* Kernel-side NEON support functions
*/
@@ -1167,19 +1218,13 @@ void kernel_neon_begin(void)
BUG_ON(!may_use_simd());
- local_bh_disable();
-
- __this_cpu_write(kernel_neon_busy, true);
+ get_cpu_fpsimd_context();
/* Save unsaved fpsimd state, if any: */
fpsimd_save();
/* Invalidate any task state remaining in the fpsimd regs: */
fpsimd_flush_cpu_state();
-
- preempt_disable();
-
- local_bh_enable();
}
EXPORT_SYMBOL(kernel_neon_begin);
@@ -1194,15 +1239,10 @@ EXPORT_SYMBOL(kernel_neon_begin);
*/
void kernel_neon_end(void)
{
- bool busy;
-
if (!system_supports_fpsimd())
return;
- busy = __this_cpu_xchg(kernel_neon_busy, false);
- WARN_ON(!busy); /* No matching kernel_neon_begin()? */
-
- preempt_enable();
+ put_cpu_fpsimd_context();
}
EXPORT_SYMBOL(kernel_neon_end);
--
2.11.0
On Tue, Apr 23, 2019 at 02:57:18PM +0100, Julien Grall wrote:
>
> The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is
> the KVM FPSIMD code.
>
> A subsequent patch will introduce a mechanism to acquire ownership of the
> FPSIMD/SVE context for performing context management operations. Rather
> than having to export the new helpers to get/put the context, we can just
> introduce a new function to combine fpsimd_save() and
> fpsimd_flush_cpu_state().
>
> This also has the advantage of removing any external callers of fpsimd_save()
> and fpsimd_flush_cpu_state(), so they can be turned static.
>
> Lastly, the new function can also be used in the PM notifier.
>
> Signed-off-by: Julien Grall <[email protected]>
>
> ---
> kernel_neon_begin() does not use fpsimd_save_and_flush_cpu_state()
> because the next patch will modify the function to also grab the
> FPSIMD/SVE context.
>
> Changes in v3:
> - Rework the commit message
> - Move the prototype of fpsimd_save_and_flush_cpu_state()
> further down in the header
> - Remove comment in kvm_arch_vcpu_put_fp()
>
> Changes in v2:
> - Patch added
> ---
> arch/arm64/include/asm/fpsimd.h | 5 ++---
> arch/arm64/kernel/fpsimd.c | 17 +++++++++++++----
> arch/arm64/kvm/fpsimd.c | 4 +---
> 3 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index b73d12fcc7f9..c311d36ecffe 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -48,8 +48,6 @@ struct task_struct;
> extern void fpsimd_save_state(struct user_fpsimd_state *state);
> extern void fpsimd_load_state(struct user_fpsimd_state *state);
>
> -extern void fpsimd_save(void);
> -
> extern void fpsimd_thread_switch(struct task_struct *next);
> extern void fpsimd_flush_thread(void);
>
> @@ -63,7 +61,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
> void *sve_state, unsigned int sve_vl);
>
> extern void fpsimd_flush_task_state(struct task_struct *target);
> -extern void fpsimd_flush_cpu_state(void);
> +
> +extern void fpsimd_save_and_flush_cpu_state(void);
Nit: You could drop this blank line: these are all "state maintenance"
operations, roughly speaking.
But don't bother unless you respin for some other reason.
[...]
With or without that,
Reviewed-by: Dave Martin <[email protected]>
Cheers
---Dave
On Tue, Apr 23, 2019 at 02:57:19PM +0100, Julien Grall wrote:
> When the kernel is compiled with CONFIG_KERNEL_MODE_NEON, some parts of
> the kernel may be able to use FPSIMD/SVE. This is for instance the case
> for crypto code.
>
> Any use of FPSIMD/SVE in the kernel is clearly marked by using the
> functions kernel_neon_{begin, end}. Furthermore, these can only be used
> when may_use_simd() returns true.
>
> The current implementation of may_use_simd() allows a softirq to use
> FPSIMD/SVE unless it is currently in use (i.e. kernel_neon_busy is true).
> When it is in use, softirqs usually fall back to a software method.
>
> At the moment, as a softirq may use FPSIMD/SVE, softirqs are disabled
> when touching the FPSIMD/SVE context. This has the drawback of disabling
> all softirqs, even those that are not using FPSIMD/SVE.
>
> Since a softirq is supposed to check may_use_simd() anyway before
> attempting to use FPSIMD/SVE, there is limited reason to keep softirqs
> disabled when touching the FPSIMD/SVE context. Instead, we can simply
> disable preemption and mark the FPSIMD/SVE context as in use by setting
> the CPU's kernel_neon_busy flag.
>
> Introduce two new helpers, {get, put}_cpu_fpsimd_context, to mark the
> regions using the FPSIMD/SVE context and use them in place of
> local_bh_{disable, enable}. The functions kernel_neon_{begin, end} are
> also re-implemented to use the new helpers.
>
> For use in fpsimd_thread_switch(), which is a critical path where
> preemption is already disabled, double-underscored versions of the
> helpers are provided to avoid disabling preemption again.
>
> The change has been benchmarked on Linux 5.1-rc4 with defconfig.
>
> On Juno2:
> * hackbench 100 process 1000 (10 times)
> * 0.7% quicker
>
> On ThunderX 2:
> * hackbench 1000 process 1000 (20 times)
> * 3.4% quicker
>
> Signed-off-by: Julien Grall <[email protected]>
>
> ---
> Changes in v3:
> - Fix typos in the commit message
> - Rework a bit the commit message
> - Use imperative mood
> - Rename kernel_neon_busy to fpsimd_context_busy
> - Remove debug code
> - Update comments
> - Don't require preemption when calling fpsimd_save_and_flush_cpu_state()
>
> Changes in v2:
> - Remove spurious call to kernel_neon_enable in kernel_neon_begin.
> - Rename kernel_neon_{enable, disable} to {get, put}_cpu_fpsimd_context
> - Introduce a double-underscore version of the helpers for case
> where preemption is already disabled
> - Introduce have_cpu_fpsimd_context() and use it in WARN_ON(...)
> - Surround more places in the code with the new helpers
> - Rework the comments
> - Update the commit message with the benchmark result
> ---
> arch/arm64/include/asm/simd.h | 10 ++--
> arch/arm64/kernel/fpsimd.c | 126 ++++++++++++++++++++++++++++--------------
> 2 files changed, 88 insertions(+), 48 deletions(-)
>
> diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> index 6495cc51246f..a6307e43b8c2 100644
> --- a/arch/arm64/include/asm/simd.h
> +++ b/arch/arm64/include/asm/simd.h
> @@ -15,9 +15,9 @@
> #include <linux/preempt.h>
> #include <linux/types.h>
>
> -#ifdef CONFIG_KERNEL_MODE_NEON
> +DECLARE_PER_CPU(bool, fpsimd_context_busy);
>
> -DECLARE_PER_CPU(bool, kernel_neon_busy);
> +#ifdef CONFIG_KERNEL_MODE_NEON
>
> /*
> * may_use_simd - whether it is allowable at this time to issue SIMD
> @@ -29,15 +29,15 @@ DECLARE_PER_CPU(bool, kernel_neon_busy);
> static __must_check inline bool may_use_simd(void)
> {
> /*
> - * kernel_neon_busy is only set while preemption is disabled,
> + * fpsimd_context_busy is only set while preemption is disabled,
> * and is clear whenever preemption is enabled. Since
> - * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
> + * this_cpu_read() is atomic w.r.t. preemption, fpsimd_context_busy
> * cannot change under our feet -- if it's set we cannot be
> * migrated, and if it's clear we cannot be migrated to a CPU
> * where it is set.
> */
> return !in_irq() && !irqs_disabled() && !in_nmi() &&
> - !this_cpu_read(kernel_neon_busy);
> + !this_cpu_read(fpsimd_context_busy);
> }
>
> #else /* ! CONFIG_KERNEL_MODE_NEON */
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 5313aa257be6..6168d06bbd20 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -92,7 +92,8 @@
> * To prevent this from racing with the manipulation of the task's FPSIMD state
> * from task context and thereby corrupting the state, it is necessary to
> * protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
> - * flag with local_bh_disable() unless softirqs are already masked.
> + * flag with {, __}get_cpu_fpsimd_context(). This will still allow softirqs to
> + * run but prevent them from using FPSIMD.
> *
> * For a certain task, the sequence may look something like this:
> * - the task gets scheduled in; if both the task's fpsimd_cpu field
> @@ -155,6 +156,56 @@ extern void __percpu *efi_sve_state;
>
> #endif /* ! CONFIG_ARM64_SVE */
>
> +DEFINE_PER_CPU(bool, fpsimd_context_busy);
> +EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);
> +
> +static void __get_cpu_fpsimd_context(void)
> +{
> + bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
> +
> + WARN_ON(busy);
> +}
> +
> +/*
> + * Claim ownership of the CPU FPSIMD context for use by the calling context.
> + *
> + * The caller may freely modify FPSIMD context until *put_cpu_fpsimd_context()
> + * is called.
Nit: it may be better to say "freely manipulate the FPSIMD context
metadata".
get_cpu_fpsimd_context() isn't enough to allow the FPSIMD regs to be
safely trashed, because they may still contain live data (or an up to
date copy) for some task.
(For that you also need fpsimd_save_and_flush_cpu_state(), or just use
kernel_neon_begin() instead.)
[...]
> @@ -922,6 +971,8 @@ void fpsimd_thread_switch(struct task_struct *next)
> if (!system_supports_fpsimd())
> return;
>
> + __get_cpu_fpsimd_context();
> +
> /* Save unsaved fpsimd state, if any: */
> fpsimd_save();
>
> @@ -936,6 +987,8 @@ void fpsimd_thread_switch(struct task_struct *next)
>
> update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
> wrong_task || wrong_cpu);
> +
> + __put_cpu_fpsimd_context();
There should be a note in the commit message explaining why these are
here.
Are they actually needed, other than to keep
WARN_ON(have_cpu_fpsimd_context()) happy elsewhere?
Does PREEMPT_RT allow non-threaded softirqs to execute while we're in
this code?
OTOH, if the overall effect on performance remains positive, we can
probably argue that these operations make the code more self-describing
and help guard against mistakes during future maintanence, even if
they're not strictly needed today.
[...]
> -/*
> - * Save the FPSIMD state to memory and invalidate cpu view.
> - * This function must be called with softirqs (and preemption) disabled.
> - */
> +/* Save the FPSIMD state to memory and invalidate cpu view. */
> void fpsimd_save_and_flush_cpu_state(void)
> {
> + get_cpu_fpsimd_context();
> fpsimd_save();
> fpsimd_flush_cpu_state();
> + put_cpu_fpsimd_context();
> }
Again, are these added just to keep WARN_ON()s happy?
Now I look at the diff, I think after all that
WARN_ON(preemptible());
__get_cpu_fpsimd_context();
...
__put_cpu_fpsimd_context();
is preferable. The purpose of this function is to free up the FPSIMD
regs for use by the kernel, so it makes no sense to call it with
preemption enabled: the regs could spontaneously become live again due
to a context switch. So we shouldn't encourage misuse by making the
function "safe" to call with preemption enabled.
(In fact, this starts to look so much like kernel_neon_begin() that we
could perhaps merge the two together -- but let's not bother for now.
That will make noise due to renaming, so it's best to leave it for a
future patch.)
[...]
Also, have you tested this patch with CONFIG_KERNEL_MODE_NEON=n?
Otherwise, this looks like it's almost there.
Cheers
---Dave
On Thu, Apr 25, 2019 at 01:37:40PM +0100, Julien Grall wrote:
> Hi Dave,
>
> On 24/04/2019 14:17, Dave Martin wrote:
> >On Tue, Apr 23, 2019 at 02:57:18PM +0100, Julien Grall wrote:
> >>
> >>The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is
> >>the KVM FPSIMD code.
> >>
> >>A subsequent patch will introduce a mechanism to acquire ownership of the
> >>FPSIMD/SVE context for performing context management operations. Rather
> >>than having to export the new helpers to get/put the context, we can just
> >>introduce a new function to combine fpsimd_save() and
> >>fpsimd_flush_cpu_state().
> >>
> >>This also has the advantage of removing any external callers of fpsimd_save()
> >>and fpsimd_flush_cpu_state(), so they can be turned static.
> >>
> >>Lastly, the new function can also be used in the PM notifier.
> >>
> >>Signed-off-by: Julien Grall <[email protected]>
> >>
> >>---
> >> kernel_neon_begin() does not use fpsimd_save_and_flush_cpu_state()
> >> because the next patch will modify the function to also grab the
> >> FPSIMD/SVE context.
> >>
> >> Changes in v3:
> >> - Rework the commit message
> >> - Move the prototype of fpsimd_save_and_flush_cpu_state()
> >> further down in the header
> >> - Remove comment in kvm_arch_vcpu_put_fp()
> >>
> >> Changes in v2:
> >> - Patch added
> >>---
> >> arch/arm64/include/asm/fpsimd.h | 5 ++---
> >> arch/arm64/kernel/fpsimd.c | 17 +++++++++++++----
> >> arch/arm64/kvm/fpsimd.c | 4 +---
> >> 3 files changed, 16 insertions(+), 10 deletions(-)
> >>
> >>diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> >>index b73d12fcc7f9..c311d36ecffe 100644
> >>--- a/arch/arm64/include/asm/fpsimd.h
> >>+++ b/arch/arm64/include/asm/fpsimd.h
> >>@@ -48,8 +48,6 @@ struct task_struct;
> >> extern void fpsimd_save_state(struct user_fpsimd_state *state);
> >> extern void fpsimd_load_state(struct user_fpsimd_state *state);
> >>-extern void fpsimd_save(void);
> >>-
> >> extern void fpsimd_thread_switch(struct task_struct *next);
> >> extern void fpsimd_flush_thread(void);
> >>@@ -63,7 +61,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
> >> void *sve_state, unsigned int sve_vl);
> >> extern void fpsimd_flush_task_state(struct task_struct *target);
> >>-extern void fpsimd_flush_cpu_state(void);
> >>+
> >>+extern void fpsimd_save_and_flush_cpu_state(void);
> >
> >Nit: You could drop this blank line: these are all "state maintenance"
> >operations, roughly speaking.
> >
> >But don't bother unless you respin for some other reason.
>
> It looks like I will need to make some changes in patch #3. So I will remove
> the line at the same time.
OK, fair enough.
Cheers
---Dave
Hi Dave,
On 24/04/2019 14:17, Dave Martin wrote:
> On Tue, Apr 23, 2019 at 02:57:19PM +0100, Julien Grall wrote:
>> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
>> index 5313aa257be6..6168d06bbd20 100644
>> --- a/arch/arm64/kernel/fpsimd.c
>> +++ b/arch/arm64/kernel/fpsimd.c
>> @@ -92,7 +92,8 @@
>> * To prevent this from racing with the manipulation of the task's FPSIMD state
>> * from task context and thereby corrupting the state, it is necessary to
>> * protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
>> - * flag with local_bh_disable() unless softirqs are already masked.
>> + * flag with {, __}get_cpu_fpsimd_context(). This will still allow softirqs to
>> + * run but prevent them to use FPSIMD.
>> *
>> * For a certain task, the sequence may look something like this:
>> * - the task gets scheduled in; if both the task's fpsimd_cpu field
>> @@ -155,6 +156,56 @@ extern void __percpu *efi_sve_state;
>>
>> #endif /* ! CONFIG_ARM64_SVE */
>>
>> +DEFINE_PER_CPU(bool, fpsimd_context_busy);
>> +EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);
>> +
>> +static void __get_cpu_fpsimd_context(void)
>> +{
>> + bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
>> +
>> + WARN_ON(busy);
>> +}
>> +
>> +/*
>> + * Claim ownership of the CPU FPSIMD context for use by the calling context.
>> + *
>> + * The caller may freely modify FPSIMD context until *put_cpu_fpsimd_context()
>> + * is called.
>
> Nit: it may be better to say "freely manipulate the FPSIMD context
> metadata".
>
> get_cpu_fpsimd_context() isn't enough to allow the FPSIMD regs to be
> safely trashed, because they may still contain live data (or an up to
> date copy) for some task.
Good point, I will update the comment.
>
> (For that you also need fpsimd_save_and_flush_cpu_state(), or just use
> kernel_neon_begin() instead.)
>
> [...]
>
>> @@ -922,6 +971,8 @@ void fpsimd_thread_switch(struct task_struct *next)
>> if (!system_supports_fpsimd())
>> return;
>>
>> + __get_cpu_fpsimd_context();
>> +
>> /* Save unsaved fpsimd state, if any: */
>> fpsimd_save();
>>
>> @@ -936,6 +987,8 @@ void fpsimd_thread_switch(struct task_struct *next)
>>
>> update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
>> wrong_task || wrong_cpu);
>> +
>> + __put_cpu_fpsimd_context();
>
> There should be a note in the commit message explaining why these are
> here.
>
> Are they actually needed, other than to keep
> WARN_ON(have_cpu_fpsimd_context()) happy elsewhere?
It depends on how fpsimd_thread_switch() is called. I will answer more below.
>
> Does PREEMPT_RT allow non-threaded softirqs to execute while we're in
> this code?
This has nothing to do with PREEMPT_RT. Softirqs might be executed after
handling an interrupt (see irq_exit()).
A call to preempt_disable() will not be enough to prevent softirqs; you actually
need to either mask interrupts or have BH disabled.
fpsimd_thread_switch() seems to be only called from the context switch code.
AFAICT, interrupts will be masked. Therefore, holding the FPSIMD CPU context is
not necessary. However...
>
>
> OTOH, if the overall effect on performance remains positive, we can
> probably argue that these operations make the code more self-describing
> and help guard against mistakes during future maintenance, even if
> they're not strictly needed today.
.... I think it would help guard against mistakes. Moreover, I haven't seen any
performance impact in the benchmark.
[...]
>> -/*
>> - * Save the FPSIMD state to memory and invalidate cpu view.
>> - * This function must be called with softirqs (and preemption) disabled.
>> - */
>> +/* Save the FPSIMD state to memory and invalidate cpu view. */
>> void fpsimd_save_and_flush_cpu_state(void)
>> {
>> + get_cpu_fpsimd_context();
>> fpsimd_save();
>> fpsimd_flush_cpu_state();
>> + put_cpu_fpsimd_context();
>> }
>
> Again, are these added just to keep WARN_ON()s happy?
!preemptible() is not sufficient to prevent softirqs from running. You also need to
have either interrupts masked or BH disabled.
>
> Now I look at the diff, I think after all that
>
> WARN_ON(preemptible());
> __get_cpu_fpsimd_context();
>
> ...
>
> __put_cpu_fpsimd_context();
>
> is preferable. The purpose of this function is to free up the FPSIMD
> regs for use by the kernel, so it makes no sense to call it with
> preemption enabled: the regs could spontaneously become live again due
> to a context switch. So we shouldn't encourage misuse by making the
> function "safe" to call with preemption enabled.
Ok, I will switch back to the underscore version and add a WARN_ON(...).
>
> [...]
>
> Also, have you tested this patch with CONFIG_KERNEL_MODE_NEON=n?
AFAICT, CONFIG_KERNEL_MODE_NEON has always been turned on by default on arm64.
I will have a look at hacking Kconfig and see if it still builds.
Cheers,
--
Julien Grall
On Thu, Apr 25, 2019 at 04:57:26PM +0100, Julien Grall wrote:
> Hi Dave,
>
> On 24/04/2019 14:17, Dave Martin wrote:
> >On Tue, Apr 23, 2019 at 02:57:19PM +0100, Julien Grall wrote:
> >>diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> >>index 5313aa257be6..6168d06bbd20 100644
> >>--- a/arch/arm64/kernel/fpsimd.c
> >>+++ b/arch/arm64/kernel/fpsimd.c
> >>@@ -92,7 +92,8 @@
> >> * To prevent this from racing with the manipulation of the task's FPSIMD state
> >> * from task context and thereby corrupting the state, it is necessary to
> >> * protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
> >>- * flag with local_bh_disable() unless softirqs are already masked.
> >>+ * flag with {, __}get_cpu_fpsimd_context(). This will still allow softirqs to
> >>+ * run but prevent them to use FPSIMD.
> >> *
> >> * For a certain task, the sequence may look something like this:
> >> * - the task gets scheduled in; if both the task's fpsimd_cpu field
> >>@@ -155,6 +156,56 @@ extern void __percpu *efi_sve_state;
> >> #endif /* ! CONFIG_ARM64_SVE */
> >>+DEFINE_PER_CPU(bool, fpsimd_context_busy);
> >>+EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);
> >>+
> >>+static void __get_cpu_fpsimd_context(void)
> >>+{
> >>+ bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
> >>+
> >>+ WARN_ON(busy);
> >>+}
> >>+
> >>+/*
> >>+ * Claim ownership of the CPU FPSIMD context for use by the calling context.
> >>+ *
> >>+ * The caller may freely modify FPSIMD context until *put_cpu_fpsimd_context()
> >>+ * is called.
> >
> >Nit: it may be better to say "freely manipulate the FPSIMD context
> >metadata".
> >
> >get_cpu_fpsimd_context() isn't enough to allow the FPSIMD regs to be
> >safely trashed, because they may still contain live data (or an up to
> >date copy) for some task.
>
> Good point, I will update the comment.
>
> >
> >(For that you also need fpsimd_save_and_flush_cpu_state(), or just use
> >kernel_neon_begin() instead.)
> >
> >[...]
> >
> >>@@ -922,6 +971,8 @@ void fpsimd_thread_switch(struct task_struct *next)
> >> if (!system_supports_fpsimd())
> >> return;
> >>+ __get_cpu_fpsimd_context();
> >>+
> >> /* Save unsaved fpsimd state, if any: */
> >> fpsimd_save();
> >>@@ -936,6 +987,8 @@ void fpsimd_thread_switch(struct task_struct *next)
> >> update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
> >> wrong_task || wrong_cpu);
> >>+
> >>+ __put_cpu_fpsimd_context();
> >
> >There should be a note in the commit message explaining why these are
> >here.
> >
> >Are they actually needed, other than to keep
> >WARN_ON(have_cpu_fpsimd_context()) happy elsewhere?
>
> It depends on how fpsimd_thread_switch() is called. I will answer more below.
>
> >
> >Does PREEMPT_RT allow non-threaded softirqs to execute while we're in
> >this code?
>
> This has nothing to do with PREEMPT_RT. Softirqs might be executed after
> handling an interrupt (see irq_exit()).
>
> A call to preempt_disable() will not be enough to prevent softirqs; you
> actually need to either mask interrupts or have BH disabled.
>
> fpsimd_thread_switch() seems to be only called from the context switch code.
> AFAICT, interrupts will be masked. Therefore, holding the FPSIMD CPU context is
> not necessary. However...
>
> >
> >
> >OTOH, if the overall effect on performance remains positive, we can
> >probably argue that these operations make the code more self-describing
> >and help guard against mistakes during future maintenance, even if
> >they're not strictly needed today.
>
> .... I think it would help guard against mistakes. Moreover, I haven't seen
> any performance impact in the benchmark.
Which generally seems a good thing. The commit message should explain
that these are being added for hygiene rather than necessity here,
though.
> [...]
>
> >>-/*
> >>- * Save the FPSIMD state to memory and invalidate cpu view.
> >>- * This function must be called with softirqs (and preemption) disabled.
> >>- */
> >>+/* Save the FPSIMD state to memory and invalidate cpu view. */
> >> void fpsimd_save_and_flush_cpu_state(void)
> >> {
> >>+ get_cpu_fpsimd_context();
> >> fpsimd_save();
> >> fpsimd_flush_cpu_state();
> >>+ put_cpu_fpsimd_context();
> >> }
> >
> >Again, are these added just to keep WARN_ON()s happy?
>
> !preemptible() is not sufficient to prevent softirqs from running. You also need
> to have either interrupts masked or BH disabled.
So, why was the code safe before this series? (In fact, _was_ it safe?)
AFAICT, we have local_irq_disable() around context switch, which covers
preempt notifiers (where kvm_arch_vcpu_put_fp() gets called) and
fpsimd_thread_switch(): this is what prevents softirqs from firing.
So, while it's clean to have get/put here, I still don't see why they're
required.
I think the arguments are basically similar to fpsimd_thread_switch().
Since fpsimd_save_and_flush_cpu_state() and fpsimd_thread_switch() are
called from similar contexts, it makes sense to keep them aligned.
> >Now I look at the diff, I think after all that
> >
> > WARN_ON(preemptible());
> > __get_cpu_fpsimd_context();
> >
> > ...
> >
> > __put_cpu_fpsimd_context();
> >
> >is preferable. The purpose of this function is to free up the FPSIMD
> >regs for use by the kernel, so it makes no sense to call it with
> >preemption enabled: the regs could spontaneously become live again due
> >to a context switch. So we shouldn't encourage misuse by making the
> >function "safe" to call with preemption enabled.
>
> Ok, I will switch back to the underscore version and add a WARN_ON(...).
Thanks.
[...]
Cheers
---Dave
Hi Dave,
On 25/04/2019 17:39, Dave Martin wrote:
> On Thu, Apr 25, 2019 at 04:57:26PM +0100, Julien Grall wrote:
>> Hi Dave,
>>
>> On 24/04/2019 14:17, Dave Martin wrote:
>>> On Tue, Apr 23, 2019 at 02:57:19PM +0100, Julien Grall wrote:
>>>> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
>>>> index 5313aa257be6..6168d06bbd20 100644
>>>> --- a/arch/arm64/kernel/fpsimd.c
>>>> +++ b/arch/arm64/kernel/fpsimd.c
>>>> @@ -92,7 +92,8 @@
>>>> * To prevent this from racing with the manipulation of the task's FPSIMD state
>>>> * from task context and thereby corrupting the state, it is necessary to
>>>> * protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
>>>> - * flag with local_bh_disable() unless softirqs are already masked.
>>>> + * flag with {, __}get_cpu_fpsimd_context(). This will still allow softirqs to
>>>> + * run but prevent them to use FPSIMD.
>>>> *
>>>> * For a certain task, the sequence may look something like this:
>>>> * - the task gets scheduled in; if both the task's fpsimd_cpu field
>>>> @@ -155,6 +156,56 @@ extern void __percpu *efi_sve_state;
>>>> #endif /* ! CONFIG_ARM64_SVE */
>>>> +DEFINE_PER_CPU(bool, fpsimd_context_busy);
>>>> +EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);
>>>> +
>>>> +static void __get_cpu_fpsimd_context(void)
>>>> +{
>>>> + bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
>>>> +
>>>> + WARN_ON(busy);
>>>> +}
>>>> +
>>>> +/*
>>>> + * Claim ownership of the CPU FPSIMD context for use by the calling context.
>>>> + *
>>>> + * The caller may freely modify FPSIMD context until *put_cpu_fpsimd_context()
>>>> + * is called.
>>>
>>> Nit: it may be better to say "freely manipulate the FPSIMD context
>>> metadata".
>>>
>>> get_cpu_fpsimd_context() isn't enough to allow the FPSIMD regs to be
>>> safely trashed, because they may still contain live data (or an up to
>>> date copy) for some task.
>>
>> Good point, I will update the comment.
>>
>>>
>>> (For that you also need fpsimd_save_and_flush_cpu_state(), or just use
>>> kernel_neon_begin() instead.)
>>>
>>> [...]
>>>
>>>> @@ -922,6 +971,8 @@ void fpsimd_thread_switch(struct task_struct *next)
>>>> if (!system_supports_fpsimd())
>>>> return;
>>>> + __get_cpu_fpsimd_context();
>>>> +
>>>> /* Save unsaved fpsimd state, if any: */
>>>> fpsimd_save();
>>>> @@ -936,6 +987,8 @@ void fpsimd_thread_switch(struct task_struct *next)
>>>> update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
>>>> wrong_task || wrong_cpu);
>>>> +
>>>> + __put_cpu_fpsimd_context();
>>>
>>> There should be a note in the commit message explaining why these are
>>> here.
>>>
>>> Are they actually needed, other than to keep
>>> WARN_ON(have_cpu_fpsimd_context()) happy elsewhere?
>>
>> It depends on how fpsimd_thread_switch() is called. I will answer more below.
>>
>>>
>>> Does PREEMPT_RT allow non-threaded softirqs to execute while we're in
>>> this code?
>>
>> This has nothing to do with PREEMPT_RT. Softirqs might be executed after
>> handling an interrupt (see irq_exit()).
>>
>> A call to preempt_disable() will not be enough to prevent softirqs; you
>> actually need to either mask interrupts or have BH disabled.
>>
>> fpsimd_thread_switch() seems to be only called from the context switch code.
>> AFAICT, interrupts will be masked. Therefore, holding the FPSIMD CPU context is
>> not necessary. However...
>>
>>>
>>>
>>> OTOH, if the overall effect on performance remains positive, we can
>>> probably argue that these operations make the code more self-describing
>>> and help guard against mistakes during future maintenance, even if
>>> they're not strictly needed today.
>>
>> .... I think it would help guard against mistakes. Moreover, I haven't seen
>> any performance impact in the benchmark.
>
> Which generally seems a good thing. The commit message should explain
> that these are being added for hygiene rather than necessity here,
> though.
I will update the commit message.
>
>> [...]
>>
>>>> -/*
>>>> - * Save the FPSIMD state to memory and invalidate cpu view.
>>>> - * This function must be called with softirqs (and preemption) disabled.
>>>> - */
>>>> +/* Save the FPSIMD state to memory and invalidate cpu view. */
>>>> void fpsimd_save_and_flush_cpu_state(void)
>>>> {
>>>> + get_cpu_fpsimd_context();
>>>> fpsimd_save();
>>>> fpsimd_flush_cpu_state();
>>>> + put_cpu_fpsimd_context();
>>>> }
>>>
>>> Again, are these added just to keep WARN_ON()s happy?
>>
>> !preemptible() is not sufficient to prevent softirqs from running. You also need
>> to have either interrupts masked or BH disabled.
>
> So, why was the code safe before this series? (In fact, _was_ it safe?)
>
> AFAICT, we have local_irq_disable() around context switch, which covers
> preempt notifiers (where kvm_arch_vcpu_put_fp() gets called) and
> fpsimd_thread_switch(): this is what prevents softirqs from firing.
That's correct, both callers of the function will have IRQs masked. Also, the
function fpsimd_save() contained:
WARN_ON(!in_softirq() && !irqs_disabled());
So we were covered in case of misuse.
>
> So, while it's clean to have get/put here, I still don't see why they're
> required.
>
> I think the arguments are basically similar to fpsimd_thread_switch().
> Since fpsimd_save_and_flush_cpu_state() and fpsimd_thread_switch() are
> called from similar contexts, it makes sense to keep them aligned.
Yes, this is for hygiene rather than a real bug (though the WARN_ON() in
fpsimd_save() would fire). I will update the message accordingly.
Cheers,
--
Julien Grall
Hi Dave,
On 24/04/2019 14:17, Dave Martin wrote:
> On Tue, Apr 23, 2019 at 02:57:18PM +0100, Julien Grall wrote:
>>
>> The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is
>> the KVM FPSIMD code.
>>
>> A following patch will introduce a mechanism to acquire ownership of the
>> FPSIMD/SVE context for performing context management operations. Rather
>> than having to export the new helpers to get/put the context, we can just
>> introduce a new function to combine fpsimd_save() and
>> fpsimd_flush_cpu_state().
>>
>> This also has the advantage of removing any external calls of fpsimd_save()
>> and fpsimd_flush_cpu_state(), so they can be turned static.
>>
>> Lastly, the new function can also be used in the PM notifier.
>>
>> Signed-off-by: Julien Grall <[email protected]>
>>
>> ---
>> kernel_neon_begin() does not use fpsimd_save_and_flush_cpu_state()
>> because the next patch will modify the function to also grab the
>> FPSIMD/SVE context.
>>
>> Changes in v3:
>> - Rework the commit message
>> - Move the prototype of fpsimd_save_and_flush_cpu_state()
>> further down in the header
>> - Remove comment in kvm_arch_vcpu_put_fp()
>>
>> Changes in v2:
>> - Patch added
>> ---
>> arch/arm64/include/asm/fpsimd.h | 5 ++---
>> arch/arm64/kernel/fpsimd.c | 17 +++++++++++++----
>> arch/arm64/kvm/fpsimd.c | 4 +---
>> 3 files changed, 16 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
>> index b73d12fcc7f9..c311d36ecffe 100644
>> --- a/arch/arm64/include/asm/fpsimd.h
>> +++ b/arch/arm64/include/asm/fpsimd.h
>> @@ -48,8 +48,6 @@ struct task_struct;
>> extern void fpsimd_save_state(struct user_fpsimd_state *state);
>> extern void fpsimd_load_state(struct user_fpsimd_state *state);
>>
>> -extern void fpsimd_save(void);
>> -
>> extern void fpsimd_thread_switch(struct task_struct *next);
>> extern void fpsimd_flush_thread(void);
>>
>> @@ -63,7 +61,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
>> void *sve_state, unsigned int sve_vl);
>>
>> extern void fpsimd_flush_task_state(struct task_struct *target);
>> -extern void fpsimd_flush_cpu_state(void);
>> +
>> +extern void fpsimd_save_and_flush_cpu_state(void);
>
> Nit: You could drop this blank line: these are all "state maintenance"
> operations, roughly speaking.
>
> But don't bother unless you respin for some other reason.
It looks like I will need to make some changes in patch #3. So I will remove the
line at the same time.
>
> [...]
>
> With or without that,
>
> Reviewed-by: Dave Martin <[email protected]>
Thank you!
Cheers,
--
Julien Grall
On Thu, Apr 25, 2019 at 06:12:59PM +0100, Julien Grall wrote:
> Hi Dave,
>
> On 25/04/2019 17:39, Dave Martin wrote:
> >On Thu, Apr 25, 2019 at 04:57:26PM +0100, Julien Grall wrote:
> >>Hi Dave,
> >>
> >>On 24/04/2019 14:17, Dave Martin wrote:
> >>>On Tue, Apr 23, 2019 at 02:57:19PM +0100, Julien Grall wrote:
[...]
> >>>(For that you also need fpsimd_save_and_flush_cpu_state(), or just use
> >>>kernel_neon_begin() instead.)
> >>>
> >>>[...]
> >>>
> >>>>@@ -922,6 +971,8 @@ void fpsimd_thread_switch(struct task_struct *next)
> >>>> if (!system_supports_fpsimd())
> >>>> return;
> >>>>+ __get_cpu_fpsimd_context();
> >>>>+
> >>>> /* Save unsaved fpsimd state, if any: */
> >>>> fpsimd_save();
> >>>>@@ -936,6 +987,8 @@ void fpsimd_thread_switch(struct task_struct *next)
> >>>> update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
> >>>> wrong_task || wrong_cpu);
> >>>>+
> >>>>+ __put_cpu_fpsimd_context();
> >>>
> >>>There should be a note in the commit message explaining why these are
> >>>here.
> >>>
> >>>Are they actually needed, other than to keep
> >>>WARN_ON(have_cpu_fpsimd_context()) happy elsewhere?
> >>
> >>It depends on how fpsimd_thread_switch() is called. I will answer more below.
> >>
> >>>
> >>>Does PREEMPT_RT allow non-threaded softirqs to execute while we're in
> >>>this code?
> >>
> >>This has nothing to do with PREEMPT_RT. Softirqs might be executed after
> >>handling an interrupt (see irq_exit()).
> >>
> >>A call to preempt_disable() will not be enough to prevent softirqs; you
> >>actually need to either mask interrupts or have BH disabled.
> >>
> >>fpsimd_thread_switch() seems to be only called from the context switch code.
> >>AFAICT, interrupts will be masked. Therefore, holding the FPSIMD CPU context is
> >>not necessary. However...
> >>
> >>>
> >>>
> >>>OTOH, if the overall effect on performance remains positive, we can
> >>>probably argue that these operations make the code more self-describing
> >>>and help guard against mistakes during future maintenance, even if
> >>>they're not strictly needed today.
> >>
> >>.... I think it would help guard against mistakes. Moreover, I haven't seen
> >>any performance impact in the benchmark.
> >
> >Which generally seems a good thing. The commit message should explain
> >that these are being added for hygiene rather than necessity here,
> >though.
>
> I will update the commit message.
Thanks
[...]
> >>>>-/*
> >>>>- * Save the FPSIMD state to memory and invalidate cpu view.
> >>>>- * This function must be called with softirqs (and preemption) disabled.
> >>>>- */
> >>>>+/* Save the FPSIMD state to memory and invalidate cpu view. */
> >>>> void fpsimd_save_and_flush_cpu_state(void)
> >>>> {
> >>>>+ get_cpu_fpsimd_context();
> >>>> fpsimd_save();
> >>>> fpsimd_flush_cpu_state();
> >>>>+ put_cpu_fpsimd_context();
> >>>> }
> >>>
> >>>Again, are these added just to keep WARN_ON()s happy?
> >>
> >>!preemptible() is not sufficient to prevent softirqs from running. You also need
> >>to have either interrupts masked or BH disabled.
> >
> >So, why was the code safe before this series? (In fact, _was_ it safe?)
> >
> >AFAICT, we have local_irq_disable() around context switch, which covers
> >preempt notifiers (where kvm_arch_vcpu_put_fp() gets called) and
> >fpsimd_thread_switch(): this is what prevents softirqs from firing.
>
> That's correct, both callers of the function will have IRQs masked. Also,
> the function fpsimd_save() contained:
> WARN_ON(!in_softirq() && !irqs_disabled());
>
> So we were covered in case of misuse.
OK -- my main worry here was that there might be a bug in mainline.
Adding protection where none existed previously raises the question of
whether we're papering over an existing bug, so it's good to mention in
the commit message if this is not the case.
> >So, while it's clean to have get/put here, I still don't see why they're
> >required.
> >
> >I think the arguments are basically similar to fpsimd_thread_switch().
> >Since fpsimd_save_and_flush_cpu_state() and fpsimd_thread_switch() are
> >called from similar contexts, it makes sense to keep them aligned.
>
> Yes, this is for hygiene rather than a real bug (though the WARN_ON() in
> fpsimd_save() would fire). I will update the message accordingly.
OK, thanks
---Dave