Whilst a PR_SET_FP_MODE prctl is performed, decisions are made based
upon whether the task is executing on the current CPU. This may
change if we're preempted, so disable preemption to avoid such changes
for the lifetime of the mode switch.
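The hazard follows the familiar pattern sketched below. This is a
minimal illustration only (fp_mode_op and its body are hypothetical,
not part of this patch): a comparison against current is only stable
for as long as we cannot be preempted.

  #include <linux/preempt.h>
  #include <linux/sched.h>

  /* Hypothetical helper illustrating the pattern being fixed. */
  static void fp_mode_op(struct task_struct *task)
  {
          /*
           * Without this, preemption between the check below and any
           * work depending upon it could migrate us to another CPU,
           * or schedule "task" in elsewhere, invalidating the result
           * of the comparison.
           */
          preempt_disable();

          if (task == current) {
                  /* "task" is known to be running on this CPU here. */
          }

          preempt_enable();
  }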
Signed-off-by: Paul Burton <[email protected]>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Cc: stable <[email protected]> # v4.0+
---
arch/mips/kernel/process.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 92880ce..ce55ea0 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -601,6 +601,9 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
if (!(value & PR_FP_MODE_FR) && cpu_has_fpu && cpu_has_mips_r6)
return -EOPNOTSUPP;
+ /* Proceed with the mode switch */
+ preempt_disable();
+
/* Save FP & vector context, then disable FPU & MSA */
if (task->signal == current->signal)
lose_fpu(1);
@@ -659,6 +662,7 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
/* Allow threads to use FP again */
atomic_set(&task->mm->context.fp_mode_switching, 0);
+ preempt_enable();
return 0;
}
--
2.8.0
Commit 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options
for MIPS") added support for the PR_SET_FP_MODE prctl, which allows a
userland program to modify its FP mode at runtime. This is most notably
required if dynamic linking leads to the FP mode requirement changing at
runtime from that indicated in the initial executable's ELF header. In
order to avoid overhead in the general FP context restore code, it aimed
to have threads in the process become unable to enable the FPU during a
mode switch & have the thread calling the prctl syscall wait for all
other threads in the process to be context switched at least once. Once
that happens we can know that no thread in the process whose mode will
be switched has live FP context, and it's safe to perform the mode
switch. However, in the (rare) case of mode switches occurring in
multithreaded programs, this can lead to indeterminate delays for the
thread invoking the prctl syscall, and the code monitoring for those
context switches was woefully inadequate for all but the simplest cases.
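For reference, a userland caller exercises this interface roughly as
follows; this is a minimal sketch with error handling elided, assuming
a MIPS kernel and libc headers exposing the PR_[GS]ET_FP_MODE options
(fallback definitions are provided for older headers):

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_FP_MODE
  # define PR_SET_FP_MODE 45
  # define PR_GET_FP_MODE 46
  # define PR_FP_MODE_FR  (1 << 0)
  # define PR_FP_MODE_FRE (1 << 1)
  #endif

  int main(void)
  {
          int mode = prctl(PR_GET_FP_MODE, 0, 0, 0, 0);

          printf("FR=%d FRE=%d\n",
                 !!(mode & PR_FP_MODE_FR), !!(mode & PR_FP_MODE_FRE));

          /* Request FR=1; the switch affects every thread in the process. */
          if (prctl(PR_SET_FP_MODE, PR_FP_MODE_FR, 0, 0, 0))
                  perror("PR_SET_FP_MODE");

          return 0;
  }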
Fix this by broadcasting an IPI if other CPUs may have live FP context
for an affected thread, with a handler causing those CPUs to relinquish
their FPU ownership. Threads will then be allowed to continue running
but will stall on the wait_on_atomic_t in enable_restore_fp_context if
they attempt to use FP again whilst the mode switch is still in
progress. The end result is less fragile poking at scheduler context
switch counts & a more expedient completion of the mode switch.
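The stall point mentioned above is the existing wait in
enable_restore_fp_context; a simplified sketch of that path follows
(the wrapper maybe_stall_for_mode_switch is hypothetical, and details
may differ from arch/mips/kernel/traps.c):

  #include <linux/atomic.h>
  #include <linux/sched.h>
  #include <linux/wait.h>

  static int wait_on_fp_mode_switch(atomic_t *p)
  {
          /*
           * The FP mode for this mm is currently being switched, so it
           * is unsafe to touch FP context. Run something else instead.
           */
          schedule();
          return 0;
  }

  /* Hypothetical wrapper showing where threads block during a switch. */
  static void maybe_stall_for_mode_switch(struct mm_struct *mm)
  {
          /* Sleep until mips_set_process_fp_mode() clears the counter. */
          if (atomic_read(&mm->context.fp_mode_switching))
                  wait_on_atomic_t(&mm->context.fp_mode_switching,
                                   wait_on_fp_mode_switch, TASK_KILLABLE);
  }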
Signed-off-by: Paul Burton <[email protected]>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Cc: stable <[email protected]> # v4.0+
---
arch/mips/kernel/process.c | 40 +++++++++++++++++-----------------------
1 file changed, 17 insertions(+), 23 deletions(-)
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index ce55ea0..e1b36a4 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -580,11 +580,19 @@ int mips_get_process_fp_mode(struct task_struct *task)
return value;
}
+static void prepare_for_fp_mode_switch(void *info)
+{
+ struct mm_struct *mm = info;
+
+ if (current->mm == mm)
+ lose_fpu(1);
+}
+
int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
{
const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE;
- unsigned long switch_count;
struct task_struct *t;
+ int max_users;
/* Check the value is valid */
if (value & ~known_bits)
@@ -613,31 +621,17 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
smp_mb__after_atomic();
/*
- * If there are multiple online CPUs then wait until all threads whose
- * FP mode is about to change have been context switched. This approach
- * allows us to only worry about whether an FP mode switch is in
- * progress when FP is first used in a tasks time slice. Pretty much all
- * of the mode switch overhead can thus be confined to cases where mode
- * switches are actually occurring. That is, to here. However for the
- * thread performing the mode switch it may take a while...
+ * If there are multiple online CPUs then force any which are running
+ * threads in this process to lose their FPU context, which they can't
+ * regain until fp_mode_switching is cleared later.
*/
if (num_online_cpus() > 1) {
- spin_lock_irq(&task->sighand->siglock);
-
- for_each_thread(task, t) {
- if (t == current)
- continue;
-
- switch_count = t->nvcsw + t->nivcsw;
-
- do {
- spin_unlock_irq(&task->sighand->siglock);
- cond_resched();
- spin_lock_irq(&task->sighand->siglock);
- } while ((t->nvcsw + t->nivcsw) == switch_count);
- }
+ /* No need to send an IPI for the local CPU */
+ max_users = (task->mm == current->mm) ? 1 : 0;
- spin_unlock_irq(&task->sighand->siglock);
+		if (atomic_read(&current->mm->mm_users) > max_users)
+ smp_call_function(prepare_for_fp_mode_switch,
+ (void *)current->mm, 1);
}
/*
--
2.8.0
On Thu, 21 Apr 2016, Paul Burton wrote:
> Whilst a PR_SET_FP_MODE prctl is performed, decisions are made based
> upon whether the task is executing on the current CPU. This may
> change if we're preempted, so disable preemption to avoid such changes
> for the lifetime of the mode switch.
Reviewed-by: Maciej W. Rozycki <[email protected]>
Thanks!
Maciej
On Thu, 21 Apr 2016, Paul Burton wrote:
> Fix this by broadcasting an IPI if other CPUs may have live FP context
> for an affected thread, with a handler causing those CPUs to relinquish
> their FPU ownership. Threads will then be allowed to continue running
> but will stall on the wait_on_atomic_t in enable_restore_fp_context if
> they attempt to use FP again whilst the mode switch is still in
> progress. The end result is less fragile poking at scheduler context
> switch counts & a more expedient completion of the mode switch.
Reviewed-by: Maciej W. Rozycki <[email protected]>
Thanks, Paul, for your work on this problem! I'll rebase my outstanding
NaN interlinking stuff (https://patchwork.linux-mips.org/patch/11491/) on
top of your patches -- they address the concern expressed there.
Maciej