2015-04-02 17:39:34

by Frederic Weisbecker

Subject: [PATCH 0/3] context_tracking: A few improvements

Hi,

A few updates that I plan to push in a few days:

_ Fix recursion issues (rare crashes reported)
_ Optimize TIF_NOHZ propagation
_ Some __init tags

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
nohz/core

HEAD: bf7d3e11e99120f6c435cc709842e9b82431c003

Thanks,
Frederic
---

Frederic Weisbecker (3):
context_tracking: Protect against recursion
context_tracking: Inherit TIF_NOHZ through forks instead of context switches
context_tracking: Tag init code


include/linux/context_tracking.h | 10 -----
include/linux/context_tracking_state.h | 1 +
kernel/context_tracking.c | 76 ++++++++++++++++++++++++----------
kernel/sched/core.c | 1 -
4 files changed, 55 insertions(+), 33 deletions(-)


2015-04-02 17:39:38

by Frederic Weisbecker

Subject: [PATCH 1/3] context_tracking: Protect against recursion

Context tracking recursion can happen when an exception triggers in the
middle of a call to a context tracking probe.

This special case can be caused by vmalloc faults. If an access to a
memory area allocated by vmalloc happens in the middle of
context_tracking_enter(), we may run into an endless fault loop because
the exception in turn calls context_tracking_enter() which faults on
the same vmalloc'ed memory, triggering an exception again, etc...

Some rare crashes have been reported, so let's protect against this
with a recursion counter.

Reported-by: Dave Jones <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Dave Jones <[email protected]>
Cc: Oleg Nesterov <[email protected]>
---
include/linux/context_tracking_state.h | 1 +
kernel/context_tracking.c | 30 ++++++++++++++++++++++++++++++
2 files changed, 31 insertions(+)

diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
index 6b7b96a..678ecdf 100644
--- a/include/linux/context_tracking_state.h
+++ b/include/linux/context_tracking_state.h
@@ -12,6 +12,7 @@ struct context_tracking {
* may be further optimized using static keys.
*/
bool active;
+ int recursion;
enum ctx_state {
CONTEXT_KERNEL = 0,
CONTEXT_USER,
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 72d59a1..b9e0b4f 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -38,6 +38,25 @@ void context_tracking_cpu_set(int cpu)
}
}

+static bool context_tracking_recursion_enter(void)
+{
+ int recursion;
+
+ recursion = __this_cpu_inc_return(context_tracking.recursion);
+ if (recursion == 1)
+ return true;
+
+ WARN_ONCE((recursion < 1), "Invalid context tracking recursion value %d\n", recursion);
+ __this_cpu_dec(context_tracking.recursion);
+
+ return false;
+}
+
+static void context_tracking_recursion_exit(void)
+{
+ __this_cpu_dec(context_tracking.recursion);
+}
+
/**
* context_tracking_enter - Inform the context tracking that the CPU is going
* enter user or guest space mode.
@@ -75,6 +94,11 @@ void context_tracking_enter(enum ctx_state state)
WARN_ON_ONCE(!current->mm);

local_irq_save(flags);
+ if (!context_tracking_recursion_enter()) {
+ local_irq_restore(flags);
+ return;
+ }
+
if ( __this_cpu_read(context_tracking.state) != state) {
if (__this_cpu_read(context_tracking.active)) {
/*
@@ -105,6 +129,7 @@ void context_tracking_enter(enum ctx_state state)
*/
__this_cpu_write(context_tracking.state, state);
}
+ context_tracking_recursion_exit();
local_irq_restore(flags);
}
NOKPROBE_SYMBOL(context_tracking_enter);
@@ -139,6 +164,10 @@ void context_tracking_exit(enum ctx_state state)
return;

local_irq_save(flags);
+ if (!context_tracking_recursion_enter()) {
+ local_irq_restore(flags);
+ return;
+ }
if (__this_cpu_read(context_tracking.state) == state) {
if (__this_cpu_read(context_tracking.active)) {
/*
@@ -153,6 +182,7 @@ void context_tracking_exit(enum ctx_state state)
}
__this_cpu_write(context_tracking.state, CONTEXT_KERNEL);
}
+ context_tracking_recursion_exit();
local_irq_restore(flags);
}
NOKPROBE_SYMBOL(context_tracking_exit);
--
2.1.4

2015-04-02 17:40:42

by Frederic Weisbecker

Subject: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

TIF_NOHZ is used by context_tracking to force syscall slow-path on every
task in order to track userspace roundtrips. As such, it must be set on
all running tasks.

It's currently explicitly inherited through context switches. There is
no need to do it on this fast-path though. The flag can simply be set
once and for all on all tasks, whether they are running or not.

Let's do this by setting the flag on the init task at early boot and
letting it propagate through fork inheritance.

Suggested-by: Oleg Nesterov <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Dave Jones <[email protected]>
Cc: Oleg Nesterov <[email protected]>
---
include/linux/context_tracking.h | 10 --------
kernel/context_tracking.c | 52 +++++++++++++++++++++-------------------
kernel/sched/core.c | 1 -
3 files changed, 27 insertions(+), 36 deletions(-)

diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index 2821838..b96bd29 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -14,8 +14,6 @@ extern void context_tracking_enter(enum ctx_state state);
extern void context_tracking_exit(enum ctx_state state);
extern void context_tracking_user_enter(void);
extern void context_tracking_user_exit(void);
-extern void __context_tracking_task_switch(struct task_struct *prev,
- struct task_struct *next);

static inline void user_enter(void)
{
@@ -51,19 +49,11 @@ static inline void exception_exit(enum ctx_state prev_ctx)
}
}

-static inline void context_tracking_task_switch(struct task_struct *prev,
- struct task_struct *next)
-{
- if (context_tracking_is_enabled())
- __context_tracking_task_switch(prev, next);
-}
#else
static inline void user_enter(void) { }
static inline void user_exit(void) { }
static inline enum ctx_state exception_enter(void) { return 0; }
static inline void exception_exit(enum ctx_state prev_ctx) { }
-static inline void context_tracking_task_switch(struct task_struct *prev,
- struct task_struct *next) { }
#endif /* !CONFIG_CONTEXT_TRACKING */


diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index b9e0b4f..ced8558 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -30,14 +30,6 @@ EXPORT_SYMBOL_GPL(context_tracking_enabled);
DEFINE_PER_CPU(struct context_tracking, context_tracking);
EXPORT_SYMBOL_GPL(context_tracking);

-void context_tracking_cpu_set(int cpu)
-{
- if (!per_cpu(context_tracking.active, cpu)) {
- per_cpu(context_tracking.active, cpu) = true;
- static_key_slow_inc(&context_tracking_enabled);
- }
-}
-
static bool context_tracking_recursion_enter(void)
{
int recursion;
@@ -194,24 +186,34 @@ void context_tracking_user_exit(void)
}
NOKPROBE_SYMBOL(context_tracking_user_exit);

-/**
- * __context_tracking_task_switch - context switch the syscall callbacks
- * @prev: the task that is being switched out
- * @next: the task that is being switched in
- *
- * The context tracking uses the syscall slow path to implement its user-kernel
- * boundaries probes on syscalls. This way it doesn't impact the syscall fast
- * path on CPUs that don't do context tracking.
- *
- * But we need to clear the flag on the previous task because it may later
- * migrate to some CPU that doesn't do the context tracking. As such the TIF
- * flag may not be desired there.
- */
-void __context_tracking_task_switch(struct task_struct *prev,
- struct task_struct *next)
+void context_tracking_cpu_set(int cpu)
{
- clear_tsk_thread_flag(prev, TIF_NOHZ);
- set_tsk_thread_flag(next, TIF_NOHZ);
+ static bool initialized = false;
+ struct task_struct *p, *t;
+ unsigned long flags;
+
+ if (!per_cpu(context_tracking.active, cpu)) {
+ per_cpu(context_tracking.active, cpu) = true;
+ static_key_slow_inc(&context_tracking_enabled);
+ }
+
+ if (initialized)
+ return;
+
+ set_tsk_thread_flag(&init_task, TIF_NOHZ);
+
+ /*
+ * There shouldn't be any thread at this early boot stage
+ * but the scheduler is ready to host any. So lets walk
+ * the tasklist just in case. tasklist_lock isn't necessary
+ * either that early but take it for correctness checkers.
+ */
+ read_lock_irqsave(&tasklist_lock, flags);
+ for_each_process_thread(p, t)
+ set_tsk_thread_flag(t, TIF_NOHZ);
+ read_unlock_irqrestore(&tasklist_lock, flags);
+
+ initialized = true;
}

#ifdef CONFIG_CONTEXT_TRACKING_FORCE
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 06b9a00..7aec5ba 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2312,7 +2312,6 @@ context_switch(struct rq *rq, struct task_struct *prev,
*/
spin_release(&rq->lock.dep_map, 1, _THIS_IP_);

- context_tracking_task_switch(prev, next);
/* Here we just switch the register state and the stack. */
switch_to(prev, next, prev);
barrier();
--
2.1.4

2015-04-02 17:39:43

by Frederic Weisbecker

Subject: [PATCH 3/3] context_tracking: Tag init code

Mark context_tracking_cpu_set() as init code; we only need it at
early boot time.

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Dave Jones <[email protected]>
Cc: Oleg Nesterov <[email protected]>
---
kernel/context_tracking.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index ced8558..b80e923 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -186,9 +186,9 @@ void context_tracking_user_exit(void)
}
NOKPROBE_SYMBOL(context_tracking_user_exit);

-void context_tracking_cpu_set(int cpu)
+void __init context_tracking_cpu_set(int cpu)
{
- static bool initialized = false;
+ static __initdata bool initialized = false;
struct task_struct *p, *t;
unsigned long flags;

--
2.1.4

2015-04-02 18:01:39

by Rik van Riel

Subject: Re: [PATCH 1/3] context_tracking: Protect against recursion

On 04/02/2015 01:39 PM, Frederic Weisbecker wrote:
> Context tracking recursion can happen when an exception triggers in the
> middle of a call to a context tracking probe.
>
> This special case can be caused by vmalloc faults. If an access to a
> memory area allocated by vmalloc happens in the middle of
> context_tracking_enter(), we may run into an endless fault loop because
> the exception in turn calls context_tracking_enter() which faults on
> the same vmalloc'ed memory, triggering an exception again, etc...
>
> Some rare crashes have been reported, so let's protect against this
> with a recursion counter.
>
> Reported-by: Dave Jones <[email protected]>
> Signed-off-by: Frederic Weisbecker <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Dave Jones <[email protected]>
> Cc: Oleg Nesterov <[email protected]>

Reviewed-by: Rik van Riel <[email protected]>

2015-04-02 18:07:50

by Rik van Riel

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On 04/02/2015 01:39 PM, Frederic Weisbecker wrote:
> TIF_NOHZ is used by context_tracking to force syscall slow-path on every
> task in order to track userspace roundtrips. As such, it must be set on
> all running tasks.
>
> It's currently explicitly inherited through context switches. There is
> no need to do it on this fast-path though. The flag can simply be set
> once and for all on all tasks, whether they are running or not.
>
> Let's do this by setting the flag on the init task at early boot and
> letting it propagate through fork inheritance.
>
> Suggested-by: Oleg Nesterov <[email protected]>
> Signed-off-by: Frederic Weisbecker <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Dave Jones <[email protected]>
> Cc: Oleg Nesterov <[email protected]>

Reviewed-by: Rik van Riel <[email protected]>

2015-04-02 18:08:22

by Rik van Riel

Subject: Re: [PATCH 3/3] context_tracking: Tag init code

On 04/02/2015 01:39 PM, Frederic Weisbecker wrote:
> Mark context_tracking_cpu_set() as init code; we only need it at
> early boot time.
>
> Signed-off-by: Frederic Weisbecker <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Dave Jones <[email protected]>
> Cc: Oleg Nesterov <[email protected]>

Reviewed-by: Rik van Riel <[email protected]>

2015-04-02 18:09:30

by Oleg Nesterov

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On 04/02, Frederic Weisbecker wrote:
>
> +void context_tracking_cpu_set(int cpu)
> {
> - clear_tsk_thread_flag(prev, TIF_NOHZ);
> - set_tsk_thread_flag(next, TIF_NOHZ);
> + static bool initialized = false;
> + struct task_struct *p, *t;
> + unsigned long flags;
> +
> + if (!per_cpu(context_tracking.active, cpu)) {
> + per_cpu(context_tracking.active, cpu) = true;
> + static_key_slow_inc(&context_tracking_enabled);
> + }
> +
> + if (initialized)
> + return;
> +
> + set_tsk_thread_flag(&init_task, TIF_NOHZ);
> +
> + /*
> + * There shouldn't be any thread at this early boot stage
> + * but the scheduler is ready to host any. So lets walk
> + * the tasklist just in case. tasklist_lock isn't necessary
> + * either that early but take it for correctness checkers.
> + */
> + read_lock_irqsave(&tasklist_lock, flags);
> + for_each_process_thread(p, t)
> + set_tsk_thread_flag(t, TIF_NOHZ);
> + read_unlock_irqrestore(&tasklist_lock, flags);
> +
> + initialized = true;
> }

Agreed, but _irqsave is not needed. read_lock(tasklist) should work
just fine.

Any reason 3/3 comes as a separate change? I won't argue, just curious.

Oleg.

2015-04-02 19:09:17

by Peter Zijlstra

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On Thu, Apr 02, 2015 at 07:39:24PM +0200, Frederic Weisbecker wrote:
> TIF_NOHZ is used by context_tracking to force syscall slow-path on every
> task in order to track userspace roundtrips. As such, it must be set on
> all running tasks.
>
> It's currently explicitly inherited through context switches. There is
> no need to do it on this fast-path though. The flag can simply be set
> once and for all on all tasks, whether they are running or not.
>
> Let's do this by setting the flag on the init task at early boot and
> letting it propagate through fork inheritance.
>

One must ask, what's the point of the flag if everybody must always have
it set?

2015-04-02 19:11:25

by Rik van Riel

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On 04/02/2015 03:09 PM, Peter Zijlstra wrote:
> On Thu, Apr 02, 2015 at 07:39:24PM +0200, Frederic Weisbecker wrote:
>> TIF_NOHZ is used by context_tracking to force syscall slow-path on every
>> task in order to track userspace roundtrips. As such, it must be set on
>> all running tasks.
>>
>> It's currently explicitly inherited through context switches. There is
>> no need to do it on this fast-path though. The flag can simply be set
>> once and for all on all tasks, whether they are running or not.
>>
>> Let's do this by setting the flag on the init task at early boot and
>> letting it propagate through fork inheritance.
>>
>
> One must ask, what's the point of the flag if everybody must always have
> it set?

We already test this word full of flags in the syscall
entry and exit path.

Testing this same word for an additional flag is cheaper
than testing a different variable.

See the places in entry_{32,64}.S where do_notify_resume,
syscall_trace_enter, syscall_trace_leave, etc get called.
All are called as a result of testing flags in the same
word.

2015-04-02 19:19:58

by Oleg Nesterov

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On 04/02, Peter Zijlstra wrote:
>
> On Thu, Apr 02, 2015 at 07:39:24PM +0200, Frederic Weisbecker wrote:
> > TIF_NOHZ is used by context_tracking to force syscall slow-path on every
> > task in order to track userspace roundtrips. As such, it must be set on
> > all running tasks.
> >
> > It's currently explicitly inherited through context switches. There is
> > no need to do it on this fast-path though. The flag can simply be set
> > once and for all on all tasks, whether they are running or not.
> >
> > Let's do this by setting the flag on the init task at early boot and
> > letting it propagate through fork inheritance.
> >
>
> One must ask, what's the point of the flag if everybody must always have
> it set?

To force the slow path in the syscall paths. Unfortunately it is not
easy to change this ...

Oleg.

2015-04-03 17:19:57

by Frederic Weisbecker

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On Thu, Apr 02, 2015 at 08:08:03PM +0200, Oleg Nesterov wrote:
> On 04/02, Frederic Weisbecker wrote:
> >
> > +void context_tracking_cpu_set(int cpu)
> > {
> > - clear_tsk_thread_flag(prev, TIF_NOHZ);
> > - set_tsk_thread_flag(next, TIF_NOHZ);
> > + static bool initialized = false;
> > + struct task_struct *p, *t;
> > + unsigned long flags;
> > +
> > + if (!per_cpu(context_tracking.active, cpu)) {
> > + per_cpu(context_tracking.active, cpu) = true;
> > + static_key_slow_inc(&context_tracking_enabled);
> > + }
> > +
> > + if (initialized)
> > + return;
> > +
> > + set_tsk_thread_flag(&init_task, TIF_NOHZ);
> > +
> > + /*
> > + * There shouldn't be any thread at this early boot stage
> > + * but the scheduler is ready to host any. So lets walk
> > + * the tasklist just in case. tasklist_lock isn't necessary
> > + * either that early but take it for correctness checkers.
> > + */
> > + read_lock_irqsave(&tasklist_lock, flags);
> > + for_each_process_thread(p, t)
> > + set_tsk_thread_flag(t, TIF_NOHZ);
> > + read_unlock_irqrestore(&tasklist_lock, flags);
> > +
> > + initialized = true;
> > }
>
> Agreed, but _irqsave is not needed. read_lock(tasklist) should work
> just fine.

Indeed, that was just in case we get more callers, but if that happens
it's their responsibility to fix the callee. So I'll just change that.

>
> Any reason 3/3 comes as a separate change? I won't argue, just curious.

Nope, I should fold them indeed.

Thanks.

>
> Oleg.
>

2015-04-03 17:21:54

by Frederic Weisbecker

Subject: Re: [PATCH 2/3] context_tracking: Inherit TIF_NOHZ through forks instead of context switches

On Thu, Apr 02, 2015 at 03:11:20PM -0400, Rik van Riel wrote:
> On 04/02/2015 03:09 PM, Peter Zijlstra wrote:
> > On Thu, Apr 02, 2015 at 07:39:24PM +0200, Frederic Weisbecker wrote:
> >> TIF_NOHZ is used by context_tracking to force syscall slow-path on every
> >> task in order to track userspace roundtrips. As such, it must be set on
> >> all running tasks.
> >>
> >> It's currently explicitly inherited through context switches. There is
> >> no need to do it on this fast-path though. The flag can simply be set
> >> once and for all on all tasks, whether they are running or not.
> >>
> >> Let's do this by setting the flag on the init task at early boot and
> >> letting it propagate through fork inheritance.
> >>
> >
> > One must ask, what's the point of the flag if everybody must always have
> > it set?
>
> We already test this word full of flags in the syscall
> entry and exit path.
>
> Testing this same word for an additional flag is cheaper
> than testing a different variable.
>
> See the places in entry_{32,64}.S where do_notify_resume,
> syscall_trace_enter, syscall_trace_leave, etc get called.
> All are called as a result of testing flags in the same
> word.

Indeed, we could add another check for a global flag but that's going
to bloat the syscall fastpath.