Hi,
Here is the 3rd version of the series to improve the preempt-
related behavior in kprobes/x86. This version includes many
enhancements and fixes over the 2nd version, which is at
https://lkml.org/lkml/2017/9/11/482
With the previous version, lkp-bot reported an issue
( https://lkml.org/lkml/2017/9/14/3 ) which I couldn't
reproduce. However, I found a suspicious bug and fixed
it ([2/7]).
Also, while checking the correct conditions for *probe
handlers in Documentation/kprobes.txt, I found that the
current implementations of the ftrace-based kprobe and
the optprobe misread the document.
According to the document, handlers must be run with preemption
disabled, but interrupt disabling is not guaranteed.
So, in the middle of this series, patches [4/7], [5/7] and
[6/7] add preempt-disabling and remove irq-disabling.
At the end, I placed the original patch (enabling optprobe
with CONFIG_PREEMPT).
The others are just for making sure this fix works well.
- [1/7] adds a preemptible() check to the kprobe
  smoke tests so that we can easily find such mistakes.
- [3/7] adds a warning if a user handler tries to change the
  execution path in an optprobe, which is prohibited
  by the document (the document also describes how to
  avoid it).
Thank you,
---
Masami Hiramatsu (7):
kprobes: Improve smoke test to check preemptible
kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block
kprobes: Warn if optprobe handler tries to change execution path
kprobes/x86: Disable preempt in optprobe
kprobes/x86: Disable preempt in ftrace-based jprobe
kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
arch/Kconfig | 2 +-
arch/x86/kernel/kprobes/ftrace.c | 32 ++++++++++++++++----------------
arch/x86/kernel/kprobes/opt.c | 8 +++-----
kernel/kprobes.c | 23 +++++++++++++++++------
kernel/test_kprobes.c | 20 ++++++++++++++++++++
5 files changed, 57 insertions(+), 28 deletions(-)
--
Masami Hiramatsu
Add a preemptible() check to each handler. Handlers are called with
preemption disabled, as guaranteed by Documentation/kprobes.txt.
Signed-off-by: Masami Hiramatsu <[email protected]>
---
kernel/test_kprobes.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/kernel/test_kprobes.c b/kernel/test_kprobes.c
index 0dbab6d1acb4..47106a1e645a 100644
--- a/kernel/test_kprobes.c
+++ b/kernel/test_kprobes.c
@@ -34,6 +34,10 @@ static noinline u32 kprobe_target(u32 value)
static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("pre-handler is preemptible\n");
+ }
preh_val = (rand1 / div_factor);
return 0;
}
@@ -41,6 +45,10 @@ static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
static void kp_post_handler(struct kprobe *p, struct pt_regs *regs,
unsigned long flags)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("post-handler is preemptible\n");
+ }
if (preh_val != (rand1 / div_factor)) {
handler_errors++;
pr_err("incorrect value in post_handler\n");
@@ -156,6 +164,10 @@ static int test_kprobes(void)
static u32 j_kprobe_target(u32 value)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("jprobe-handler is preemptible\n");
+ }
if (value != rand1) {
handler_errors++;
pr_err("incorrect value in jprobe handler\n");
@@ -232,6 +244,10 @@ static u32 krph_val;
static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("kretprobe entry handler is preemptible\n");
+ }
krph_val = (rand1 / div_factor);
return 0;
}
@@ -240,6 +256,10 @@ static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
unsigned long ret = regs_return_value(regs);
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("kretprobe return handler is preemptible\n");
+ }
if (ret != (rand1 / div_factor)) {
handler_errors++;
pr_err("incorrect value in kretprobe handler\n");
Since get_kprobe_ctlblk() accesses a per-cpu variable,
which involves smp_processor_id(), it must be called with
preemption or interrupts disabled.
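For illustration only (not part of this patch), the rule amounts to the
following usage pattern; the wrapper function below is a made-up example:

#include <linux/kprobes.h>
#include <linux/preempt.h>

/* Illustrative sketch only: preemption (or interrupts) must already be
 * disabled around get_kprobe_ctlblk(), otherwise the task could migrate
 * and the per-cpu control block would belong to the wrong CPU. */
static void example_touch_ctlblk(void)		/* hypothetical helper */
{
	struct kprobe_ctlblk *kcb;

	preempt_disable();
	kcb = get_kprobe_ctlblk();	/* safe: cannot migrate here */
	kcb->kprobe_status = KPROBE_HIT_ACTIVE;
	preempt_enable();
}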
Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/opt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 4f98aad38237..259b7e828b02 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,7 +154,6 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
static void
optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
{
- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
@@ -165,6 +164,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
/* Save skipped registers */
#ifdef CONFIG_X86_64
regs->cs = __KERNEL_CS;
Warn if an optprobe handler tries to change the execution path.
As described in Documentation/kprobes.txt, a user handler cannot
change the instruction pointer when the kprobe is jump-optimized.
In that case the user must prevent the kprobe from being optimized
by setting a post_handler or break_handler.
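For illustration only (not part of this patch), a pre_handler that really
needs to modify regs->ip can keep its kprobe from being jump-optimized by
also registering a (possibly empty) post_handler; every name below,
including the probed symbol, is a made-up example:

#include <linux/kprobes.h>

static unsigned long fixup_addr;	/* hypothetical target address, set elsewhere */

static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	regs->ip = fixup_addr;	/* change the execution path ... */
	return 1;		/* ... and tell kprobes that regs->ip was modified */
}

/* An empty post_handler is enough to keep this kprobe unoptimized, so the
 * pre_handler above is allowed to change regs->ip. */
static void example_post(struct kprobe *p, struct pt_regs *regs,
			 unsigned long flags)
{
}

static struct kprobe example_kp = {
	.symbol_name	= "_do_fork",	/* example target; pick any probe-able symbol */
	.pre_handler	= example_pre,
	.post_handler	= example_post,
};
/* register_kprobe(&example_kp) on module init, unregister_kprobe() on exit */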
Signed-off-by: Masami Hiramatsu <[email protected]>
---
kernel/kprobes.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index a1606a4224e1..de73b843c623 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -387,7 +387,10 @@ void opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
list_for_each_entry_rcu(kp, &p->list, list) {
if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
set_kprobe_instance(kp);
- kp->pre_handler(kp, regs);
+ if (kp->pre_handler(kp, regs)) {
+ if (WARN_ON_ONCE(1))
+ pr_err("Optprobe ignores instruction pointer changing.(%pF)\n", p->addr);
+ }
}
reset_kprobe_instance();
}
Disable preemption in optprobe handler as described
in Documentation/kprobes.txt, which says:
"Probe handlers are run with preemption disabled."
Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/opt.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 259b7e828b02..36e4f61c3eec 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -161,6 +161,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
return;
local_irq_save(flags);
+ preempt_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
@@ -180,6 +181,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
+ preempt_enable_no_resched();
local_irq_restore(flags);
}
NOKPROBE_SYMBOL(optimized_callback);
Disable preemption in ftrace-based jprobe handlers as
described in Documentation/kprobes.txt:
"Probe handlers are run with preemption disabled."
This will fix jprobes behavior when CONFIG_PREEMPT=y.
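For reference only (not part of this patch), a minimal jprobe of the
2017-era API looks like the sketch below; it is based on the classic
_do_fork example, and the handler must mirror the probed function's
signature. With CONFIG_KPROBES_ON_FTRACE, a probe on a function entry
like this goes through kprobe_ftrace_handler(), which is the path this
patch changes:

#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

/* Illustrative jprobe handler: same signature as _do_fork() of that era. */
static long jdo_fork(unsigned long clone_flags, unsigned long stack_start,
		     unsigned long stack_size, int __user *parent_tidptr,
		     int __user *child_tidptr, unsigned long tls)
{
	pr_info("_do_fork: clone_flags = 0x%lx\n", clone_flags);
	jprobe_return();	/* mandatory: hand control back to the probed code */
	return 0;		/* never reached */
}

static struct jprobe my_jprobe = {
	.entry		= jdo_fork,
	.kp.symbol_name	= "_do_fork",
};
/* register_jprobe(&my_jprobe) on module init, unregister_jprobe() on exit */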
Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/ftrace.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 041f7b6dfa0f..bcfee4f69b0e 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -26,7 +26,7 @@
#include "common.h"
static nokprobe_inline
-int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb, unsigned long orig_ip)
{
/*
@@ -41,20 +41,21 @@ int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
__this_cpu_write(current_kprobe, NULL);
if (orig_ip)
regs->ip = orig_ip;
- return 1;
}
int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
- if (kprobe_ftrace(p))
- return __skip_singlestep(p, regs, kcb, 0);
- else
- return 0;
+ if (kprobe_ftrace(p)) {
+ __skip_singlestep(p, regs, kcb, 0);
+ preempt_enable_no_resched();
+ return 1;
+ }
+ return 0;
}
NOKPROBE_SYMBOL(skip_singlestep);
-/* Ftrace callback handler for kprobes */
+/* Ftrace callback handler for kprobes -- called under preempt disabled */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct pt_regs *regs)
{
@@ -77,13 +78,17 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
regs->ip = ip + sizeof(kprobe_opcode_t);
+ /* To emulate trap based kprobes, preempt_disable here */
+ preempt_disable();
__this_cpu_write(current_kprobe, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
- if (!p->pre_handler || !p->pre_handler(p, regs))
+ if (!p->pre_handler || !p->pre_handler(p, regs)) {
__skip_singlestep(p, regs, kcb, orig_ip);
+ preempt_enable_no_resched();
+ }
/*
* If pre_handler returns !0, it sets regs->ip and
- * resets current kprobe.
+ * resets current kprobe, and keep preempt count +1.
*/
}
end:
Actually kprobes doesn't need to disable irq if it is
called from ftrace/jump trampoline code because
Documentation/kprobes.txt says
-----
Probe handlers are run with preemption disabled. Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled (e.g., kretprobe handlers and optimized kprobe
handlers run without interrupt disabled on x86/x86-64).
-----
So let's remove irq disabling from those handlers.
Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/ftrace.c | 9 ++-------
arch/x86/kernel/kprobes/opt.c | 4 ----
2 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index bcfee4f69b0e..8dc0161cec8f 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -61,14 +61,11 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
{
struct kprobe *p;
struct kprobe_ctlblk *kcb;
- unsigned long flags;
-
- /* Disable irq for emulating a breakpoint and avoiding preempt */
- local_irq_save(flags);
+ /* Preempt is disabled by ftrace */
p = get_kprobe((kprobe_opcode_t *)ip);
if (unlikely(!p) || kprobe_disabled(p))
- goto end;
+ return;
kcb = get_kprobe_ctlblk();
if (kprobe_running()) {
@@ -91,8 +88,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
* resets current kprobe, and keep preempt count +1.
*/
}
-end:
- local_irq_restore(flags);
}
NOKPROBE_SYMBOL(kprobe_ftrace_handler);
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 36e4f61c3eec..511aad1990a0 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,13 +154,10 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
static void
optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
{
- unsigned long flags;
-
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
- local_irq_save(flags);
preempt_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
@@ -182,7 +179,6 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
__this_cpu_write(current_kprobe, NULL);
}
preempt_enable_no_resched();
- local_irq_restore(flags);
}
NOKPROBE_SYMBOL(optimized_callback);
To enable jump optimized probe with CONFIG_PREEMPT, use
synchronize_rcu_tasks() to wait for all tasks preempted
on trampoline code back on track.
Since the jump optimized kprobes can replace multiple
instructions, there can be tasks which are preempted
on the 2nd (or 3rd) instructions. If the kprobe
replaces those instructions by a jump instruction,
when those tasks back to the preempted place, it is
a middle of the jump instruction and causes a kernel
panic.
To avoid such tragedies in advance, kprobe optimizer
prepare a detour route using normal kprobe (e.g.
int3 breakpoint on x86), and wait for the tasks which
is interrrupted on such place by synchronize_sched()
when CONFIG_PREEMPT=n.
If CONFIG_PREEMPT=y, things be more complicated, because
such interrupted thread can be preempted (other thread
can be scheduled in interrupt handler.) So, kprobes
optimizer has to wait for those tasks scheduled normally.
In this case we can use synchronize_rcu_tasks() which
ensures that all preempted tasks back on track and
schedule it.
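For illustration only (not part of the patch), the optimizer sequencing
described above can be sketched as below; the step-1/step-3 helper names
are made up, only synchronize_rcu_tasks() is the real API:

/* Simplified sketch of the jump-optimization sequence (illustrative only):
 *  1. The probed address is already armed with a normal kprobe (int3), so
 *     any new hit takes the breakpoint path instead of executing the
 *     instructions that are about to be overwritten.
 *  2. Wait until no task can still be stopped (interrupted, or preempted
 *     when CONFIG_PREEMPT=y) in the middle of those instructions.
 *  3. Only then write the multi-byte jump to the optprobe trampoline.
 */
static void optimize_kprobes_sketch(void)
{
	arm_pending_kprobes_with_int3();	/* hypothetical, step 1 */

	/* Step 2: synchronize_rcu_tasks() only returns after every preempted
	 * task has been scheduled again, so no task can resume in the middle
	 * of the bytes that step 3 overwrites. */
	synchronize_rcu_tasks();

	write_jump_to_trampoline();		/* hypothetical, step 3 */
}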
Signed-off-by: Masami Hiramatsu <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
---
arch/Kconfig | 2 +-
kernel/kprobes.c | 18 +++++++++++++-----
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 1aafb4efbb51..f75c8e8a229b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -90,7 +90,7 @@ config STATIC_KEYS_SELFTEST
config OPTPROBES
def_bool y
depends on KPROBES && HAVE_OPTPROBES
- depends on !PREEMPT
+ select TASKS_RCU if PREEMPT
config KPROBES_ON_FTRACE
def_bool y
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index de73b843c623..21d42ed2aaa5 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -577,12 +577,20 @@ static void kprobe_optimizer(struct work_struct *work)
/*
* Step 2: Wait for quiesence period to ensure all running interrupts
- * are done. Because optprobe may modify multiple instructions
- * there is a chance that Nth instruction is interrupted. In that
- * case, running interrupt can return to 2nd-Nth byte of jump
- * instruction. This wait is for avoiding it.
+ * are done. Because optprobe may modify multiple instructions,
+ * there is a chance that the Nth instruction is interrupted. In that
+ * case, running interrupt can return to the Nth byte of jump
+ * instruction. This can be avoided by waiting for returning of
+ * such interrupts, since (until here) the first byte of the optimized
+ * probe is already replaced with normal kprobe (sw breakpoint) and
+ * all threads which reach to the probed address will hit it and
+ * bypass the copied instructions (instead of executing the original.)
+ * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
+ * for such thread, we will use synchronize_rcu_tasks() which ensures
+ * all preeempted tasks are scheduled normally (not preempted).
+ * So we can ensure there is no threads preempted at probed address.
*/
- synchronize_sched();
+ synchronize_rcu_tasks();
/* Step 3: Optimize kprobes after quiesence period */
do_optimize_kprobes();
On 9/19/17 2:58 AM, Masami Hiramatsu wrote:
> Hi,
>
> Here is the 3rd version of the series to improve the preempt-
> related behavior in kprobes/x86. This version includes many
> enhancements and fixes over the 2nd version, which is at
>
> https://lkml.org/lkml/2017/9/11/482
>
> With the previous version, lkp-bot reported an issue
> ( https://lkml.org/lkml/2017/9/14/3 ) which I couldn't
> reproduce. However, I found a suspicious bug and fixed
> it ([2/7]).
>
> Also, while checking the correct conditions for *probe
> handlers in Documentation/kprobes.txt, I found that the
> current implementations of the ftrace-based kprobe and
> the optprobe misread the document.
> According to the document, handlers must be run with preemption
> disabled, but interrupt disabling is not guaranteed.
> So, in the middle of this series, patches [4/7], [5/7] and
> [6/7] add preempt-disabling and remove irq-disabling.
>
> At the end, I placed the original patch (enabling optprobe
> with CONFIG_PREEMPT).
>
> The others are just for making sure this fix works well.
> - [1/7] adds a preemptible() check to the kprobe
>   smoke tests so that we can easily find such mistakes.
> - [3/7] adds a warning if a user handler tries to change the
>   execution path in an optprobe, which is prohibited
>   by the document (the document also describes how to
>   avoid it).
all patches look great to me.
Acked-by: Alexei Starovoitov <[email protected]>
* Masami Hiramatsu <[email protected]> wrote:
> To enable jump optimized probe with CONFIG_PREEMPT, use
> synchronize_rcu_tasks() to wait for all tasks preempted
> on trampoline code back on track.
This sentence does not parse. It's missing a verb, but I'm not sure.
> Since the jump optimized kprobes can replace multiple
> instructions, there can be tasks which are preempted
> on the 2nd (or 3rd) instructions. If the kprobe
> replaces those instructions by a jump instruction,
> when those tasks back to the preempted place, it is
> a middle of the jump instruction and causes a kernel
> panic.
Again, sentence appears to be missing a verb and also an adjective I think.
> To avoid such tragedies in advance, kprobe optimizer
> prepare a detour route using normal kprobe (e.g.
> int3 breakpoint on x86), and wait for the tasks which
> is interrrupted on such place by synchronize_sched()
> when CONFIG_PREEMPT=n.
s/tragedies/mishaps
Part after the first comma does not parse.
Also the way to refer to kprobes is "kprobes" and "normal kprobes".
Use 'kprobe' only when talking about a specific kprobe instance or such.
You use this correctly later on in the changelog ...
> If CONFIG_PREEMPT=y, things be more complicated, because
s/be/are or s/be/get
> such interrupted thread can be preempted (other thread
> can be scheduled in interrupt handler.) So, kprobes
full stop in the wrong place.
> optimizer has to wait for those tasks scheduled normally.
missing verb.
> In this case we can use synchronize_rcu_tasks() which
> ensures that all preempted tasks back on track and
> schedule it.
More careful changelogs please.
> + * are done. Because optprobe may modify multiple instructions,
> + * there is a chance that the Nth instruction is interrupted. In that
> + * case, running interrupt can return to the Nth byte of jump
> + * instruction. This can be avoided by waiting for returning of
> + * such interrupts, since (until here) the first byte of the optimized
> + * probe is already replaced with normal kprobe (sw breakpoint) and
> + * all threads which reach to the probed address will hit it and
> + * bypass the copied instructions (instead of executing the original.)
> + * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
> + * for such thread, we will use synchronize_rcu_tasks() which ensures
> + * all preeempted tasks are scheduled normally (not preempted).
> + * So we can ensure there is no threads preempted at probed address.
What? Interrupts cannot be preempted.
Also, "To wait for such threads", or "To wait for such a thread".
Thanks,
Ingo
* Masami Hiramatsu <[email protected]> wrote:
> Actually kprobes doesn't need to disable irq if it is
> called from ftrace/jump trampoline code because
> Documentation/kprobes.txt says
>
> -----
> Probe handlers are run with preemption disabled. Depending on the
> architecture and optimization state, handlers may also run with
> interrupts disabled (e.g., kretprobe handlers and optimized kprobe
> handlers run without interrupt disabled on x86/x86-64).
> -----
>
> So let's remove irq disabling from those handlers.
> - local_irq_save(flags);
The title is talking about disable_irq():
kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
... but the patch is actually using local_irq_save(), which is an entirely
different thing! You probably wanted to say:
kprobes/x86: Remove irq disabling from ftrace-based/optimized kprobes
Also note the plural of 'kprobes' when we refer to them as a generic thing.
I fixed the title, but _please_ read changelogs more carefully before sending
them.
Thanks,
Ingo
Commit-ID: 3539d09154e11336c31a900a9cd49e386ba6d9b2
Gitweb: https://git.kernel.org/tip/3539d09154e11336c31a900a9cd49e386ba6d9b2
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Tue, 19 Sep 2017 18:59:00 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 28 Sep 2017 09:23:03 +0200
kprobes: Improve smoke test to check preemptibility
Add a preemptible() check to each handler. Handlers are called with
preemption disabled, as guaranteed by Documentation/kprobes.txt.
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/150581513991.32348.7956810394499654272.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/test_kprobes.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/kernel/test_kprobes.c b/kernel/test_kprobes.c
index 0dbab6d..47106a1 100644
--- a/kernel/test_kprobes.c
+++ b/kernel/test_kprobes.c
@@ -34,6 +34,10 @@ static noinline u32 kprobe_target(u32 value)
static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("pre-handler is preemptible\n");
+ }
preh_val = (rand1 / div_factor);
return 0;
}
@@ -41,6 +45,10 @@ static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
static void kp_post_handler(struct kprobe *p, struct pt_regs *regs,
unsigned long flags)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("post-handler is preemptible\n");
+ }
if (preh_val != (rand1 / div_factor)) {
handler_errors++;
pr_err("incorrect value in post_handler\n");
@@ -156,6 +164,10 @@ static int test_kprobes(void)
static u32 j_kprobe_target(u32 value)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("jprobe-handler is preemptible\n");
+ }
if (value != rand1) {
handler_errors++;
pr_err("incorrect value in jprobe handler\n");
@@ -232,6 +244,10 @@ static u32 krph_val;
static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("kretprobe entry handler is preemptible\n");
+ }
krph_val = (rand1 / div_factor);
return 0;
}
@@ -240,6 +256,10 @@ static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
unsigned long ret = regs_return_value(regs);
+ if (preemptible()) {
+ handler_errors++;
+ pr_err("kretprobe return handler is preemptible\n");
+ }
if (ret != (rand1 / div_factor)) {
handler_errors++;
pr_err("incorrect value in kretprobe handler\n");
Commit-ID: cd52edad55fbcd8064877a77d31445b2fb4b85c3
Gitweb: https://git.kernel.org/tip/cd52edad55fbcd8064877a77d31445b2fb4b85c3
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Tue, 19 Sep 2017 18:59:39 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 28 Sep 2017 09:23:03 +0200
kprobes/x86: Move the get_kprobe_ctlblk() into irq-disabled block
Since get_kprobe_ctlblk() accesses a per-cpu variable,
which involves smp_processor_id(), it must be called with
preemption or interrupts disabled.
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/150581517952.32348.2655896843219158446.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes/opt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 0cae7c0..f558103 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,7 +154,6 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
static void
optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
{
- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
@@ -165,6 +164,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
/* Save skipped registers */
#ifdef CONFIG_X86_64
regs->cs = __KERNEL_CS;
Commit-ID: e863d5396146411b615231cae0c518cb2a23371c
Gitweb: https://git.kernel.org/tip/e863d5396146411b615231cae0c518cb2a23371c
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Tue, 19 Sep 2017 19:00:19 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 28 Sep 2017 09:23:04 +0200
kprobes: Warn if optprobe handler tries to change execution path
Warn if an optprobe handler tries to change the execution path.
As described in Documentation/kprobes.txt, a user handler cannot
change the instruction pointer when the kprobe is jump-optimized.
In that case the user must prevent the kprobe from being optimized
by setting a post_handler or break_handler.
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/150581521955.32348.3615624715034787365.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/kprobes.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 15fba7f..2d28377 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -387,7 +387,10 @@ void opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
list_for_each_entry_rcu(kp, &p->list, list) {
if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
set_kprobe_instance(kp);
- kp->pre_handler(kp, regs);
+ if (kp->pre_handler(kp, regs)) {
+ if (WARN_ON_ONCE(1))
+ pr_err("Optprobe ignores instruction pointer changing.(%pF)\n", p->addr);
+ }
}
reset_kprobe_instance();
}
Commit-ID: 9a09f261a4fa52de916b0db34a36956c95f78fdc
Gitweb: https://git.kernel.org/tip/9a09f261a4fa52de916b0db34a36956c95f78fdc
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Tue, 19 Sep 2017 19:00:59 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 28 Sep 2017 09:23:04 +0200
kprobes/x86: Disable preemption in optprobe
Disable preemption in optprobe handler as described
in Documentation/kprobes.txt, which says:
"Probe handlers are run with preemption disabled."
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/150581525942.32348.6359217983269060829.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes/opt.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index f558103..32c35cb 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -161,6 +161,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
return;
local_irq_save(flags);
+ preempt_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
@@ -180,6 +181,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
+ preempt_enable_no_resched();
local_irq_restore(flags);
}
NOKPROBE_SYMBOL(optimized_callback);
Commit-ID: 5bb4fc2d8641219732eb2bb654206775a4219aca
Gitweb: https://git.kernel.org/tip/5bb4fc2d8641219732eb2bb654206775a4219aca
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Tue, 19 Sep 2017 19:01:40 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 28 Sep 2017 09:23:04 +0200
kprobes/x86: Disable preemption in ftrace-based jprobes
Disable preemption in ftrace-based jprobe handlers as
described in Documentation/kprobes.txt:
"Probe handlers are run with preemption disabled."
This will fix jprobes behavior when CONFIG_PREEMPT=y.
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/150581530024.32348.9863783558598926771.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes/ftrace.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 041f7b6..bcfee4f 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -26,7 +26,7 @@
#include "common.h"
static nokprobe_inline
-int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb, unsigned long orig_ip)
{
/*
@@ -41,20 +41,21 @@ int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
__this_cpu_write(current_kprobe, NULL);
if (orig_ip)
regs->ip = orig_ip;
- return 1;
}
int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
- if (kprobe_ftrace(p))
- return __skip_singlestep(p, regs, kcb, 0);
- else
- return 0;
+ if (kprobe_ftrace(p)) {
+ __skip_singlestep(p, regs, kcb, 0);
+ preempt_enable_no_resched();
+ return 1;
+ }
+ return 0;
}
NOKPROBE_SYMBOL(skip_singlestep);
-/* Ftrace callback handler for kprobes */
+/* Ftrace callback handler for kprobes -- called under preempt disabled */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct pt_regs *regs)
{
@@ -77,13 +78,17 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
regs->ip = ip + sizeof(kprobe_opcode_t);
+ /* To emulate trap based kprobes, preempt_disable here */
+ preempt_disable();
__this_cpu_write(current_kprobe, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
- if (!p->pre_handler || !p->pre_handler(p, regs))
+ if (!p->pre_handler || !p->pre_handler(p, regs)) {
__skip_singlestep(p, regs, kcb, orig_ip);
+ preempt_enable_no_resched();
+ }
/*
* If pre_handler returns !0, it sets regs->ip and
- * resets current kprobe.
+ * resets current kprobe, and keep preempt count +1.
*/
}
end:
Commit-ID: a19b2e3d783964d48d2b494439648e929bcdc976
Gitweb: https://git.kernel.org/tip/a19b2e3d783964d48d2b494439648e929bcdc976
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Tue, 19 Sep 2017 19:02:20 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 28 Sep 2017 09:25:50 +0200
kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes
Kprobes don't need to disable IRQs if they are called from the
ftrace/jump trampoline code, because Documentation/kprobes.txt says:
-----
Probe handlers are run with preemption disabled. Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled (e.g., kretprobe handlers and optimized kprobe
handlers run without interrupt disabled on x86/x86-64).
-----
So let's remove IRQ disabling from those handlers.
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/150581534039.32348.11331736206004264553.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes/ftrace.c | 9 ++-------
arch/x86/kernel/kprobes/opt.c | 4 ----
2 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index bcfee4f..8dc0161 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -61,14 +61,11 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
{
struct kprobe *p;
struct kprobe_ctlblk *kcb;
- unsigned long flags;
-
- /* Disable irq for emulating a breakpoint and avoiding preempt */
- local_irq_save(flags);
+ /* Preempt is disabled by ftrace */
p = get_kprobe((kprobe_opcode_t *)ip);
if (unlikely(!p) || kprobe_disabled(p))
- goto end;
+ return;
kcb = get_kprobe_ctlblk();
if (kprobe_running()) {
@@ -91,8 +88,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
* resets current kprobe, and keep preempt count +1.
*/
}
-end:
- local_irq_restore(flags);
}
NOKPROBE_SYMBOL(kprobe_ftrace_handler);
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 32c35cb..e941136 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,13 +154,10 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
static void
optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
{
- unsigned long flags;
-
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
- local_irq_save(flags);
preempt_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
@@ -182,7 +179,6 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
__this_cpu_write(current_kprobe, NULL);
}
preempt_enable_no_resched();
- local_irq_restore(flags);
}
NOKPROBE_SYMBOL(optimized_callback);
On Thu, 28 Sep 2017 09:25:41 +0200
Ingo Molnar <[email protected]> wrote:
>
> * Masami Hiramatsu <[email protected]> wrote:
>
> > Actually kprobes doesn't need to disable irq if it is
> > called from ftrace/jump trampoline code because
> > Documentation/kprobes.txt says
> >
> > -----
> > Probe handlers are run with preemption disabled. Depending on the
> > architecture and optimization state, handlers may also run with
> > interrupts disabled (e.g., kretprobe handlers and optimized kprobe
> > handlers run without interrupt disabled on x86/x86-64).
> > -----
> >
> > So let's remove irq disabling from those handlers.
>
> > - local_irq_save(flags);
>
> The title is talking about disable_irq():
>
> kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
>
> ... but the patch is actually using local_irq_save(), which is an entirely
> different thing! You probably wanted to say:
>
> kprobes/x86: Remove irq disabling from ftrace-based/optimized kprobes
Correct! That's my mistake. thanks!
>
> Also note the plural of 'kprobes' when we refer to them as a generic thing.
>
> I fixed the title, but _please_ read changelogs more carefully before sending
> them.
Thank you again,
>
> Thanks,
>
> Ingo
--
Masami Hiramatsu <[email protected]>
On Thu, 28 Sep 2017 09:22:20 +0200
Ingo Molnar <[email protected]> wrote:
>
> * Masami Hiramatsu <[email protected]> wrote:
>
> > To enable jump optimized probe with CONFIG_PREEMPT, use
> > synchronize_rcu_tasks() to wait for all tasks preempted
> > on trampoline code back on track.
>
> This sentence does not parse. It's missing a verb, but I'm not sure.
Hmm, how about this?
Use synchthnize_rcu_tasks() to wait for all tasks preempted
on trampoline code back on track so that jump optimized probe
can be enabled with CONFIG_PREEMPT.
>
> > Since the jump optimized kprobes can replace multiple
> > instructions, there can be tasks which are preempted
> > on the 2nd (or 3rd) instructions. If the kprobe
> > replaces those instructions by a jump instruction,
> > when those tasks back to the preempted place, it is
> > a middle of the jump instruction and causes a kernel
> > panic.
>
>
> Again, sentence appears to be missing a verb and also an adjective I think.
>
Hmm, I couldn't understand, I think you are pointing below
sentence,
----
If the kprobe replaces those instructions by a jump instruction,
when those tasks back to the preempted place, it is a middle of
the jump instruction and causes a kernel panic.
----
Of course "If" and "when" look ugly, but both have verb...
> > To avoid such tragedies in advance, kprobe optimizer
> > prepare a detour route using normal kprobe (e.g.
> > int3 breakpoint on x86), and wait for the tasks which
> > is interrrupted on such place by synchronize_sched()
> > when CONFIG_PREEMPT=n.
>
> s/tragedies/mishaps
I got it.
>
> Part after the first comma does not parse.
Yeah, some typos, but
kprobe optimizer prepares a detour route using normal kprobe ()
and waits for the tasks, which is interrupted on such place, by
synchronize_sched(), when CONFIG_PREEMPT=n.
will be able to parsed. ( at least google translate can ...)
>
> Also the way to refer to kprobes is "kprobes" and "normal kprobes".
> Use 'kprobe' only when talking about a specific kprobe instance or such.
> You use this correctly later on in the changelog ...
>
> > If CONFIG_PREEMPT=y, things be more complicated, because
>
> s/be/are or s/be/get
thanks, get is preferred :)
>
> > such interrupted thread can be preempted (other thread
> > can be scheduled in interrupt handler.) So, kprobes
>
> full stop in the wrong place.
>
> > optimizer has to wait for those tasks scheduled normally.
>
> missing verb.
kprobe optimizer must wait for those ...
will it work?
>
> > In this case we can use synchronize_rcu_tasks() which
> > ensures that all preempted tasks back on track and
> > schedule it.
>
> More careful changelogs please.
>
> > + * are done. Because optprobe may modify multiple instructions,
> > + * there is a chance that the Nth instruction is interrupted. In that
> > + * case, running interrupt can return to the Nth byte of jump
> > + * instruction. This can be avoided by waiting for returning of
> > + * such interrupts, since (until here) the first byte of the optimized
> > + * probe is already replaced with normal kprobe (sw breakpoint) and
> > + * all threads which reach to the probed address will hit it and
> > + * bypass the copied instructions (instead of executing the original.)
> > + * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
> > + * for such thread, we will use synchronize_rcu_tasks() which ensures
> > + * all preeempted tasks are scheduled normally (not preempted).
> > + * So we can ensure there is no threads preempted at probed address.
>
> What? Interrupts cannot be preempted.
Steve, could you correct me if I'm wrong? I thought that if the kernel
is compiled with CONFIG_PREEMPT=y, even kernel code can be preempted
at any point; that is, a timer interrupt can occur on a kernel code path
and yield to a new task (= preemption). Am I missing something?
>
> Also, "To wait for such threads", or "To wait for such a thread".
OK,
Thank you,
>
> Thanks,
>
> Ingo
--
Masami Hiramatsu <[email protected]>
* Masami Hiramatsu <[email protected]> wrote:
> On Thu, 28 Sep 2017 09:22:20 +0200
> Ingo Molnar <[email protected]> wrote:
>
> >
> > * Masami Hiramatsu <[email protected]> wrote:
> >
> > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > on trampoline code back on track.
> >
> > This sentence does not parse. It's missing a verb, but I'm not sure.
>
> Hmm, how about this?
>
> Use synchthnize_rcu_tasks() to wait for all tasks preempted
> on trampoline code back on track so that jump optimized probe
> can be enabled with CONFIG_PREEMPT.
What's "synchthnize"? ...
More seriously, I still don't understand it. What is 'back on track'?
Do you mean to say:
We want to wait for all potentially preempted kprobes trampoline execution to
have completed. This guarantees that any freed trampoline memory is not in use
by any task in the system anymore. synchronize_rcu_tasks() gives such a
guarantee, so use it.
or something else?
Thanks,
Ingo
On Fri, 29 Sep 2017 09:37:55 +0200
Ingo Molnar <[email protected]> wrote:
>
> * Masami Hiramatsu <[email protected]> wrote:
>
> > On Thu, 28 Sep 2017 09:22:20 +0200
> > Ingo Molnar <[email protected]> wrote:
> >
> > >
> > > * Masami Hiramatsu <[email protected]> wrote:
> > >
> > > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > > on trampoline code back on track.
> > >
> > > This sentence does not parse. It's missing a verb, but I'm not sure.
> >
> > Hmm, how about this?
> >
> > Use synchthnize_rcu_tasks() to wait for all tasks preempted
> > on trampoline code back on track so that jump optimized probe
> > can be enabled with CONFIG_PREEMPT.
>
> What's "synchthnize"? ...
Oops, it's my typo. my XPS touch pad is really unstable...
>
> More seriously, I still don't understand it. What is 'back on track'?
>
> Do you mean to say:
>
> We want to wait for all potentially preempted kprobes trampoline execution to
> have completed. This guarantees that any freed trampoline memory is not in use
> by any task in the system anymore. synchronize_rcu_tasks() gives such a
> guarantee, so use it.
Exactly, this is correct!
Thank you,
>
> or something else?
>
> Thanks,
>
> Ingo
--
Masami Hiramatsu <[email protected]>
* Masami Hiramatsu <[email protected]> wrote:
> On Fri, 29 Sep 2017 09:37:55 +0200
> Ingo Molnar <[email protected]> wrote:
>
> >
> > * Masami Hiramatsu <[email protected]> wrote:
> >
> > > On Thu, 28 Sep 2017 09:22:20 +0200
> > > Ingo Molnar <[email protected]> wrote:
> > >
> > > >
> > > > * Masami Hiramatsu <[email protected]> wrote:
> > > >
> > > > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > > > on trampoline code back on track.
> > > >
> > > > This sentence does not parse. It's missing a verb, but I'm not sure.
> > >
> > > Hmm, how about this?
> > >
> > > Use synchthnize_rcu_tasks() to wait for all tasks preempted
> > > on trampoline code back on track so that jump optimized probe
> > > can be enabled with CONFIG_PREEMPT.
> >
> > What's "synchthnize"? ...
>
> Oops, it's my typo. my XPS touch pad is really unstable...
>
> >
> > More seriously, I still don't understand it. What is 'back on track'?
> >
> > Do you mean to say:
> >
> > We want to wait for all potentially preempted kprobes trampoline execution to
> > have completed. This guarantees that any freed trampoline memory is not in use
> > by any task in the system anymore. synchronize_rcu_tasks() gives such a
> > guarantee, so use it.
>
> Exactly, this is correct!
Ok, great - please re-send the remaining kprobes patches that I have not applied
yet - I'll read through the changelogs and fix any bits that might still be
unclear.
Thanks,
Ingo
On Fri, 29 Sep 2017 19:45:28 +0200
Ingo Molnar <[email protected]> wrote:
>
> * Masami Hiramatsu <[email protected]> wrote:
>
> > On Fri, 29 Sep 2017 09:37:55 +0200
> > Ingo Molnar <[email protected]> wrote:
> >
> > >
> > > * Masami Hiramatsu <[email protected]> wrote:
> > >
> > > > On Thu, 28 Sep 2017 09:22:20 +0200
> > > > Ingo Molnar <[email protected]> wrote:
> > > >
> > > > >
> > > > > * Masami Hiramatsu <[email protected]> wrote:
> > > > >
> > > > > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > > > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > > > > on trampoline code back on track.
> > > > >
> > > > > This sentence does not parse. It's missing a verb, but I'm not sure.
> > > >
> > > > Hmm, how about this?
> > > >
> > > > Use synchthnize_rcu_tasks() to wait for all tasks preempted
> > > > on trampoline code back on track so that jump optimized probe
> > > > can be enabled with CONFIG_PREEMPT.
> > >
> > > What's "synchthnize"? ...
> >
> > Oops, it's my typo. my XPS touch pad is really unstable...
> >
> > >
> > > More seriously, I still don't understand it. What is 'back on track'?
> > >
> > > Do you mean to say:
> > >
> > > We want to wait for all potentially preempted kprobes trampoline execution to
> > > have completed. This guarantees that any freed trampoline memory is not in use
> > > by any task in the system anymore. synchronize_rcu_tasks() gives such a
> > > guarantee, so use it.
> >
> > Exactly, this is correct!
>
> Ok, great - please re-send the remaining kprobes patches that I have not applied
> yet - I'll read through the changelogs and fix any bits that might still be
> unclear.
OK, I got it. I'll check the remaining patches!
Thank you!
>
> Thanks,
>
> Ingo
--
Masami Hiramatsu <[email protected]>