Subject: [PATCH -tip 0/3] kprobes, x86: Move optprobe on top of int3-based instruction patching

Hi,

Here is a series of kprobes and x86 patches for moving
optprobe (jump optimized kprobe) onto the int3-based
instruction patching (a.k.a. text_poke_bp, introduced by
Jiri Kosina).

Since this completely moves all text_poke_smp* users to the
new text_poke_bp(), I also remove the old text_poke_smp*
code from alternative.c. As a side effect, it also fixes a
Kconfig warning about a CONFIG_STOP_MACHINE dependency
conflict.
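
To illustrate the change, here is a rough before/after sketch of how
an optimized kprobe's jump gets armed (simplified; see patch 2/3 for
the real code):

        /* Before: queue pokes and flush them via stop_machine() */
        setup_optimize_kprobe(&jump_poke_params[c], jump_poke_bufs[c].buf, op);
        ...
        text_poke_smp_batch(jump_poke_params, c);

        /* After: patch each probe directly with int3-based patching */
        insn_buf[0] = RELATIVEJUMP_OPCODE;
        *(s32 *)(&insn_buf[1]) = rel;
        text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
                     op->optinsn.insn);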

Thank you,

---

Masami Hiramatsu (3):
[CLEANUP] kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
x86: Remove unused text_poke_smp and text_poke_smp_batch


arch/x86/Kconfig | 5 --
arch/x86/include/asm/alternative.h | 11 ----
arch/x86/kernel/alternative.c | 98 +-------------------------------
arch/x86/kernel/kprobes/common.h | 5 --
arch/x86/kernel/kprobes/core.c | 2 -
arch/x86/kernel/kprobes/opt.c | 110 +++++++-----------------------------
6 files changed, 25 insertions(+), 206 deletions(-)

--
Masami Hiramatsu <[email protected]>
IT Management Research Dept. and Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory


Subject: [PATCH -tip 1/3] [CLEANUP] kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE

Remove a comment about an int3 issue in NMI/MCE, since
commit 3f3c8b8c already fixed that issue. Keeping this
incorrect comment can mislead developers.

Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/opt.c | 10 ----------
1 file changed, 10 deletions(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 76dc6f0..d7d8a8c 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -415,11 +415,6 @@ void __kprobes arch_optimize_kprobes(struct list_head *oplist)
break;
}

- /*
- * text_poke_smp doesn't support NMI/MCE code modifying.
- * However, since kprobes itself also doesn't support NMI/MCE
- * code probing, it's not a problem.
- */
text_poke_smp_batch(jump_poke_params, c);
}

@@ -455,11 +450,6 @@ extern void arch_unoptimize_kprobes(struct list_head *oplist,
break;
}

- /*
- * text_poke_smp doesn't support NMI/MCE code modifying.
- * However, since kprobes itself also doesn't support NMI/MCE
- * code probing, it's not a problem.
- */
text_poke_smp_batch(jump_poke_params, c);
}

Subject: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch

Since introducing the text_poke_bp for all text_poke_smp*
callers, text_poke_smp* are now unused. This basically
reverts 3d55cc8a, 7deb18dc and related commits.

This also fixes a Kconfig dependency issue on STOP_MACHINE
in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.

Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/Kconfig | 5 --
arch/x86/include/asm/alternative.h | 11 ----
arch/x86/kernel/alternative.c | 98 +-----------------------------------
3 files changed, 2 insertions(+), 112 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8dbbdef..9e1e0b6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -81,7 +81,6 @@ config X86
select HAVE_USER_RETURN_NOTIFIER
select ARCH_BINFMT_ELF_RANDOMIZE_PIE
select HAVE_ARCH_JUMP_LABEL
- select HAVE_TEXT_POKE_SMP
select HAVE_GENERIC_HARDIRQS
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select SPARSE_IRQ
@@ -2364,10 +2363,6 @@ config HAVE_ATOMIC_IOMAP
def_bool y
depends on X86_32

-config HAVE_TEXT_POKE_SMP
- bool
- select STOP_MACHINE if SMP
-
config X86_DEV_DMA_OPS
bool
depends on X86_64 || STA2X11
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 3abf8dd..4daf8c5 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -220,21 +220,10 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
* no thread can be preempted in the instructions being modified (no iret to an
* invalid instruction possible) or if the instructions are changed from a
* consistent state to another consistent state atomically.
- * More care must be taken when modifying code in the SMP case because of
- * Intel's errata. text_poke_smp() takes care that errata, but still
- * doesn't support NMI/MCE handler code modifying.
* On the local CPU you need to be protected again NMI or MCE handlers seeing an
* inconsistent instruction while you patch.
*/
-struct text_poke_param {
- void *addr;
- const void *opcode;
- size_t len;
-};
-
extern void *text_poke(void *addr, const void *opcode, size_t len);
extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
-extern void *text_poke_smp(void *addr, const void *opcode, size_t len);
-extern void text_poke_smp_batch(struct text_poke_param *params, int n);

#endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 0ab4936..5d8782e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -633,8 +633,8 @@ static int int3_notify(struct notifier_block *self, unsigned long val, void *dat
* @handler: address to jump to when the temporary breakpoint is hit
*
* Modify multi-byte instruction by using int3 breakpoint on SMP.
- * In contrary to text_poke_smp(), we completely avoid stop_machine() here,
- * and achieve the synchronization using int3 breakpoint.
+ * We completely avoid stop_machine() here, and achieve the
+ * synchronization using int3 breakpoint.
*
* The way it is done:
* - add a int3 trap to the address that will be patched
@@ -702,97 +702,3 @@ static int __init int3_init(void)
}

arch_initcall(int3_init);
-/*
- * Cross-modifying kernel text with stop_machine().
- * This code originally comes from immediate value.
- */
-static atomic_t stop_machine_first;
-static int wrote_text;
-
-struct text_poke_params {
- struct text_poke_param *params;
- int nparams;
-};
-
-static int __kprobes stop_machine_text_poke(void *data)
-{
- struct text_poke_params *tpp = data;
- struct text_poke_param *p;
- int i;
-
- if (atomic_xchg(&stop_machine_first, 0)) {
- for (i = 0; i < tpp->nparams; i++) {
- p = &tpp->params[i];
- text_poke(p->addr, p->opcode, p->len);
- }
- smp_wmb(); /* Make sure other cpus see that this has run */
- wrote_text = 1;
- } else {
- while (!wrote_text)
- cpu_relax();
- smp_mb(); /* Load wrote_text before following execution */
- }
-
- for (i = 0; i < tpp->nparams; i++) {
- p = &tpp->params[i];
- flush_icache_range((unsigned long)p->addr,
- (unsigned long)p->addr + p->len);
- }
- /*
- * Intel Archiecture Software Developer's Manual section 7.1.3 specifies
- * that a core serializing instruction such as "cpuid" should be
- * executed on _each_ core before the new instruction is made visible.
- */
- sync_core();
- return 0;
-}
-
-/**
- * text_poke_smp - Update instructions on a live kernel on SMP
- * @addr: address to modify
- * @opcode: source of the copy
- * @len: length to copy
- *
- * Modify multi-byte instruction by using stop_machine() on SMP. This allows
- * user to poke/set multi-byte text on SMP. Only non-NMI/MCE code modifying
- * should be allowed, since stop_machine() does _not_ protect code against
- * NMI and MCE.
- *
- * Note: Must be called under get_online_cpus() and text_mutex.
- */
-void *__kprobes text_poke_smp(void *addr, const void *opcode, size_t len)
-{
- struct text_poke_params tpp;
- struct text_poke_param p;
-
- p.addr = addr;
- p.opcode = opcode;
- p.len = len;
- tpp.params = &p;
- tpp.nparams = 1;
- atomic_set(&stop_machine_first, 1);
- wrote_text = 0;
- /* Use __stop_machine() because the caller already got online_cpus. */
- __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);
- return addr;
-}
-
-/**
- * text_poke_smp_batch - Update instructions on a live kernel on SMP
- * @params: an array of text_poke parameters
- * @n: the number of elements in params.
- *
- * Modify multi-byte instruction by using stop_machine() on SMP. Since the
- * stop_machine() is heavy task, it is better to aggregate text_poke requests
- * and do it once if possible.
- *
- * Note: Must be called under get_online_cpus() and text_mutex.
- */
-void __kprobes text_poke_smp_batch(struct text_poke_param *params, int n)
-{
- struct text_poke_params tpp = {.params = params, .nparams = n};
-
- atomic_set(&stop_machine_first, 1);
- wrote_text = 0;
- __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);
-}

Subject: [PATCH -tip 2/3] kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()

Use text_poke_bp() for optimizing kprobes instead of text_poke_smp*().
Since the number of kprobes is usually not so large (<100) and
text_poke_bp() is much lighter than text_poke_smp(), this
simply stops using batch processing.
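
For reference, text_poke_bp() takes a handler address to divert to
while the temporary int3 is installed; for optprobes we pass the
detour buffer, so a CPU hitting the transient breakpoint lands on the
same code the finished jump will target. A minimal sketch of the call
as used below (argument comments are illustrative, not from the diff):

        /* insn_buf[] holds the 5-byte relative jump to the detour buffer */
        text_poke_bp(op->kp.addr,          /* probed address being patched */
                     insn_buf,             /* new bytes: jmp rel32         */
                     RELATIVEJUMP_SIZE,    /* 5 bytes on x86               */
                     op->optinsn.insn);    /* where int3 hitters are sent  */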

Signed-off-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/common.h | 5 --
arch/x86/kernel/kprobes/core.c | 2 -
arch/x86/kernel/kprobes/opt.c | 100 ++++++++------------------------------
3 files changed, 23 insertions(+), 84 deletions(-)

diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
index 2e9d4b5..c6ee63f 100644
--- a/arch/x86/kernel/kprobes/common.h
+++ b/arch/x86/kernel/kprobes/common.h
@@ -82,14 +82,9 @@ extern void synthesize_reljump(void *from, void *to);
extern void synthesize_relcall(void *from, void *to);

#ifdef CONFIG_OPTPROBES
-extern int arch_init_optprobes(void);
extern int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter);
extern unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr);
#else /* !CONFIG_OPTPROBES */
-static inline int arch_init_optprobes(void)
-{
- return 0;
-}
static inline int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
{
return 0;
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 211bce4..cd49b2c 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1068,7 +1068,7 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)

int __init arch_init_kprobes(void)
{
- return arch_init_optprobes();
+ return 0;
}

int __kprobes arch_trampoline_kprobe(struct kprobe *p)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index d7d8a8c..d71e994 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -371,31 +371,6 @@ int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
return 0;
}

-#define MAX_OPTIMIZE_PROBES 256
-static struct text_poke_param *jump_poke_params;
-static struct jump_poke_buffer {
- u8 buf[RELATIVEJUMP_SIZE];
-} *jump_poke_bufs;
-
-static void __kprobes setup_optimize_kprobe(struct text_poke_param *tprm,
- u8 *insn_buf,
- struct optimized_kprobe *op)
-{
- s32 rel = (s32)((long)op->optinsn.insn -
- ((long)op->kp.addr + RELATIVEJUMP_SIZE));
-
- /* Backup instructions which will be replaced by jump address */
- memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
- RELATIVE_ADDR_SIZE);
-
- insn_buf[0] = RELATIVEJUMP_OPCODE;
- *(s32 *)(&insn_buf[1]) = rel;
-
- tprm->addr = op->kp.addr;
- tprm->opcode = insn_buf;
- tprm->len = RELATIVEJUMP_SIZE;
-}
-
/*
* Replace breakpoints (int3) with relative jumps.
* Caller must call with locking kprobe_mutex and text_mutex.
@@ -403,32 +378,38 @@ static void __kprobes setup_optimize_kprobe(struct text_poke_param *tprm,
void __kprobes arch_optimize_kprobes(struct list_head *oplist)
{
struct optimized_kprobe *op, *tmp;
- int c = 0;
+ u8 insn_buf[RELATIVEJUMP_SIZE];

list_for_each_entry_safe(op, tmp, oplist, list) {
+ s32 rel = (s32)((long)op->optinsn.insn -
+ ((long)op->kp.addr + RELATIVEJUMP_SIZE));
+
WARN_ON(kprobe_disabled(&op->kp));
- /* Setup param */
- setup_optimize_kprobe(&jump_poke_params[c],
- jump_poke_bufs[c].buf, op);
+
+ /* Backup instructions which will be replaced by jump address */
+ memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
+ RELATIVE_ADDR_SIZE);
+
+ insn_buf[0] = RELATIVEJUMP_OPCODE;
+ *(s32 *)(&insn_buf[1]) = rel;
+
+ text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
+ op->optinsn.insn);
+
list_del_init(&op->list);
- if (++c >= MAX_OPTIMIZE_PROBES)
- break;
}
-
- text_poke_smp_batch(jump_poke_params, c);
}

-static void __kprobes setup_unoptimize_kprobe(struct text_poke_param *tprm,
- u8 *insn_buf,
- struct optimized_kprobe *op)
+/* Replace a relative jump with a breakpoint (int3). */
+void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
{
+ u8 insn_buf[RELATIVEJUMP_SIZE];
+
/* Set int3 to first byte for kprobes */
insn_buf[0] = BREAKPOINT_INSTRUCTION;
memcpy(insn_buf + 1, op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
-
- tprm->addr = op->kp.addr;
- tprm->opcode = insn_buf;
- tprm->len = RELATIVEJUMP_SIZE;
+ text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
+ op->optinsn.insn);
}

/*
@@ -439,29 +420,11 @@ extern void arch_unoptimize_kprobes(struct list_head *oplist,
struct list_head *done_list)
{
struct optimized_kprobe *op, *tmp;
- int c = 0;

list_for_each_entry_safe(op, tmp, oplist, list) {
- /* Setup param */
- setup_unoptimize_kprobe(&jump_poke_params[c],
- jump_poke_bufs[c].buf, op);
+ arch_unoptimize_kprobe(op);
list_move(&op->list, done_list);
- if (++c >= MAX_OPTIMIZE_PROBES)
- break;
}
-
- text_poke_smp_batch(jump_poke_params, c);
-}
-
-/* Replace a relative jump with a breakpoint (int3). */
-void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
-{
- u8 buf[RELATIVEJUMP_SIZE];
-
- /* Set int3 to first byte for kprobes */
- buf[0] = BREAKPOINT_INSTRUCTION;
- memcpy(buf + 1, op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
- text_poke_smp(op->kp.addr, buf, RELATIVEJUMP_SIZE);
}

int __kprobes
@@ -481,22 +444,3 @@ setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
}
return 0;
}
-
-int __kprobes arch_init_optprobes(void)
-{
- /* Allocate code buffer and parameter array */
- jump_poke_bufs = kmalloc(sizeof(struct jump_poke_buffer) *
- MAX_OPTIMIZE_PROBES, GFP_KERNEL);
- if (!jump_poke_bufs)
- return -ENOMEM;
-
- jump_poke_params = kmalloc(sizeof(struct text_poke_param) *
- MAX_OPTIMIZE_PROBES, GFP_KERNEL);
- if (!jump_poke_params) {
- kfree(jump_poke_bufs);
- jump_poke_bufs = NULL;
- return -ENOMEM;
- }
-
- return 0;
-}

2013-07-18 22:24:58

by Jiri Kosina

Subject: Re: [PATCH -tip 0/3] kprobes, x86: Move optprobe on top of int3-based instruction patching

On Thu, 18 Jul 2013, Masami Hiramatsu wrote:

> Hi,
>
> Here is a series of kprobes and x86 patches for moving
> optprobe (jump optimized kprobe) onto the int3-based
> instruction patching (a.k.a. text_poke_bp, introduced by
> Jiri Kosina).
>
> Since this completely moves all text_poke_smp* users to the
> new text_poke_bp(), I also remove the old text_poke_smp*
> code from alternative.c. As a side effect, it also fixes a
> Kconfig warning about a CONFIG_STOP_MACHINE dependency
> conflict.
>
> Thank you,
>
> ---
>
> Masami Hiramatsu (3):
> [CLEANUP] kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
> kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
> x86: Remove unused text_poke_smp and text_poke_smp_batch

Nice.

Reviewed-by: Jiri Kosina <[email protected]>

Thanks.

--
Jiri Kosina
SUSE Labs

2013-07-20 01:36:01

by Steven Rostedt

Subject: Re: [PATCH -tip 0/3] kprobes, x86: Move optprobe on top of int3-based instruction patching

On Thu, 2013-07-18 at 20:47 +0900, Masami Hiramatsu wrote:
> Hi,
>
> Here is a series of kprobes and x86 patches for moving
> optprobe (jump optimized kprobe) onto the int3-based
> instruction patching (a.k.a. text_poke_bp, introduced by
> Jiri Kosina).
>
> Since this completely moves all text_poke_smp* users to the
> new text_poke_bp(), I also remove the old text_poke_smp*
> code from alternative.c. As a side effect, it also fixes a
> Kconfig warning about a CONFIG_STOP_MACHINE dependency
> conflict.
>
> Thank you,
>
> ---
>
> Masami Hiramatsu (3):
> [CLEANUP] kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
> kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
> x86: Remove unused text_poke_smp and text_poke_smp_batch

Nice work Masami!

Reviewed-by: Steven Rostedt <[email protected]>

-- Steve

>
>
> arch/x86/Kconfig | 5 --
> arch/x86/include/asm/alternative.h | 11 ----
> arch/x86/kernel/alternative.c | 98 +-------------------------------
> arch/x86/kernel/kprobes/common.h | 5 --
> arch/x86/kernel/kprobes/core.c | 2 -
> arch/x86/kernel/kprobes/opt.c | 110 +++++++-----------------------------
> 6 files changed, 25 insertions(+), 206 deletions(-)
>

Subject: [tip:perf/core] kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE

Commit-ID: c7e85c42be68fca743df58a306edd29aa295e155
Gitweb: http://git.kernel.org/tip/c7e85c42be68fca743df58a306edd29aa295e155
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Thu, 18 Jul 2013 20:47:47 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 19 Jul 2013 09:57:03 +0200

kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE

Remove a comment about an int3 issue in NMI/MCE, since
commit:

3f3c8b8c4b2a ("x86: Add workaround to NMI iret woes")

already fixed that. Keeping this incorrect comment can mislead developers.

Signed-off-by: Masami Hiramatsu <[email protected]>
Reviewed-by: Jiri Kosina <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Jason Baron <[email protected]>
Cc: [email protected]
Cc: Borislav Petkov <[email protected]>
Link: http://lkml.kernel.org/r/20130718114747.26675.84110.stgit@mhiramat-M0-7522
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes/opt.c | 10 ----------
1 file changed, 10 deletions(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 76dc6f0..d7d8a8c 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -415,11 +415,6 @@ void __kprobes arch_optimize_kprobes(struct list_head *oplist)
break;
}

- /*
- * text_poke_smp doesn't support NMI/MCE code modifying.
- * However, since kprobes itself also doesn't support NMI/MCE
- * code probing, it's not a problem.
- */
text_poke_smp_batch(jump_poke_params, c);
}

@@ -455,11 +450,6 @@ extern void arch_unoptimize_kprobes(struct list_head *oplist,
break;
}

- /*
- * text_poke_smp doesn't support NMI/MCE code modifying.
- * However, since kprobes itself also doesn't support NMI/MCE
- * code probing, it's not a problem.
- */
text_poke_smp_batch(jump_poke_params, c);
}

Subject: [tip:perf/core] kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()

Commit-ID: a7b0133ea94e4421a81702d5c0e6dcdbbbab8f6b
Gitweb: http://git.kernel.org/tip/a7b0133ea94e4421a81702d5c0e6dcdbbbab8f6b
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Thu, 18 Jul 2013 20:47:50 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 19 Jul 2013 09:57:04 +0200

kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()

Use text_poke_bp() for optimizing kprobes instead of
text_poke_smp*(). Since the number of kprobes is usually not so
large (<100) and text_poke_bp() is much lighter than
text_poke_smp() [which uses stop_machine()], this just stops
using batch processing.

Signed-off-by: Masami Hiramatsu <[email protected]>
Reviewed-by: Jiri Kosina <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Jason Baron <[email protected]>
Cc: [email protected]
Cc: Borislav Petkov <[email protected]>
Link: http://lkml.kernel.org/r/20130718114750.26675.9174.stgit@mhiramat-M0-7522
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes/common.h | 5 --
arch/x86/kernel/kprobes/core.c | 2 +-
arch/x86/kernel/kprobes/opt.c | 100 +++++++++------------------------------
3 files changed, 23 insertions(+), 84 deletions(-)

diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
index 2e9d4b5..c6ee63f 100644
--- a/arch/x86/kernel/kprobes/common.h
+++ b/arch/x86/kernel/kprobes/common.h
@@ -82,14 +82,9 @@ extern void synthesize_reljump(void *from, void *to);
extern void synthesize_relcall(void *from, void *to);

#ifdef CONFIG_OPTPROBES
-extern int arch_init_optprobes(void);
extern int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter);
extern unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr);
#else /* !CONFIG_OPTPROBES */
-static inline int arch_init_optprobes(void)
-{
- return 0;
-}
static inline int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
{
return 0;
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 211bce4..cd49b2c 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1068,7 +1068,7 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)

int __init arch_init_kprobes(void)
{
- return arch_init_optprobes();
+ return 0;
}

int __kprobes arch_trampoline_kprobe(struct kprobe *p)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index d7d8a8c..d71e994 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -371,31 +371,6 @@ int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
return 0;
}

-#define MAX_OPTIMIZE_PROBES 256
-static struct text_poke_param *jump_poke_params;
-static struct jump_poke_buffer {
- u8 buf[RELATIVEJUMP_SIZE];
-} *jump_poke_bufs;
-
-static void __kprobes setup_optimize_kprobe(struct text_poke_param *tprm,
- u8 *insn_buf,
- struct optimized_kprobe *op)
-{
- s32 rel = (s32)((long)op->optinsn.insn -
- ((long)op->kp.addr + RELATIVEJUMP_SIZE));
-
- /* Backup instructions which will be replaced by jump address */
- memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
- RELATIVE_ADDR_SIZE);
-
- insn_buf[0] = RELATIVEJUMP_OPCODE;
- *(s32 *)(&insn_buf[1]) = rel;
-
- tprm->addr = op->kp.addr;
- tprm->opcode = insn_buf;
- tprm->len = RELATIVEJUMP_SIZE;
-}
-
/*
* Replace breakpoints (int3) with relative jumps.
* Caller must call with locking kprobe_mutex and text_mutex.
@@ -403,32 +378,38 @@ static void __kprobes setup_optimize_kprobe(struct text_poke_param *tprm,
void __kprobes arch_optimize_kprobes(struct list_head *oplist)
{
struct optimized_kprobe *op, *tmp;
- int c = 0;
+ u8 insn_buf[RELATIVEJUMP_SIZE];

list_for_each_entry_safe(op, tmp, oplist, list) {
+ s32 rel = (s32)((long)op->optinsn.insn -
+ ((long)op->kp.addr + RELATIVEJUMP_SIZE));
+
WARN_ON(kprobe_disabled(&op->kp));
- /* Setup param */
- setup_optimize_kprobe(&jump_poke_params[c],
- jump_poke_bufs[c].buf, op);
+
+ /* Backup instructions which will be replaced by jump address */
+ memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
+ RELATIVE_ADDR_SIZE);
+
+ insn_buf[0] = RELATIVEJUMP_OPCODE;
+ *(s32 *)(&insn_buf[1]) = rel;
+
+ text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
+ op->optinsn.insn);
+
list_del_init(&op->list);
- if (++c >= MAX_OPTIMIZE_PROBES)
- break;
}
-
- text_poke_smp_batch(jump_poke_params, c);
}

-static void __kprobes setup_unoptimize_kprobe(struct text_poke_param *tprm,
- u8 *insn_buf,
- struct optimized_kprobe *op)
+/* Replace a relative jump with a breakpoint (int3). */
+void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
{
+ u8 insn_buf[RELATIVEJUMP_SIZE];
+
/* Set int3 to first byte for kprobes */
insn_buf[0] = BREAKPOINT_INSTRUCTION;
memcpy(insn_buf + 1, op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
-
- tprm->addr = op->kp.addr;
- tprm->opcode = insn_buf;
- tprm->len = RELATIVEJUMP_SIZE;
+ text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
+ op->optinsn.insn);
}

/*
@@ -439,29 +420,11 @@ extern void arch_unoptimize_kprobes(struct list_head *oplist,
struct list_head *done_list)
{
struct optimized_kprobe *op, *tmp;
- int c = 0;

list_for_each_entry_safe(op, tmp, oplist, list) {
- /* Setup param */
- setup_unoptimize_kprobe(&jump_poke_params[c],
- jump_poke_bufs[c].buf, op);
+ arch_unoptimize_kprobe(op);
list_move(&op->list, done_list);
- if (++c >= MAX_OPTIMIZE_PROBES)
- break;
}
-
- text_poke_smp_batch(jump_poke_params, c);
-}
-
-/* Replace a relative jump with a breakpoint (int3). */
-void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
-{
- u8 buf[RELATIVEJUMP_SIZE];
-
- /* Set int3 to first byte for kprobes */
- buf[0] = BREAKPOINT_INSTRUCTION;
- memcpy(buf + 1, op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
- text_poke_smp(op->kp.addr, buf, RELATIVEJUMP_SIZE);
}

int __kprobes
@@ -481,22 +444,3 @@ setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
}
return 0;
}
-
-int __kprobes arch_init_optprobes(void)
-{
- /* Allocate code buffer and parameter array */
- jump_poke_bufs = kmalloc(sizeof(struct jump_poke_buffer) *
- MAX_OPTIMIZE_PROBES, GFP_KERNEL);
- if (!jump_poke_bufs)
- return -ENOMEM;
-
- jump_poke_params = kmalloc(sizeof(struct text_poke_param) *
- MAX_OPTIMIZE_PROBES, GFP_KERNEL);
- if (!jump_poke_params) {
- kfree(jump_poke_bufs);
- jump_poke_bufs = NULL;
- return -ENOMEM;
- }
-
- return 0;
-}

Subject: [tip:perf/core] kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch() functions

Commit-ID: ea8596bb2d8d37957f3e92db9511c50801689180
Gitweb: http://git.kernel.org/tip/ea8596bb2d8d37957f3e92db9511c50801689180
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Thu, 18 Jul 2013 20:47:53 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 19 Jul 2013 09:57:04 +0200

kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch() functions

Since introducing the text_poke_bp() for all text_poke_smp*()
callers, text_poke_smp*() are now unused. This patch basically
reverts:

3d55cc8a058e ("x86: Add text_poke_smp for SMP cross modifying code")
7deb18dcf047 ("x86: Introduce text_poke_smp_batch() for batch-code modifying")

and related commits.

This patch also fixes a Kconfig dependency issue on STOP_MACHINE
in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.

Signed-off-by: Masami Hiramatsu <[email protected]>
Reviewed-by: Jiri Kosina <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Jason Baron <[email protected]>
Cc: [email protected]
Cc: Borislav Petkov <[email protected]>
Link: http://lkml.kernel.org/r/20130718114753.26675.18714.stgit@mhiramat-M0-7522
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/Kconfig | 5 --
arch/x86/include/asm/alternative.h | 11 -----
arch/x86/kernel/alternative.c | 98 +-------------------------------------
3 files changed, 2 insertions(+), 112 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b32ebf9..3106e0e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -81,7 +81,6 @@ config X86
select HAVE_USER_RETURN_NOTIFIER
select ARCH_BINFMT_ELF_RANDOMIZE_PIE
select HAVE_ARCH_JUMP_LABEL
- select HAVE_TEXT_POKE_SMP
select HAVE_GENERIC_HARDIRQS
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select SPARSE_IRQ
@@ -2332,10 +2331,6 @@ config HAVE_ATOMIC_IOMAP
def_bool y
depends on X86_32

-config HAVE_TEXT_POKE_SMP
- bool
- select STOP_MACHINE if SMP
-
config X86_DEV_DMA_OPS
bool
depends on X86_64 || STA2X11
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 3abf8dd..4daf8c5 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -220,21 +220,10 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
* no thread can be preempted in the instructions being modified (no iret to an
* invalid instruction possible) or if the instructions are changed from a
* consistent state to another consistent state atomically.
- * More care must be taken when modifying code in the SMP case because of
- * Intel's errata. text_poke_smp() takes care that errata, but still
- * doesn't support NMI/MCE handler code modifying.
* On the local CPU you need to be protected again NMI or MCE handlers seeing an
* inconsistent instruction while you patch.
*/
-struct text_poke_param {
- void *addr;
- const void *opcode;
- size_t len;
-};
-
extern void *text_poke(void *addr, const void *opcode, size_t len);
extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
-extern void *text_poke_smp(void *addr, const void *opcode, size_t len);
-extern void text_poke_smp_batch(struct text_poke_param *params, int n);

#endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 0ab4936..5d8782e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -633,8 +633,8 @@ static int int3_notify(struct notifier_block *self, unsigned long val, void *dat
* @handler: address to jump to when the temporary breakpoint is hit
*
* Modify multi-byte instruction by using int3 breakpoint on SMP.
- * In contrary to text_poke_smp(), we completely avoid stop_machine() here,
- * and achieve the synchronization using int3 breakpoint.
+ * We completely avoid stop_machine() here, and achieve the
+ * synchronization using int3 breakpoint.
*
* The way it is done:
* - add a int3 trap to the address that will be patched
@@ -702,97 +702,3 @@ static int __init int3_init(void)
}

arch_initcall(int3_init);
-/*
- * Cross-modifying kernel text with stop_machine().
- * This code originally comes from immediate value.
- */
-static atomic_t stop_machine_first;
-static int wrote_text;
-
-struct text_poke_params {
- struct text_poke_param *params;
- int nparams;
-};
-
-static int __kprobes stop_machine_text_poke(void *data)
-{
- struct text_poke_params *tpp = data;
- struct text_poke_param *p;
- int i;
-
- if (atomic_xchg(&stop_machine_first, 0)) {
- for (i = 0; i < tpp->nparams; i++) {
- p = &tpp->params[i];
- text_poke(p->addr, p->opcode, p->len);
- }
- smp_wmb(); /* Make sure other cpus see that this has run */
- wrote_text = 1;
- } else {
- while (!wrote_text)
- cpu_relax();
- smp_mb(); /* Load wrote_text before following execution */
- }
-
- for (i = 0; i < tpp->nparams; i++) {
- p = &tpp->params[i];
- flush_icache_range((unsigned long)p->addr,
- (unsigned long)p->addr + p->len);
- }
- /*
- * Intel Archiecture Software Developer's Manual section 7.1.3 specifies
- * that a core serializing instruction such as "cpuid" should be
- * executed on _each_ core before the new instruction is made visible.
- */
- sync_core();
- return 0;
-}
-
-/**
- * text_poke_smp - Update instructions on a live kernel on SMP
- * @addr: address to modify
- * @opcode: source of the copy
- * @len: length to copy
- *
- * Modify multi-byte instruction by using stop_machine() on SMP. This allows
- * user to poke/set multi-byte text on SMP. Only non-NMI/MCE code modifying
- * should be allowed, since stop_machine() does _not_ protect code against
- * NMI and MCE.
- *
- * Note: Must be called under get_online_cpus() and text_mutex.
- */
-void *__kprobes text_poke_smp(void *addr, const void *opcode, size_t len)
-{
- struct text_poke_params tpp;
- struct text_poke_param p;
-
- p.addr = addr;
- p.opcode = opcode;
- p.len = len;
- tpp.params = &p;
- tpp.nparams = 1;
- atomic_set(&stop_machine_first, 1);
- wrote_text = 0;
- /* Use __stop_machine() because the caller already got online_cpus. */
- __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);
- return addr;
-}
-
-/**
- * text_poke_smp_batch - Update instructions on a live kernel on SMP
- * @params: an array of text_poke parameters
- * @n: the number of elements in params.
- *
- * Modify multi-byte instruction by using stop_machine() on SMP. Since the
- * stop_machine() is heavy task, it is better to aggregate text_poke requests
- * and do it once if possible.
- *
- * Note: Must be called under get_online_cpus() and text_mutex.
- */
-void __kprobes text_poke_smp_batch(struct text_poke_param *params, int n)
-{
- struct text_poke_params tpp = {.params = params, .nparams = n};
-
- atomic_set(&stop_machine_first, 1);
- wrote_text = 0;
- __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);
-}

Subject: [tip:x86/jumplabel] kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch() functions

Commit-ID: 70039118a1fc5c5dae86f8513ff7c1354f74b7ae
Gitweb: http://git.kernel.org/tip/70039118a1fc5c5dae86f8513ff7c1354f74b7ae
Author: Masami Hiramatsu <[email protected]>
AuthorDate: Thu, 18 Jul 2013 20:47:53 +0900
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 22 Jul 2013 09:05:06 +0200

kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch() functions

Since introducing the text_poke_bp() for all text_poke_smp*()
callers, text_poke_smp*() are now unused. This patch basically
reverts:

3d55cc8a058e ("x86: Add text_poke_smp for SMP cross modifying code")
7deb18dcf047 ("x86: Introduce text_poke_smp_batch() for batch-code modifying")

and related commits.

This patch also fixes a Kconfig dependency issue on STOP_MACHINE
in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.

Signed-off-by: Masami Hiramatsu <[email protected]>
Reviewed-by: Jiri Kosina <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Jason Baron <[email protected]>
Cc: [email protected]
Cc: Borislav Petkov <[email protected]>
Link: http://lkml.kernel.org/r/20130718114753.26675.18714.stgit@mhiramat-M0-7522
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/Kconfig | 5 --
arch/x86/include/asm/alternative.h | 11 -----
arch/x86/kernel/alternative.c | 98 +-------------------------------------
3 files changed, 2 insertions(+), 112 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b32ebf9..3106e0e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -81,7 +81,6 @@ config X86
select HAVE_USER_RETURN_NOTIFIER
select ARCH_BINFMT_ELF_RANDOMIZE_PIE
select HAVE_ARCH_JUMP_LABEL
- select HAVE_TEXT_POKE_SMP
select HAVE_GENERIC_HARDIRQS
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select SPARSE_IRQ
@@ -2332,10 +2331,6 @@ config HAVE_ATOMIC_IOMAP
def_bool y
depends on X86_32

-config HAVE_TEXT_POKE_SMP
- bool
- select STOP_MACHINE if SMP
-
config X86_DEV_DMA_OPS
bool
depends on X86_64 || STA2X11
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 3abf8dd..4daf8c5 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -220,21 +220,10 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
* no thread can be preempted in the instructions being modified (no iret to an
* invalid instruction possible) or if the instructions are changed from a
* consistent state to another consistent state atomically.
- * More care must be taken when modifying code in the SMP case because of
- * Intel's errata. text_poke_smp() takes care that errata, but still
- * doesn't support NMI/MCE handler code modifying.
* On the local CPU you need to be protected again NMI or MCE handlers seeing an
* inconsistent instruction while you patch.
*/
-struct text_poke_param {
- void *addr;
- const void *opcode;
- size_t len;
-};
-
extern void *text_poke(void *addr, const void *opcode, size_t len);
extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
-extern void *text_poke_smp(void *addr, const void *opcode, size_t len);
-extern void text_poke_smp_batch(struct text_poke_param *params, int n);

#endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 0ab4936..5d8782e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -633,8 +633,8 @@ static int int3_notify(struct notifier_block *self, unsigned long val, void *dat
* @handler: address to jump to when the temporary breakpoint is hit
*
* Modify multi-byte instruction by using int3 breakpoint on SMP.
- * In contrary to text_poke_smp(), we completely avoid stop_machine() here,
- * and achieve the synchronization using int3 breakpoint.
+ * We completely avoid stop_machine() here, and achieve the
+ * synchronization using int3 breakpoint.
*
* The way it is done:
* - add a int3 trap to the address that will be patched
@@ -702,97 +702,3 @@ static int __init int3_init(void)
}

arch_initcall(int3_init);
-/*
- * Cross-modifying kernel text with stop_machine().
- * This code originally comes from immediate value.
- */
-static atomic_t stop_machine_first;
-static int wrote_text;
-
-struct text_poke_params {
- struct text_poke_param *params;
- int nparams;
-};
-
-static int __kprobes stop_machine_text_poke(void *data)
-{
- struct text_poke_params *tpp = data;
- struct text_poke_param *p;
- int i;
-
- if (atomic_xchg(&stop_machine_first, 0)) {
- for (i = 0; i < tpp->nparams; i++) {
- p = &tpp->params[i];
- text_poke(p->addr, p->opcode, p->len);
- }
- smp_wmb(); /* Make sure other cpus see that this has run */
- wrote_text = 1;
- } else {
- while (!wrote_text)
- cpu_relax();
- smp_mb(); /* Load wrote_text before following execution */
- }
-
- for (i = 0; i < tpp->nparams; i++) {
- p = &tpp->params[i];
- flush_icache_range((unsigned long)p->addr,
- (unsigned long)p->addr + p->len);
- }
- /*
- * Intel Archiecture Software Developer's Manual section 7.1.3 specifies
- * that a core serializing instruction such as "cpuid" should be
- * executed on _each_ core before the new instruction is made visible.
- */
- sync_core();
- return 0;
-}
-
-/**
- * text_poke_smp - Update instructions on a live kernel on SMP
- * @addr: address to modify
- * @opcode: source of the copy
- * @len: length to copy
- *
- * Modify multi-byte instruction by using stop_machine() on SMP. This allows
- * user to poke/set multi-byte text on SMP. Only non-NMI/MCE code modifying
- * should be allowed, since stop_machine() does _not_ protect code against
- * NMI and MCE.
- *
- * Note: Must be called under get_online_cpus() and text_mutex.
- */
-void *__kprobes text_poke_smp(void *addr, const void *opcode, size_t len)
-{
- struct text_poke_params tpp;
- struct text_poke_param p;
-
- p.addr = addr;
- p.opcode = opcode;
- p.len = len;
- tpp.params = &p;
- tpp.nparams = 1;
- atomic_set(&stop_machine_first, 1);
- wrote_text = 0;
- /* Use __stop_machine() because the caller already got online_cpus. */
- __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);
- return addr;
-}
-
-/**
- * text_poke_smp_batch - Update instructions on a live kernel on SMP
- * @params: an array of text_poke parameters
- * @n: the number of elements in params.
- *
- * Modify multi-byte instruction by using stop_machine() on SMP. Since the
- * stop_machine() is heavy task, it is better to aggregate text_poke requests
- * and do it once if possible.
- *
- * Note: Must be called under get_online_cpus() and text_mutex.
- */
-void __kprobes text_poke_smp_batch(struct text_poke_param *params, int n)
-{
- struct text_poke_params tpp = {.params = params, .nparams = n};
-
- atomic_set(&stop_machine_first, 1);
- wrote_text = 0;
- __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);
-}

2013-07-22 07:43:48

by Ingo Molnar

Subject: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch


* Masami Hiramatsu <[email protected]> wrote:

> Since introducing the text_poke_bp for all text_poke_smp*
> callers, text_poke_smp* are now unused. This basically
> reverts 3d55cc8a, 7deb18dc and related commits.
>
> This also fixes a Kconfig dependency issue on STOP_MACHINE
> in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.
>
> Signed-off-by: Masami Hiramatsu <[email protected]>
> ---
> arch/x86/Kconfig | 5 --
> arch/x86/include/asm/alternative.h | 11 ----
> arch/x86/kernel/alternative.c | 98 +-----------------------------------
> 3 files changed, 2 insertions(+), 112 deletions(-)

Hm, it does not build with the attached config:

arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type

Thanks,

Ingo


Attachments:
config (87.94 kB)

Subject: Re: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch

(2013/07/22 16:43), Ingo Molnar wrote:
> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type

Hmm, setup_optimize_kprobe() is already removed by 2/3 (this is actually
a series of tightly coupled patches). Could you check your working tree?

Thank you,

--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

2013-07-22 08:21:11

by Ingo Molnar

Subject: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch


* Ingo Molnar <[email protected]> wrote:

>
> * Masami Hiramatsu <[email protected]> wrote:
>
> > Since introducing the text_poke_bp for all text_poke_smp*
> > callers, text_poke_smp* are now unused. This basically
> > reverts 3d55cc8a, 7deb18dc and related commits.
> >
> > This also fixes a Kconfig dependency issue on STOP_MACHINE
> > in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.
> >
> > Signed-off-by: Masami Hiramatsu <[email protected]>
> > ---
> > arch/x86/Kconfig | 5 --
> > arch/x86/include/asm/alternative.h | 11 ----
> > arch/x86/kernel/alternative.c | 98 +-----------------------------------
> > 3 files changed, 2 insertions(+), 112 deletions(-)
>
> Hm, it does not build with the attached config:
>
> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type

32-bit appears to be the key pattern.

Thanks,

Ingo

Subject: Re: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch

(2013/07/22 17:21), Ingo Molnar wrote:
>
> * Ingo Molnar <[email protected]> wrote:
>
>>
>> * Masami Hiramatsu <[email protected]> wrote:
>>
>>> Since introducing the text_poke_bp for all text_poke_smp*
>>> callers, text_poke_smp* are now unused. This basically
>>> reverts 3d55cc8a, 7deb18dc and related commits.
>>>
>>> This also fixes a Kconfig dependency issue on STOP_MACHINE
>>> in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.
>>>
>>> Signed-off-by: Masami Hiramatsu <[email protected]>
>>> ---
>>> arch/x86/Kconfig | 5 --
>>> arch/x86/include/asm/alternative.h | 11 ----
>>> arch/x86/kernel/alternative.c | 98 +-----------------------------------
>>> 3 files changed, 2 insertions(+), 112 deletions(-)
>>
>> Hm, it does not build with the attached config:
>>
>> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
>> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
>> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
>> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type
>
> 32-bit appears to be the key pattern.

Ingo, please make sure your x86/jumplabel tree picked all of my patches.
This 3/3 strongly depends on the previous 1/3 and 2/3.
Without them, kprobes/opt.c cannot be built because it still uses the
text_poke_params data structure which 3/3 removes.

Thank you,

--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

2013-07-22 10:02:07

by Ingo Molnar

Subject: Re: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch


* Masami Hiramatsu <[email protected]> wrote:

> (2013/07/22 17:21), Ingo Molnar wrote:
> >
> > * Ingo Molnar <[email protected]> wrote:
> >
> >>
> >> * Masami Hiramatsu <[email protected]> wrote:
> >>
> >>> Since introducing the text_poke_bp for all text_poke_smp*
> >>> callers, text_poke_smp* are now unused. This basically
> >>> reverts 3d55cc8a, 7deb18dc and related commits.
> >>>
> >>> This also fixes a Kconfig dependency issue on STOP_MACHINE
> >>> in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.
> >>>
> >>> Signed-off-by: Masami Hiramatsu <[email protected]>
> >>> ---
> >>> arch/x86/Kconfig | 5 --
> >>> arch/x86/include/asm/alternative.h | 11 ----
> >>> arch/x86/kernel/alternative.c | 98 +-----------------------------------
> >>> 3 files changed, 2 insertions(+), 112 deletions(-)
> >>
> >> Hm, it does not build with the attached config:
> >>
> >> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
> >> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
> >> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
> >> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type
> >
> > 32-bit appears to be the key pattern.
>
> Ingo, please make sure your x86/jumplabel tree picked all of my patches.
> This 3/3 strongly depends on the previous 1/3 and 2/3.
> Without it, kprobes/opt.c can not be build because it still uses the
> text_poke_params data structure which 3/3 removes.

Yes I know, and it was on top of -tip which already had these included:

51b2c07b2226 x86: Make jump_label use int3-based patching
fd4363fff3d9 x86: Introduce int3 (breakpoint)-based instruction patching

without these the third patch wouldn't even apply I think.

Thanks,

Ingo

Subject: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch

(2013/07/22 19:01), Ingo Molnar wrote:
>
> * Masami Hiramatsu <[email protected]> wrote:
>
>> (2013/07/22 17:21), Ingo Molnar wrote:
>>>
>>> * Ingo Molnar <[email protected]> wrote:
>>>
>>>>
>>>> * Masami Hiramatsu <[email protected]> wrote:
>>>>
>>>>> Since introducing the text_poke_bp for all text_poke_smp*
>>>>> callers, text_poke_smp* are now unused. This basically
>>>>> reverts 3d55cc8a, 7deb18dc and related commits.
>>>>>
>>>>> This also fixes a Kconfig dependency issue on STOP_MACHINE
>>>>> in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.
>>>>>
>>>>> Signed-off-by: Masami Hiramatsu <[email protected]>
>>>>> ---
>>>>> arch/x86/Kconfig | 5 --
>>>>> arch/x86/include/asm/alternative.h | 11 ----
>>>>> arch/x86/kernel/alternative.c | 98 +-----------------------------------
>>>>> 3 files changed, 2 insertions(+), 112 deletions(-)
>>>>
>>>> Hm, it does not build with the attached config:
>>>>
>>>> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
>>>> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
>>>> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
>>>> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type
>>>
>>> 32-bit appears to be the key pattern.
>>
>> Ingo, please make sure your x86/jumplabel tree picked all of my patches.
>> This 3/3 strongly depends on the previous 1/3 and 2/3.
> Without them, kprobes/opt.c cannot be built because it still uses the
>> text_poke_params data structure which 3/3 removes.
>
> Yes I know, and it was on top of -tip which already had these included:
>
> 51b2c07b2226 x86: Make jump_label use int3-based patching
> fd4363fff3d9 x86: Introduce int3 (breakpoint)-based instruction patching
>
> without these the third patch wouldn't even apply I think.

No, I meant below two patches:
kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()

Since the "setup_optimize_kprobe" in the error message is completely
*removed* by the second one, the above error should not happen.

Thank you,


--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

2013-07-22 10:21:53

by Ingo Molnar

Subject: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch


* Masami Hiramatsu <[email protected]> wrote:

> (2013/07/22 19:01), Ingo Molnar wrote:
> >
> > * Masami Hiramatsu <[email protected]> wrote:
> >
> >> (2013/07/22 17:21), Ingo Molnar wrote:
> >>>
> >>> * Ingo Molnar <[email protected]> wrote:
> >>>
> >>>>
> >>>> * Masami Hiramatsu <[email protected]> wrote:
> >>>>
> >>>>> Since introducing the text_poke_bp for all text_poke_smp*
> >>>>> callers, text_poke_smp* are now unused. This basically
> >>>>> reverts 3d55cc8a, 7deb18dc and related commits.
> >>>>>
> >>>>> This also fixes a Kconfig dependency issue on STOP_MACHINE
> >>>>> in the case of CONFIG_SMP && !CONFIG_MODULE_UNLOAD.
> >>>>>
> >>>>> Signed-off-by: Masami Hiramatsu <[email protected]>
> >>>>> ---
> >>>>> arch/x86/Kconfig | 5 --
> >>>>> arch/x86/include/asm/alternative.h | 11 ----
> >>>>> arch/x86/kernel/alternative.c | 98 +-----------------------------------
> >>>>> 3 files changed, 2 insertions(+), 112 deletions(-)
> >>>>
> >>>> Hm, it does not build with the attached config:
> >>>>
> >>>> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
> >>>> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
> >>>> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
> >>>> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type
> >>>
> >>> 32-bit appears to be the key pattern.
> >>
> >> Ingo, please make sure your x86/jumplabel tree picked all of my patches.
> >> This 3/3 strongly depends on the previous 1/3 and 2/3.
> >> Without them, kprobes/opt.c cannot be built because it still uses the
> >> text_poke_params data structure which 3/3 removes.
> >
> > Yes I know, and it was on top of -tip which already had these included:
> >
> > 51b2c07b2226 x86: Make jump_label use int3-based patching
> > fd4363fff3d9 x86: Introduce int3 (breakpoint)-based instruction patching
> >
> > without these the third patch wouldn't even apply I think.
>
> No, I meant below two patches:
> kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
> kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
>
> Since the "setup_optimize_kprobe" in the error message is completely
> *removed* by the second one, the above error should not happen.

Oh, indeed - I correctly applied them but then messed up the
cherry-picking, that's all.

So never mind, it's all fine.

Thanks,

Ingo

2013-07-22 13:53:09

by Jiri Kosina

Subject: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch

On Mon, 22 Jul 2013, Ingo Molnar wrote:

> > >>>> Hm, it does not build with the attached config:
> > >>>>
> > >>>> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
> > >>>> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
> > >>>> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
> > >>>> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type
> > >>>
> > >>> 32-bit appears to be the key pattern.
> > >>
> > >> Ingo, please make sure your x86/jumplabel tree picked all of my patches.
> > >> This 3/3 strongly depends on the previous 1/3 and 2/3.
> > >> Without them, kprobes/opt.c cannot be built because it still uses the
> > >> text_poke_params data structure which 3/3 removes.
> > >
> > > Yes I know, and it was on top of -tip which already had these included:
> > >
> > > 51b2c07b2226 x86: Make jump_label use int3-based patching
> > > fd4363fff3d9 x86: Introduce int3 (breakpoint)-based instruction patching
> > >
> > > without these the third patch wouldn't even apply I think.
> >
> > No, I meant below two patches:
> > kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
> > kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
> >
> > Since the "setup_optimize_kprobe" in the error message is completely
> > *removed* by the second one, the above error should not happen.
>
> Oh, indeed - I correctly applied them but then messed up the
> cherry-picking, that's all.
>
> So never mind, it's all fine.

Hi Ingo,

x86/jumplabel branch still seems to contain only Masami's "kprobes/x86:
Remove unused text_poke_smp() and text_poke_smp_batch() functions", with
the prerequisite patches missing, which effectively means it's broken
... are you planning to fix that up, please?

Thanks,

--
Jiri Kosina
SUSE Labs

2013-07-23 08:11:34

by Ingo Molnar

Subject: Re: [PATCH -tip 3/3] x86: Remove unused text_poke_smp and text_poke_smp_batch


* Jiri Kosina <[email protected]> wrote:

> On Mon, 22 Jul 2013, Ingo Molnar wrote:
>
> > > >>>> Hm, it does not build with the attached config:
> > > >>>>
> > > >>>> arch/x86/kernel/kprobes/opt.c: In function 'setup_optimize_kprobe':
> > > >>>> arch/x86/kernel/kprobes/opt.c:394:6: error: dereferencing pointer to incomplete type
> > > >>>> arch/x86/kernel/kprobes/opt.c:395:6: error: dereferencing pointer to incomplete type
> > > >>>> arch/x86/kernel/kprobes/opt.c:396:6: error: dereferencing pointer to incomplete type
> > > >>>
> > > >>> 32-bit appears to be the key pattern.
> > > >>
> > > >> Ingo, please make sure your x86/jumplabel tree picked all of my patches.
> > > >> This 3/3 strongly depends on the previous 1/3 and 2/3.
> > > >> Without them, kprobes/opt.c cannot be built because it still uses the
> > > >> text_poke_params data structure which 3/3 removes.
> > > >
> > > > Yes I know, and it was on top of -tip which already had these included:
> > > >
> > > > 51b2c07b2226 x86: Make jump_label use int3-based patching
> > > > fd4363fff3d9 x86: Introduce int3 (breakpoint)-based instruction patching
> > > >
> > > > without these the third patch wouldn't even apply I think.
> > >
> > > No, I meant below two patches:
> > > kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
> > > kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
> > >
> > > Since the "setup_optimize_kprobe" in the error message is completely
> > > *removed* by the second one, the above error should not happen.
> >
> > Oh, indeed - I correctly applied them but then messed up the
> > cherry-picking, that's all.
> >
> > So never mind, it's all fine.
>
> Hi Ingo,
>
> x86/jumplabel branch still seems to contain only Masami's "kprobes/x86:
> Remove unused text_poke_smp() and text_poke_smp_batch() functions", with
> the prerequisite patches missing, which effectively means it's
> broken ... are you planning to fix that up, please?

That must be a Git server push-out delay - it should contain two patches:

51b2c07b2226 x86: Make jump_label use int3-based patching
fd4363fff3d9 x86: Introduce int3 (breakpoint)-based instruction patching

The kprobes patches are in perf/core, with x86/jumplabel merged in:

ea8596bb2d8d kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch() functions
a7b0133ea94e kprobes/x86: Use text_poke_bp() instead of text_poke_smp*()
c7e85c42be68 kprobes/x86: Remove an incorrect comment about int3 in NMI/MCE
9bb15425c3bc Merge branch 'x86/jumplabel' into perf/core

Thanks,

Ingo