Userspace Statically Defined Tracepoints[1] are DTrace-style markers
inside userspace applications. Applications like PostgreSQL, MySQL,
Pthread, Perl, Python, Java, Ruby, Node.js, libvirt, QEMU, glib etc.
have these markers embedded in them. Developers add these markers at
important places in the code. Each marker source expands to a single
nop instruction in the compiled code, but there may be additional
overhead for computing the marker arguments, which expands to a
couple of instructions. When this overhead matters, execution of the
marker can be skipped with a runtime if() condition while no one is
tracing the marker:
	if (reference_counter > 0) {
		Execute marker instructions;
	}
The default value of the reference counter is 0. A tracer has to
increment the reference counter before tracing a marker and
decrement it when done with the tracing.
Currently, the perf tool has limited support for SDT markers, i.e. it
cannot trace markers surrounded by a reference counter. Also, it's
not easy to add reference counter logic in a userspace tool like perf,
so the basic idea of this patch set is to add the reference counter
logic in the trace_uprobe infrastructure. For example,[2]
# cat tick.c
...
	for (i = 0; i < 100; i++) {
		DTRACE_PROBE1(tick, loop1, i);
		if (TICK_LOOP2_ENABLED()) {
			DTRACE_PROBE1(tick, loop2, i);
		}
		printf("hi: %d\n", i);
		sleep(1);
	}
...
Here tick:loop1 is a marker without a reference counter, whereas
tick:loop2 is surrounded by a reference counter condition.
# perf buildid-cache --add /tmp/tick
# perf probe sdt_tick:loop1
# perf probe sdt_tick:loop2
# perf stat -e sdt_tick:loop1,sdt_tick:loop2 -- /tmp/tick
hi: 0
hi: 1
hi: 2
^C
 Performance counter stats for '/tmp/tick':

                 3      sdt_tick:loop1
                 0      sdt_tick:loop2

       2.747086086 seconds time elapsed
perf failed to record data for tick:loop2. The same experiment with
this patch series:
# ./perf buildid-cache --add /tmp/tick
# ./perf probe sdt_tick:loop2
# ./perf stat -e sdt_tick:loop2 /tmp/tick
hi: 0
hi: 1
hi: 2
^C
 Performance counter stats for '/tmp/tick':

                 3      sdt_tick:loop2

       2.561851452 seconds time elapsed
Note:
- The 'reference counter' is called a 'semaphore' in the original
  DTrace (or SystemTap, bcc, and even ELF) documentation and code. But
  the term 'semaphore' is misleading in this context. It is just a
  counter used to hold the number of tracers tracing a marker; it is
  not used for any synchronization. So we refer to it as a 'reference
  counter' in kernel / perf code.
v3 changes:
- [PATCH v3 6/9] Fix build failure.
- [PATCH v3 6/9] Move uprobe_mmap_callback() after the
  no_uprobe_events() check. It should actually be moved after the
  MMF_HAS_UPROBES check as well, but the current implementation is
  sub-optimal: if there are multiple instances of the same application
  running and the user wants to trace one particular instance,
  trace_uprobe updates the reference counter in all instances. This is
  not a problem on the user side, because the instruction is not
  replaced with a trap/int3, so the user only sees samples from the
  process of interest. But it is still a correctness issue. I'm
  working on a fix; once it is in place, we can move the
  uprobe_mmap_callback() call after MMF_HAS_UPROBES.
- [PATCH v3 7/9] Remove mmu_notifier; instead, use a callback from
  uprobe_clear_state(). Again, uprobe_clear_state_callback() should be
  moved after MMF_HAS_UPROBES, but that should be done once
  uprobe_mmap_callback() has been moved first.
- [PATCH v3 7/9] Properly handle error cases for sdt_increment_ref_ctr()
and trace_uprobe_mmap().
- [PATCH v3 9/9] Show a warning if the kernel doesn't support the
  ref_ctr logic and the user tries to use it. Also, return an error in
  this case instead of adding an entry to uprobe_events.
- [PATCH v3 9/9] Don't check kernel ref_ctr support while adding files
into buildid-cache.
v2 can be found at:
https://lkml.org/lkml/2018/4/4/127
v2 changes:
- [PATCH v2 3/9] is new. build_map_info() has a side effect: the
  caller has to perform mmput() when done with the mm. Let
  free_map_info() take care of mmput() so that callers do not need to
  worry about it.
- [PATCH v2 6/9] sdt_update_ref_ctr(): no need to use memcpy(). The
  reference counter can be updated directly with a normal assignment.
- [PATCH v2 6/9] Check that a valid vma is returned by sdt_find_vma()
  before incrementing / decrementing a reference counter.
- [PATCH v2 6/9] Introduce utility functions for taking write lock on
dup_mmap_sem. Use these functions in trace_uprobe to avoid race with
fork / dup_mmap().
- [PATCH v2 6/9] Don't check for the presence of mm in tu->sml at
  decrement time. The purpose of maintaining the list is to ensure the
  increment happens only once for each {trace_uprobe,mm} tuple.
- [PATCH v2 7/9] v1 was not removing mm from tu->sml when a process
  exits while tracing is still on. This leads to a problem if the same
  address gets reused by a new mm. Use mmu_notifier to remove such an
  mm from the list. This guarantees that every mm added to tu->sml is
  removed from the list either when tracing ends or when the process
  goes away.
- [PATCH v2 7/9] The patch description was misleading; change it. Add
  a more generic python example.
- [PATCH v2 7/9] Convert sml_rw_sem into mutex sml_lock.
- [PATCH v2 7/9] Use the built-in linked list in sdt_mm_list instead
  of defining its own pointer chain.
- Change the order of last two patches.
- [PATCH v2 9/9] Check the availability of ref_ctr_offset support in
  the trace_uprobe infrastructure before using it. This ensures that a
  newer perf tool still works on older kernels which do not support
  trace_uprobe with a reference counter.
- Other changes as suggested by Masami, Oleg and Steve.
v1 can be found at:
https://lkml.org/lkml/2018/3/13/432
[1] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation
[2] https://github.com/iovisor/bcc/issues/327#issuecomment-200576506
[3] https://lkml.org/lkml/2017/12/6/976
Oleg Nesterov (1):
Uprobe: Move mmput() into free_map_info()
Ravi Bangoria (8):
Uprobe: Export vaddr <-> offset conversion functions
mm: Prefix vma_ to vaddr_to_offset() and offset_to_vaddr()
Uprobe: Rename map_info to uprobe_map_info
Uprobe: Export uprobe_map_info along with
uprobe_{build/free}_map_info()
trace_uprobe: Support SDT markers having reference count (semaphore)
trace_uprobe/sdt: Fix multiple update of same reference counter
trace_uprobe/sdt: Document about reference counter
perf probe: Support SDT markers having reference counter (semaphore)
Documentation/trace/uprobetracer.txt | 16 ++-
include/linux/mm.h | 12 ++
include/linux/uprobes.h | 20 +++
kernel/events/uprobes.c | 90 +++++++-----
kernel/trace/trace.c | 2 +-
kernel/trace/trace_uprobe.c | 271 ++++++++++++++++++++++++++++++++++-
tools/perf/util/probe-event.c | 39 ++++-
tools/perf/util/probe-event.h | 1 +
tools/perf/util/probe-file.c | 34 ++++-
tools/perf/util/probe-file.h | 1 +
tools/perf/util/symbol-elf.c | 46 ++++--
tools/perf/util/symbol.h | 7 +
12 files changed, 472 insertions(+), 67 deletions(-)
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
Make the function names more meaningful by adding a vma_ prefix
to them.
Signed-off-by: Ravi Bangoria <[email protected]>
Reviewed-by: Jérôme Glisse <[email protected]>
---
include/linux/mm.h | 4 ++--
kernel/events/uprobes.c | 14 +++++++-------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index de0cc08..47fd8a9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2273,13 +2273,13 @@ struct vm_unmapped_area_info {
}
static inline unsigned long
-offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
+vma_offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
{
return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
}
static inline loff_t
-vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
+vma_vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
{
return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
}
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index bd6f230..535fd39 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -748,7 +748,7 @@ static inline struct map_info *free_map_info(struct map_info *info)
curr = info;
info->mm = vma->vm_mm;
- info->vaddr = offset_to_vaddr(vma, offset);
+ info->vaddr = vma_offset_to_vaddr(vma, offset);
}
i_mmap_unlock_read(mapping);
@@ -807,7 +807,7 @@ static inline struct map_info *free_map_info(struct map_info *info)
goto unlock;
if (vma->vm_start > info->vaddr ||
- vaddr_to_offset(vma, info->vaddr) != uprobe->offset)
+ vma_vaddr_to_offset(vma, info->vaddr) != uprobe->offset)
goto unlock;
if (is_register) {
@@ -977,7 +977,7 @@ static int unapply_uprobe(struct uprobe *uprobe, struct mm_struct *mm)
uprobe->offset >= offset + vma->vm_end - vma->vm_start)
continue;
- vaddr = offset_to_vaddr(vma, uprobe->offset);
+ vaddr = vma_offset_to_vaddr(vma, uprobe->offset);
err |= remove_breakpoint(uprobe, mm, vaddr);
}
up_read(&mm->mmap_sem);
@@ -1023,7 +1023,7 @@ static void build_probe_list(struct inode *inode,
struct uprobe *u;
INIT_LIST_HEAD(head);
- min = vaddr_to_offset(vma, start);
+ min = vma_vaddr_to_offset(vma, start);
max = min + (end - start) - 1;
spin_lock(&uprobes_treelock);
@@ -1076,7 +1076,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
list_for_each_entry_safe(uprobe, u, &tmp_list, pending_list) {
if (!fatal_signal_pending(current) &&
filter_chain(uprobe, UPROBE_FILTER_MMAP, vma->vm_mm)) {
- unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
+ unsigned long vaddr = vma_offset_to_vaddr(vma, uprobe->offset);
install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
}
put_uprobe(uprobe);
@@ -1095,7 +1095,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
inode = file_inode(vma->vm_file);
- min = vaddr_to_offset(vma, start);
+ min = vma_vaddr_to_offset(vma, start);
max = min + (end - start) - 1;
spin_lock(&uprobes_treelock);
@@ -1730,7 +1730,7 @@ static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
if (vma && vma->vm_start <= bp_vaddr) {
if (valid_vma(vma, false)) {
struct inode *inode = file_inode(vma->vm_file);
- loff_t offset = vaddr_to_offset(vma, bp_vaddr);
+ loff_t offset = vma_vaddr_to_offset(vma, bp_vaddr);
uprobe = find_uprobe(inode, offset);
}
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
map_info is a very generic name; rename it to uprobe_map_info.
Renaming will help to export this structure outside of the file.
Also rename free_map_info() to uprobe_free_map_info() and
build_map_info() to uprobe_build_map_info().
Signed-off-by: Ravi Bangoria <[email protected]>
Reviewed-by: Jérôme Glisse <[email protected]>
---
kernel/events/uprobes.c | 30 ++++++++++++++++--------------
1 file changed, 16 insertions(+), 14 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 1d439c7..477dc42 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -695,28 +695,30 @@ static void delete_uprobe(struct uprobe *uprobe)
put_uprobe(uprobe);
}
-struct map_info {
- struct map_info *next;
+struct uprobe_map_info {
+ struct uprobe_map_info *next;
struct mm_struct *mm;
unsigned long vaddr;
};
-static inline struct map_info *free_map_info(struct map_info *info)
+static inline struct uprobe_map_info *
+uprobe_free_map_info(struct uprobe_map_info *info)
{
- struct map_info *next = info->next;
+ struct uprobe_map_info *next = info->next;
mmput(info->mm);
kfree(info);
return next;
}
-static struct map_info *
-build_map_info(struct address_space *mapping, loff_t offset, bool is_register)
+static struct uprobe_map_info *
+uprobe_build_map_info(struct address_space *mapping, loff_t offset,
+ bool is_register)
{
unsigned long pgoff = offset >> PAGE_SHIFT;
struct vm_area_struct *vma;
- struct map_info *curr = NULL;
- struct map_info *prev = NULL;
- struct map_info *info;
+ struct uprobe_map_info *curr = NULL;
+ struct uprobe_map_info *prev = NULL;
+ struct uprobe_map_info *info;
int more = 0;
again:
@@ -730,7 +732,7 @@ static inline struct map_info *free_map_info(struct map_info *info)
* Needs GFP_NOWAIT to avoid i_mmap_rwsem recursion through
* reclaim. This is optimistic, no harm done if it fails.
*/
- prev = kmalloc(sizeof(struct map_info),
+ prev = kmalloc(sizeof(struct uprobe_map_info),
GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
if (prev)
prev->next = NULL;
@@ -763,7 +765,7 @@ static inline struct map_info *free_map_info(struct map_info *info)
}
do {
- info = kmalloc(sizeof(struct map_info), GFP_KERNEL);
+ info = kmalloc(sizeof(struct uprobe_map_info), GFP_KERNEL);
if (!info) {
curr = ERR_PTR(-ENOMEM);
goto out;
@@ -786,11 +788,11 @@ static inline struct map_info *free_map_info(struct map_info *info)
register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
{
bool is_register = !!new;
- struct map_info *info;
+ struct uprobe_map_info *info;
int err = 0;
percpu_down_write(&dup_mmap_sem);
- info = build_map_info(uprobe->inode->i_mapping,
+ info = uprobe_build_map_info(uprobe->inode->i_mapping,
uprobe->offset, is_register);
if (IS_ERR(info)) {
err = PTR_ERR(info);
@@ -828,7 +830,7 @@ static inline struct map_info *free_map_info(struct map_info *info)
unlock:
up_write(&mm->mmap_sem);
free:
- info = free_map_info(info);
+ info = uprobe_free_map_info(info);
}
out:
percpu_up_write(&dup_mmap_sem);
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
Given a file (inode) and an offset, build_map_info() finds all
existing mms that map the portion of the file containing the offset.
Exporting these functions and the data structure will help to use
them in other files.
Signed-off-by: Ravi Bangoria <[email protected]>
Reviewed-by: Jérôme Glisse <[email protected]>
---
include/linux/uprobes.h | 9 +++++++++
kernel/events/uprobes.c | 14 +++-----------
2 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 0a294e9..7bd2760 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -109,12 +109,19 @@ enum rp_check {
RP_CHECK_RET,
};
+struct address_space;
struct xol_area;
struct uprobes_state {
struct xol_area *xol_area;
};
+struct uprobe_map_info {
+ struct uprobe_map_info *next;
+ struct mm_struct *mm;
+ unsigned long vaddr;
+};
+
extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern bool is_swbp_insn(uprobe_opcode_t *insn);
@@ -149,6 +156,8 @@ struct uprobes_state {
extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
void *src, unsigned long len);
+extern struct uprobe_map_info *uprobe_free_map_info(struct uprobe_map_info *info);
+extern struct uprobe_map_info *uprobe_build_map_info(struct address_space *mapping, loff_t offset, bool is_register);
#else /* !CONFIG_UPROBES */
struct uprobes_state {
};
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 477dc42..096d1e6 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -695,14 +695,7 @@ static void delete_uprobe(struct uprobe *uprobe)
put_uprobe(uprobe);
}
-struct uprobe_map_info {
- struct uprobe_map_info *next;
- struct mm_struct *mm;
- unsigned long vaddr;
-};
-
-static inline struct uprobe_map_info *
-uprobe_free_map_info(struct uprobe_map_info *info)
+struct uprobe_map_info *uprobe_free_map_info(struct uprobe_map_info *info)
{
struct uprobe_map_info *next = info->next;
mmput(info->mm);
@@ -710,9 +703,8 @@ struct uprobe_map_info {
return next;
}
-static struct uprobe_map_info *
-uprobe_build_map_info(struct address_space *mapping, loff_t offset,
- bool is_register)
+struct uprobe_map_info *uprobe_build_map_info(struct address_space *mapping,
+ loff_t offset, bool is_register)
{
unsigned long pgoff = offset >> PAGE_SHIFT;
struct vm_area_struct *vma;
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
With this, perf buildid-cache will save SDT markers with a reference
counter in the probe cache, and perf probe will be able to probe
markers having a reference counter. Ex,
# readelf -n /tmp/tick | grep -A1 loop2
Name: loop2
... Semaphore: 0x0000000010020036
# ./perf buildid-cache --add /tmp/tick
# ./perf probe sdt_tick:loop2
# ./perf stat -e sdt_tick:loop2 /tmp/tick
hi: 0
hi: 1
hi: 2
^C
 Performance counter stats for '/tmp/tick':

                 3      sdt_tick:loop2

       2.561851452 seconds time elapsed
Signed-off-by: Ravi Bangoria <[email protected]>
---
tools/perf/util/probe-event.c | 39 ++++++++++++++++++++++++++++++++----
tools/perf/util/probe-event.h | 1 +
tools/perf/util/probe-file.c | 34 ++++++++++++++++++++++++++------
tools/perf/util/probe-file.h | 1 +
tools/perf/util/symbol-elf.c | 46 ++++++++++++++++++++++++++++++++-----------
tools/perf/util/symbol.h | 7 +++++++
6 files changed, 106 insertions(+), 22 deletions(-)
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index e1dbc98..9b9c26e 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -1832,6 +1832,12 @@ int parse_probe_trace_command(const char *cmd, struct probe_trace_event *tev)
tp->offset = strtoul(fmt2_str, NULL, 10);
}
+ if (tev->uprobes) {
+ fmt2_str = strchr(p, '(');
+ if (fmt2_str)
+ tp->ref_ctr_offset = strtoul(fmt2_str + 1, NULL, 0);
+ }
+
tev->nargs = argc - 2;
tev->args = zalloc(sizeof(struct probe_trace_arg) * tev->nargs);
if (tev->args == NULL) {
@@ -2025,6 +2031,22 @@ static int synthesize_probe_trace_arg(struct probe_trace_arg *arg,
return err;
}
+static int
+synthesize_uprobe_trace_def(struct probe_trace_event *tev, struct strbuf *buf)
+{
+ struct probe_trace_point *tp = &tev->point;
+ int err;
+
+ err = strbuf_addf(buf, "%s:0x%lx", tp->module, tp->address);
+
+ if (err >= 0 && tp->ref_ctr_offset) {
+ if (!uprobe_ref_ctr_is_supported())
+ return -1;
+ err = strbuf_addf(buf, "(0x%lx)", tp->ref_ctr_offset);
+ }
+ return err >= 0 ? 0 : -1;
+}
+
char *synthesize_probe_trace_command(struct probe_trace_event *tev)
{
struct probe_trace_point *tp = &tev->point;
@@ -2054,15 +2076,17 @@ char *synthesize_probe_trace_command(struct probe_trace_event *tev)
}
/* Use the tp->address for uprobes */
- if (tev->uprobes)
- err = strbuf_addf(&buf, "%s:0x%lx", tp->module, tp->address);
- else if (!strncmp(tp->symbol, "0x", 2))
+ if (tev->uprobes) {
+ err = synthesize_uprobe_trace_def(tev, &buf);
+ } else if (!strncmp(tp->symbol, "0x", 2)) {
/* Absolute address. See try_to_find_absolute_address() */
err = strbuf_addf(&buf, "%s%s0x%lx", tp->module ?: "",
tp->module ? ":" : "", tp->address);
- else
+ } else {
err = strbuf_addf(&buf, "%s%s%s+%lu", tp->module ?: "",
tp->module ? ":" : "", tp->symbol, tp->offset);
+ }
+
if (err)
goto error;
@@ -2646,6 +2670,13 @@ static void warn_uprobe_event_compat(struct probe_trace_event *tev)
{
int i;
char *buf = synthesize_probe_trace_command(tev);
+ struct probe_trace_point *tp = &tev->point;
+
+ if (tp->ref_ctr_offset && !uprobe_ref_ctr_is_supported()) {
+ pr_warning("A semaphore is associated with %s:%s and "
+ "seems your kernel doesn't support it.\n",
+ tev->group, tev->event);
+ }
/* Old uprobe event doesn't support memory dereference */
if (!tev->uprobes || tev->nargs == 0 || !buf)
diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
index 45b14f0..15a98c3 100644
--- a/tools/perf/util/probe-event.h
+++ b/tools/perf/util/probe-event.h
@@ -27,6 +27,7 @@ struct probe_trace_point {
char *symbol; /* Base symbol */
char *module; /* Module name */
unsigned long offset; /* Offset from symbol */
+ unsigned long ref_ctr_offset; /* SDT reference counter offset */
unsigned long address; /* Actual address of the trace point */
bool retprobe; /* Return probe flag */
};
diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
index 4ae1123..a17ba6a 100644
--- a/tools/perf/util/probe-file.c
+++ b/tools/perf/util/probe-file.c
@@ -697,8 +697,16 @@ int probe_cache__add_entry(struct probe_cache *pcache,
#ifdef HAVE_GELF_GETNOTE_SUPPORT
static unsigned long long sdt_note__get_addr(struct sdt_note *note)
{
- return note->bit32 ? (unsigned long long)note->addr.a32[0]
- : (unsigned long long)note->addr.a64[0];
+ return note->bit32 ?
+ (unsigned long long)note->addr.a32[SDT_NOTE_IDX_LOC] :
+ (unsigned long long)note->addr.a64[SDT_NOTE_IDX_LOC];
+}
+
+static unsigned long long sdt_note__get_ref_ctr_offset(struct sdt_note *note)
+{
+ return note->bit32 ?
+ (unsigned long long)note->addr.a32[SDT_NOTE_IDX_REFCTR] :
+ (unsigned long long)note->addr.a64[SDT_NOTE_IDX_REFCTR];
}
static const char * const type_to_suffix[] = {
@@ -776,14 +784,21 @@ static char *synthesize_sdt_probe_command(struct sdt_note *note,
{
struct strbuf buf;
char *ret = NULL, **args;
- int i, args_count;
+ int i, args_count, err;
+ unsigned long long ref_ctr_offset;
if (strbuf_init(&buf, 32) < 0)
return NULL;
- if (strbuf_addf(&buf, "p:%s/%s %s:0x%llx",
- sdtgrp, note->name, pathname,
- sdt_note__get_addr(note)) < 0)
+ err = strbuf_addf(&buf, "p:%s/%s %s:0x%llx",
+ sdtgrp, note->name, pathname,
+ sdt_note__get_addr(note));
+
+ ref_ctr_offset = sdt_note__get_ref_ctr_offset(note);
+ if (ref_ctr_offset && err >= 0)
+ err = strbuf_addf(&buf, "(0x%llx)", ref_ctr_offset);
+
+ if (err < 0)
goto error;
if (!note->args)
@@ -999,6 +1014,7 @@ int probe_cache__show_all_caches(struct strfilter *filter)
enum ftrace_readme {
FTRACE_README_PROBE_TYPE_X = 0,
FTRACE_README_KRETPROBE_OFFSET,
+ FTRACE_README_UPROBE_REF_CTR,
FTRACE_README_END,
};
@@ -1010,6 +1026,7 @@ enum ftrace_readme {
[idx] = {.pattern = pat, .avail = false}
DEFINE_TYPE(FTRACE_README_PROBE_TYPE_X, "*type: * x8/16/32/64,*"),
DEFINE_TYPE(FTRACE_README_KRETPROBE_OFFSET, "*place (kretprobe): *"),
+ DEFINE_TYPE(FTRACE_README_UPROBE_REF_CTR, "*ref_ctr_offset*"),
};
static bool scan_ftrace_readme(enum ftrace_readme type)
@@ -1065,3 +1082,8 @@ bool kretprobe_offset_is_supported(void)
{
return scan_ftrace_readme(FTRACE_README_KRETPROBE_OFFSET);
}
+
+bool uprobe_ref_ctr_is_supported(void)
+{
+ return scan_ftrace_readme(FTRACE_README_UPROBE_REF_CTR);
+}
diff --git a/tools/perf/util/probe-file.h b/tools/perf/util/probe-file.h
index 63f29b1..2a24918 100644
--- a/tools/perf/util/probe-file.h
+++ b/tools/perf/util/probe-file.h
@@ -69,6 +69,7 @@ struct probe_cache_entry *probe_cache__find_by_name(struct probe_cache *pcache,
int probe_cache__show_all_caches(struct strfilter *filter);
bool probe_type_is_available(enum probe_type type);
bool kretprobe_offset_is_supported(void);
+bool uprobe_ref_ctr_is_supported(void);
#else /* ! HAVE_LIBELF_SUPPORT */
static inline struct probe_cache *probe_cache__new(const char *tgt __maybe_unused, struct nsinfo *nsi __maybe_unused)
{
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 2de7705..45b7dba 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1803,6 +1803,34 @@ void kcore_extract__delete(struct kcore_extract *kce)
}
#ifdef HAVE_GELF_GETNOTE_SUPPORT
+
+static void sdt_adjust_loc(struct sdt_note *tmp, GElf_Addr base_off)
+{
+ if (!base_off)
+ return;
+
+ if (tmp->bit32)
+ tmp->addr.a32[SDT_NOTE_IDX_LOC] =
+ tmp->addr.a32[SDT_NOTE_IDX_LOC] + base_off -
+ tmp->addr.a32[SDT_NOTE_IDX_BASE];
+ else
+ tmp->addr.a64[SDT_NOTE_IDX_LOC] =
+ tmp->addr.a64[SDT_NOTE_IDX_LOC] + base_off -
+ tmp->addr.a64[SDT_NOTE_IDX_BASE];
+}
+
+static void sdt_adjust_refctr(struct sdt_note *tmp, GElf_Addr base_addr,
+ GElf_Addr base_off)
+{
+ if (!base_off)
+ return;
+
+ if (tmp->bit32)
+ tmp->addr.a32[SDT_NOTE_IDX_REFCTR] -= (base_addr - base_off);
+ else
+ tmp->addr.a64[SDT_NOTE_IDX_REFCTR] -= (base_addr - base_off);
+}
+
/**
* populate_sdt_note : Parse raw data and identify SDT note
* @elf: elf of the opened file
@@ -1820,7 +1848,6 @@ static int populate_sdt_note(Elf **elf, const char *data, size_t len,
const char *provider, *name, *args;
struct sdt_note *tmp = NULL;
GElf_Ehdr ehdr;
- GElf_Addr base_off = 0;
GElf_Shdr shdr;
int ret = -EINVAL;
@@ -1916,17 +1943,12 @@ static int populate_sdt_note(Elf **elf, const char *data, size_t len,
* base address in the description of the SDT note. If its different,
* then accordingly, adjust the note location.
*/
- if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_BASE_SCN, NULL)) {
- base_off = shdr.sh_offset;
- if (base_off) {
- if (tmp->bit32)
- tmp->addr.a32[0] = tmp->addr.a32[0] + base_off -
- tmp->addr.a32[1];
- else
- tmp->addr.a64[0] = tmp->addr.a64[0] + base_off -
- tmp->addr.a64[1];
- }
- }
+ if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_BASE_SCN, NULL))
+ sdt_adjust_loc(tmp, shdr.sh_offset);
+
+ /* Adjust reference counter offset */
+ if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_PROBES_SCN, NULL))
+ sdt_adjust_refctr(tmp, shdr.sh_addr, shdr.sh_offset);
list_add_tail(&tmp->note_list, sdt_notes);
return 0;
diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h
index 70c16741..aa095bf 100644
--- a/tools/perf/util/symbol.h
+++ b/tools/perf/util/symbol.h
@@ -384,12 +384,19 @@ struct sdt_note {
int cleanup_sdt_note_list(struct list_head *sdt_notes);
int sdt_notes__get_count(struct list_head *start);
+#define SDT_PROBES_SCN ".probes"
#define SDT_BASE_SCN ".stapsdt.base"
#define SDT_NOTE_SCN ".note.stapsdt"
#define SDT_NOTE_TYPE 3
#define SDT_NOTE_NAME "stapsdt"
#define NR_ADDR 3
+enum {
+ SDT_NOTE_IDX_LOC = 0,
+ SDT_NOTE_IDX_BASE,
+ SDT_NOTE_IDX_REFCTR,
+};
+
struct mem_info *mem_info__new(void);
struct mem_info *mem_info__get(struct mem_info *mi);
void mem_info__put(struct mem_info *mi);
--
1.8.3.1
From: Oleg Nesterov <[email protected]>
build_map_info() has a side effect: the caller needs to perform
mmput() when done with the mm. Add mmput() to free_map_info() so
that the caller does not have to call it explicitly.
Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Ravi Bangoria <[email protected]>
---
kernel/events/uprobes.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 535fd39..1d439c7 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -704,6 +704,7 @@ struct map_info {
static inline struct map_info *free_map_info(struct map_info *info)
{
struct map_info *next = info->next;
+ mmput(info->mm);
kfree(info);
return next;
}
@@ -773,8 +774,11 @@ static inline struct map_info *free_map_info(struct map_info *info)
goto again;
out:
- while (prev)
- prev = free_map_info(prev);
+ while (prev) {
+ info = prev;
+ prev = prev->next;
+ kfree(info);
+ }
return curr;
}
@@ -824,7 +828,6 @@ static inline struct map_info *free_map_info(struct map_info *info)
unlock:
up_write(&mm->mmap_sem);
free:
- mmput(mm);
info = free_map_info(info);
}
out:
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
Userspace Statically Defined Tracepoints[1] are DTrace-style markers
inside userspace applications. Applications like PostgreSQL, MySQL,
Pthread, Perl, Python, Java, Ruby, Node.js, libvirt, QEMU, glib etc.
have these markers embedded in them. Developers add these markers at
important places in the code. Each marker source expands to a single
nop instruction in the compiled code, but there may be additional
overhead for computing the marker arguments, which expands to a
couple of instructions. When this overhead matters, execution of the
marker can be skipped with a runtime if() condition while no one is
tracing the marker:
	if (reference_counter > 0) {
		Execute marker instructions;
	}
The default value of the reference counter is 0. A tracer has to
increment the reference counter before tracing a marker and
decrement it when done with the tracing.
Implement the reference counter logic in trace_uprobe, leaving the
core uprobe infrastructure as is, except for one new callback from
uprobe_mmap() to trace_uprobe.
A trace_uprobe definition with a reference counter will now be:
<path>:<offset>[(ref_ctr_offset)]
There are two different cases to handle while enabling a marker:
1. Tracing an existing process: find all suitable processes and
   increment the reference counter in them.
2. Enabling the trace before running the target binary: all mmaps
   get notified to trace_uprobe, and trace_uprobe increments the
   reference counter if the corresponding uprobe is enabled.
When probes are disabled, decrement the reference counter in all
existing target processes.
[1] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation
Note: The 'reference counter' is called a 'semaphore' in the original
DTrace (or SystemTap, bcc, and even ELF) documentation and code. But
the term 'semaphore' is misleading in this context. It is just a
counter used to hold the number of tracers tracing a marker; it is
not used for any synchronization. So we refer to it as a 'reference
counter' in kernel / perf code.
Signed-off-by: Ravi Bangoria <[email protected]>
Signed-off-by: Fengguang Wu <[email protected]>
[Fengguang reported/fixed build failure]
---
include/linux/uprobes.h | 10 +++
kernel/events/uprobes.c | 21 +++++-
kernel/trace/trace_uprobe.c | 162 +++++++++++++++++++++++++++++++++++++++++++-
3 files changed, 190 insertions(+), 3 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 7bd2760..2db3ed1 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -122,6 +122,8 @@ struct uprobe_map_info {
unsigned long vaddr;
};
+extern void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
+
extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern bool is_swbp_insn(uprobe_opcode_t *insn);
@@ -136,6 +138,8 @@ struct uprobe_map_info {
extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
extern void uprobe_start_dup_mmap(void);
extern void uprobe_end_dup_mmap(void);
+extern void uprobe_down_write_dup_mmap(void);
+extern void uprobe_up_write_dup_mmap(void);
extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
extern void uprobe_free_utask(struct task_struct *t);
extern void uprobe_copy_process(struct task_struct *t, unsigned long flags);
@@ -192,6 +196,12 @@ static inline void uprobe_start_dup_mmap(void)
static inline void uprobe_end_dup_mmap(void)
{
}
+static inline void uprobe_down_write_dup_mmap(void)
+{
+}
+static inline void uprobe_up_write_dup_mmap(void)
+{
+}
static inline void
uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
{
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 096d1e6..e26ad83 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1044,6 +1044,9 @@ static void build_probe_list(struct inode *inode,
spin_unlock(&uprobes_treelock);
}
+/* Right now, the only user of this is trace_uprobe. */
+void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
+
/*
* Called from mmap_region/vma_adjust with mm->mmap_sem acquired.
*
@@ -1056,7 +1059,13 @@ int uprobe_mmap(struct vm_area_struct *vma)
struct uprobe *uprobe, *u;
struct inode *inode;
- if (no_uprobe_events() || !valid_vma(vma, true))
+ if (no_uprobe_events())
+ return 0;
+
+ if (uprobe_mmap_callback)
+ uprobe_mmap_callback(vma);
+
+ if (!valid_vma(vma, true))
return 0;
inode = file_inode(vma->vm_file);
@@ -1247,6 +1256,16 @@ void uprobe_end_dup_mmap(void)
percpu_up_read(&dup_mmap_sem);
}
+void uprobe_down_write_dup_mmap(void)
+{
+ percpu_down_write(&dup_mmap_sem);
+}
+
+void uprobe_up_write_dup_mmap(void)
+{
+ percpu_up_write(&dup_mmap_sem);
+}
+
void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
{
if (test_bit(MMF_HAS_UPROBES, &oldmm->flags)) {
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 0d450b4..1a48b04 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -25,6 +25,8 @@
#include <linux/namei.h>
#include <linux/string.h>
#include <linux/rculist.h>
+#include <linux/sched/mm.h>
+#include <linux/highmem.h>
#include "trace_probe.h"
@@ -58,6 +60,7 @@ struct trace_uprobe {
struct inode *inode;
char *filename;
unsigned long offset;
+ unsigned long ref_ctr_offset;
unsigned long nhit;
struct trace_probe tp;
};
@@ -364,10 +367,10 @@ static int create_trace_uprobe(int argc, char **argv)
{
struct trace_uprobe *tu;
struct inode *inode;
- char *arg, *event, *group, *filename;
+ char *arg, *event, *group, *filename, *rctr, *rctr_end;
char buf[MAX_EVENT_NAME_LEN];
struct path path;
- unsigned long offset;
+ unsigned long offset, ref_ctr_offset;
bool is_delete, is_return;
int i, ret;
@@ -377,6 +380,7 @@ static int create_trace_uprobe(int argc, char **argv)
is_return = false;
event = NULL;
group = NULL;
+ ref_ctr_offset = 0;
/* argc must be >= 1 */
if (argv[0][0] == '-')
@@ -456,6 +460,26 @@ static int create_trace_uprobe(int argc, char **argv)
goto fail_address_parse;
}
+ /* Parse reference counter offset if specified. */
+ rctr = strchr(arg, '(');
+ if (rctr) {
+ rctr_end = strchr(rctr, ')');
+ if (rctr > rctr_end || *(rctr_end + 1) != 0) {
+ ret = -EINVAL;
+ pr_info("Invalid reference counter offset.\n");
+ goto fail_address_parse;
+ }
+
+ *rctr++ = '\0';
+ *rctr_end = '\0';
+ ret = kstrtoul(rctr, 0, &ref_ctr_offset);
+ if (ret) {
+ pr_info("Invalid reference counter offset.\n");
+ goto fail_address_parse;
+ }
+ }
+
+ /* Parse uprobe offset. */
ret = kstrtoul(arg, 0, &offset);
if (ret)
goto fail_address_parse;
@@ -490,6 +514,7 @@ static int create_trace_uprobe(int argc, char **argv)
goto fail_address_parse;
}
tu->offset = offset;
+ tu->ref_ctr_offset = ref_ctr_offset;
tu->inode = inode;
tu->filename = kstrdup(filename, GFP_KERNEL);
@@ -622,6 +647,8 @@ static int probes_seq_show(struct seq_file *m, void *v)
break;
}
}
+ if (tu->ref_ctr_offset)
+ seq_printf(m, "(0x%lx)", tu->ref_ctr_offset);
for (i = 0; i < tu->tp.nr_args; i++)
seq_printf(m, " %s=%s", tu->tp.args[i].name, tu->tp.args[i].comm);
@@ -896,6 +923,129 @@ static void uretprobe_trace_func(struct trace_uprobe *tu, unsigned long func,
return trace_handle_return(s);
}
+static bool sdt_valid_vma(struct trace_uprobe *tu,
+ struct vm_area_struct *vma,
+ unsigned long vaddr)
+{
+ return tu->ref_ctr_offset &&
+ vma->vm_file &&
+ file_inode(vma->vm_file) == tu->inode &&
+ vma->vm_flags & VM_WRITE &&
+ vma->vm_start <= vaddr &&
+ vma->vm_end > vaddr;
+}
+
+static struct vm_area_struct *sdt_find_vma(struct trace_uprobe *tu,
+ struct mm_struct *mm,
+ unsigned long vaddr)
+{
+ struct vm_area_struct *vma = find_vma(mm, vaddr);
+
+ return (vma && sdt_valid_vma(tu, vma, vaddr)) ? vma : NULL;
+}
+
+/*
+ * A reference counter gates the invocation of a probe. If present,
+ * it defaults to 0. One needs to increment it before tracing
+ * the probe and decrement it when done.
+ */
+static int
+sdt_update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
+{
+ void *kaddr;
+ struct page *page;
+ struct vm_area_struct *vma;
+ int ret = 0;
+ unsigned short *ptr;
+
+ if (vaddr == 0)
+ return -EINVAL;
+
+ ret = get_user_pages_remote(NULL, mm, vaddr, 1,
+ FOLL_FORCE | FOLL_WRITE, &page, &vma, NULL);
+ if (ret <= 0)
+ return ret;
+
+ kaddr = kmap_atomic(page);
+ ptr = kaddr + (vaddr & ~PAGE_MASK);
+ *ptr += d;
+ kunmap_atomic(kaddr);
+
+ put_page(page);
+ return 0;
+}
+
+static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
+{
+ struct uprobe_map_info *info;
+
+ uprobe_down_write_dup_mmap();
+ info = uprobe_build_map_info(tu->inode->i_mapping,
+ tu->ref_ctr_offset, false);
+ if (IS_ERR(info))
+ goto out;
+
+ while (info) {
+ down_write(&info->mm->mmap_sem);
+
+ if (sdt_find_vma(tu, info->mm, info->vaddr))
+ sdt_update_ref_ctr(info->mm, info->vaddr, 1);
+
+ up_write(&info->mm->mmap_sem);
+ info = uprobe_free_map_info(info);
+ }
+
+out:
+ uprobe_up_write_dup_mmap();
+}
+
+/* Called with down_write(&vma->vm_mm->mmap_sem) */
+static void trace_uprobe_mmap(struct vm_area_struct *vma)
+{
+ struct trace_uprobe *tu;
+ unsigned long vaddr;
+
+ if (!(vma->vm_flags & VM_WRITE))
+ return;
+
+ mutex_lock(&uprobe_lock);
+ list_for_each_entry(tu, &uprobe_list, list) {
+ if (!trace_probe_is_enabled(&tu->tp))
+ continue;
+
+ vaddr = vma_offset_to_vaddr(vma, tu->ref_ctr_offset);
+ if (!sdt_valid_vma(tu, vma, vaddr))
+ continue;
+
+ sdt_update_ref_ctr(vma->vm_mm, vaddr, 1);
+ }
+ mutex_unlock(&uprobe_lock);
+}
+
+static void sdt_decrement_ref_ctr(struct trace_uprobe *tu)
+{
+ struct uprobe_map_info *info;
+
+ uprobe_down_write_dup_mmap();
+ info = uprobe_build_map_info(tu->inode->i_mapping,
+ tu->ref_ctr_offset, false);
+ if (IS_ERR(info))
+ goto out;
+
+ while (info) {
+ down_write(&info->mm->mmap_sem);
+
+ if (sdt_find_vma(tu, info->mm, info->vaddr))
+ sdt_update_ref_ctr(info->mm, info->vaddr, -1);
+
+ up_write(&info->mm->mmap_sem);
+ info = uprobe_free_map_info(info);
+ }
+
+out:
+ uprobe_up_write_dup_mmap();
+}
+
typedef bool (*filter_func_t)(struct uprobe_consumer *self,
enum uprobe_filter_ctx ctx,
struct mm_struct *mm);
@@ -941,6 +1091,9 @@ typedef bool (*filter_func_t)(struct uprobe_consumer *self,
if (ret)
goto err_buffer;
+ if (tu->ref_ctr_offset)
+ sdt_increment_ref_ctr(tu);
+
return 0;
err_buffer:
@@ -981,6 +1134,9 @@ typedef bool (*filter_func_t)(struct uprobe_consumer *self,
WARN_ON(!uprobe_filter_is_empty(&tu->filter));
+ if (tu->ref_ctr_offset)
+ sdt_decrement_ref_ctr(tu);
+
uprobe_unregister(tu->inode, tu->offset, &tu->consumer);
tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;
@@ -1425,6 +1581,8 @@ static __init int init_uprobe_trace(void)
/* Profile interface */
trace_create_file("uprobe_profile", 0444, d_tracer,
NULL, &uprobe_profile_ops);
+
+ uprobe_mmap_callback = trace_uprobe_mmap;
return 0;
}
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
A reference counter gates the invocation of a probe. If present,
it defaults to 0. The kernel needs to increment it before tracing
the probe and decrement it when done. This is identical to the
semaphore in Userspace Statically Defined Tracepoints (USDT).
Document usage of reference counter.
Signed-off-by: Ravi Bangoria <[email protected]>
---
Documentation/trace/uprobetracer.txt | 16 +++++++++++++---
kernel/trace/trace.c | 2 +-
2 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/Documentation/trace/uprobetracer.txt b/Documentation/trace/uprobetracer.txt
index bf526a7c..cb6751d 100644
--- a/Documentation/trace/uprobetracer.txt
+++ b/Documentation/trace/uprobetracer.txt
@@ -19,15 +19,25 @@ user to calculate the offset of the probepoint in the object.
Synopsis of uprobe_tracer
-------------------------
- p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a uprobe
- r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a return uprobe (uretprobe)
- -:[GRP/]EVENT : Clear uprobe or uretprobe event
+ p[:[GRP/]EVENT] PATH:OFFSET[(REF_CTR_OFFSET)] [FETCHARGS]
+ r[:[GRP/]EVENT] PATH:OFFSET[(REF_CTR_OFFSET)] [FETCHARGS]
+ -:[GRP/]EVENT
+
+ p : Set a uprobe
+ r : Set a return uprobe (uretprobe)
+ - : Clear uprobe or uretprobe event
GRP : Group name. If omitted, "uprobes" is the default value.
EVENT : Event name. If omitted, the event name is generated based
on PATH+OFFSET.
PATH : Path to an executable or a library.
OFFSET : Offset where the probe is inserted.
+ REF_CTR_OFFSET: Reference counter offset. Optional field. The reference
+ counter gates the invocation of the probe. If present,
+ it defaults to 0. The kernel needs to increment it before
+ tracing the probe and decrement it when done. This is
+ identical to the semaphore in Userspace Statically Defined
+ Tracepoints (USDT).
FETCHARGS : Arguments. Each probe can have up to 128 args.
%REG : Fetch register REG
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 300f4ea..d211937 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4604,7 +4604,7 @@ static int tracing_trace_options_open(struct inode *inode, struct file *file)
"place (kretprobe): [<module>:]<symbol>[+<offset>]|<memaddr>\n"
#endif
#ifdef CONFIG_UPROBE_EVENTS
- "\t place: <path>:<offset>\n"
+ " place (uprobe): <path>:<offset>[(ref_ctr_offset)]\n"
#endif
"\t args: <name>=fetcharg[:type]\n"
"\t fetcharg: %<register>, @<address>, @<symbol>[+|-<offset>],\n"
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
These are generic functions which operate on a file offset
and virtual address. Make them available outside of the
uprobe code so that others can use them as well.
Signed-off-by: Ravi Bangoria <[email protected]>
Reviewed-by: Jérôme Glisse <[email protected]>
---
include/linux/mm.h | 12 ++++++++++++
kernel/events/uprobes.c | 10 ----------
2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ccac106..de0cc08 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2272,6 +2272,18 @@ struct vm_unmapped_area_info {
return unmapped_area(info);
}
+static inline unsigned long
+offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
+{
+ return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
+}
+
+static inline loff_t
+vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
+{
+ return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
+}
+
/* truncate.c */
extern void truncate_inode_pages(struct address_space *, loff_t);
extern void truncate_inode_pages_range(struct address_space *,
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ce6848e..bd6f230 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -130,16 +130,6 @@ static bool valid_vma(struct vm_area_struct *vma, bool is_register)
return vma->vm_file && (vma->vm_flags & flags) == VM_MAYEXEC;
}
-static unsigned long offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
-{
- return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
-}
-
-static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
-{
- return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
-}
-
/**
* __replace_page - replace page in vma by new page.
* based on replace_page in mm/ksm.c
--
1.8.3.1
From: Ravi Bangoria <[email protected]>
When the virtual memory map for a binary/library is being prepared, there is
no direct one-to-one mapping between an mmap() call and a virtual memory
area. For example, when the loader loads a library, it first calls
mmap(size = total_size), where total_size is the sum of the sizes of all
ELF sections that are going to be mapped. Then it splits out individual vmas
with further mmap()/mprotect() calls. The loader does this to ensure it gets
a contiguous address range for the library. load_elf_binary() uses similar
tricks while preparing the mappings of a binary.
Example with the python library:
# strace -o out python
mmap(NULL, 2738968, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fff92460000
mmap(0x7fff926a0000, 327680, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x230000) = 0x7fff926a0000
mprotect(0x7fff926a0000, 65536, PROT_READ) = 0
Here, the first mmap() maps the whole library into one region. The second
mmap() and the third mprotect() split that region into smaller vmas and
set the appropriate protection flags.
Now, in this case, trace_uprobe_mmap() updates the reference
counter twice -- once for the second mmap() and once for the third
mprotect() -- because both regions contain the reference counter.
But at de-registration time, the reference counter gets decremented only
once, leaving the counter > 0 even when no one is tracing on that
marker.
Example with python library before patch:
# readelf -n /lib64/libpython2.7.so.1.0 | grep -A1 function__entry
Name: function__entry
... Semaphore: 0x00000000002899d8
Probe on a marker:
# echo "p:sdt_python/function__entry /usr/lib64/libpython2.7.so.1.0:0x16a4d4(0x2799d8)" > uprobe_events
Start tracing:
# perf record -e sdt_python:function__entry -a
Run python workload:
# python
# cat /proc/`pgrep python`/maps | grep libpython
7fffadb00000-7fffadd40000 r-xp 00000000 08:05 403934 /usr/lib64/libpython2.7.so.1.0
7fffadd40000-7fffadd50000 r--p 00230000 08:05 403934 /usr/lib64/libpython2.7.so.1.0
7fffadd50000-7fffadd90000 rw-p 00240000 08:05 403934 /usr/lib64/libpython2.7.so.1.0
Reference counter value has been incremented twice:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fffadd899d8 )) 2>/dev/null | xxd
0000000: 02 .
Kill perf:
#
^C[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.322 MB perf.data (1273 samples) ]
Reference counter is still 1 even when no one is tracing on it:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fffadd899d8 )) 2>/dev/null | xxd
0000000: 01 .
Ensure that the increment and decrement happen in sync by keeping a list
of mms in trace_uprobe. Check for the presence of the mm in the list
before incrementing the reference counter, i.e. for each {trace_uprobe, mm}
tuple, the reference counter must be incremented exactly once. Note that we
don't check for the presence of the mm in the list at decrement time.
We consider only two cases while incrementing the reference counter:
1. The target binary is already running when we start tracing. In this
case, find all mms which map the region of the target binary containing
the reference counter. Loop over all mms and increment the counter
if the mm is not already present in the list.
2. The tracer is already tracing before the target binary starts execution.
In this case, every mmap(vma) gets notified to trace_uprobe.
trace_uprobe will update the reference counter if vma->vm_mm is not
already present in the list.
There is also a third case which we don't consider: the fork() case.
When a process with markers forks itself, we don't explicitly increment
the reference counter in the child process because that is taken care
of by dup_mmap(). We also don't add the child mm to the list. This is
fine because we don't check for the presence of the mm in the list at
decrement time.
After patch:
Start perf record and then run python...
Reference counter value has been incremented only once:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fff9cbf99d8 )) 2>/dev/null | xxd
0000000: 01 .
Kill perf:
#
^C[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.364 MB perf.data (1427 samples) ]
Reference counter is reset to 0:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fff9cbb99d8 )) 2>/dev/null | xxd
0000000: 00 .
Signed-off-by: Ravi Bangoria <[email protected]>
---
include/linux/uprobes.h | 1 +
kernel/events/uprobes.c | 6 +++
kernel/trace/trace_uprobe.c | 121 +++++++++++++++++++++++++++++++++++++++++---
3 files changed, 122 insertions(+), 6 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 2db3ed1..e447991 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -123,6 +123,7 @@ struct uprobe_map_info {
};
extern void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
+extern void (*uprobe_clear_state_callback)(struct mm_struct *mm);
extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index e26ad83..e8005d2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1231,6 +1231,9 @@ static struct xol_area *get_xol_area(void)
return area;
}
+/* Right now the only user of this is trace_uprobe. */
+void (*uprobe_clear_state_callback)(struct mm_struct *mm);
+
/*
* uprobe_clear_state - Free the area allocated for slots.
*/
@@ -1238,6 +1241,9 @@ void uprobe_clear_state(struct mm_struct *mm)
{
struct xol_area *area = mm->uprobes_state.xol_area;
+ if (uprobe_clear_state_callback)
+ uprobe_clear_state_callback(mm);
+
if (!area)
return;
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 1a48b04..7341042c 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -50,6 +50,11 @@ struct trace_uprobe_filter {
struct list_head perf_events;
};
+struct sdt_mm_list {
+ struct list_head list;
+ struct mm_struct *mm;
+};
+
/*
* uprobe event core functions
*/
@@ -61,6 +66,8 @@ struct trace_uprobe {
char *filename;
unsigned long offset;
unsigned long ref_ctr_offset;
+ struct sdt_mm_list sml;
+ struct mutex sml_lock;
unsigned long nhit;
struct trace_probe tp;
};
@@ -276,6 +283,8 @@ static inline bool is_ret_probe(struct trace_uprobe *tu)
if (is_ret)
tu->consumer.ret_handler = uretprobe_dispatcher;
init_trace_uprobe_filter(&tu->filter);
+ mutex_init(&tu->sml_lock);
+ INIT_LIST_HEAD(&(tu->sml.list));
return tu;
error:
@@ -923,6 +932,43 @@ static void uretprobe_trace_func(struct trace_uprobe *tu, unsigned long func,
return trace_handle_return(s);
}
+static bool sdt_check_mm_list(struct trace_uprobe *tu, struct mm_struct *mm)
+{
+ struct sdt_mm_list *sml;
+
+ list_for_each_entry(sml, &(tu->sml.list), list)
+ if (sml->mm == mm)
+ return true;
+
+ return false;
+}
+
+static int sdt_add_mm_list(struct trace_uprobe *tu, struct mm_struct *mm)
+{
+ struct sdt_mm_list *sml = kzalloc(sizeof(*sml), GFP_KERNEL);
+
+ if (!sml)
+ return -ENOMEM;
+
+ sml->mm = mm;
+ list_add(&(sml->list), &(tu->sml.list));
+ return 0;
+}
+
+static void sdt_del_mm_list(struct trace_uprobe *tu, struct mm_struct *mm)
+{
+ struct list_head *pos, *q;
+ struct sdt_mm_list *sml;
+
+ list_for_each_safe(pos, q, &(tu->sml.list)) {
+ sml = list_entry(pos, struct sdt_mm_list, list);
+ if (sml->mm == mm) {
+ list_del(pos);
+ kfree(sml);
+ }
+ }
+}
+
static bool sdt_valid_vma(struct trace_uprobe *tu,
struct vm_area_struct *vma,
unsigned long vaddr)
@@ -975,6 +1021,31 @@ static struct vm_area_struct *sdt_find_vma(struct trace_uprobe *tu,
return 0;
}
+static void __sdt_increment_ref_ctr(struct trace_uprobe *tu,
+ struct mm_struct *mm,
+ unsigned long vaddr)
+{
+ int ret = 0;
+
+ ret = sdt_update_ref_ctr(mm, vaddr, 1);
+ if (unlikely(ret)) {
+ pr_info("Failed to increment ref_ctr. (%d)\n", ret);
+ return;
+ }
+
+ ret = sdt_add_mm_list(tu, mm);
+ if (unlikely(ret)) {
+ pr_info("Failed to add mm into list. (%d)\n", ret);
+ goto revert_ctr;
+ }
+ return;
+
+revert_ctr:
+ ret = sdt_update_ref_ctr(mm, vaddr, -1);
+ if (ret)
+ pr_info("Reverting ref_ctr update failed. (%d)\n", ret);
+}
+
static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
{
struct uprobe_map_info *info;
@@ -985,15 +1056,19 @@ static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
if (IS_ERR(info))
goto out;
+ mutex_lock(&tu->sml_lock);
while (info) {
- down_write(&info->mm->mmap_sem);
+ if (sdt_check_mm_list(tu, info->mm))
+ goto cont;
+ down_write(&info->mm->mmap_sem);
if (sdt_find_vma(tu, info->mm, info->vaddr))
- sdt_update_ref_ctr(info->mm, info->vaddr, 1);
-
+ __sdt_increment_ref_ctr(tu, info->mm, info->vaddr);
up_write(&info->mm->mmap_sem);
+cont:
info = uprobe_free_map_info(info);
}
+ mutex_unlock(&tu->sml_lock);
out:
uprobe_up_write_dup_mmap();
@@ -1017,14 +1092,28 @@ static void trace_uprobe_mmap(struct vm_area_struct *vma)
if (!sdt_valid_vma(tu, vma, vaddr))
continue;
- sdt_update_ref_ctr(vma->vm_mm, vaddr, 1);
+ mutex_lock(&tu->sml_lock);
+ if (!sdt_check_mm_list(tu, vma->vm_mm))
+ __sdt_increment_ref_ctr(tu, vma->vm_mm, vaddr);
+ mutex_unlock(&tu->sml_lock);
}
mutex_unlock(&uprobe_lock);
}
+/*
+ * We don't check for the presence of the mm in tu->sml here. We just
+ * decrement the reference counter if we find a vma holding it.
+ *
+ * For tiny binaries/libraries, different mmap regions point to the
+ * same file portion. In such cases, uprobe_build_map_info() returns
+ * the same mm multiple times with different virtual addresses of one
+ * reference counter. But we don't decrement the reference counter
+ * multiple times because we check for VM_WRITE in sdt_valid_vma().
+ */
static void sdt_decrement_ref_ctr(struct trace_uprobe *tu)
{
struct uprobe_map_info *info;
+ int ret;
uprobe_down_write_dup_mmap();
info = uprobe_build_map_info(tu->inode->i_mapping,
@@ -1032,20 +1121,39 @@ static void sdt_decrement_ref_ctr(struct trace_uprobe *tu)
if (IS_ERR(info))
goto out;
+ mutex_lock(&tu->sml_lock);
while (info) {
down_write(&info->mm->mmap_sem);
- if (sdt_find_vma(tu, info->mm, info->vaddr))
- sdt_update_ref_ctr(info->mm, info->vaddr, -1);
+ if (sdt_find_vma(tu, info->mm, info->vaddr)) {
+ ret = sdt_update_ref_ctr(info->mm, info->vaddr, -1);
+ if (unlikely(ret))
+ pr_info("Failed to decrement ref_ctr. (%d)\n", ret);
+ }
up_write(&info->mm->mmap_sem);
+ sdt_del_mm_list(tu, info->mm);
info = uprobe_free_map_info(info);
}
+ mutex_unlock(&tu->sml_lock);
out:
uprobe_up_write_dup_mmap();
}
+static void sdt_mm_release(struct mm_struct *mm)
+{
+ struct trace_uprobe *tu;
+
+ mutex_lock(&uprobe_lock);
+ list_for_each_entry(tu, &uprobe_list, list) {
+ mutex_lock(&tu->sml_lock);
+ sdt_del_mm_list(tu, mm);
+ mutex_unlock(&tu->sml_lock);
+ }
+ mutex_unlock(&uprobe_lock);
+}
+
typedef bool (*filter_func_t)(struct uprobe_consumer *self,
enum uprobe_filter_ctx ctx,
struct mm_struct *mm);
@@ -1583,6 +1691,7 @@ static __init int init_uprobe_trace(void)
NULL, &uprobe_profile_ops);
uprobe_mmap_callback = trace_uprobe_mmap;
+ uprobe_clear_state_callback = sdt_mm_release;
return 0;
}
--
1.8.3.1
On 04/17/2018 10:02 AM, Ravi Bangoria wrote:
> Userspace Statically Defined Tracepoints[1] are dtrace style markers
> inside userspace applications. Applications like PostgreSQL, MySQL,
> Pthread, Perl, Python, Java, Ruby, Node.js, libvirt, QEMU, glib etc
> have these markers embedded in them. These markers are added by developer
> at important places in the code. Each marker source expands to a single
> nop instruction in the compiled code but there may be additional
> overhead for computing the marker arguments which expands to couple of
> instructions. In case the overhead is more, execution of it can be
> omitted by runtime if() condition when no one is tracing on the marker:
>
> if (reference_counter > 0) {
> Execute marker instructions;
> }
>
> Default value of reference counter is 0. Tracer has to increment the
> reference counter before tracing on a marker and decrement it when
> done with the tracing.
Hi Oleg, Masami,
Can you please review this :) ?
Thanks.
On Tue, 17 Apr 2018 10:02:44 +0530
Ravi Bangoria <[email protected]> wrote:
> From: Ravi Bangoria <[email protected]>
>
> With this, perf buildid-cache will save SDT markers with reference
> counter in probe cache. Perf probe will be able to probe markers
> having reference counter. Ex,
>
> # readelf -n /tmp/tick | grep -A1 loop2
> Name: loop2
> ... Semaphore: 0x0000000010020036
>
> # ./perf buildid-cache --add /tmp/tick
> # ./perf probe sdt_tick:loop2
> # ./perf stat -e sdt_tick:loop2 /tmp/tick
> hi: 0
> hi: 1
> hi: 2
> ^C
> Performance counter stats for '/tmp/tick':
> 3 sdt_tick:loop2
> 2.561851452 seconds time elapsed
>
> Signed-off-by: Ravi Bangoria <[email protected]>
Looks good to me.
Acked-by: Masami Hiramatsu <[email protected]>
Thanks!
> ---
> tools/perf/util/probe-event.c | 39 ++++++++++++++++++++++++++++++++----
> tools/perf/util/probe-event.h | 1 +
> tools/perf/util/probe-file.c | 34 ++++++++++++++++++++++++++------
> tools/perf/util/probe-file.h | 1 +
> tools/perf/util/symbol-elf.c | 46 ++++++++++++++++++++++++++++++++-----------
> tools/perf/util/symbol.h | 7 +++++++
> 6 files changed, 106 insertions(+), 22 deletions(-)
>
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index e1dbc98..9b9c26e 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -1832,6 +1832,12 @@ int parse_probe_trace_command(const char *cmd, struct probe_trace_event *tev)
> tp->offset = strtoul(fmt2_str, NULL, 10);
> }
>
> + if (tev->uprobes) {
> + fmt2_str = strchr(p, '(');
> + if (fmt2_str)
> + tp->ref_ctr_offset = strtoul(fmt2_str + 1, NULL, 0);
> + }
> +
> tev->nargs = argc - 2;
> tev->args = zalloc(sizeof(struct probe_trace_arg) * tev->nargs);
> if (tev->args == NULL) {
> @@ -2025,6 +2031,22 @@ static int synthesize_probe_trace_arg(struct probe_trace_arg *arg,
> return err;
> }
>
> +static int
> +synthesize_uprobe_trace_def(struct probe_trace_event *tev, struct strbuf *buf)
> +{
> + struct probe_trace_point *tp = &tev->point;
> + int err;
> +
> + err = strbuf_addf(buf, "%s:0x%lx", tp->module, tp->address);
> +
> + if (err >= 0 && tp->ref_ctr_offset) {
> + if (!uprobe_ref_ctr_is_supported())
> + return -1;
> + err = strbuf_addf(buf, "(0x%lx)", tp->ref_ctr_offset);
> + }
> + return err >= 0 ? 0 : -1;
> +}
> +
> char *synthesize_probe_trace_command(struct probe_trace_event *tev)
> {
> struct probe_trace_point *tp = &tev->point;
> @@ -2054,15 +2076,17 @@ char *synthesize_probe_trace_command(struct probe_trace_event *tev)
> }
>
> /* Use the tp->address for uprobes */
> - if (tev->uprobes)
> - err = strbuf_addf(&buf, "%s:0x%lx", tp->module, tp->address);
> - else if (!strncmp(tp->symbol, "0x", 2))
> + if (tev->uprobes) {
> + err = synthesize_uprobe_trace_def(tev, &buf);
> + } else if (!strncmp(tp->symbol, "0x", 2)) {
> /* Absolute address. See try_to_find_absolute_address() */
> err = strbuf_addf(&buf, "%s%s0x%lx", tp->module ?: "",
> tp->module ? ":" : "", tp->address);
> - else
> + } else {
> err = strbuf_addf(&buf, "%s%s%s+%lu", tp->module ?: "",
> tp->module ? ":" : "", tp->symbol, tp->offset);
> + }
> +
> if (err)
> goto error;
>
> @@ -2646,6 +2670,13 @@ static void warn_uprobe_event_compat(struct probe_trace_event *tev)
> {
> int i;
> char *buf = synthesize_probe_trace_command(tev);
> + struct probe_trace_point *tp = &tev->point;
> +
> + if (tp->ref_ctr_offset && !uprobe_ref_ctr_is_supported()) {
> + pr_warning("A semaphore is associated with %s:%s and "
> + "seems your kernel doesn't support it.\n",
> + tev->group, tev->event);
> + }
>
> /* Old uprobe event doesn't support memory dereference */
> if (!tev->uprobes || tev->nargs == 0 || !buf)
> diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
> index 45b14f0..15a98c3 100644
> --- a/tools/perf/util/probe-event.h
> +++ b/tools/perf/util/probe-event.h
> @@ -27,6 +27,7 @@ struct probe_trace_point {
> char *symbol; /* Base symbol */
> char *module; /* Module name */
> unsigned long offset; /* Offset from symbol */
> + unsigned long ref_ctr_offset; /* SDT reference counter offset */
> unsigned long address; /* Actual address of the trace point */
> bool retprobe; /* Return probe flag */
> };
> diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
> index 4ae1123..a17ba6a 100644
> --- a/tools/perf/util/probe-file.c
> +++ b/tools/perf/util/probe-file.c
> @@ -697,8 +697,16 @@ int probe_cache__add_entry(struct probe_cache *pcache,
> #ifdef HAVE_GELF_GETNOTE_SUPPORT
> static unsigned long long sdt_note__get_addr(struct sdt_note *note)
> {
> - return note->bit32 ? (unsigned long long)note->addr.a32[0]
> - : (unsigned long long)note->addr.a64[0];
> + return note->bit32 ?
> + (unsigned long long)note->addr.a32[SDT_NOTE_IDX_LOC] :
> + (unsigned long long)note->addr.a64[SDT_NOTE_IDX_LOC];
> +}
> +
> +static unsigned long long sdt_note__get_ref_ctr_offset(struct sdt_note *note)
> +{
> + return note->bit32 ?
> + (unsigned long long)note->addr.a32[SDT_NOTE_IDX_REFCTR] :
> + (unsigned long long)note->addr.a64[SDT_NOTE_IDX_REFCTR];
> }
>
> static const char * const type_to_suffix[] = {
> @@ -776,14 +784,21 @@ static char *synthesize_sdt_probe_command(struct sdt_note *note,
> {
> struct strbuf buf;
> char *ret = NULL, **args;
> - int i, args_count;
> + int i, args_count, err;
> + unsigned long long ref_ctr_offset;
>
> if (strbuf_init(&buf, 32) < 0)
> return NULL;
>
> - if (strbuf_addf(&buf, "p:%s/%s %s:0x%llx",
> - sdtgrp, note->name, pathname,
> - sdt_note__get_addr(note)) < 0)
> + err = strbuf_addf(&buf, "p:%s/%s %s:0x%llx",
> + sdtgrp, note->name, pathname,
> + sdt_note__get_addr(note));
> +
> + ref_ctr_offset = sdt_note__get_ref_ctr_offset(note);
> + if (ref_ctr_offset && err >= 0)
> + err = strbuf_addf(&buf, "(0x%llx)", ref_ctr_offset);
> +
> + if (err < 0)
> goto error;
>
> if (!note->args)
> @@ -999,6 +1014,7 @@ int probe_cache__show_all_caches(struct strfilter *filter)
> enum ftrace_readme {
> FTRACE_README_PROBE_TYPE_X = 0,
> FTRACE_README_KRETPROBE_OFFSET,
> + FTRACE_README_UPROBE_REF_CTR,
> FTRACE_README_END,
> };
>
> @@ -1010,6 +1026,7 @@ enum ftrace_readme {
> [idx] = {.pattern = pat, .avail = false}
> DEFINE_TYPE(FTRACE_README_PROBE_TYPE_X, "*type: * x8/16/32/64,*"),
> DEFINE_TYPE(FTRACE_README_KRETPROBE_OFFSET, "*place (kretprobe): *"),
> + DEFINE_TYPE(FTRACE_README_UPROBE_REF_CTR, "*ref_ctr_offset*"),
> };
>
> static bool scan_ftrace_readme(enum ftrace_readme type)
> @@ -1065,3 +1082,8 @@ bool kretprobe_offset_is_supported(void)
> {
> return scan_ftrace_readme(FTRACE_README_KRETPROBE_OFFSET);
> }
> +
> +bool uprobe_ref_ctr_is_supported(void)
> +{
> + return scan_ftrace_readme(FTRACE_README_UPROBE_REF_CTR);
> +}
> diff --git a/tools/perf/util/probe-file.h b/tools/perf/util/probe-file.h
> index 63f29b1..2a24918 100644
> --- a/tools/perf/util/probe-file.h
> +++ b/tools/perf/util/probe-file.h
> @@ -69,6 +69,7 @@ struct probe_cache_entry *probe_cache__find_by_name(struct probe_cache *pcache,
> int probe_cache__show_all_caches(struct strfilter *filter);
> bool probe_type_is_available(enum probe_type type);
> bool kretprobe_offset_is_supported(void);
> +bool uprobe_ref_ctr_is_supported(void);
> #else /* ! HAVE_LIBELF_SUPPORT */
> static inline struct probe_cache *probe_cache__new(const char *tgt __maybe_unused, struct nsinfo *nsi __maybe_unused)
> {
> diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> index 2de7705..45b7dba 100644
> --- a/tools/perf/util/symbol-elf.c
> +++ b/tools/perf/util/symbol-elf.c
> @@ -1803,6 +1803,34 @@ void kcore_extract__delete(struct kcore_extract *kce)
> }
>
> #ifdef HAVE_GELF_GETNOTE_SUPPORT
> +
> +static void sdt_adjust_loc(struct sdt_note *tmp, GElf_Addr base_off)
> +{
> + if (!base_off)
> + return;
> +
> + if (tmp->bit32)
> + tmp->addr.a32[SDT_NOTE_IDX_LOC] =
> + tmp->addr.a32[SDT_NOTE_IDX_LOC] + base_off -
> + tmp->addr.a32[SDT_NOTE_IDX_BASE];
> + else
> + tmp->addr.a64[SDT_NOTE_IDX_LOC] =
> + tmp->addr.a64[SDT_NOTE_IDX_LOC] + base_off -
> + tmp->addr.a64[SDT_NOTE_IDX_BASE];
> +}
> +
> +static void sdt_adjust_refctr(struct sdt_note *tmp, GElf_Addr base_addr,
> + GElf_Addr base_off)
> +{
> + if (!base_off)
> + return;
> +
> + if (tmp->bit32)
> + tmp->addr.a32[SDT_NOTE_IDX_REFCTR] -= (base_addr - base_off);
> + else
> + tmp->addr.a64[SDT_NOTE_IDX_REFCTR] -= (base_addr - base_off);
> +}
> +
> /**
> * populate_sdt_note : Parse raw data and identify SDT note
> * @elf: elf of the opened file
> @@ -1820,7 +1848,6 @@ static int populate_sdt_note(Elf **elf, const char *data, size_t len,
> const char *provider, *name, *args;
> struct sdt_note *tmp = NULL;
> GElf_Ehdr ehdr;
> - GElf_Addr base_off = 0;
> GElf_Shdr shdr;
> int ret = -EINVAL;
>
> @@ -1916,17 +1943,12 @@ static int populate_sdt_note(Elf **elf, const char *data, size_t len,
> * base address in the description of the SDT note. If its different,
> * then accordingly, adjust the note location.
> */
> - if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_BASE_SCN, NULL)) {
> - base_off = shdr.sh_offset;
> - if (base_off) {
> - if (tmp->bit32)
> - tmp->addr.a32[0] = tmp->addr.a32[0] + base_off -
> - tmp->addr.a32[1];
> - else
> - tmp->addr.a64[0] = tmp->addr.a64[0] + base_off -
> - tmp->addr.a64[1];
> - }
> - }
> + if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_BASE_SCN, NULL))
> + sdt_adjust_loc(tmp, shdr.sh_offset);
> +
> + /* Adjust reference counter offset */
> + if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_PROBES_SCN, NULL))
> + sdt_adjust_refctr(tmp, shdr.sh_addr, shdr.sh_offset);
>
> list_add_tail(&tmp->note_list, sdt_notes);
> return 0;
> diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h
> index 70c16741..aa095bf 100644
> --- a/tools/perf/util/symbol.h
> +++ b/tools/perf/util/symbol.h
> @@ -384,12 +384,19 @@ struct sdt_note {
> int cleanup_sdt_note_list(struct list_head *sdt_notes);
> int sdt_notes__get_count(struct list_head *start);
>
> +#define SDT_PROBES_SCN ".probes"
> #define SDT_BASE_SCN ".stapsdt.base"
> #define SDT_NOTE_SCN ".note.stapsdt"
> #define SDT_NOTE_TYPE 3
> #define SDT_NOTE_NAME "stapsdt"
> #define NR_ADDR 3
>
> +enum {
> + SDT_NOTE_IDX_LOC = 0,
> + SDT_NOTE_IDX_BASE,
> + SDT_NOTE_IDX_REFCTR,
> +};
> +
> struct mem_info *mem_info__new(void);
> struct mem_info *mem_info__get(struct mem_info *mi);
> void mem_info__put(struct mem_info *mi);
> --
> 1.8.3.1
>
--
Masami Hiramatsu <[email protected]>
On Tue, 17 Apr 2018 10:02:43 +0530
Ravi Bangoria <[email protected]> wrote:
> From: Ravi Bangoria <[email protected]>
>
> The reference counter gates the invocation of the probe. If
> present, the reference count is 0 by default. The kernel needs
> to increment it before tracing the probe and decrement it when
> done. This is identical to the semaphore in Userspace Statically
> Defined Tracepoints (USDT).
>
> Document usage of reference counter.
>
> Signed-off-by: Ravi Bangoria <[email protected]>
Looks good to me.
Acked-by: Masami Hiramatsu <[email protected]>
Thanks!
> ---
> Documentation/trace/uprobetracer.txt | 16 +++++++++++++---
> kernel/trace/trace.c | 2 +-
> 2 files changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/trace/uprobetracer.txt b/Documentation/trace/uprobetracer.txt
> index bf526a7c..cb6751d 100644
> --- a/Documentation/trace/uprobetracer.txt
> +++ b/Documentation/trace/uprobetracer.txt
> @@ -19,15 +19,25 @@ user to calculate the offset of the probepoint in the object.
>
> Synopsis of uprobe_tracer
> -------------------------
> - p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a uprobe
> - r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a return uprobe (uretprobe)
> - -:[GRP/]EVENT : Clear uprobe or uretprobe event
> + p[:[GRP/]EVENT] PATH:OFFSET[(REF_CTR_OFFSET)] [FETCHARGS]
> + r[:[GRP/]EVENT] PATH:OFFSET[(REF_CTR_OFFSET)] [FETCHARGS]
> + -:[GRP/]EVENT
> +
> + p : Set a uprobe
> + r : Set a return uprobe (uretprobe)
> + - : Clear uprobe or uretprobe event
>
> GRP : Group name. If omitted, "uprobes" is the default value.
> EVENT : Event name. If omitted, the event name is generated based
> on PATH+OFFSET.
> PATH : Path to an executable or a library.
> OFFSET : Offset where the probe is inserted.
> + REF_CTR_OFFSET: Reference counter offset. Optional field. The
> + reference counter gates the invocation of the probe. If
> + present, the reference count is 0 by default. The kernel
> + needs to increment it before tracing the probe and
> + decrement it when done. This is identical to the semaphore
> + in Userspace Statically Defined Tracepoints (USDT).
>
> FETCHARGS : Arguments. Each probe can have up to 128 args.
> %REG : Fetch register REG
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 300f4ea..d211937 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -4604,7 +4604,7 @@ static int tracing_trace_options_open(struct inode *inode, struct file *file)
> "place (kretprobe): [<module>:]<symbol>[+<offset>]|<memaddr>\n"
> #endif
> #ifdef CONFIG_UPROBE_EVENTS
> - "\t place: <path>:<offset>\n"
> + " place (uprobe): <path>:<offset>[(ref_ctr_offset)]\n"
> #endif
> "\t args: <name>=fetcharg[:type]\n"
> "\t fetcharg: %<register>, @<address>, @<symbol>[+|-<offset>],\n"
> --
> 1.8.3.1
>
--
Masami Hiramatsu <[email protected]>
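For context, the syntax documented in the patch above can be exercised from the tracefs interface roughly as follows. This is an illustrative sketch only: the offsets below are made up; real values come from the binary's `stapsdt` ELF notes (e.g. via `readelf -n`).

```shell
# Illustrative only: 0x6e4 / 0x10036 are hypothetical offsets taken
# from a binary's stapsdt notes, not real values.
cd /sys/kernel/debug/tracing

# Probe at OFFSET, with the SDT reference counter (semaphore)
# at REF_CTR_OFFSET:
echo 'p:sdt_tick/loop2 /tmp/tick:0x6e4(0x10036)' >> uprobe_events

echo 1 > events/sdt_tick/loop2/enable
cat trace_pipe
```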
On Tue, 17 Apr 2018 10:02:37 +0530
Ravi Bangoria <[email protected]> wrote:
> From: Ravi Bangoria <[email protected]>
>
> Make function names more meaningful by adding vma_ prefix
> to them.
Actually, I would have done this patch before the first one, since the
first one makes the functions global.
-- Steve
>
> Signed-off-by: Ravi Bangoria <[email protected]>
> Reviewed-by: Jérôme Glisse <[email protected]>
> ---
>
Hi Ravi,
I have some comments, please see below.
On Tue, 17 Apr 2018 10:02:41 +0530
Ravi Bangoria <[email protected]> wrote:
> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> index 7bd2760..2db3ed1 100644
> --- a/include/linux/uprobes.h
> +++ b/include/linux/uprobes.h
> @@ -122,6 +122,8 @@ struct uprobe_map_info {
> unsigned long vaddr;
> };
>
> +extern void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
> +
> extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
> extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
> extern bool is_swbp_insn(uprobe_opcode_t *insn);
> @@ -136,6 +138,8 @@ struct uprobe_map_info {
> extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> extern void uprobe_start_dup_mmap(void);
> extern void uprobe_end_dup_mmap(void);
> +extern void uprobe_down_write_dup_mmap(void);
> +extern void uprobe_up_write_dup_mmap(void);
> extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
> extern void uprobe_free_utask(struct task_struct *t);
> extern void uprobe_copy_process(struct task_struct *t, unsigned long flags);
> @@ -192,6 +196,12 @@ static inline void uprobe_start_dup_mmap(void)
> static inline void uprobe_end_dup_mmap(void)
> {
> }
> +static inline void uprobe_down_write_dup_mmap(void)
> +{
> +}
> +static inline void uprobe_up_write_dup_mmap(void)
> +{
> +}
> static inline void
> uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
> {
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 096d1e6..e26ad83 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -1044,6 +1044,9 @@ static void build_probe_list(struct inode *inode,
> spin_unlock(&uprobes_treelock);
> }
>
> +/* Right now the only user of this is trace_uprobe. */
> +void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
> +
> /*
> * Called from mmap_region/vma_adjust with mm->mmap_sem acquired.
> *
> @@ -1056,7 +1059,13 @@ int uprobe_mmap(struct vm_area_struct *vma)
> struct uprobe *uprobe, *u;
> struct inode *inode;
>
> - if (no_uprobe_events() || !valid_vma(vma, true))
> + if (no_uprobe_events())
> + return 0;
> +
> + if (uprobe_mmap_callback)
> + uprobe_mmap_callback(vma);
> +
> + if (!valid_vma(vma, true))
> return 0;
>
> inode = file_inode(vma->vm_file);
> @@ -1247,6 +1256,16 @@ void uprobe_end_dup_mmap(void)
> percpu_up_read(&dup_mmap_sem);
> }
>
> +void uprobe_down_write_dup_mmap(void)
> +{
> + percpu_down_write(&dup_mmap_sem);
> +}
> +
> +void uprobe_up_write_dup_mmap(void)
> +{
> + percpu_up_write(&dup_mmap_sem);
> +}
> +
I'm not sure why these hunks are not done in the previous patch.
If you separate the "uprobe_map_info" export patch, this should
also be separated. (Or both merged into this patch.)
> void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
> {
> if (test_bit(MMF_HAS_UPROBES, &oldmm->flags)) {
> diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> index 0d450b4..1a48b04 100644
> --- a/kernel/trace/trace_uprobe.c
> +++ b/kernel/trace/trace_uprobe.c
> @@ -25,6 +25,8 @@
> #include <linux/namei.h>
> #include <linux/string.h>
> #include <linux/rculist.h>
> +#include <linux/sched/mm.h>
> +#include <linux/highmem.h>
>
> #include "trace_probe.h"
>
> @@ -58,6 +60,7 @@ struct trace_uprobe {
> struct inode *inode;
> char *filename;
> unsigned long offset;
> + unsigned long ref_ctr_offset;
> unsigned long nhit;
> struct trace_probe tp;
> };
> @@ -364,10 +367,10 @@ static int create_trace_uprobe(int argc, char **argv)
> {
> struct trace_uprobe *tu;
> struct inode *inode;
> - char *arg, *event, *group, *filename;
> + char *arg, *event, *group, *filename, *rctr, *rctr_end;
> char buf[MAX_EVENT_NAME_LEN];
> struct path path;
> - unsigned long offset;
> + unsigned long offset, ref_ctr_offset;
> bool is_delete, is_return;
> int i, ret;
>
> @@ -377,6 +380,7 @@ static int create_trace_uprobe(int argc, char **argv)
> is_return = false;
> event = NULL;
> group = NULL;
> + ref_ctr_offset = 0;
>
> /* argc must be >= 1 */
> if (argv[0][0] == '-')
> @@ -456,6 +460,26 @@ static int create_trace_uprobe(int argc, char **argv)
> goto fail_address_parse;
> }
>
> + /* Parse reference counter offset if specified. */
> + rctr = strchr(arg, '(');
> + if (rctr) {
> + rctr_end = strchr(rctr, ')');
> + if (!rctr_end || *(rctr_end + 1) != 0) {
> + ret = -EINVAL;
> + pr_info("Invalid reference counter offset.\n");
> + goto fail_address_parse;
> + }
> +
> + *rctr++ = '\0';
> + *rctr_end = '\0';
> + ret = kstrtoul(rctr, 0, &ref_ctr_offset);
> + if (ret) {
> + pr_info("Invalid reference counter offset.\n");
> + goto fail_address_parse;
> + }
> + }
> +
> + /* Parse uprobe offset. */
> ret = kstrtoul(arg, 0, &offset);
> if (ret)
> goto fail_address_parse;
> @@ -490,6 +514,7 @@ static int create_trace_uprobe(int argc, char **argv)
> goto fail_address_parse;
> }
> tu->offset = offset;
> + tu->ref_ctr_offset = ref_ctr_offset;
> tu->inode = inode;
> tu->filename = kstrdup(filename, GFP_KERNEL);
>
> @@ -622,6 +647,8 @@ static int probes_seq_show(struct seq_file *m, void *v)
> break;
> }
> }
> + if (tu->ref_ctr_offset)
> + seq_printf(m, "(0x%lx)", tu->ref_ctr_offset);
>
> for (i = 0; i < tu->tp.nr_args; i++)
> seq_printf(m, " %s=%s", tu->tp.args[i].name, tu->tp.args[i].comm);
> @@ -896,6 +923,129 @@ static void uretprobe_trace_func(struct trace_uprobe *tu, unsigned long func,
> return trace_handle_return(s);
> }
>
> +static bool sdt_valid_vma(struct trace_uprobe *tu,
> + struct vm_area_struct *vma,
> + unsigned long vaddr)
> +{
> + return tu->ref_ctr_offset &&
> + vma->vm_file &&
> + file_inode(vma->vm_file) == tu->inode &&
> + vma->vm_flags & VM_WRITE &&
> + vma->vm_start <= vaddr &&
> + vma->vm_end > vaddr;
> +}
> +
> +static struct vm_area_struct *sdt_find_vma(struct trace_uprobe *tu,
> + struct mm_struct *mm,
> + unsigned long vaddr)
> +{
> + struct vm_area_struct *vma = find_vma(mm, vaddr);
> +
> + return (vma && sdt_valid_vma(tu, vma, vaddr)) ? vma : NULL;
> +}
> +
> +/*
> + * The reference counter gates the invocation of the probe. If
> + * present, it is 0 by default. One needs to increment it before
> + * tracing the probe and decrement it when done.
> + */
> +static int
> +sdt_update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
> +{
> + void *kaddr;
> + struct page *page;
> + struct vm_area_struct *vma;
> + int ret = 0;
> + unsigned short *ptr;
> +
> + if (vaddr == 0)
> + return -EINVAL;
> +
> + ret = get_user_pages_remote(NULL, mm, vaddr, 1,
> + FOLL_FORCE | FOLL_WRITE, &page, &vma, NULL);
> + if (ret <= 0)
> + return ret;
Hmm, get_user_pages_remote() said
===
If nr_pages is 0 or negative, returns 0. If no pages were pinned, returns -errno.
===
And you've passed 1 for nr_pages, so it must be 1 or -errno.
> +
> + kaddr = kmap_atomic(page);
> + ptr = kaddr + (vaddr & ~PAGE_MASK);
> + *ptr += d;
> + kunmap_atomic(kaddr);
> +
> + put_page(page);
> + return 0;
And obviously 0 means "success" for sdt_update_ref_ctr().
I think if get_user_pages_remote returns 0, this should
return -EBUSY (*) or something else.
* It seems that if faultin_page() in __get_user_pages()
returns -EBUSY, get_user_pages_remote() can return 0.
> +}
> +
> +static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
> +{
> + struct uprobe_map_info *info;
> +
> + uprobe_down_write_dup_mmap();
> + info = uprobe_build_map_info(tu->inode->i_mapping,
> + tu->ref_ctr_offset, false);
> + if (IS_ERR(info))
> + goto out;
> +
> + while (info) {
> + down_write(&info->mm->mmap_sem);
> +
> + if (sdt_find_vma(tu, info->mm, info->vaddr))
> + sdt_update_ref_ctr(info->mm, info->vaddr, 1);
Don't you have to handle the error to map pages here?
> +
> + up_write(&info->mm->mmap_sem);
> + info = uprobe_free_map_info(info);
> + }
> +
> +out:
> + uprobe_up_write_dup_mmap();
> +}
> +
> +/* Called with down_write(&vma->vm_mm->mmap_sem) */
> +static void trace_uprobe_mmap(struct vm_area_struct *vma)
> +{
> + struct trace_uprobe *tu;
> + unsigned long vaddr;
> +
> + if (!(vma->vm_flags & VM_WRITE))
> + return;
> +
> + mutex_lock(&uprobe_lock);
> + list_for_each_entry(tu, &uprobe_list, list) {
> + if (!trace_probe_is_enabled(&tu->tp))
> + continue;
> +
> + vaddr = vma_offset_to_vaddr(vma, tu->ref_ctr_offset);
> + if (!sdt_valid_vma(tu, vma, vaddr))
> + continue;
> +
> + sdt_update_ref_ctr(vma->vm_mm, vaddr, 1);
Same here.
> + }
> + mutex_unlock(&uprobe_lock);
> +}
> +
> +static void sdt_decrement_ref_ctr(struct trace_uprobe *tu)
> +{
> + struct uprobe_map_info *info;
> +
> + uprobe_down_write_dup_mmap();
> + info = uprobe_build_map_info(tu->inode->i_mapping,
> + tu->ref_ctr_offset, false);
> + if (IS_ERR(info))
> + goto out;
> +
> + while (info) {
> + down_write(&info->mm->mmap_sem);
> +
> + if (sdt_find_vma(tu, info->mm, info->vaddr))
> + sdt_update_ref_ctr(info->mm, info->vaddr, -1);
Ditto.
Thank you,
> +
> + up_write(&info->mm->mmap_sem);
> + info = uprobe_free_map_info(info);
> + }
> +
> +out:
> + uprobe_up_write_dup_mmap();
> +}
> +
> typedef bool (*filter_func_t)(struct uprobe_consumer *self,
> enum uprobe_filter_ctx ctx,
> struct mm_struct *mm);
> @@ -941,6 +1091,9 @@ typedef bool (*filter_func_t)(struct uprobe_consumer *self,
> if (ret)
> goto err_buffer;
>
> + if (tu->ref_ctr_offset)
> + sdt_increment_ref_ctr(tu);
> +
> return 0;
>
> err_buffer:
> @@ -981,6 +1134,9 @@ typedef bool (*filter_func_t)(struct uprobe_consumer *self,
>
> WARN_ON(!uprobe_filter_is_empty(&tu->filter));
>
> + if (tu->ref_ctr_offset)
> + sdt_decrement_ref_ctr(tu);
> +
> uprobe_unregister(tu->inode, tu->offset, &tu->consumer);
> tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;
>
> @@ -1425,6 +1581,8 @@ static __init int init_uprobe_trace(void)
> /* Profile interface */
> trace_create_file("uprobe_profile", 0444, d_tracer,
> NULL, &uprobe_profile_ops);
> +
> + uprobe_mmap_callback = trace_uprobe_mmap;
> return 0;
> }
>
> --
> 1.8.3.1
>
--
Masami Hiramatsu <[email protected]>
Hi Masami,
On 05/04/2018 10:18 AM, Masami Hiramatsu wrote:
>> +void uprobe_down_write_dup_mmap(void)
>> +{
>> + percpu_down_write(&dup_mmap_sem);
>> +}
>> +
>> +void uprobe_up_write_dup_mmap(void)
>> +{
>> + percpu_up_write(&dup_mmap_sem);
>> +}
>> +
> I'm not sure why these hunks are not done in the previous patch.
> If you separate the "uprobe_map_info" export patch, this should
> also be separated. (Or both merged into this patch.)
Sure, I'll add separate patch for dup_mmap_sem.
>> +/*
>> + * The reference counter gates the invocation of the probe. If
>> + * present, it is 0 by default. One needs to increment it before
>> + * tracing the probe and decrement it when done.
>> + */
>> +static int
>> +sdt_update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
>> +{
>> + void *kaddr;
>> + struct page *page;
>> + struct vm_area_struct *vma;
>> + int ret = 0;
>> + unsigned short *ptr;
>> +
>> + if (vaddr == 0)
>> + return -EINVAL;
>> +
>> + ret = get_user_pages_remote(NULL, mm, vaddr, 1,
>> + FOLL_FORCE | FOLL_WRITE, &page, &vma, NULL);
>> + if (ret <= 0)
>> + return ret;
> Hmm, get_user_pages_remote() said
>
> ===
> If nr_pages is 0 or negative, returns 0. If no pages were pinned, returns -errno.
> ===
>
> And you've passed 1 for nr_pages, so it must be 1 or -errno.
>
>> +
>> + kaddr = kmap_atomic(page);
>> + ptr = kaddr + (vaddr & ~PAGE_MASK);
>> + *ptr += d;
>> + kunmap_atomic(kaddr);
>> +
>> + put_page(page);
>> + return 0;
> And obviously 0 means "success" for sdt_update_ref_ctr().
> I think if get_user_pages_remote returns 0, this should
> return -EBUSY (*) or something else.
>
> * It seems that if faultin_page() in __get_user_pages()
> returns -EBUSY, get_user_pages_remote() can return 0.
Ah good catch :). Will change it.
>> +}
>> +
>> +static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
>> +{
>> + struct uprobe_map_info *info;
>> +
>> + uprobe_down_write_dup_mmap();
>> + info = uprobe_build_map_info(tu->inode->i_mapping,
>> + tu->ref_ctr_offset, false);
>> + if (IS_ERR(info))
>> + goto out;
>> +
>> + while (info) {
>> + down_write(&info->mm->mmap_sem);
>> +
>> + if (sdt_find_vma(tu, info->mm, info->vaddr))
>> + sdt_update_ref_ctr(info->mm, info->vaddr, 1);
> Don't you have to handle the error to map pages here?
Correct... I think I have to feed back the error code to
probe_event_{enable|disable} and handle the failure there.
Thanks for the review,
Ravi
Sorry Ravi, I saved the new version for review and forgot about it... I'll
try to do this on weekend.
On 05/03, Ravi Bangoria wrote:
>
> On 04/17/2018 10:02 AM, Ravi Bangoria wrote:
> > Userspace Statically Defined Tracepoints[1] are dtrace style markers
> > inside userspace applications. Applications like PostgreSQL, MySQL,
> > Pthread, Perl, Python, Java, Ruby, Node.js, libvirt, QEMU, glib etc
> > have these markers embedded in them. These markers are added by developer
> > at important places in the code. Each marker source expands to a single
> > nop instruction in the compiled code but there may be additional
> > overhead for computing the marker arguments which expands to couple of
> > instructions. In case the overhead is more, execution of it can be
> > omitted by runtime if() condition when no one is tracing on the marker:
> >
> > if (reference_counter > 0) {
> > Execute marker instructions;
> > }
> >
> > Default value of reference counter is 0. Tracer has to increment the
> > reference counter before tracing on a marker and decrement it when
> > done with the tracing.
>
> Hi Oleg, Masami,
>
> Can you please review this :) ?
>
> Thanks.
>
Hi Masami,
On 05/04/2018 07:51 PM, Ravi Bangoria wrote:
>
>>> +}
>>> +
>>> +static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
>>> +{
>>> + struct uprobe_map_info *info;
>>> +
>>> + uprobe_down_write_dup_mmap();
>>> + info = uprobe_build_map_info(tu->inode->i_mapping,
>>> + tu->ref_ctr_offset, false);
>>> + if (IS_ERR(info))
>>> + goto out;
>>> +
>>> + while (info) {
>>> + down_write(&info->mm->mmap_sem);
>>> +
>>> + if (sdt_find_vma(tu, info->mm, info->vaddr))
>>> + sdt_update_ref_ctr(info->mm, info->vaddr, 1);
>> Don't you have to handle the error to map pages here?
> Correct... I think I have to feed back the error code to
> probe_event_{enable|disable} and handle the failure there.
I looked at this. Actually, it looks difficult to feed back errors to
probe_event_{enable|disable}, especially in the mmap() case.
Is it fine if we just warn about sdt_update_ref_ctr() failures in dmesg?
I'm doing this in [PATCH 7]. (Though it makes more sense to do that in
[PATCH 6]; I will change it in the next version.)
Any better ideas?
BTW, the same issue exists for a normal uprobe. If uprobe_mmap() fails,
there is no feedback to trace_uprobe and no warnings in dmesg either!
There was a patch by Naveen to warn about such failures in dmesg,
but it didn't go in: https://lkml.org/lkml/2017/9/22/155
Also, I'll add a check in sdt_update_ref_ctr() to make sure the reference
counter never goes negative in case the increment fails but the decrement
succeeds. OTOH, if the increment succeeds but the decrement fails, the
counter remains >0, but there is no real harm except that we will
execute some unnecessary code.
Thanks,
Ravi
On Mon, 7 May 2018 13:51:21 +0530
Ravi Bangoria <[email protected]> wrote:
> Hi Masami,
>
> On 05/04/2018 07:51 PM, Ravi Bangoria wrote:
> >
> >>> +}
> >>> +
> >>> +static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
> >>> +{
> >>> + struct uprobe_map_info *info;
> >>> +
> >>> + uprobe_down_write_dup_mmap();
> >>> + info = uprobe_build_map_info(tu->inode->i_mapping,
> >>> + tu->ref_ctr_offset, false);
> >>> + if (IS_ERR(info))
> >>> + goto out;
> >>> +
> >>> + while (info) {
> >>> + down_write(&info->mm->mmap_sem);
> >>> +
> >>> + if (sdt_find_vma(tu, info->mm, info->vaddr))
> >>> + sdt_update_ref_ctr(info->mm, info->vaddr, 1);
> >> Don't you have to handle the error to map pages here?
> > Correct... I think I have to feed back the error code to
> > probe_event_{enable|disable} and handle the failure there.
>
> I looked at this. Actually, it looks difficult to feed back errors to
> probe_event_{enable|disable}, especially in the mmap() case.
Hmm, can't you roll that back if sdt_increment_ref_ctr() fails?
If so, how does sdt_decrement_ref_ctr() work in that case?
> Is it fine if we just warn about sdt_update_ref_ctr() failures in dmesg?
> I'm doing this in [PATCH 7]. (Though it makes more sense to do that in
> [PATCH 6]; I will change it in the next version.)
Of course we need to warn about it at least, but the best is to
reject enabling it.
>
> Any better ideas?
>
> BTW, the same issue exists for a normal uprobe. If uprobe_mmap() fails,
> there is no feedback to trace_uprobe and no warnings in dmesg either!
> There was a patch by Naveen to warn about such failures in dmesg,
> but it didn't go in: https://lkml.org/lkml/2017/9/22/155
Oops, that's a real bug. It seems the ball is in Naveen's hand.
Naveen, could you update it according to Oleg's comment, and resend it?
>
> Also, I'll add a check in sdt_update_ref_ctr() to make sure the reference
> counter never goes negative in case the increment fails but the decrement
> succeeds. OTOH, if the increment succeeds but the decrement fails, the
> counter remains >0, but there is no real harm except that we will
> execute some unnecessary code.
I see. Please carefully clarify whether such a case is a kernel bug or
not. I would like to know which condition causes that uneven behavior.
Thank you,
>
> Thanks,
> Ravi
>
--
Masami Hiramatsu <[email protected]>
Masami Hiramatsu wrote:
> On Mon, 7 May 2018 13:51:21 +0530
> Ravi Bangoria <[email protected]> wrote:
>
>> BTW, the same issue exists for a normal uprobe. If uprobe_mmap() fails,
>> there is no feedback to trace_uprobe and no warnings in dmesg either!
>> There was a patch by Naveen to warn about such failures in dmesg,
>> but it didn't go in: https://lkml.org/lkml/2017/9/22/155
>
> Oops, that's a real bug. It seems the ball is in Naveen's hand.
> Naveen, could you update it according to Oleg's comment, and resend it?
Yes, I've had to put that series on the backburner. I will try and get
to it soon. Thanks for the reminder.
- Naveen
Hi Masami,
On 05/07/2018 09:26 PM, Masami Hiramatsu wrote:
> On Mon, 7 May 2018 13:51:21 +0530
> Ravi Bangoria <[email protected]> wrote:
>
>> Hi Masami,
>>
>> On 05/04/2018 07:51 PM, Ravi Bangoria wrote:
>>>>> +}
>>>>> +
>>>>> +static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
>>>>> +{
>>>>> + struct uprobe_map_info *info;
>>>>> +
>>>>> + uprobe_down_write_dup_mmap();
>>>>> + info = uprobe_build_map_info(tu->inode->i_mapping,
>>>>> + tu->ref_ctr_offset, false);
>>>>> + if (IS_ERR(info))
>>>>> + goto out;
>>>>> +
>>>>> + while (info) {
>>>>> + down_write(&info->mm->mmap_sem);
>>>>> +
>>>>> + if (sdt_find_vma(tu, info->mm, info->vaddr))
>>>>> + sdt_update_ref_ctr(info->mm, info->vaddr, 1);
>>>> Don't you have to handle the error to map pages here?
>>> Correct... I think I have to feed back the error code to
>>> probe_event_{enable|disable} and handle the failure there.
>> I looked at this. Actually, it looks difficult to feed back errors to
>> probe_event_{enable|disable}, especially in the mmap() case.
> Hmm, can't you roll that back if sdt_increment_ref_ctr() fails?
> If so, how does sdt_decrement_ref_ctr() work in that case?
Yes, it's easy to roll back in sdt_increment_ref_ctr(). But not much can
be done if trace_uprobe_mmap() fails.
What would be good is if we could feed back uprobe_mmap() failures
to the perf infrastructure, where they could finally be surfaced by
perf record. But that should be done as separate work.
>> Is it fine if we just warn about sdt_update_ref_ctr() failures in dmesg?
>> I'm doing this in [PATCH 7]. (Though it makes more sense to do that in
>> [PATCH 6]; I will change it in the next version.)
> Of course we need to warn about it at least, but the best is to
> reject enabling it.
Yes, we can reject it for sdt_increment_ref_ctr() failures.
>> Any better ideas?
>>
>> BTW, the same issue exists for a normal uprobe. If uprobe_mmap() fails,
>> there is no feedback to trace_uprobe and no warnings in dmesg either!
>> There was a patch by Naveen to warn about such failures in dmesg,
>> but it didn't go in: https://lkml.org/lkml/2017/9/22/155
> Oops, that's a real bug. It seems the ball is in Naveen's hand.
> Naveen, could you update it according to Oleg's comment, and resend it?
>
>> Also, I'll add a check in sdt_update_ref_ctr() to make sure the reference
>> counter never goes negative in case the increment fails but the decrement
>> succeeds. OTOH, if the increment succeeds but the decrement fails, the
>> counter remains >0, but there is no real harm except that we will
>> execute some unnecessary code.
> I see. Please carefully clarify whether such a case is a kernel bug or
> not. I would like to know which condition causes that uneven behavior.
Sure, will do that.
Thanks,
Ravi
Hi Ravi,
sorry for the delay!
I am trying to recall what this code should do ;) At first glance, I do
not see any serious problem in this version... except that it doesn't
apply to Linus's tree. Just one question for now.
On 04/17, Ravi Bangoria wrote:
>
> @@ -941,6 +1091,9 @@ typedef bool (*filter_func_t)(struct uprobe_consumer *self,
> if (ret)
> goto err_buffer;
>
> + if (tu->ref_ctr_offset)
> + sdt_increment_ref_ctr(tu);
> +
iiuc, this is probe_event_enable()...
Looks racy, but afaics the race with uprobe_mmap() will be closed by the next
change. However, it seems that probe_event_disable() can race with trace_uprobe_mmap()
too and the next 7/9 patch won't help,
> + if (tu->ref_ctr_offset)
> + sdt_decrement_ref_ctr(tu);
> +
> uprobe_unregister(tu->inode, tu->offset, &tu->consumer);
> tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;
so what if trace_uprobe_mmap() comes right after uprobe_unregister() ?
Note that trace_probe_is_enabled() is T until we update tp.flags.
Oleg.
Thanks Oleg for the review,
On 05/24/2018 09:56 PM, Oleg Nesterov wrote:
> On 04/17, Ravi Bangoria wrote:
>>
>> @@ -941,6 +1091,9 @@ typedef bool (*filter_func_t)(struct uprobe_consumer *self,
>> if (ret)
>> goto err_buffer;
>>
>> + if (tu->ref_ctr_offset)
>> + sdt_increment_ref_ctr(tu);
>> +
>
> iiuc, this is probe_event_enable()...
>
> Looks racy, but afaics the race with uprobe_mmap() will be closed by the next
> change. However, it seems that probe_event_disable() can race with trace_uprobe_mmap()
> too and the next 7/9 patch won't help,
>
>> + if (tu->ref_ctr_offset)
>> + sdt_decrement_ref_ctr(tu);
>> +
>> uprobe_unregister(tu->inode, tu->offset, &tu->consumer);
>> tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;
>
> so what if trace_uprobe_mmap() comes right after uprobe_unregister() ?
> Note that trace_probe_is_enabled() is T until we update tp.flags.
Sure, I'll look at your comments.
Apart from these, I've also found a deadlock between uprobe_lock and
mm->mmap_sem. trace_uprobe_mmap() takes these locks in
mm->mmap_sem
uprobe_lock
order, but some other code path takes them in the reverse order. I've
included a sample lockdep warning at the end. The issue is that
mm->mmap_sem is not under trace_uprobe_mmap()'s control, and we have to
take uprobe_lock to loop over all trace_uprobes.
Any idea how this can be resolved?
Sample lockdep warning:
[ 499.258006] ======================================================
[ 499.258205] WARNING: possible circular locking dependency detected
[ 499.258409] 4.17.0-rc3+ #76 Not tainted
[ 499.258528] ------------------------------------------------------
[ 499.258731] perf/6744 is trying to acquire lock:
[ 499.258895] 00000000e4895f49 (uprobe_lock){+.+.}, at: trace_uprobe_mmap+0x78/0x130
[ 499.259147]
[ 499.259147] but task is already holding lock:
[ 499.259349] 000000009ec93a76 (&mm->mmap_sem){++++}, at: vm_mmap_pgoff+0xe0/0x160
[ 499.259597]
[ 499.259597] which lock already depends on the new lock.
[ 499.259597]
[ 499.259848]
[ 499.259848] the existing dependency chain (in reverse order) is:
[ 499.260086]
[ 499.260086] -> #4 (&mm->mmap_sem){++++}:
[ 499.260277] __lock_acquire+0x53c/0x910
[ 499.260442] lock_acquire+0xf4/0x2f0
[ 499.260595] down_write_killable+0x6c/0x150
[ 499.260764] copy_process.isra.34.part.35+0x1594/0x1be0
[ 499.260967] _do_fork+0xf8/0x910
[ 499.261090] ppc_clone+0x8/0xc
[ 499.261209]
[ 499.261209] -> #3 (&dup_mmap_sem){++++}:
[ 499.261378] __lock_acquire+0x53c/0x910
[ 499.261540] lock_acquire+0xf4/0x2f0
[ 499.261669] down_write+0x6c/0x110
[ 499.261793] percpu_down_write+0x48/0x140
[ 499.261954] register_for_each_vma+0x6c/0x2a0
[ 499.262116] uprobe_register+0x230/0x320
[ 499.262277] probe_event_enable+0x1cc/0x540
[ 499.262435] perf_trace_event_init+0x1e0/0x350
[ 499.262587] perf_trace_init+0xb0/0x110
[ 499.262750] perf_tp_event_init+0x38/0x90
[ 499.262910] perf_try_init_event+0x10c/0x150
[ 499.263075] perf_event_alloc+0xbb0/0xf10
[ 499.263235] sys_perf_event_open+0x2a8/0xdd0
[ 499.263396] system_call+0x58/0x6c
[ 499.263516]
[ 499.263516] -> #2 (&uprobe->register_rwsem){++++}:
[ 499.263723] __lock_acquire+0x53c/0x910
[ 499.263884] lock_acquire+0xf4/0x2f0
[ 499.264002] down_write+0x6c/0x110
[ 499.264118] uprobe_register+0x1ec/0x320
[ 499.264283] probe_event_enable+0x1cc/0x540
[ 499.264442] perf_trace_event_init+0x1e0/0x350
[ 499.264603] perf_trace_init+0xb0/0x110
[ 499.264766] perf_tp_event_init+0x38/0x90
[ 499.264930] perf_try_init_event+0x10c/0x150
[ 499.265092] perf_event_alloc+0xbb0/0xf10
[ 499.265261] sys_perf_event_open+0x2a8/0xdd0
[ 499.265424] system_call+0x58/0x6c
[ 499.265542]
[ 499.265542] -> #1 (event_mutex){+.+.}:
[ 499.265738] __lock_acquire+0x53c/0x910
[ 499.265896] lock_acquire+0xf4/0x2f0
[ 499.266019] __mutex_lock+0xa0/0xab0
[ 499.266142] trace_add_event_call+0x44/0x100
[ 499.266310] create_trace_uprobe+0x4a0/0x8b0
[ 499.266474] trace_run_command+0xa4/0xc0
[ 499.266631] trace_parse_run_command+0xe4/0x200
[ 499.266799] probes_write+0x20/0x40
[ 499.266922] __vfs_write+0x6c/0x240
[ 499.267041] vfs_write+0xd0/0x240
[ 499.267166] ksys_write+0x6c/0x110
[ 499.267295] system_call+0x58/0x6c
[ 499.267413]
[ 499.267413] -> #0 (uprobe_lock){+.+.}:
[ 499.267591] validate_chain.isra.34+0xbd0/0x1000
[ 499.267747] __lock_acquire+0x53c/0x910
[ 499.267917] lock_acquire+0xf4/0x2f0
[ 499.268048] __mutex_lock+0xa0/0xab0
[ 499.268170] trace_uprobe_mmap+0x78/0x130
[ 499.268335] uprobe_mmap+0x80/0x3b0
[ 499.268464] mmap_region+0x290/0x660
[ 499.268590] do_mmap+0x40c/0x500
[ 499.268718] vm_mmap_pgoff+0x114/0x160
[ 499.268870] ksys_mmap_pgoff+0xe8/0x2e0
[ 499.269034] sys_mmap+0x84/0xf0
[ 499.269161] system_call+0x58/0x6c
[ 499.269279]
[ 499.269279] other info that might help us debug this:
[ 499.269279]
[ 499.269524] Chain exists of:
[ 499.269524] uprobe_lock --> &dup_mmap_sem --> &mm->mmap_sem
[ 499.269524]
[ 499.269856] Possible unsafe locking scenario:
[ 499.269856]
[ 499.270058] CPU0 CPU1
[ 499.270223] ---- ----
[ 499.270384] lock(&mm->mmap_sem);
[ 499.270514] lock(&dup_mmap_sem);
[ 499.270711] lock(&mm->mmap_sem);
[ 499.270923] lock(uprobe_lock);
[ 499.271046]
[ 499.271046] *** DEADLOCK ***
[ 499.271046]
[ 499.271256] 1 lock held by perf/6744:
[ 499.271377] #0: 000000009ec93a76 (&mm->mmap_sem){++++}, at: vm_mmap_pgoff+0xe0/0x160
[ 499.271628]
[ 499.271628] stack backtrace:
[ 499.271797] CPU: 25 PID: 6744 Comm: perf Not tainted 4.17.0-rc3+ #76
[ 499.272003] Call Trace:
[ 499.272094] [c0000000e32d74a0] [c000000000b00174] dump_stack+0xe8/0x164 (unreliable)
[ 499.272349] [c0000000e32d74f0] [c0000000001a905c] print_circular_bug.isra.30+0x354/0x388
[ 499.272590] [c0000000e32d7590] [c0000000001a3050] check_prev_add.constprop.38+0x8f0/0x910
[ 499.272828] [c0000000e32d7690] [c0000000001a3c40] validate_chain.isra.34+0xbd0/0x1000
[ 499.273070] [c0000000e32d7780] [c0000000001a57cc] __lock_acquire+0x53c/0x910
[ 499.273311] [c0000000e32d7860] [c0000000001a65b4] lock_acquire+0xf4/0x2f0
[ 499.273510] [c0000000e32d7930] [c000000000b1d1f0] __mutex_lock+0xa0/0xab0
[ 499.273717] [c0000000e32d7a40] [c0000000002b01b8] trace_uprobe_mmap+0x78/0x130
[ 499.273952] [c0000000e32d7a90] [c0000000002d7070] uprobe_mmap+0x80/0x3b0
[ 499.274153] [c0000000e32d7b20] [c0000000003550a0] mmap_region+0x290/0x660
[ 499.274353] [c0000000e32d7c00] [c00000000035587c] do_mmap+0x40c/0x500
[ 499.274560] [c0000000e32d7c80] [c00000000031ebc4] vm_mmap_pgoff+0x114/0x160
[ 499.274763] [c0000000e32d7d60] [c000000000352818] ksys_mmap_pgoff+0xe8/0x2e0
[ 499.275013] [c0000000e32d7de0] [c000000000016864] sys_mmap+0x84/0xf0
[ 499.275207] [c0000000e32d7e30] [c00000000000b404] system_call+0x58/0x6c