This patchset adds tracepoints around mmap_lock acquisition. This is useful so
we can measure the latency of lock acquisition, in order to detect contention.

This version is based upon linux-next (since it depends on some recently-merged
patches [1] [2]).

I removed the existing {Reviewed,Acked}-by: lines from v4, since I think the
patch has changed significantly enough to warrant another look (and I figure
it's better to err in this direction in any case :) ).

Changes since v4:
- Redesigned buffer allocation to deal with the fact that a trace event might be
interrupted by e.g. an IRQ, for which a per-cpu buffer is insufficient. Now we
allocate one buffer per CPU for each context we might be called in
(currently 4: normal, irq, softirq, NMI). We have three trace events which can
potentially all be enabled, and all of which need a buffer; to avoid further
multiplying the number of buffers by 3, they share the same set of buffers,
which requires a spinlock + counter setup so we only allocate the buffers
once, and then free them only when *all* of the trace events are _unreg()-ed.

Changes since v3:
- Switched EXPORT_SYMBOL to EXPORT_TRACEPOINT_SYMBOL, removed comment.
- Removed redundant trace_..._enabled() check.
- Defined the three TRACE_EVENTs separately, instead of sharing an event class.
The tradeoff is 524 more bytes in .text, but the start_locking and released
events no longer have a vestigial "success" field, so they're simpler +
faster.

Changes since v2:
- Refactored tracing helper functions so the helpers are simpler, but the
locking functions are slightly more verbose. Overall, this decreased the delta
to mmap_lock.h slightly.
- Fixed a typo in a comment. :)

Changes since v1:
- Functions renamed to reserve the "trace_" prefix for actual tracepoints.
- We no longer measure the duration directly. Instead, users are expected to
construct a synthetic event which computes the interval between "start
locking" and "acquire returned".
- The new helper for checking if tracepoints are enabled in a header is used to
avoid un-inlining any of the lock wrappers. This yields ~zero overhead if the
tracepoints aren't enabled, and therefore obviates the need for a Kconfig for
this change.
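As noted above, latency is meant to be computed in userspace via a synthetic
event. A hedged sketch of how that might look using tracefs histogram triggers
(the event names are taken from this patchset, but the exact field/key choices
and paths are assumptions; see Documentation/trace/histogram.rst and adjust to
your kernel):

```sh
cd /sys/kernel/tracing

# Define a synthetic event carrying the computed latency (assumed field name).
echo 'mmap_lock_latency u64 lat' >> synthetic_events

# Stamp the start time, keyed by pid.
echo 'hist:keys=common_pid:ts0=common_timestamp.usecs' \
    > events/mmap_lock/mmap_lock_start_locking/trigger

# On acquire_returned, compute the interval and fire the synthetic event.
echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(mmap_lock.mmap_lock_start_locking).mmap_lock_latency($lat)' \
    > events/mmap_lock/mmap_lock_acquire_returned/trigger

# View a log2 histogram of the latencies.
echo 'hist:keys=lat.log2' > events/synthetic/mmap_lock_latency/trigger
cat events/synthetic/mmap_lock_latency/hist
```

This requires a kernel with CONFIG_HIST_TRIGGERS and these tracepoints enabled,
and must be run as root against a live tracefs mount.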

[1] https://lore.kernel.org/patchwork/patch/1316922/
[2] https://lore.kernel.org/patchwork/patch/1311996/

Axel Rasmussen (1):
  mmap_lock: add tracepoints around lock acquisition

 include/linux/mmap_lock.h        |  95 +++++++++++++++-
 include/trace/events/mmap_lock.h | 107 ++++++++++++++++++
 mm/Makefile                      |   2 +-
 mm/mmap_lock.c                   | 187 +++++++++++++++++++++++++++++++
 4 files changed, 385 insertions(+), 6 deletions(-)
 create mode 100644 include/trace/events/mmap_lock.h
 create mode 100644 mm/mmap_lock.c

--
2.29.0.rc2.309.g374f81d7ae-goog