2023-03-07 14:50:02

by Valentin Schneider

Subject: [PATCH v5 0/7] Generic IPI sending tracepoint

Background
==========

Detecting IPI *reception* is relatively easy, e.g. using
trace_irq_handler_{entry,exit} or even just function-trace
flush_smp_call_function_queue() for SMP calls.

Figuring out their *origin* is trickier, as there is no generic tracepoint tied
to e.g. smp_call_function():

o AFAIA x86 has no tracepoint tied to sending IPIs, only receiving them
(cf. trace_call_function{_single}_entry()).
o arm/arm64 do have trace_ipi_raise(), which gives us the target cpus but also a
mostly useless string (smp_calls will all be "Function call interrupts"); see
the snippet below.
o Other architectures don't seem to have any IPI-sending related tracepoint.
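
For reference, the send-side hook behind that string looks roughly like this
(from arch/arm64/kernel/smp.c; ipi_types[] holds the human-readable strings):

  static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
  {
          trace_ipi_raise(target, ipi_types[ipinr]);
          __ipi_send_mask(ipi_desc[ipinr], target);
  }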

I believe one reason the arm/arm64 tracepoints ended up the way they did is
that these archs used to handle IPIs differently from regular interrupts (the
IRQ driver would directly invoke an IPI-handling routine), which meant IPIs
never showed up in trace_irq_handler_{entry,exit}. The trace_ipi_{entry,exit}
tracepoints gave a way to trace IPI reception, but those have become redundant
as of:

56afcd3dbd19 ("ARM: Allow IPIs to be handled as normal interrupts")
d3afc7f12987 ("arm64: Allow IPIs to be handled as normal interrupts")

which gave IPIs a "proper" handler function invoked through
generic_handle_domain_irq(), making them show up via
trace_irq_handler_{entry,exit}.

Changing stuff up
=================

Per the above, it would make sense to reshuffle trace_ipi_raise() and move it
into generic code. This also came up during Daniel's talk on Osnoise at the CPU
isolation MC of LPC 2022 [1].

Now, to be useful, such a tracepoint needs to export:
o targeted CPU(s)
o calling context

The only way to get the calling context with trace_ipi_raise() is to trigger a
stack dump, e.g. $(trace-cmd -e ipi* -T echo 42).

This series instead introduces a new tracepoint which exports the relevant
context (callsite, and requested callback for when the callsite isn't helpful),
and is usable by all architectures as it sits in generic code.

Another thing worth mentioning is that, depending on the callsite, the _RET_IP_
fed to the tracepoint is not always useful - generic_exec_single() doesn't tell
you much about the actual callback being sent via IPI, which is why the new
tracepoint also has a @callback argument; see the sketch below.
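
As an illustration, the rough shape of what patch 7 does in
__smp_call_single_queue() (a sketch of the approach, not the verbatim patch):

  if (trace_ipi_send_cpumask_enabled()) {
          call_single_data_t *csd;
          smp_call_func_t func;

          csd = container_of(node, call_single_data_t, node.llist);
          func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
                  sched_ttwu_pending : csd->func;

          trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
  }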

Patches
=======

o Patches 1-5 add the tracepoint and spread it across relevant sites.
Patch 5 ends up sprinkling lots of #include <trace/events/ipi.h>, which I'm not
the biggest fan of, but it is the least horrible solution I've been able to
come up with so far.

o Patch 7 tries to be smart about tracing the callback associated with the
IPI.

This results in having IPI trace events for:

o smp_call_function*()
o smp_send_reschedule()
o irq_work_queue*()
o standalone uses of __smp_call_single_queue()

This is incomplete; just looking at arm64, there are more IPI types that
aren't covered:

IPI_CPU_STOP,
IPI_CPU_CRASH_STOP,
IPI_TIMER,
IPI_WAKEUP,

but apart from IPI_TIMER (cf. tick_broadcast()), those IPIs are both infrequent
and accompanied by identifiable interference (stopper or cpuhp threads being
scheduled). I've added an item to my to-do list to handle those in a later
series for the sake of completeness, but IMO this is ready to use.

Results
=======

Using a recent enough libtraceevent (1.7.0 and above):

$ trace-cmd record -e 'ipi:*' hackbench
$ trace-cmd report
hackbench-159 [002] 136.973122: ipi_send_cpumask: cpumask=0 callsite=generic_exec_single+0x33 callback=nohz_csd_func+0x0
hackbench-159 [002] 136.977945: ipi_send_cpumask: cpumask=0 callsite=generic_exec_single+0x33 callback=nohz_csd_func+0x0
hackbench-159 [002] 136.984576: ipi_send_cpumask: cpumask=3 callsite=check_preempt_curr+0x37 callback=0x0
hackbench-159 [002] 136.985996: ipi_send_cpumask: cpumask=0 callsite=generic_exec_single+0x33 callback=nohz_csd_func+0x0
[...]

Links
=====

[1]: https://youtu.be/5gT57y4OzBM?t=14234

Revisions
=========

v4: https://lore.kernel.org/lkml/[email protected]/
v3: https://lore.kernel.org/lkml/[email protected]/
v2: https://lore.kernel.org/lkml/[email protected]/
v1: https://lore.kernel.org/lkml/[email protected]/

v4 -> v5
++++++++

o Rebased against 6.3-rc1

v3 -> v4
++++++++

o Rebased against 6.2-rc4
Re-ran my coccinelle scripts for the treewide change; only loongarch needed
changes
o Dropped cpumask trace event field patch (now in 6.2-rc1)
o Applied RB and Ack tags
Ingo, I wasn't sure if you meant to Ack the whole series or just the patch you
replied to, so since I didn't want to unlawfully forge any tag I only added
the one.
o Did a small pass on comments and changelogs

v2 -> v3
++++++++

o Dropped the generic export of smp_send_reschedule(), turned it into a macro
and a bunch of imports
o Dropped the send_call_function_single_ipi() macro madness, split it into sched
and smp bits using some of Peter's suggestions

v1 -> v2
++++++++

o Ditched single-CPU tracepoint
o Changed tracepoint signature to include callback
o Changed tracepoint callsite field to void *; the parameter is still UL to
save on casts due to using _RET_IP_ (see the note below).
o Fixed linking failures due to not exporting smp_send_reschedule()
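
Note on the above: _RET_IP_ is already an unsigned long, cf.
include/linux/instruction_pointer.h:

  #define _RET_IP_        (unsigned long)__builtin_return_address(0)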

Valentin Schneider (7):
trace: Add trace_ipi_send_cpumask()
sched, smp: Trace IPIs sent via send_call_function_single_ipi()
smp: Trace IPIs sent via arch_send_call_function_ipi_mask()
irq_work: Trace self-IPIs sent via arch_irq_work_raise()
treewide: Trace IPIs sent via smp_send_reschedule()
smp: reword smp call IPI comment
sched, smp: Trace smp callback causing an IPI

arch/alpha/kernel/smp.c | 2 +-
arch/arc/kernel/smp.c | 2 +-
arch/arm/kernel/smp.c | 5 +-
arch/arm/mach-actions/platsmp.c | 2 +
arch/arm64/kernel/smp.c | 3 +-
arch/csky/kernel/smp.c | 2 +-
arch/hexagon/kernel/smp.c | 2 +-
arch/ia64/kernel/smp.c | 4 +-
arch/loongarch/kernel/smp.c | 4 +-
arch/mips/include/asm/smp.h | 2 +-
arch/mips/kernel/rtlx-cmp.c | 2 +
arch/openrisc/kernel/smp.c | 2 +-
arch/parisc/kernel/smp.c | 4 +-
arch/powerpc/kernel/smp.c | 6 +-
arch/powerpc/kvm/book3s_hv.c | 3 +
arch/powerpc/platforms/powernv/subcore.c | 2 +
arch/riscv/kernel/smp.c | 4 +-
arch/s390/kernel/smp.c | 2 +-
arch/sh/kernel/smp.c | 2 +-
arch/sparc/kernel/smp_32.c | 2 +-
arch/sparc/kernel/smp_64.c | 2 +-
arch/x86/include/asm/smp.h | 2 +-
arch/x86/kvm/svm/svm.c | 4 ++
arch/x86/kvm/x86.c | 2 +
arch/xtensa/kernel/smp.c | 2 +-
include/linux/smp.h | 11 +++-
include/trace/events/ipi.h | 22 +++++++
kernel/irq_work.c | 14 ++++-
kernel/sched/core.c | 19 ++++--
kernel/sched/smp.h | 2 +-
kernel/smp.c | 78 +++++++++++++++++++-----
virt/kvm/kvm_main.c | 2 +
32 files changed, 164 insertions(+), 53 deletions(-)

--
2.31.1



2023-03-07 14:50:07

by Valentin Schneider

Subject: [PATCH v5 3/7] smp: Trace IPIs sent via arch_send_call_function_ipi_mask()

This simply wraps around the arch function and prepends it with a
tracepoint, similar to send_call_function_single_ipi().

Signed-off-by: Valentin Schneider <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
---
kernel/smp.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index e2ca1e2f31274..93b4386cd3096 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -160,6 +160,13 @@ void __init call_function_init(void)
smpcfd_prepare_cpu(smp_processor_id());
}

+static __always_inline void
+send_call_function_ipi_mask(const struct cpumask *mask)
+{
+ trace_ipi_send_cpumask(mask, _RET_IP_, NULL);
+ arch_send_call_function_ipi_mask(mask);
+}
+
#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG

static DEFINE_STATIC_KEY_FALSE(csdlock_debug_enabled);
@@ -970,7 +977,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
if (nr_cpus == 1)
send_call_function_single_ipi(last_cpu);
else if (likely(nr_cpus > 1))
- arch_send_call_function_ipi_mask(cfd->cpumask_ipi);
+ send_call_function_ipi_mask(cfd->cpumask_ipi);

cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->pinged, this_cpu, CFD_SEQ_NOCPU, CFD_SEQ_PINGED);
}
--
2.31.1


2023-03-07 14:50:12

by Valentin Schneider

Subject: [PATCH v5 4/7] irq_work: Trace self-IPIs sent via arch_irq_work_raise()

IPIs sent to remote CPUs via irq_work_queue_on() are now covered by
trace_ipi_send_cpumask(), add another instance of the tracepoint to cover
self-IPIs.

Signed-off-by: Valentin Schneider <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
---
kernel/irq_work.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 7afa40fe5cc43..c33e88e32a67a 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -22,6 +22,8 @@
#include <asm/processor.h>
#include <linux/kasan.h>

+#include <trace/events/ipi.h>
+
static DEFINE_PER_CPU(struct llist_head, raised_list);
static DEFINE_PER_CPU(struct llist_head, lazy_list);
static DEFINE_PER_CPU(struct task_struct *, irq_workd);
@@ -74,6 +76,16 @@ void __weak arch_irq_work_raise(void)
*/
}

+static __always_inline void irq_work_raise(struct irq_work *work)
+{
+ if (trace_ipi_send_cpumask_enabled() && arch_irq_work_has_interrupt())
+ trace_ipi_send_cpumask(cpumask_of(smp_processor_id()),
+ _RET_IP_,
+ work->func);
+
+ arch_irq_work_raise();
+}
+
/* Enqueue on current CPU, work must already be claimed and preempt disabled */
static void __irq_work_queue_local(struct irq_work *work)
{
@@ -99,7 +111,7 @@ static void __irq_work_queue_local(struct irq_work *work)

/* If the work is "lazy", handle it from next tick if any */
if (!lazy_work || tick_nohz_tick_stopped())
- arch_irq_work_raise();
+ irq_work_raise(work);
}

/* Enqueue the irq work @work on the current CPU */
--
2.31.1


2023-03-07 15:00:40

by Valentin Schneider

Subject: [PATCH v5 2/7] sched, smp: Trace IPIs sent via send_call_function_single_ipi()

send_call_function_single_ipi() is the thing that sends IPIs at the bottom
of smp_call_function*() via either generic_exec_single() or
smp_call_function_many_cond(). Give it an IPI-related tracepoint.

Note that this ends up tracing any IPI sent via __smp_call_single_queue(),
which covers __ttwu_queue_wakelist() and irq_work_queue_on() "for free".
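
For context, both of those paths funnel into the same helper; roughly
(simplified from kernel/smp.c, debug instrumentation omitted):

  void __smp_call_single_queue(int cpu, struct llist_node *node)
  {
          /* ttwu wakelist entries and irq_work both land here */
          if (llist_add(node, &per_cpu(call_single_queue, cpu)))
                  send_call_function_single_ipi(cpu);
  }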

Signed-off-by: Valentin Schneider <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
---
arch/arm/kernel/smp.c | 3 ---
arch/arm64/kernel/smp.c | 1 -
kernel/sched/core.c | 7 +++++--
kernel/smp.c | 4 ++++
4 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 0b8c25763adc3..b6c832e195427 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -48,9 +48,6 @@
#include <asm/mach/arch.h>
#include <asm/mpu.h>

-#define CREATE_TRACE_POINTS
-#include <trace/events/ipi.h>
-
/*
* as from 2.5, kernels no longer have an init_tasks structure
* so we need some other way of telling a new secondary core
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 4e83272642552..438c16fc44633 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -51,7 +51,6 @@
#include <asm/ptrace.h>
#include <asm/virt.h>

-#define CREATE_TRACE_POINTS
#include <trace/events/ipi.h>

DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index af017e038b482..85114f75f1c9c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -81,6 +81,7 @@
#include <linux/sched/rseq_api.h>
#include <trace/events/sched.h>
#undef CREATE_TRACE_POINTS
+#include <trace/events/ipi.h>

#include "sched.h"
#include "stats.h"
@@ -3830,10 +3831,12 @@ void send_call_function_single_ipi(int cpu)
{
struct rq *rq = cpu_rq(cpu);

- if (!set_nr_if_polling(rq->idle))
+ if (!set_nr_if_polling(rq->idle)) {
+ trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);
arch_send_call_function_single_ipi(cpu);
- else
+ } else {
trace_sched_wake_idle_without_ipi(cpu);
+ }
}

/*
diff --git a/kernel/smp.c b/kernel/smp.c
index 06a413987a14a..e2ca1e2f31274 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -26,6 +26,10 @@
#include <linux/sched/debug.h>
#include <linux/jump_label.h>

+#define CREATE_TRACE_POINTS
+#include <trace/events/ipi.h>
+#undef CREATE_TRACE_POINTS
+
#include "smpboot.h"
#include "sched/smp.h"

--
2.31.1


2023-03-07 15:00:45

by Valentin Schneider

Subject: [PATCH v5 1/7] trace: Add trace_ipi_send_cpumask()

trace_ipi_raise() is unsuitable for generically tracing IPI sources due to
its "reason" argument being an uninformative string (on arm64 all you get
is "Function call interrupts" for SMP calls).

Add a variant of it that exports a target cpumask, a callsite and a callback.

Signed-off-by: Valentin Schneider <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
---
include/trace/events/ipi.h | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)

diff --git a/include/trace/events/ipi.h b/include/trace/events/ipi.h
index 0be71dad6ec03..b1125dc27682c 100644
--- a/include/trace/events/ipi.h
+++ b/include/trace/events/ipi.h
@@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise,
TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
);

+TRACE_EVENT(ipi_send_cpumask,
+
+ TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback),
+
+ TP_ARGS(cpumask, callsite, callback),
+
+ TP_STRUCT__entry(
+ __cpumask(cpumask)
+ __field(void *, callsite)
+ __field(void *, callback)
+ ),
+
+ TP_fast_assign(
+ __assign_cpumask(cpumask, cpumask_bits(cpumask));
+ __entry->callsite = (void *)callsite;
+ __entry->callback = callback;
+ ),
+
+ TP_printk("cpumask=%s callsite=%pS callback=%pS",
+ __get_cpumask(cpumask), __entry->callsite, __entry->callback)
+);
+
DECLARE_EVENT_CLASS(ipi_handler,

TP_PROTO(const char *reason),
--
2.31.1


2023-03-07 15:01:03

by Valentin Schneider

Subject: [PATCH v5 5/7] treewide: Trace IPIs sent via smp_send_reschedule()

To be able to trace invocations of smp_send_reschedule(), rename the
arch-specific definitions of it to arch_smp_send_reschedule() and wrap it
into an smp_send_reschedule() that contains a tracepoint.

Changes to include the declaration of the tracepoint were driven by the
following coccinelle script:

@func_use@
@@
smp_send_reschedule(...);

@include@
@@
#include <trace/events/ipi.h>

@no_include depends on func_use && !include@
@@
#include <...>
+
+ #include <trace/events/ipi.h>

Signed-off-by: Valentin Schneider <[email protected]>
[csky bits]
Acked-by: Guo Ren <[email protected]>
[riscv bits]
Acked-by: Palmer Dabbelt <[email protected]>
---
arch/alpha/kernel/smp.c | 2 +-
arch/arc/kernel/smp.c | 2 +-
arch/arm/kernel/smp.c | 2 +-
arch/arm/mach-actions/platsmp.c | 2 ++
arch/arm64/kernel/smp.c | 2 +-
arch/csky/kernel/smp.c | 2 +-
arch/hexagon/kernel/smp.c | 2 +-
arch/ia64/kernel/smp.c | 4 ++--
arch/loongarch/kernel/smp.c | 4 ++--
arch/mips/include/asm/smp.h | 2 +-
arch/mips/kernel/rtlx-cmp.c | 2 ++
arch/openrisc/kernel/smp.c | 2 +-
arch/parisc/kernel/smp.c | 4 ++--
arch/powerpc/kernel/smp.c | 6 ++++--
arch/powerpc/kvm/book3s_hv.c | 3 +++
arch/powerpc/platforms/powernv/subcore.c | 2 ++
arch/riscv/kernel/smp.c | 4 ++--
arch/s390/kernel/smp.c | 2 +-
arch/sh/kernel/smp.c | 2 +-
arch/sparc/kernel/smp_32.c | 2 +-
arch/sparc/kernel/smp_64.c | 2 +-
arch/x86/include/asm/smp.h | 2 +-
arch/x86/kvm/svm/svm.c | 4 ++++
arch/x86/kvm/x86.c | 2 ++
arch/xtensa/kernel/smp.c | 2 +-
include/linux/smp.h | 11 +++++++++--
virt/kvm/kvm_main.c | 2 ++
27 files changed, 52 insertions(+), 26 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 0ede4b044e869..7439b2377df57 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -562,7 +562,7 @@ handle_ipi(struct pt_regs *regs)
}

void
-smp_send_reschedule(int cpu)
+arch_smp_send_reschedule(int cpu)
{
#ifdef DEBUG_IPI_MSG
if (cpu == hard_smp_processor_id())
diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
index ad93fe6e4b77d..409cfa4675b40 100644
--- a/arch/arc/kernel/smp.c
+++ b/arch/arc/kernel/smp.c
@@ -292,7 +292,7 @@ static void ipi_send_msg(const struct cpumask *callmap, enum ipi_msg_type msg)
ipi_send_msg_one(cpu, msg);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
ipi_send_msg_one(cpu, IPI_RESCHEDULE);
}
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index b6c832e195427..46b23dc1f94ad 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -744,7 +744,7 @@ void __init set_smp_ipi_range(int ipi_base, int n)
ipi_setup(smp_processor_id());
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/arm/mach-actions/platsmp.c b/arch/arm/mach-actions/platsmp.c
index f26618b435145..7b208e96fbb67 100644
--- a/arch/arm/mach-actions/platsmp.c
+++ b/arch/arm/mach-actions/platsmp.c
@@ -20,6 +20,8 @@
#include <asm/smp_plat.h>
#include <asm/smp_scu.h>

+#include <trace/events/ipi.h>
+
#define OWL_CPU1_ADDR 0x50
#define OWL_CPU1_FLAG 0x5c

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 438c16fc44633..66f2745062dda 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -976,7 +976,7 @@ void __init set_smp_ipi_range(int ipi_base, int n)
ipi_setup(smp_processor_id());
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
index b45d1073307f2..be77383acb5fc 100644
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -140,7 +140,7 @@ void smp_send_stop(void)
on_each_cpu(ipi_stop, NULL, 1);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index 4ba93e59370c4..4e8bee25b8c68 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -217,7 +217,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
}
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index e2cc59db86bc2..ea4f009a232b4 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -220,11 +220,11 @@ kdump_smp_send_init(void)
* Called with preemption disabled.
*/
void
-smp_send_reschedule (int cpu)
+arch_smp_send_reschedule (int cpu)
{
ia64_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);

/*
* Called with preemption disabled.
diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
index 8c6e227cb29df..83225610a1480 100644
--- a/arch/loongarch/kernel/smp.c
+++ b/arch/loongarch/kernel/smp.c
@@ -155,11 +155,11 @@ void loongson_send_ipi_mask(const struct cpumask *mask, unsigned int action)
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
*/
-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
loongson_send_ipi_single(cpu, SMP_RESCHEDULE);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);

irqreturn_t loongson_ipi_interrupt(int irq, void *dev)
{
diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h
index 5d9ff61004ca7..9806e79895d99 100644
--- a/arch/mips/include/asm/smp.h
+++ b/arch/mips/include/asm/smp.h
@@ -66,7 +66,7 @@ extern void calculate_cpu_foreign_map(void);
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
*/
-static inline void smp_send_reschedule(int cpu)
+static inline void arch_smp_send_reschedule(int cpu)
{
extern const struct plat_smp_ops *mp_ops; /* private */

diff --git a/arch/mips/kernel/rtlx-cmp.c b/arch/mips/kernel/rtlx-cmp.c
index d26dcc4b46e74..e991cc936c1cd 100644
--- a/arch/mips/kernel/rtlx-cmp.c
+++ b/arch/mips/kernel/rtlx-cmp.c
@@ -17,6 +17,8 @@
#include <asm/vpe.h>
#include <asm/rtlx.h>

+#include <trace/events/ipi.h>
+
static int major;

static void rtlx_interrupt(void)
diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
index e1419095a6f0a..0a7a059e2dff4 100644
--- a/arch/openrisc/kernel/smp.c
+++ b/arch/openrisc/kernel/smp.c
@@ -173,7 +173,7 @@ void handle_IPI(unsigned int ipi_msg)
}
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
index 7dbd92cafae38..b7fc859fa87db 100644
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -246,8 +246,8 @@ void kgdb_roundup_cpus(void)
inline void
smp_send_stop(void) { send_IPI_allbutself(IPI_CPU_STOP); }

-void
-smp_send_reschedule(int cpu) { send_IPI_single(cpu, IPI_RESCHEDULE); }
+void
+arch_smp_send_reschedule(int cpu) { send_IPI_single(cpu, IPI_RESCHEDULE); }

void
smp_send_all_nop(void)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 6b90f10a6c819..35f101ccb540d 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -61,6 +61,8 @@
#include <asm/kup.h>
#include <asm/fadump.h>

+#include <trace/events/ipi.h>
+
#ifdef DEBUG
#include <asm/udbg.h>
#define DBG(fmt...) udbg_printf(fmt)
@@ -364,12 +366,12 @@ static inline void do_message_pass(int cpu, int msg)
#endif
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
if (likely(smp_ops))
do_message_pass(cpu, PPC_MSG_RESCHEDULE);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);

void arch_send_call_function_single_ipi(int cpu)
{
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6ba68dd6190bd..3b70b5f80bd56 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -43,6 +43,7 @@
#include <linux/compiler.h>
#include <linux/of.h>
#include <linux/irqdomain.h>
+#include <linux/smp.h>

#include <asm/ftrace.h>
#include <asm/reg.h>
@@ -80,6 +81,8 @@
#include <asm/dtl.h>
#include <asm/plpar_wrappers.h>

+#include <trace/events/ipi.h>
+
#include "book3s.h"
#include "book3s_hv.h"

diff --git a/arch/powerpc/platforms/powernv/subcore.c b/arch/powerpc/platforms/powernv/subcore.c
index 7e98b00ea2e84..c53c4c7977680 100644
--- a/arch/powerpc/platforms/powernv/subcore.c
+++ b/arch/powerpc/platforms/powernv/subcore.c
@@ -20,6 +20,8 @@
#include <asm/opal.h>
#include <asm/smp.h>

+#include <trace/events/ipi.h>
+
#include "subcore.h"
#include "powernv.h"

diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
index 8c3b59f1f9b80..42e9656a1db2e 100644
--- a/arch/riscv/kernel/smp.c
+++ b/arch/riscv/kernel/smp.c
@@ -328,8 +328,8 @@ bool smp_crash_stop_failed(void)
}
#endif

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi_single(cpu, IPI_RESCHEDULE);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index d4888453bbf8b..a710319f97e94 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -553,7 +553,7 @@ void arch_send_call_function_single_ipi(int cpu)
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
*/
-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
pcpu_ec_call(pcpu_devices + cpu, ec_schedule);
}
diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 65924d9ec2459..5cf35a774dc70 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -256,7 +256,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
(bogosum / (5000/HZ)) % 100);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
mp_ops->send_ipi(cpu, SMP_MSG_RESCHEDULE);
}
diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
index ad8094d955eba..87eaa7719fa27 100644
--- a/arch/sparc/kernel/smp_32.c
+++ b/arch/sparc/kernel/smp_32.c
@@ -120,7 +120,7 @@ void cpu_panic(void)

struct linux_prom_registers smp_penguin_ctable = { 0 };

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
/*
* CPU model dependent way of implementing IPI generation targeting
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index a55295d1b9244..e5964d1d8b37d 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1430,7 +1430,7 @@ static unsigned long send_cpu_poke(int cpu)
return hv_err;
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
if (cpu == smp_processor_id()) {
WARN_ON_ONCE(preemptible());
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index b4dbb20dab1a1..f9757123d8fa1 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -98,7 +98,7 @@ static inline void play_dead(void)
smp_ops.play_dead();
}

-static inline void smp_send_reschedule(int cpu)
+static inline void arch_smp_send_reschedule(int cpu)
{
smp_ops.smp_send_reschedule(cpu);
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 252e7f37e4e2e..424fcdba4c783 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -27,6 +27,7 @@
#include <linux/swap.h>
#include <linux/rwsem.h>
#include <linux/cc_platform.h>
+#include <linux/smp.h>

#include <asm/apic.h>
#include <asm/perf_event.h>
@@ -41,6 +42,9 @@
#include <asm/fpu/api.h>

#include <asm/virtext.h>
+
+#include <trace/events/ipi.h>
+
#include "trace.h"

#include "svm.h"
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7713420abab09..07ba937bdb6f1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -60,7 +60,9 @@
#include <linux/mem_encrypt.h>
#include <linux/entry-kvm.h>
#include <linux/suspend.h>
+#include <linux/smp.h>

+#include <trace/events/ipi.h>
#include <trace/events/kvm.h>

#include <asm/debugreg.h>
diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
index 4dc109dd6214e..d95907b8e4d38 100644
--- a/arch/xtensa/kernel/smp.c
+++ b/arch/xtensa/kernel/smp.c
@@ -389,7 +389,7 @@ void arch_send_call_function_single_ipi(int cpu)
send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/include/linux/smp.h b/include/linux/smp.h
index a80ab58ae3f1d..c036a2228d8d0 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -125,8 +125,15 @@ extern void smp_send_stop(void);
/*
* sends a 'reschedule' event to another CPU:
*/
-extern void smp_send_reschedule(int cpu);
-
+extern void arch_smp_send_reschedule(int cpu);
+/*
+ * scheduler_ipi() is inline so can't be passed as callback reason, but the
+ * callsite IP should be sufficient for root-causing IPIs sent from here.
+ */
+#define smp_send_reschedule(cpu) ({ \
+ trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL); \
+ arch_smp_send_reschedule(cpu); \
+})

/*
* Prepare machine for booting other CPUs.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331e..2e27af08d84c3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -67,6 +67,8 @@

#include <linux/kvm_dirty_ring.h>

+#include <trace/events/ipi.h>
+
/* Worst case buffer size needed for holding an integer. */
#define ITOA_MAX_LEN 12

--
2.31.1


2023-03-22 09:49:29

by Peter Zijlstra

Subject: Re: [PATCH v5 1/7] trace: Add trace_ipi_send_cpumask()

On Tue, Mar 07, 2023 at 02:35:52PM +0000, Valentin Schneider wrote:
> trace_ipi_raise() is unsuitable for generically tracing IPI sources due to
> its "reason" argument being an uninformative string (on arm64 all you get
> is "Function call interrupts" for SMP calls).
>
> Add a variant of it that exports a target cpumask, a callsite and a callback.
>
> Signed-off-by: Valentin Schneider <[email protected]>
> Reviewed-by: Steven Rostedt (Google) <[email protected]>
> ---
> include/trace/events/ipi.h | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/include/trace/events/ipi.h b/include/trace/events/ipi.h
> index 0be71dad6ec03..b1125dc27682c 100644
> --- a/include/trace/events/ipi.h
> +++ b/include/trace/events/ipi.h
> @@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise,
> TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
> );
>
> +TRACE_EVENT(ipi_send_cpumask,
> +
> + TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback),
> +
> + TP_ARGS(cpumask, callsite, callback),
> +
> + TP_STRUCT__entry(
> + __cpumask(cpumask)
> + __field(void *, callsite)
> + __field(void *, callback)
> + ),
> +
> + TP_fast_assign(
> + __assign_cpumask(cpumask, cpumask_bits(cpumask));
> + __entry->callsite = (void *)callsite;
> + __entry->callback = callback;
> + ),
> +
> + TP_printk("cpumask=%s callsite=%pS callback=%pS",
> + __get_cpumask(cpumask), __entry->callsite, __entry->callback)
> +);

Would it make sense to add a variant like ipi_send_cpu() that records a
single cpu instead of a cpumask? A lot of sites seem to do
cpumask_of(cpu) for that first argument, and it seems to me it is quite
daft to have to memcpy a full multi-word cpumask in those cases.

Remember, nr_possible_cpus > 64 is quite common these days.
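
Back of the envelope on that copy cost (a userspace sketch, not kernel code):

  #include <stdio.h>

  int main(void)
  {
          /* bytes copied per event for an N-bit cpumask vs a 4-byte cpu id */
          for (unsigned int ncpus = 64; ncpus <= 8192; ncpus *= 4)
                  printf("nr_cpus=%-4u -> %4u-byte cpumask vs 4 bytes\n",
                         ncpus, ncpus / 8);
          return 0;
  }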

2023-03-22 10:31:34

by Peter Zijlstra

Subject: Re: [PATCH v5 1/7] trace: Add trace_ipi_send_cpumask()

On Wed, Mar 22, 2023 at 10:39:55AM +0100, Peter Zijlstra wrote:
> On Tue, Mar 07, 2023 at 02:35:52PM +0000, Valentin Schneider wrote:
> > trace_ipi_raise() is unsuitable for generically tracing IPI sources due to
> > its "reason" argument being an uninformative string (on arm64 all you get
> > is "Function call interrupts" for SMP calls).
> >
> > Add a variant of it that exports a target cpumask, a callsite and a callback.
> >
> > Signed-off-by: Valentin Schneider <[email protected]>
> > Reviewed-by: Steven Rostedt (Google) <[email protected]>
> > ---
> > include/trace/events/ipi.h | 22 ++++++++++++++++++++++
> > 1 file changed, 22 insertions(+)
> >
> > diff --git a/include/trace/events/ipi.h b/include/trace/events/ipi.h
> > index 0be71dad6ec03..b1125dc27682c 100644
> > --- a/include/trace/events/ipi.h
> > +++ b/include/trace/events/ipi.h
> > @@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise,
> > TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
> > );
> >
> > +TRACE_EVENT(ipi_send_cpumask,
> > +
> > + TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback),
> > +
> > + TP_ARGS(cpumask, callsite, callback),
> > +
> > + TP_STRUCT__entry(
> > + __cpumask(cpumask)
> > + __field(void *, callsite)
> > + __field(void *, callback)
> > + ),
> > +
> > + TP_fast_assign(
> > + __assign_cpumask(cpumask, cpumask_bits(cpumask));
> > + __entry->callsite = (void *)callsite;
> > + __entry->callback = callback;
> > + ),
> > +
> > + TP_printk("cpumask=%s callsite=%pS callback=%pS",
> > + __get_cpumask(cpumask), __entry->callsite, __entry->callback)
> > +);
>
> Would it make sense to add a variant like ipi_send_cpu() that records a
> single cpu instead of a cpumask? A lot of sites seem to do
> cpumask_of(cpu) for that first argument, and it seems to me it is quite
> daft to have to memcpy a full multi-word cpumask in those cases.
>
> Remember, nr_possible_cpus > 64 is quite common these days.

Something a little bit like so...

---
Subject: trace: Add trace_ipi_send_cpu()
From: Peter Zijlstra <[email protected]>
Date: Wed Mar 22 11:28:36 CET 2023

Because copying cpumasks around when targeting a single CPU is a bit
daft...

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
include/linux/smp.h | 6 +++---
include/trace/events/ipi.h | 22 ++++++++++++++++++++++
kernel/irq_work.c | 6 ++----
kernel/smp.c | 4 ++--
4 files changed, 29 insertions(+), 9 deletions(-)

--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -130,9 +130,9 @@ extern void arch_smp_send_reschedule(int
* scheduler_ipi() is inline so can't be passed as callback reason, but the
* callsite IP should be sufficient for root-causing IPIs sent from here.
*/
-#define smp_send_reschedule(cpu) ({ \
- trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL); \
- arch_smp_send_reschedule(cpu); \
+#define smp_send_reschedule(cpu) ({ \
+ trace_ipi_send_cpu(cpu, _RET_IP_, NULL); \
+ arch_smp_send_reschedule(cpu); \
})

/*
--- a/include/trace/events/ipi.h
+++ b/include/trace/events/ipi.h
@@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise,
TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
);

+TRACE_EVENT(ipi_send_cpu,
+
+ TP_PROTO(const unsigned int cpu, unsigned long callsite, void *callback),
+
+ TP_ARGS(cpu, callsite, callback),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(void *, callsite)
+ __field(void *, callback)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->callsite = (void *)callsite;
+ __entry->callback = callback;
+ ),
+
+ TP_printk("cpu=%s callsite=%pS callback=%pS",
+ __entry->cpu, __entry->callsite, __entry->callback)
+);
+
TRACE_EVENT(ipi_send_cpumask,

TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback),
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -78,10 +78,8 @@ void __weak arch_irq_work_raise(void)

static __always_inline void irq_work_raise(struct irq_work *work)
{
- if (trace_ipi_send_cpumask_enabled() && arch_irq_work_has_interrupt())
- trace_ipi_send_cpumask(cpumask_of(smp_processor_id()),
- _RET_IP_,
- work->func);
+ if (trace_ipi_send_cpu_enabled() && arch_irq_work_has_interrupt())
+ trace_ipi_send_cpu(smp_processor_id(), _RET_IP_, work->func);

arch_irq_work_raise();
}
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -109,7 +109,7 @@ static __always_inline void
send_call_function_single_ipi(int cpu, smp_call_func_t func)
{
if (call_function_single_prep_ipi(cpu)) {
- trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
+ trace_ipi_send_cpu(cpu, _RET_IP_, func);
arch_send_call_function_single_ipi(cpu);
}
}
@@ -348,7 +348,7 @@ void __smp_call_single_queue(int cpu, st
* even if we haven't sent the smp_call IPI yet (e.g. the stopper
* executes migration_cpu_stop() on the remote CPU).
*/
- if (trace_ipi_send_cpumask_enabled()) {
+ if (trace_ipi_send_cpu_enabled()) {
call_single_data_t *csd;
smp_call_func_t func;

2023-03-22 11:33:22

by Valentin Schneider

Subject: Re: [PATCH v5 1/7] trace: Add trace_ipi_send_cpumask()

On 22/03/23 11:30, Peter Zijlstra wrote:
> On Wed, Mar 22, 2023 at 10:39:55AM +0100, Peter Zijlstra wrote:
>> On Tue, Mar 07, 2023 at 02:35:52PM +0000, Valentin Schneider wrote:
>> > +TRACE_EVENT(ipi_send_cpumask,
>> > +
>> > + TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback),
>> > +
>> > + TP_ARGS(cpumask, callsite, callback),
>> > +
>> > + TP_STRUCT__entry(
>> > + __cpumask(cpumask)
>> > + __field(void *, callsite)
>> > + __field(void *, callback)
>> > + ),
>> > +
>> > + TP_fast_assign(
>> > + __assign_cpumask(cpumask, cpumask_bits(cpumask));
>> > + __entry->callsite = (void *)callsite;
>> > + __entry->callback = callback;
>> > + ),
>> > +
>> > + TP_printk("cpumask=%s callsite=%pS callback=%pS",
>> > + __get_cpumask(cpumask), __entry->callsite, __entry->callback)
>> > +);
>>
>> Would it make sense to add a variant like ipi_send_cpu() that records a
>> single cpu instead of a cpumask? A lot of sites seem to do
>> cpumask_of(cpu) for that first argument, and it seems to me it is quite
>> daft to have to memcpy a full multi-word cpumask in those cases.
>>
>> Remember, nr_possible_cpus > 64 is quite common these days.
>
> Something we litte bit like so...
>

I was wondering whether we could stick with a single trace event, but let
ftrace be aware of weight=1 vs weight>1 cpumasks.

For weight>1, it would memcpy() as usual; for weight=1, it could write a
pointer to a cpu_bit_bitmap[] equivalent embedded in the trace itself.
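
Schematically, the constraint that gets in the way (an illustrative helper,
not the kernel's actual macros):

  /*
   * A dynamic field (__bitmask/__cpumask included) is described by a u32
   * packing [size:16 | offset:16], the offset being relative to the start
   * of the event record itself - so it cannot name outside storage.
   */
  static inline u32 dynamic_field_desc(u16 offset_in_record, u16 size)
  {
          return ((u32)size << 16) | offset_in_record;
  }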

Unfortunately, Ftrace bitmasks are represented as a u32 made of two 16-bit
values: [offset in event record, size], so there isn't a straightforward
way to point to a "reusable" cpumask. AFAICT the only alternative would be
to do that via a different trace event, but then we should just go with a
plain old uint - i.e. do what you're doing here, so:

Tested-and-reviewed-by: Valentin Schneider <[email protected]>

(with the tiny typo fix below)

> @@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise,
> TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
> );
>
> +TRACE_EVENT(ipi_send_cpu,
> +
> + TP_PROTO(const unsigned int cpu, unsigned long callsite, void *callback),
> +
> + TP_ARGS(cpu, callsite, callback),
> +
> + TP_STRUCT__entry(
> + __field(unsigned int, cpu)
> + __field(void *, callsite)
> + __field(void *, callback)
> + ),
> +
> + TP_fast_assign(
> + __entry->cpu = cpu;
> + __entry->callsite = (void *)callsite;
> + __entry->callback = callback;
> + ),
> +
> + TP_printk("cpu=%s callsite=%pS callback=%pS",
^
s/s/u/

> + __entry->cpu, __entry->callsite, __entry->callback)
> +);
> +

Subject: [tip: smp/core] smp: Trace IPIs sent via arch_send_call_function_ipi_mask()

The following commit has been merged into the smp/core branch of tip:

Commit-ID: 08407b5f61c1bbd4ebb26a76474df4354fd76fb7
Gitweb: https://git.kernel.org/tip/08407b5f61c1bbd4ebb26a76474df4354fd76fb7
Author: Valentin Schneider <[email protected]>
AuthorDate: Tue, 07 Mar 2023 14:35:54
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Fri, 24 Mar 2023 11:01:27 +01:00

smp: Trace IPIs sent via arch_send_call_function_ipi_mask()

This simply wraps around the arch function and prepends it with a
tracepoint, similar to send_call_function_single_ipi().

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/smp.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 770e879..03e6d57 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -103,6 +103,13 @@ void __init call_function_init(void)
smpcfd_prepare_cpu(smp_processor_id());
}

+static __always_inline void
+send_call_function_ipi_mask(struct cpumask *mask)
+{
+ trace_ipi_send_cpumask(mask, _RET_IP_, NULL);
+ arch_send_call_function_ipi_mask(mask);
+}
+
#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG

static DEFINE_STATIC_KEY_MAYBE(CONFIG_CSD_LOCK_WAIT_DEBUG_DEFAULT, csdlock_debug_enabled);
@@ -762,7 +769,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
if (nr_cpus == 1)
send_call_function_single_ipi(last_cpu);
else if (likely(nr_cpus > 1))
- arch_send_call_function_ipi_mask(cfd->cpumask_ipi);
+ send_call_function_ipi_mask(cfd->cpumask_ipi);
}

if (run_local && (!cond_func || cond_func(this_cpu, info))) {

Subject: [tip: smp/core] sched, smp: Trace IPIs sent via send_call_function_single_ipi()

The following commit has been merged into the smp/core branch of tip:

Commit-ID: cc9cb0a71725aa8dd8d8f534a9b562bbf7981f75
Gitweb: https://git.kernel.org/tip/cc9cb0a71725aa8dd8d8f534a9b562bbf7981f75
Author: Valentin Schneider <[email protected]>
AuthorDate: Tue, 07 Mar 2023 14:35:53
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Fri, 24 Mar 2023 11:01:27 +01:00

sched, smp: Trace IPIs sent via send_call_function_single_ipi()

send_call_function_single_ipi() is the thing that sends IPIs at the bottom
of smp_call_function*() via either generic_exec_single() or
smp_call_function_many_cond(). Give it an IPI-related tracepoint.

Note that this ends up tracing any IPI sent via __smp_call_single_queue(),
which covers __ttwu_queue_wakelist() and irq_work_queue_on() "for free".

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/arm/kernel/smp.c | 1 -
arch/arm64/kernel/smp.c | 1 -
kernel/sched/core.c | 9 +++++++--
kernel/smp.c | 2 ++
4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 0b8c257..5edf092 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -48,7 +48,6 @@
#include <asm/mach/arch.h>
#include <asm/mpu.h>

-#define CREATE_TRACE_POINTS
#include <trace/events/ipi.h>

/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 4e83272..438c16f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -51,7 +51,6 @@
#include <asm/ptrace.h>
#include <asm/virt.h>

-#define CREATE_TRACE_POINTS
#include <trace/events/ipi.h>

DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 488655f..c26a2cd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -80,6 +80,7 @@
#define CREATE_TRACE_POINTS
#include <linux/sched/rseq_api.h>
#include <trace/events/sched.h>
+#include <trace/events/ipi.h>
#undef CREATE_TRACE_POINTS

#include "sched.h"
@@ -95,6 +96,8 @@
#include "../../io_uring/io-wq.h"
#include "../smpboot.h"

+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
+
/*
* Export tracepoints that act as a bare tracehook (ie: have no trace event
* associated with them) to allow external modules to probe them.
@@ -3830,10 +3833,12 @@ void send_call_function_single_ipi(int cpu)
{
struct rq *rq = cpu_rq(cpu);

- if (!set_nr_if_polling(rq->idle))
+ if (!set_nr_if_polling(rq->idle)) {
+ trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);
arch_send_call_function_single_ipi(cpu);
- else
+ } else {
trace_sched_wake_idle_without_ipi(cpu);
+ }
}

/*
diff --git a/kernel/smp.c b/kernel/smp.c
index 298ba75..770e879 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -26,6 +26,8 @@
#include <linux/sched/debug.h>
#include <linux/jump_label.h>

+#include <trace/events/ipi.h>
+
#include "smpboot.h"
#include "sched/smp.h"

Subject: [tip: smp/core] treewide: Trace IPIs sent via smp_send_reschedule()

The following commit has been merged into the smp/core branch of tip:

Commit-ID: 4c8c3c7f70a6779d30f5492acbc9978f4636fe7a
Gitweb: https://git.kernel.org/tip/4c8c3c7f70a6779d30f5492acbc9978f4636fe7a
Author: Valentin Schneider <[email protected]>
AuthorDate: Tue, 07 Mar 2023 14:35:56
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Fri, 24 Mar 2023 11:01:28 +01:00

treewide: Trace IPIs sent via smp_send_reschedule()

To be able to trace invocations of smp_send_reschedule(), rename the
arch-specific definitions of it to arch_smp_send_reschedule() and wrap it
into an smp_send_reschedule() that contains a tracepoint.

Changes to include the declaration of the tracepoint were driven by the
following coccinelle script:

@func_use@
@@
smp_send_reschedule(...);

@include@
@@
#include <trace/events/ipi.h>

@no_include depends on func_use && !include@
@@
#include <...>
+
+ #include <trace/events/ipi.h>

[csky bits]
[riscv bits]
Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Guo Ren <[email protected]>
Acked-by: Palmer Dabbelt <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/alpha/kernel/smp.c | 2 +-
arch/arc/kernel/smp.c | 2 +-
arch/arm/kernel/smp.c | 2 +-
arch/arm/mach-actions/platsmp.c | 2 ++
arch/arm64/kernel/smp.c | 2 +-
arch/csky/kernel/smp.c | 2 +-
arch/hexagon/kernel/smp.c | 2 +-
arch/ia64/kernel/smp.c | 4 ++--
arch/loongarch/kernel/smp.c | 4 ++--
arch/mips/include/asm/smp.h | 2 +-
arch/mips/kernel/rtlx-cmp.c | 2 ++
arch/openrisc/kernel/smp.c | 2 +-
arch/parisc/kernel/smp.c | 4 ++--
arch/powerpc/kernel/smp.c | 6 ++++--
arch/powerpc/kvm/book3s_hv.c | 3 +++
arch/powerpc/platforms/powernv/subcore.c | 2 ++
arch/riscv/kernel/smp.c | 4 ++--
arch/s390/kernel/smp.c | 2 +-
arch/sh/kernel/smp.c | 2 +-
arch/sparc/kernel/smp_32.c | 2 +-
arch/sparc/kernel/smp_64.c | 2 +-
arch/x86/include/asm/smp.h | 2 +-
arch/x86/kvm/svm/svm.c | 4 ++++
arch/x86/kvm/x86.c | 2 ++
arch/xtensa/kernel/smp.c | 2 +-
include/linux/smp.h | 11 +++++++++--
virt/kvm/kvm_main.c | 3 +++
27 files changed, 53 insertions(+), 26 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 0ede4b0..7439b23 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -562,7 +562,7 @@ handle_ipi(struct pt_regs *regs)
}

void
-smp_send_reschedule(int cpu)
+arch_smp_send_reschedule(int cpu)
{
#ifdef DEBUG_IPI_MSG
if (cpu == hard_smp_processor_id())
diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
index ad93fe6..409cfa4 100644
--- a/arch/arc/kernel/smp.c
+++ b/arch/arc/kernel/smp.c
@@ -292,7 +292,7 @@ static void ipi_send_msg(const struct cpumask *callmap, enum ipi_msg_type msg)
ipi_send_msg_one(cpu, msg);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
ipi_send_msg_one(cpu, IPI_RESCHEDULE);
}
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 5edf092..b350bfc 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -746,7 +746,7 @@ void __init set_smp_ipi_range(int ipi_base, int n)
ipi_setup(smp_processor_id());
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/arm/mach-actions/platsmp.c b/arch/arm/mach-actions/platsmp.c
index f26618b..7b208e9 100644
--- a/arch/arm/mach-actions/platsmp.c
+++ b/arch/arm/mach-actions/platsmp.c
@@ -20,6 +20,8 @@
#include <asm/smp_plat.h>
#include <asm/smp_scu.h>

+#include <trace/events/ipi.h>
+
#define OWL_CPU1_ADDR 0x50
#define OWL_CPU1_FLAG 0x5c

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 438c16f..66f2745 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -976,7 +976,7 @@ void __init set_smp_ipi_range(int ipi_base, int n)
ipi_setup(smp_processor_id());
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
index b45d107..be77383 100644
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -140,7 +140,7 @@ void smp_send_stop(void)
on_each_cpu(ipi_stop, NULL, 1);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index 4ba93e5..4e8bee2 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -217,7 +217,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
}
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index e2cc59d..ea4f009 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -220,11 +220,11 @@ kdump_smp_send_init(void)
* Called with preemption disabled.
*/
void
-smp_send_reschedule (int cpu)
+arch_smp_send_reschedule (int cpu)
{
ia64_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);

/*
* Called with preemption disabled.
diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
index 8c6e227..8322561 100644
--- a/arch/loongarch/kernel/smp.c
+++ b/arch/loongarch/kernel/smp.c
@@ -155,11 +155,11 @@ void loongson_send_ipi_mask(const struct cpumask *mask, unsigned int action)
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
*/
-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
loongson_send_ipi_single(cpu, SMP_RESCHEDULE);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);

irqreturn_t loongson_ipi_interrupt(int irq, void *dev)
{
diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h
index 5d9ff61..9806e79 100644
--- a/arch/mips/include/asm/smp.h
+++ b/arch/mips/include/asm/smp.h
@@ -66,7 +66,7 @@ extern void calculate_cpu_foreign_map(void);
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
*/
-static inline void smp_send_reschedule(int cpu)
+static inline void arch_smp_send_reschedule(int cpu)
{
extern const struct plat_smp_ops *mp_ops; /* private */

diff --git a/arch/mips/kernel/rtlx-cmp.c b/arch/mips/kernel/rtlx-cmp.c
index d26dcc4..e991cc9 100644
--- a/arch/mips/kernel/rtlx-cmp.c
+++ b/arch/mips/kernel/rtlx-cmp.c
@@ -17,6 +17,8 @@
#include <asm/vpe.h>
#include <asm/rtlx.h>

+#include <trace/events/ipi.h>
+
static int major;

static void rtlx_interrupt(void)
diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
index e141909..0a7a059 100644
--- a/arch/openrisc/kernel/smp.c
+++ b/arch/openrisc/kernel/smp.c
@@ -173,7 +173,7 @@ void handle_IPI(unsigned int ipi_msg)
}
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
index 7dbd92c..b7fc859 100644
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -246,8 +246,8 @@ void kgdb_roundup_cpus(void)
inline void
smp_send_stop(void) { send_IPI_allbutself(IPI_CPU_STOP); }

-void
-smp_send_reschedule(int cpu) { send_IPI_single(cpu, IPI_RESCHEDULE); }
+void
+arch_smp_send_reschedule(int cpu) { send_IPI_single(cpu, IPI_RESCHEDULE); }

void
smp_send_all_nop(void)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 6b90f10..35f101c 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -61,6 +61,8 @@
#include <asm/kup.h>
#include <asm/fadump.h>

+#include <trace/events/ipi.h>
+
#ifdef DEBUG
#include <asm/udbg.h>
#define DBG(fmt...) udbg_printf(fmt)
@@ -364,12 +366,12 @@ static inline void do_message_pass(int cpu, int msg)
#endif
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
if (likely(smp_ops))
do_message_pass(cpu, PPC_MSG_RESCHEDULE);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);

void arch_send_call_function_single_ipi(int cpu)
{
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6ba68dd..3b70b5f 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -43,6 +43,7 @@
#include <linux/compiler.h>
#include <linux/of.h>
#include <linux/irqdomain.h>
+#include <linux/smp.h>

#include <asm/ftrace.h>
#include <asm/reg.h>
@@ -80,6 +81,8 @@
#include <asm/dtl.h>
#include <asm/plpar_wrappers.h>

+#include <trace/events/ipi.h>
+
#include "book3s.h"
#include "book3s_hv.h"

diff --git a/arch/powerpc/platforms/powernv/subcore.c b/arch/powerpc/platforms/powernv/subcore.c
index 7e98b00..c53c4c7 100644
--- a/arch/powerpc/platforms/powernv/subcore.c
+++ b/arch/powerpc/platforms/powernv/subcore.c
@@ -20,6 +20,8 @@
#include <asm/opal.h>
#include <asm/smp.h>

+#include <trace/events/ipi.h>
+
#include "subcore.h"
#include "powernv.h"

diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
index 8c3b59f..42e9656 100644
--- a/arch/riscv/kernel/smp.c
+++ b/arch/riscv/kernel/smp.c
@@ -328,8 +328,8 @@ bool smp_crash_stop_failed(void)
}
#endif

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi_single(cpu, IPI_RESCHEDULE);
}
-EXPORT_SYMBOL_GPL(smp_send_reschedule);
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index d488845..a710319 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -553,7 +553,7 @@ void arch_send_call_function_single_ipi(int cpu)
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
*/
-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
pcpu_ec_call(pcpu_devices + cpu, ec_schedule);
}
diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 65924d9..5cf35a7 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -256,7 +256,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
(bogosum / (5000/HZ)) % 100);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
mp_ops->send_ipi(cpu, SMP_MSG_RESCHEDULE);
}
diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
index ad8094d..87eaa77 100644
--- a/arch/sparc/kernel/smp_32.c
+++ b/arch/sparc/kernel/smp_32.c
@@ -120,7 +120,7 @@ void cpu_panic(void)

struct linux_prom_registers smp_penguin_ctable = { 0 };

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
/*
* CPU model dependent way of implementing IPI generation targeting
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index a55295d..e5964d1 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1430,7 +1430,7 @@ static unsigned long send_cpu_poke(int cpu)
return hv_err;
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
if (cpu == smp_processor_id()) {
WARN_ON_ONCE(preemptible());
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index b4dbb20..f975712 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -98,7 +98,7 @@ static inline void play_dead(void)
smp_ops.play_dead();
}

-static inline void smp_send_reschedule(int cpu)
+static inline void arch_smp_send_reschedule(int cpu)
{
smp_ops.smp_send_reschedule(cpu);
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 252e7f3..424fcdb 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -27,6 +27,7 @@
#include <linux/swap.h>
#include <linux/rwsem.h>
#include <linux/cc_platform.h>
+#include <linux/smp.h>

#include <asm/apic.h>
#include <asm/perf_event.h>
@@ -41,6 +42,9 @@
#include <asm/fpu/api.h>

#include <asm/virtext.h>
+
+#include <trace/events/ipi.h>
+
#include "trace.h"

#include "svm.h"
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7713420..07ba937 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -60,7 +60,9 @@
#include <linux/mem_encrypt.h>
#include <linux/entry-kvm.h>
#include <linux/suspend.h>
+#include <linux/smp.h>

+#include <trace/events/ipi.h>
#include <trace/events/kvm.h>

#include <asm/debugreg.h>
diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
index 4dc109d..d95907b 100644
--- a/arch/xtensa/kernel/smp.c
+++ b/arch/xtensa/kernel/smp.c
@@ -389,7 +389,7 @@ void arch_send_call_function_single_ipi(int cpu)
send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
}

-void smp_send_reschedule(int cpu)
+void arch_smp_send_reschedule(int cpu)
{
send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
}
diff --git a/include/linux/smp.h b/include/linux/smp.h
index a80ab58..c036a22 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -125,8 +125,15 @@ extern void smp_send_stop(void);
/*
* sends a 'reschedule' event to another CPU:
*/
-extern void smp_send_reschedule(int cpu);
-
+extern void arch_smp_send_reschedule(int cpu);
+/*
+ * scheduler_ipi() is inline so can't be passed as callback reason, but the
+ * callsite IP should be sufficient for root-causing IPIs sent from here.
+ */
+#define smp_send_reschedule(cpu) ({ \
+ trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL); \
+ arch_smp_send_reschedule(cpu); \
+})

/*
* Prepare machine for booting other CPUs.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964..7d18896 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -62,11 +62,14 @@
#include "kvm_mm.h"
#include "vfio.h"

+#include <trace/events/ipi.h>
+
#define CREATE_TRACE_POINTS
#include <trace/events/kvm.h>

#include <linux/kvm_dirty_ring.h>

+
/* Worst case buffer size needed for holding an integer. */
#define ITOA_MAX_LEN 12

Subject: [tip: smp/core] irq_work: Trace self-IPIs sent via arch_irq_work_raise()

The following commit has been merged into the smp/core branch of tip:

Commit-ID: 4468161a5ca2ea239c92de7c0a0dca61854ec4da
Gitweb: https://git.kernel.org/tip/4468161a5ca2ea239c92de7c0a0dca61854ec4da
Author: Valentin Schneider <[email protected]>
AuthorDate: Tue, 07 Mar 2023 14:35:55
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Fri, 24 Mar 2023 11:01:27 +01:00

irq_work: Trace self-IPIs sent via arch_irq_work_raise()

IPIs sent to remote CPUs via irq_work_queue_on() are now covered by
trace_ipi_send_cpumask(), add another instance of the tracepoint to cover
self-IPIs.

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/irq_work.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 7afa40f..c33e88e 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -22,6 +22,8 @@
#include <asm/processor.h>
#include <linux/kasan.h>

+#include <trace/events/ipi.h>
+
static DEFINE_PER_CPU(struct llist_head, raised_list);
static DEFINE_PER_CPU(struct llist_head, lazy_list);
static DEFINE_PER_CPU(struct task_struct *, irq_workd);
@@ -74,6 +76,16 @@ void __weak arch_irq_work_raise(void)
*/
}

+static __always_inline void irq_work_raise(struct irq_work *work)
+{
+ if (trace_ipi_send_cpumask_enabled() && arch_irq_work_has_interrupt())
+ trace_ipi_send_cpumask(cpumask_of(smp_processor_id()),
+ _RET_IP_,
+ work->func);
+
+ arch_irq_work_raise();
+}
+
/* Enqueue on current CPU, work must already be claimed and preempt disabled */
static void __irq_work_queue_local(struct irq_work *work)
{
@@ -99,7 +111,7 @@ static void __irq_work_queue_local(struct irq_work *work)

/* If the work is "lazy", handle it from next tick if any */
if (!lazy_work || tick_nohz_tick_stopped())
- arch_irq_work_raise();
+ irq_work_raise(work);
}

/* Enqueue the irq work @work on the current CPU */

Subject: [tip: smp/core] trace: Add trace_ipi_send_cpumask()

The following commit has been merged into the smp/core branch of tip:

Commit-ID: 56eb0598c7a30c76009a082d3213486d6a013df0
Gitweb: https://git.kernel.org/tip/56eb0598c7a30c76009a082d3213486d6a013df0
Author: Valentin Schneider <[email protected]>
AuthorDate: Tue, 07 Mar 2023 14:35:52
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Fri, 24 Mar 2023 11:01:26 +01:00

trace: Add trace_ipi_send_cpumask()

trace_ipi_raise() is unsuitable for generically tracing IPI sources due to
its "reason" argument being an uninformative string (on arm64 all you get
is "Function call interrupts" for SMP calls).

Add a variant of it that exports a target cpumask, a callsite and a callback.

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/trace/events/ipi.h | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)

diff --git a/include/trace/events/ipi.h b/include/trace/events/ipi.h
index 0be71da..b1125dc 100644
--- a/include/trace/events/ipi.h
+++ b/include/trace/events/ipi.h
@@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise,
TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
);

+TRACE_EVENT(ipi_send_cpumask,
+
+ TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback),
+
+ TP_ARGS(cpumask, callsite, callback),
+
+ TP_STRUCT__entry(
+ __cpumask(cpumask)
+ __field(void *, callsite)
+ __field(void *, callback)
+ ),
+
+ TP_fast_assign(
+ __assign_cpumask(cpumask, cpumask_bits(cpumask));
+ __entry->callsite = (void *)callsite;
+ __entry->callback = callback;
+ ),
+
+ TP_printk("cpumask=%s callsite=%pS callback=%pS",
+ __get_cpumask(cpumask), __entry->callsite, __entry->callback)
+);
+
DECLARE_EVENT_CLASS(ipi_handler,

TP_PROTO(const char *reason),