2022-09-27 12:49:40

by Marco Elver

Subject: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

Due to the implementation of how SIGTRAPs are delivered if
perf_event_attr::sigtrap is set, we've noticed 3 issues:

1. Missing SIGTRAP due to a race with event_sched_out() (more
details below).

2. Hardware PMU events being disabled due to returning 1 from
perf_event_overflow(). The only way to re-enable the event is
for user space to first "properly" disable the event and then
re-enable it.

3. The inability to automatically disable an event after a
specified number of overflows via PERF_EVENT_IOC_REFRESH.

The worst of the 3 issues is problem (1), which occurs when a
pending_disable is "consumed" by a racing event_sched_out(), observed as
follows:

CPU0 | CPU1
--------------------------------+---------------------------
__perf_event_overflow() |
perf_event_disable_inatomic() |
pending_disable = CPU0 | ...
| _perf_event_enable()
| event_function_call()
| task_function_call()
| /* sends IPI to CPU0 */
<IPI> | ...
__perf_event_enable() +---------------------------
ctx_resched()
task_ctx_sched_out()
ctx_sched_out()
group_sched_out()
event_sched_out()
pending_disable = -1
</IPI>
<IRQ-work>
perf_pending_event()
perf_pending_event_disable()
/* Fails to send SIGTRAP because no pending_disable! */
</IRQ-work>

In the above case, not only is that particular SIGTRAP missed, but also
all future SIGTRAPs because 'event_limit' is not reset back to 1.

To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
of a separate 'pending_sigtrap', no longer using 'event_limit' and
'pending_disable' for its delivery.

During testing, this also revealed several more possible races between
reschedules and pending IRQ work; see code comments for details.

Doing so makes it possible to use 'event_limit' normally (thereby
enabling use of PERF_EVENT_IOC_REFRESH), perf_event_overflow() no longer
returns 1 on SIGTRAP causing disabling of hardware PMUs, and finally the
race is no longer possible due to event_sched_out() not consuming
'pending_disable'.

Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
Reported-by: Dmitry Vyukov <[email protected]>
Debugged-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
---
include/linux/perf_event.h | 2 +
kernel/events/core.c | 85 ++++++++++++++++++++++++++++++++------
2 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 907b0e3f1318..dff3430844a2 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -740,8 +740,10 @@ struct perf_event {
int pending_wakeup;
int pending_kill;
int pending_disable;
+ int pending_sigtrap;
unsigned long pending_addr; /* SIGTRAP */
struct irq_work pending;
+ struct irq_work pending_resched;

atomic_t event_limit;

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 75f5705b6892..df90777262bf 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2527,6 +2527,14 @@ event_sched_in(struct perf_event *event,
if (event->attr.exclusive)
cpuctx->exclusive = 1;

+ if (event->pending_sigtrap) {
+ /*
+ * The task and event might have been moved to another CPU:
+ * queue another IRQ work. See perf_pending_event_sigtrap().
+ */
+ WARN_ON_ONCE(!irq_work_queue(&event->pending_resched));
+ }
+
out:
perf_pmu_enable(event->pmu);

@@ -4938,6 +4946,7 @@ static void perf_addr_filters_splice(struct perf_event *event,
static void _free_event(struct perf_event *event)
{
irq_work_sync(&event->pending);
+ irq_work_sync(&event->pending_resched);

unaccount_event(event);

@@ -6446,6 +6455,37 @@ static void perf_sigtrap(struct perf_event *event)
event->attr.type, event->attr.sig_data);
}

+static void perf_pending_event_sigtrap(struct perf_event *event)
+{
+ if (!event->pending_sigtrap)
+ return;
+
+ /*
+ * If we're racing with disabling of the event, consume pending_sigtrap
+ * and don't send the SIGTRAP. This avoids potentially delaying a signal
+ * indefinitely (oncpu mismatch) until the event is enabled again, which
+ * could happen after already returning to user space; in that case the
+ * signal would erroneously become asynchronous.
+ */
+ if (event->state == PERF_EVENT_STATE_OFF) {
+ event->pending_sigtrap = 0;
+ return;
+ }
+
+ /*
+ * Only process this pending SIGTRAP if this IRQ work is running on the
+ * right CPU: the scheduler is able to run before the IRQ work, which
+ * moved the task to another CPU. In event_sched_in() another IRQ work
+ * is scheduled, so that the signal is not lost; given the kernel has
+ * not yet returned to user space, the signal remains synchronous.
+ */
+ if (READ_ONCE(event->oncpu) != smp_processor_id())
+ return;
+
+ event->pending_sigtrap = 0;
+ perf_sigtrap(event);
+}
+
static void perf_pending_event_disable(struct perf_event *event)
{
int cpu = READ_ONCE(event->pending_disable);
@@ -6455,13 +6495,6 @@ static void perf_pending_event_disable(struct perf_event *event)

if (cpu == smp_processor_id()) {
WRITE_ONCE(event->pending_disable, -1);
-
- if (event->attr.sigtrap) {
- perf_sigtrap(event);
- atomic_set_release(&event->event_limit, 1); /* rearm event */
- return;
- }
-
perf_event_disable_local(event);
return;
}
@@ -6500,6 +6533,7 @@ static void perf_pending_event(struct irq_work *entry)
* and we won't recurse 'further'.
*/

+ perf_pending_event_sigtrap(event);
perf_pending_event_disable(event);

if (event->pending_wakeup) {
@@ -6511,6 +6545,26 @@ static void perf_pending_event(struct irq_work *entry)
perf_swevent_put_recursion_context(rctx);
}

+/*
+ * If handling of a pending action must occur before returning to user space,
+ * and it is possible to reschedule an event (to another CPU) with pending
+ * actions, where the moved-from CPU may not yet have run event->pending (and
+ * irq_work_queue() would fail on reuse), we'll use a separate IRQ work that
+ * runs perf_pending_event_resched().
+ */
+static void perf_pending_event_resched(struct irq_work *entry)
+{
+ struct perf_event *event = container_of(entry, struct perf_event, pending_resched);
+ int rctx;
+
+ rctx = perf_swevent_get_recursion_context();
+
+ perf_pending_event_sigtrap(event);
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+}
+
#ifdef CONFIG_GUEST_PERF_EVENTS
struct perf_guest_info_callbacks __rcu *perf_guest_cbs;

@@ -9209,11 +9263,20 @@ static int __perf_event_overflow(struct perf_event *event,
if (events && atomic_dec_and_test(&event->event_limit)) {
ret = 1;
event->pending_kill = POLL_HUP;
- event->pending_addr = data->addr;
-
perf_event_disable_inatomic(event);
}

+ if (event->attr.sigtrap) {
+ /*
+ * Should not be able to return to user space without processing
+ * pending_sigtrap (kernel events can overflow multiple times).
+ */
+ WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+ event->pending_sigtrap = 1;
+ event->pending_addr = data->addr;
+ irq_work_queue(&event->pending);
+ }
+
READ_ONCE(event->overflow_handler)(event, data, regs);

if (*perf_event_fasync(event) && event->pending_kill) {
@@ -11536,6 +11599,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
init_waitqueue_head(&event->waitq);
event->pending_disable = -1;
init_irq_work(&event->pending, perf_pending_event);
+ init_irq_work(&event->pending_resched, perf_pending_event_resched);

mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);
@@ -11557,9 +11621,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
if (parent_event)
event->event_caps = parent_event->event_caps;

- if (event->attr.sigtrap)
- atomic_set(&event->event_limit, 1);
-
if (task) {
event->attach_state = PERF_ATTACH_TASK;
/*
--
2.37.3.998.g577e59143f-goog


2022-09-27 13:53:52

by Marco Elver

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, Sep 27, 2022 at 02:13PM +0200, Marco Elver wrote:
> Due to the implementation of how SIGTRAP are delivered if
> perf_event_attr::sigtrap is set, we've noticed 3 issues:
>
> 1. Missing SIGTRAP due to a race with event_sched_out() (more
> details below).
>
> 2. Hardware PMU events being disabled due to returning 1 from
> perf_event_overflow(). The only way to re-enable the event is
> for user space to first "properly" disable the event and then
> re-enable it.
>
> 3. The inability to automatically disable an event after a
> specified number of overflows via PERF_EVENT_IOC_REFRESH.
>
> The worst of the 3 issues is problem (1), which occurs when a
> pending_disable is "consumed" by a racing event_sched_out(), observed as
> follows:
>
> CPU0 | CPU1
> --------------------------------+---------------------------
> __perf_event_overflow() |
> perf_event_disable_inatomic() |
> pending_disable = CPU0 | ...
> | _perf_event_enable()
> | event_function_call()
> | task_function_call()
> | /* sends IPI to CPU0 */
> <IPI> | ...
> __perf_event_enable() +---------------------------
> ctx_resched()
> task_ctx_sched_out()
> ctx_sched_out()
> group_sched_out()
> event_sched_out()
> pending_disable = -1
> </IPI>
> <IRQ-work>
> perf_pending_event()
> perf_pending_event_disable()
> /* Fails to send SIGTRAP because no pending_disable! */
> </IRQ-work>
>
> In the above case, not only is that particular SIGTRAP missed, but also
> all future SIGTRAPs because 'event_limit' is not reset back to 1.
>
> To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
> of a separate 'pending_sigtrap', no longer using 'event_limit' and
> 'pending_disable' for its delivery.
>
> During testing, this also revealed several more possible races between
> reschedules and pending IRQ work; see code comments for details.
>
> Doing so makes it possible to use 'event_limit' normally (thereby
> enabling use of PERF_EVENT_IOC_REFRESH), perf_event_overflow() no longer
> returns 1 on SIGTRAP causing disabling of hardware PMUs, and finally the
> race is no longer possible due to event_sched_out() not consuming
> 'pending_disable'.
>
> Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
> Reported-by: Dmitry Vyukov <[email protected]>
> Debugged-by: Dmitry Vyukov <[email protected]>
> Signed-off-by: Marco Elver <[email protected]>
> ---
> include/linux/perf_event.h | 2 +
> kernel/events/core.c | 85 ++++++++++++++++++++++++++++++++------
> 2 files changed, 75 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 907b0e3f1318..dff3430844a2 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -740,8 +740,10 @@ struct perf_event {
> int pending_wakeup;
> int pending_kill;
> int pending_disable;
> + int pending_sigtrap;
> unsigned long pending_addr; /* SIGTRAP */
> struct irq_work pending;
> + struct irq_work pending_resched;
>
> atomic_t event_limit;
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 75f5705b6892..df90777262bf 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2527,6 +2527,14 @@ event_sched_in(struct perf_event *event,
> if (event->attr.exclusive)
> cpuctx->exclusive = 1;
>
> + if (event->pending_sigtrap) {
> + /*
> + * The task and event might have been moved to another CPU:
> + * queue another IRQ work. See perf_pending_event_sigtrap().
> + */
> + WARN_ON_ONCE(!irq_work_queue(&event->pending_resched));

One question we had is whether it's possible for an event to be scheduled in,
immediately scheduled out, and then scheduled in again on a 3rd CPU. I.e. we'd
still be in trouble if the following can happen:

CPU0
sched-out
CPU1
sched-in
sched-out
CPU2
sched-in

without any IRQ work ever running. Naive solutions that would avoid the
pending_resched IRQ work, such as trying to send the signal right here
(or in event_sched_out()), don't work: we've seen syzkaller produce
programs where there's a pending event and the scheduler then moves the
task, and because we're in the scheduler at that point, trying to send
the signal can deadlock.
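
To make the naive variant concrete, a hypothetical sketch (not proposed,
known-broken) of delivering the signal straight from the sched-out path
would be something like:

/*
 * Hypothetical, known-broken sketch -- not part of the patch. This is the
 * "just send the signal right here" idea referred to above.
 */
static void naive_sched_out_sigtrap(struct perf_event *event)
{
	if (!event->pending_sigtrap)
		return;

	event->pending_sigtrap = 0;
	/*
	 * Unsafe: event_sched_out() runs from within the scheduler with
	 * scheduler/ctx locks held; perf_sigtrap() ends up in signal
	 * delivery, which can take further locks and wake tasks, so
	 * calling it from here can deadlock.
	 */
	perf_sigtrap(event);
}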

Thanks,
-- Marco

2022-09-27 18:43:34

by Peter Zijlstra

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, Sep 27, 2022 at 02:13:22PM +0200, Marco Elver wrote:
> Due to the implementation of how SIGTRAP are delivered if
> perf_event_attr::sigtrap is set, we've noticed 3 issues:
>
> 1. Missing SIGTRAP due to a race with event_sched_out() (more
> details below).
>
> 2. Hardware PMU events being disabled due to returning 1 from
> perf_event_overflow(). The only way to re-enable the event is
> for user space to first "properly" disable the event and then
> re-enable it.
>
> 3. The inability to automatically disable an event after a
> specified number of overflows via PERF_EVENT_IOC_REFRESH.
>
> The worst of the 3 issues is problem (1), which occurs when a
> pending_disable is "consumed" by a racing event_sched_out(), observed as
> follows:
>
> CPU0 | CPU1
> --------------------------------+---------------------------
> __perf_event_overflow() |
> perf_event_disable_inatomic() |
> pending_disable = CPU0 | ...
> | _perf_event_enable()
> | event_function_call()
> | task_function_call()
> | /* sends IPI to CPU0 */
> <IPI> | ...
> __perf_event_enable() +---------------------------
> ctx_resched()
> task_ctx_sched_out()
> ctx_sched_out()
> group_sched_out()
> event_sched_out()
> pending_disable = -1
> </IPI>
> <IRQ-work>
> perf_pending_event()
> perf_pending_event_disable()
> /* Fails to send SIGTRAP because no pending_disable! */
> </IRQ-work>
>
> In the above case, not only is that particular SIGTRAP missed, but also
> all future SIGTRAPs because 'event_limit' is not reset back to 1.
>
> To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
> of a separate 'pending_sigtrap', no longer using 'event_limit' and
> 'pending_disable' for its delivery.
>
> During testing, this also revealed several more possible races between
> reschedules and pending IRQ work; see code comments for details.

Perhaps use task_work_add() for this case? That runs on the
return-to-user path, so then it doesn't matter how many reschedules
happen in between.

The only concern is that task_work_add() uses kasan_record_aux_stack()
which obviously isn't NMI clean, so that would need to get removed or
made conditional.
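
As a rough illustration of the suggested pattern (the struct, field and
function names below are hypothetical, not an actual perf change),
deferring the delivery to the return-to-user path via task_work would
look roughly like this:

#include <linux/sched.h>
#include <linux/task_work.h>

struct example_pending {
	struct callback_head twork;	/* runs before returning to user space */
	int sig_pending;
};

static void example_deliver(struct callback_head *head)
{
	struct example_pending *p = container_of(head, struct example_pending, twork);

	if (p->sig_pending) {
		p->sig_pending = 0;
		/* deliver the synchronous signal here (perf would use perf_sigtrap()) */
	}
}

static void example_arm(struct example_pending *p)
{
	p->sig_pending = 1;
	init_task_work(&p->twork, example_deliver);
	/* TWA_RESUME: run example_deliver() on the next return to user space */
	task_work_add(current, &p->twork, TWA_RESUME); /* can fail if the task is exiting */
}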

2022-09-27 22:06:51

by Marco Elver

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, Sep 27, 2022 at 08:20PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 27, 2022 at 02:13:22PM +0200, Marco Elver wrote:
> > Due to the implementation of how SIGTRAP are delivered if
> > perf_event_attr::sigtrap is set, we've noticed 3 issues:
> >
> > 1. Missing SIGTRAP due to a race with event_sched_out() (more
> > details below).
> >
> > 2. Hardware PMU events being disabled due to returning 1 from
> > perf_event_overflow(). The only way to re-enable the event is
> > for user space to first "properly" disable the event and then
> > re-enable it.
> >
> > 3. The inability to automatically disable an event after a
> > specified number of overflows via PERF_EVENT_IOC_REFRESH.
> >
> > The worst of the 3 issues is problem (1), which occurs when a
> > pending_disable is "consumed" by a racing event_sched_out(), observed as
> > follows:
> >
> > CPU0 | CPU1
> > --------------------------------+---------------------------
> > __perf_event_overflow() |
> > perf_event_disable_inatomic() |
> > pending_disable = CPU0 | ...
> > | _perf_event_enable()
> > | event_function_call()
> > | task_function_call()
> > | /* sends IPI to CPU0 */
> > <IPI> | ...
> > __perf_event_enable() +---------------------------
> > ctx_resched()
> > task_ctx_sched_out()
> > ctx_sched_out()
> > group_sched_out()
> > event_sched_out()
> > pending_disable = -1
> > </IPI>
> > <IRQ-work>
> > perf_pending_event()
> > perf_pending_event_disable()
> > /* Fails to send SIGTRAP because no pending_disable! */
> > </IRQ-work>
> >
> > In the above case, not only is that particular SIGTRAP missed, but also
> > all future SIGTRAPs because 'event_limit' is not reset back to 1.
> >
> > To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
> > of a separate 'pending_sigtrap', no longer using 'event_limit' and
> > 'pending_disable' for its delivery.
> >
> > During testing, this also revealed several more possible races between
> > reschedules and pending IRQ work; see code comments for details.
>
> Perhaps use task_work_add() for this case? That runs on the
> return-to-user path, so then it doesn't matter how many reschedules
> happen in between.

Hmm, I tried the below (on top of this patch), but then all the tests
fail (including tools/testing/selftests/perf_events/sigtrap_threads.c)
because of lots of missing SIGTRAPs. (The missing SIGTRAPs happen with or
without the kernel/entry/ change.)

So something is wrong with task_work, and the irq_work solution thus far
is more robust (ran many hours of tests and fuzzing without failure).

Thoughts?

------ >8 ------

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index dff3430844a2..928fb9e2b655 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -743,7 +743,7 @@ struct perf_event {
int pending_sigtrap;
unsigned long pending_addr; /* SIGTRAP */
struct irq_work pending;
- struct irq_work pending_resched;
+ struct callback_head pending_twork;

atomic_t event_limit;

diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 063068a9ea9b..7cacaefc97fe 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -162,12 +162,12 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
if (ti_work & _TIF_PATCH_PENDING)
klp_update_patch_state(current);

- if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
- arch_do_signal_or_restart(regs);
-
if (ti_work & _TIF_NOTIFY_RESUME)
resume_user_mode_work(regs);

+ if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ arch_do_signal_or_restart(regs);
+
/* Architecture specific TIF work */
arch_exit_to_user_mode_work(regs, ti_work);

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 007a87c1599c..7f93dd91d572 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -17,6 +17,7 @@
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/hash.h>
+#include <linux/task_work.h>
#include <linux/tick.h>
#include <linux/sysfs.h>
#include <linux/dcache.h>
@@ -2527,14 +2528,6 @@ event_sched_in(struct perf_event *event,
if (event->attr.exclusive)
cpuctx->exclusive = 1;

- if (event->pending_sigtrap) {
- /*
- * The task and event might have been moved to another CPU:
- * queue another IRQ work. See perf_pending_event_sigtrap().
- */
- WARN_ON_ONCE(!irq_work_queue(&event->pending_resched));
- }
-
out:
perf_pmu_enable(event->pmu);

@@ -4942,11 +4935,13 @@ static bool exclusive_event_installable(struct perf_event *event,

static void perf_addr_filters_splice(struct perf_event *event,
struct list_head *head);
+static void perf_pending_event_task_work(struct callback_head *work);

static void _free_event(struct perf_event *event)
{
irq_work_sync(&event->pending);
- irq_work_sync(&event->pending_resched);
+ if (event->hw.target)
+ task_work_cancel(event->hw.target, perf_pending_event_task_work);

unaccount_event(event);

@@ -6438,15 +6433,7 @@ void perf_event_wakeup(struct perf_event *event)
static void perf_sigtrap(struct perf_event *event)
{
/*
- * We'd expect this to only occur if the irq_work is delayed and either
- * ctx->task or current has changed in the meantime. This can be the
- * case on architectures that do not implement arch_irq_work_raise().
- */
- if (WARN_ON_ONCE(event->ctx->task != current))
- return;
-
- /*
- * perf_pending_event() can race with the task exiting.
+ * Can be called while the task is exiting.
*/
if (current->flags & PF_EXITING)
return;
@@ -6455,35 +6442,22 @@ static void perf_sigtrap(struct perf_event *event)
event->attr.type, event->attr.sig_data);
}

-static void perf_pending_event_sigtrap(struct perf_event *event)
+static void perf_pending_event_task_work(struct callback_head *work)
{
- if (!event->pending_sigtrap)
- return;
+ struct perf_event *event = container_of(work, struct perf_event, pending_twork);
+ int rctx;

- /*
- * If we're racing with disabling of the event, consume pending_sigtrap
- * and don't send the SIGTRAP. This avoids potentially delaying a signal
- * indefinitely (oncpu mismatch) until the event is enabled again, which
- * could happen after already returning to user space; in that case the
- * signal would erroneously become asynchronous.
- */
- if (event->state == PERF_EVENT_STATE_OFF) {
+ preempt_disable_notrace();
+ rctx = perf_swevent_get_recursion_context();
+
+ if (event->pending_sigtrap) {
event->pending_sigtrap = 0;
- return;
+ perf_sigtrap(event);
}

- /*
- * Only process this pending SIGTRAP if this IRQ work is running on the
- * right CPU: the scheduler is able to run before the IRQ work, which
- * moved the task to another CPU. In event_sched_in() another IRQ work
- * is scheduled, so that the signal is not lost; given the kernel has
- * not yet returned to user space, the signal remains synchronous.
- */
- if (READ_ONCE(event->oncpu) != smp_processor_id())
- return;
-
- event->pending_sigtrap = 0;
- perf_sigtrap(event);
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+ preempt_enable_notrace();
}

static void perf_pending_event_disable(struct perf_event *event)
@@ -6533,7 +6507,6 @@ static void perf_pending_event(struct irq_work *entry)
* and we won't recurse 'further'.
*/

- perf_pending_event_sigtrap(event);
perf_pending_event_disable(event);

if (event->pending_wakeup) {
@@ -6545,26 +6518,6 @@ static void perf_pending_event(struct irq_work *entry)
perf_swevent_put_recursion_context(rctx);
}

-/*
- * If handling of a pending action must occur before returning to user space,
- * and it is possible to reschedule an event (to another CPU) with pending
- * actions, where the moved-from CPU may not yet have run event->pending (and
- * irq_work_queue() would fail on reuse), we'll use a separate IRQ work that
- * runs perf_pending_event_resched().
- */
-static void perf_pending_event_resched(struct irq_work *entry)
-{
- struct perf_event *event = container_of(entry, struct perf_event, pending_resched);
- int rctx;
-
- rctx = perf_swevent_get_recursion_context();
-
- perf_pending_event_sigtrap(event);
-
- if (rctx >= 0)
- perf_swevent_put_recursion_context(rctx);
-}
-
#ifdef CONFIG_GUEST_PERF_EVENTS
struct perf_guest_info_callbacks __rcu *perf_guest_cbs;

@@ -9274,7 +9227,7 @@ static int __perf_event_overflow(struct perf_event *event,
WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
event->pending_sigtrap = 1;
event->pending_addr = data->addr;
- irq_work_queue(&event->pending);
+ task_work_add(current, &event->pending_twork, TWA_RESUME);
}

READ_ONCE(event->overflow_handler)(event, data, regs);
@@ -11599,7 +11552,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
init_waitqueue_head(&event->waitq);
event->pending_disable = -1;
init_irq_work(&event->pending, perf_pending_event);
- init_irq_work(&event->pending_resched, perf_pending_event_resched);
+ init_task_work(&event->pending_twork, perf_pending_event_task_work);

mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);

2022-09-28 11:07:44

by Marco Elver

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, Sep 27, 2022 at 11:45PM +0200, Marco Elver wrote:
> On Tue, Sep 27, 2022 at 08:20PM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 27, 2022 at 02:13:22PM +0200, Marco Elver wrote:
> > > Due to the implementation of how SIGTRAP are delivered if
> > > perf_event_attr::sigtrap is set, we've noticed 3 issues:
> > >
> > > 1. Missing SIGTRAP due to a race with event_sched_out() (more
> > > details below).
> > >
> > > 2. Hardware PMU events being disabled due to returning 1 from
> > > perf_event_overflow(). The only way to re-enable the event is
> > > for user space to first "properly" disable the event and then
> > > re-enable it.
> > >
> > > 3. The inability to automatically disable an event after a
> > > specified number of overflows via PERF_EVENT_IOC_REFRESH.
> > >
> > > The worst of the 3 issues is problem (1), which occurs when a
> > > pending_disable is "consumed" by a racing event_sched_out(), observed as
> > > follows:
> > >
> > > CPU0 | CPU1
> > > --------------------------------+---------------------------
> > > __perf_event_overflow() |
> > > perf_event_disable_inatomic() |
> > > pending_disable = CPU0 | ...
> > > | _perf_event_enable()
> > > | event_function_call()
> > > | task_function_call()
> > > | /* sends IPI to CPU0 */
> > > <IPI> | ...
> > > __perf_event_enable() +---------------------------
> > > ctx_resched()
> > > task_ctx_sched_out()
> > > ctx_sched_out()
> > > group_sched_out()
> > > event_sched_out()
> > > pending_disable = -1
> > > </IPI>
> > > <IRQ-work>
> > > perf_pending_event()
> > > perf_pending_event_disable()
> > > /* Fails to send SIGTRAP because no pending_disable! */
> > > </IRQ-work>
> > >
> > > In the above case, not only is that particular SIGTRAP missed, but also
> > > all future SIGTRAPs because 'event_limit' is not reset back to 1.
> > >
> > > To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
> > > of a separate 'pending_sigtrap', no longer using 'event_limit' and
> > > 'pending_disable' for its delivery.
> > >
> > > During testing, this also revealed several more possible races between
> > > reschedules and pending IRQ work; see code comments for details.
> >
> > Perhaps use task_work_add() for this case? That runs on the
> > return-to-user path, so then it doesn't matter how many reschedules
> > happen in between.
>
> Hmm, I tried the below (on top of this patch), but then all the tests
> fail (including tools/testing/selftests/perf_events/sigtrap_threads.c)
> because of lots of missing SIGTRAP. (The missing SIGTRAP happen with or
> without the kernel/entry/ change.)
>
> So something is wrong with task_work, and the irq_work solution thus far
> is more robust (ran many hours of tests and fuzzing without failure).

My second idea is to introduce something like irq_work_raw_sync().
Maybe it's not that crazy if it is actually safe. I expect the case
where we need irq_work_raw_sync() to be very, very rare.

------ >8 ------

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 8cd11a223260..490adecbb4be 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -59,6 +59,7 @@ bool irq_work_queue_on(struct irq_work *work, int cpu);

void irq_work_tick(void);
void irq_work_sync(struct irq_work *work);
+bool irq_work_raw_sync(struct irq_work *work);

#ifdef CONFIG_IRQ_WORK
#include <asm/irq_work.h>
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index dff3430844a2..c119fa7b70d6 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -743,7 +743,6 @@ struct perf_event {
int pending_sigtrap;
unsigned long pending_addr; /* SIGTRAP */
struct irq_work pending;
- struct irq_work pending_resched;

atomic_t event_limit;

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 007a87c1599c..6ba02a1b5c5d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2532,7 +2532,8 @@ event_sched_in(struct perf_event *event,
* The task and event might have been moved to another CPU:
* queue another IRQ work. See perf_pending_event_sigtrap().
*/
- WARN_ON_ONCE(!irq_work_queue(&event->pending_resched));
+ irq_work_raw_sync(&event->pending); /* Syncs if pending on other CPU. */
+ irq_work_queue(&event->pending);
}

out:
@@ -4946,7 +4947,6 @@ static void perf_addr_filters_splice(struct perf_event *event,
static void _free_event(struct perf_event *event)
{
irq_work_sync(&event->pending);
- irq_work_sync(&event->pending_resched);

unaccount_event(event);

@@ -6545,26 +6545,6 @@ static void perf_pending_event(struct irq_work *entry)
perf_swevent_put_recursion_context(rctx);
}

-/*
- * If handling of a pending action must occur before returning to user space,
- * and it is possible to reschedule an event (to another CPU) with pending
- * actions, where the moved-from CPU may not yet have run event->pending (and
- * irq_work_queue() would fail on reuse), we'll use a separate IRQ work that
- * runs perf_pending_event_resched().
- */
-static void perf_pending_event_resched(struct irq_work *entry)
-{
- struct perf_event *event = container_of(entry, struct perf_event, pending_resched);
- int rctx;
-
- rctx = perf_swevent_get_recursion_context();
-
- perf_pending_event_sigtrap(event);
-
- if (rctx >= 0)
- perf_swevent_put_recursion_context(rctx);
-}
-
#ifdef CONFIG_GUEST_PERF_EVENTS
struct perf_guest_info_callbacks __rcu *perf_guest_cbs;

@@ -11599,7 +11579,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
init_waitqueue_head(&event->waitq);
event->pending_disable = -1;
init_irq_work(&event->pending, perf_pending_event);
- init_irq_work(&event->pending_resched, perf_pending_event_resched);

mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 7afa40fe5cc4..2d21be0c0f3e 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -290,6 +290,40 @@ void irq_work_sync(struct irq_work *work)
}
EXPORT_SYMBOL_GPL(irq_work_sync);

+/*
+ * Synchronize against the irq_work @work, ensuring the entry is not currently
+ * in use after returning true. If it returns false, it was not possible to
+ * synchronize against the irq_work. Requires that interrupts are already
+ * disabled (prefer irq_work_sync() in all other cases).
+ */
+bool irq_work_raw_sync(struct irq_work *work)
+{
+ struct irq_work *entry;
+ struct llist_head *list;
+
+ lockdep_assert_irqs_disabled();
+
+ if (!irq_work_is_busy(work))
+ return true;
+
+ list = this_cpu_ptr(&raised_list);
+ llist_for_each_entry(entry, list->first, node.llist) {
+ if (entry == work)
+ return false;
+ }
+ list = this_cpu_ptr(&lazy_list);
+ llist_for_each_entry(entry, list->first, node.llist) {
+ if (entry == work)
+ return false;
+ }
+
+ while (irq_work_is_busy(work))
+ cpu_relax();
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(irq_work_raw_sync);
+
static void run_irq_workd(unsigned int cpu)
{
irq_work_run_list(this_cpu_ptr(&lazy_list));

2022-09-28 15:41:06

by Marco Elver

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Wed, Sep 28, 2022 at 12:06PM +0200, Marco Elver wrote:

> My second idea about introducing something like irq_work_raw_sync().
> Maybe it's not that crazy if it is actually safe. I expect this case
> where we need the irq_work_raw_sync() to be very very rare.

The previous irq_work_raw_sync() forgot about irq_work_queue_on(). Alas,
I might still be missing something obvious, because "it's never that
easy". ;-)

And for completeness, here is the full perf patch of what it would look
like together with irq_work_raw_sync() (consider it v1.5). It has already
survived some shorter stress tests and fuzzing.

Thanks,
-- Marco


Attachments:
0001-irq_work-Introduce-irq_work_raw_sync.patch (2.77 kB)
0002-perf-Fix-missing-SIGTRAPs-due-to-pending_disable-abu.patch (6.72 kB)

2022-10-04 17:34:47

by Peter Zijlstra

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Wed, Sep 28, 2022 at 04:55:46PM +0200, Marco Elver wrote:
> On Wed, Sep 28, 2022 at 12:06PM +0200, Marco Elver wrote:
>
> > My second idea about introducing something like irq_work_raw_sync().
> > Maybe it's not that crazy if it is actually safe. I expect this case
> > where we need the irq_work_raw_sync() to be very very rare.
>
> The previous irq_work_raw_sync() forgot about irq_work_queue_on(). Alas,
> I might still be missing something obvious, because "it's never that
> easy". ;-)
>
> And for completeness, the full perf patch of what it would look like
> together with irq_work_raw_sync() (consider it v1.5). It's already
> survived some shorter stress tests and fuzzing.

So.... I don't like it. But I cooked up the below, which _almost_ works :-/

For some raisin it sometimes fails with 14999 out of 15000 events
delivered and I've not yet figured out where it goes sideways. I'm
currently thinking it's that sigtrap clear on OFF.

Still, what do you think of the approach?

---
include/linux/perf_event.h | 8 ++--
kernel/events/core.c | 92 +++++++++++++++++++++++++---------------------
2 files changed, 55 insertions(+), 45 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index ee8b9ecdc03b..c54161719d37 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -736,9 +736,11 @@ struct perf_event {
struct fasync_struct *fasync;

/* delayed work for NMIs and such */
- int pending_wakeup;
- int pending_kill;
- int pending_disable;
+ unsigned int pending_wakeup :1;
+ unsigned int pending_disable :1;
+ unsigned int pending_sigtrap :1;
+ unsigned int pending_kill :3;
+
unsigned long pending_addr; /* SIGTRAP */
struct irq_work pending;

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2621fd24ad26..8e5dbe971d9e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2268,11 +2268,15 @@ event_sched_out(struct perf_event *event,
event->pmu->del(event, 0);
event->oncpu = -1;

- if (READ_ONCE(event->pending_disable) >= 0) {
- WRITE_ONCE(event->pending_disable, -1);
+ if (event->pending_disable) {
+ event->pending_disable = 0;
perf_cgroup_event_disable(event, ctx);
state = PERF_EVENT_STATE_OFF;
}
+
+ if (event->pending_sigtrap && state == PERF_EVENT_STATE_OFF)
+ event->pending_sigtrap = 0;
+
perf_event_set_state(event, state);

if (!is_software_event(event))
@@ -2463,8 +2467,7 @@ EXPORT_SYMBOL_GPL(perf_event_disable);

void perf_event_disable_inatomic(struct perf_event *event)
{
- WRITE_ONCE(event->pending_disable, smp_processor_id());
- /* can fail, see perf_pending_event_disable() */
+ event->pending_disable = 1;
irq_work_queue(&event->pending);
}

@@ -2527,6 +2530,9 @@ event_sched_in(struct perf_event *event,
if (event->attr.exclusive)
cpuctx->exclusive = 1;

+ if (event->pending_disable || event->pending_sigtrap)
+ irq_work_queue(&event->pending);
+
out:
perf_pmu_enable(event->pmu);

@@ -6440,47 +6446,40 @@ static void perf_sigtrap(struct perf_event *event)
event->attr.type, event->attr.sig_data);
}

-static void perf_pending_event_disable(struct perf_event *event)
+/*
+ * Deliver the pending work in-event-context or follow the context.
+ */
+static void __perf_pending_event(struct perf_event *event)
{
- int cpu = READ_ONCE(event->pending_disable);
+ int cpu = READ_ONCE(event->oncpu);

+ /*
+ * If the event isn't running, we're done. event_sched_in() will restart
+ * the irq_work when needed.
+ */
if (cpu < 0)
return;

+ /*
+ * Yay, we hit home and are in the context of the event.
+ */
if (cpu == smp_processor_id()) {
- WRITE_ONCE(event->pending_disable, -1);
-
- if (event->attr.sigtrap) {
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
perf_sigtrap(event);
- atomic_set_release(&event->event_limit, 1); /* rearm event */
- return;
}
-
- perf_event_disable_local(event);
- return;
+ if (event->pending_disable) {
+ event->pending_disable = 0;
+ perf_event_disable_local(event);
+ }
}

/*
- * CPU-A CPU-B
- *
- * perf_event_disable_inatomic()
- * @pending_disable = CPU-A;
- * irq_work_queue();
- *
- * sched-out
- * @pending_disable = -1;
- *
- * sched-in
- * perf_event_disable_inatomic()
- * @pending_disable = CPU-B;
- * irq_work_queue(); // FAILS
- *
- * irq_work_run()
- * perf_pending_event()
- *
- * But the event runs on CPU-B and wants disabling there.
+ * Requeue if there's still any pending work left, make sure to follow
+ * where the event went.
*/
- irq_work_queue_on(&event->pending, cpu);
+ if (event->pending_disable || event->pending_sigtrap)
+ irq_work_queue_on(&event->pending, cpu);
}

static void perf_pending_event(struct irq_work *entry)
@@ -6488,19 +6487,23 @@ static void perf_pending_event(struct irq_work *entry)
struct perf_event *event = container_of(entry, struct perf_event, pending);
int rctx;

- rctx = perf_swevent_get_recursion_context();
/*
* If we 'fail' here, that's OK, it means recursion is already disabled
* and we won't recurse 'further'.
*/
+ rctx = perf_swevent_get_recursion_context();

- perf_pending_event_disable(event);
-
+ /*
+ * The wakeup isn't bound to the context of the event -- it can happen
+ * irrespective of where the event is.
+ */
if (event->pending_wakeup) {
event->pending_wakeup = 0;
perf_event_wakeup(event);
}

+ __perf_pending_event(event);
+
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
}
@@ -9203,11 +9206,20 @@ static int __perf_event_overflow(struct perf_event *event,
if (events && atomic_dec_and_test(&event->event_limit)) {
ret = 1;
event->pending_kill = POLL_HUP;
- event->pending_addr = data->addr;
-
perf_event_disable_inatomic(event);
}

+ if (event->attr.sigtrap) {
+ /*
+ * Should not be able to return to user space without processing
+ * pending_sigtrap (kernel events can overflow multiple times).
+ */
+ WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+ event->pending_sigtrap = 1;
+ event->pending_addr = data->addr;
+ irq_work_queue(&event->pending);
+ }
+
READ_ONCE(event->overflow_handler)(event, data, regs);

if (*perf_event_fasync(event) && event->pending_kill) {
@@ -11528,7 +11540,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,


init_waitqueue_head(&event->waitq);
- event->pending_disable = -1;
init_irq_work(&event->pending, perf_pending_event);

mutex_init(&event->mmap_mutex);
@@ -11551,9 +11562,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
if (parent_event)
event->event_caps = parent_event->event_caps;

- if (event->attr.sigtrap)
- atomic_set(&event->event_limit, 1);
-
if (task) {
event->attach_state = PERF_ATTACH_TASK;
/*

2022-10-04 17:37:39

by Marco Elver

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, 4 Oct 2022 at 19:09, Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Sep 28, 2022 at 04:55:46PM +0200, Marco Elver wrote:
> > On Wed, Sep 28, 2022 at 12:06PM +0200, Marco Elver wrote:
> >
> > > My second idea about introducing something like irq_work_raw_sync().
> > > Maybe it's not that crazy if it is actually safe. I expect this case
> > > where we need the irq_work_raw_sync() to be very very rare.
> >
> > The previous irq_work_raw_sync() forgot about irq_work_queue_on(). Alas,
> > I might still be missing something obvious, because "it's never that
> > easy". ;-)
> >
> > And for completeness, the full perf patch of what it would look like
> > together with irq_work_raw_sync() (consider it v1.5). It's already
> > survived some shorter stress tests and fuzzing.
>
> So.... I don't like it. But I cooked up the below, which _almost_ works :-/
>
> For some raisin it sometimes fails with 14999 out of 15000 events
> delivered and I've not yet figured out where it goes sideways. I'm
> currently thinking it's that sigtrap clear on OFF.
>
> Still, what do you think of the approach?

It looks reasonable, but obviously needs to pass tests. :-)
Also, see comment below (I think you're still turning signals
asynchronous, which we shouldn't do).

> ---
> include/linux/perf_event.h | 8 ++--
> kernel/events/core.c | 92 +++++++++++++++++++++++++---------------------
> 2 files changed, 55 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index ee8b9ecdc03b..c54161719d37 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -736,9 +736,11 @@ struct perf_event {
> struct fasync_struct *fasync;
>
> /* delayed work for NMIs and such */
> - int pending_wakeup;
> - int pending_kill;
> - int pending_disable;
> + unsigned int pending_wakeup :1;
> + unsigned int pending_disable :1;
> + unsigned int pending_sigtrap :1;
> + unsigned int pending_kill :3;
> +
> unsigned long pending_addr; /* SIGTRAP */
> struct irq_work pending;
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2621fd24ad26..8e5dbe971d9e 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2268,11 +2268,15 @@ event_sched_out(struct perf_event *event,
> event->pmu->del(event, 0);
> event->oncpu = -1;
>
> - if (READ_ONCE(event->pending_disable) >= 0) {
> - WRITE_ONCE(event->pending_disable, -1);
> + if (event->pending_disable) {
> + event->pending_disable = 0;
> perf_cgroup_event_disable(event, ctx);
> state = PERF_EVENT_STATE_OFF;
> }
> +
> + if (event->pending_sigtrap && state == PERF_EVENT_STATE_OFF)
> + event->pending_sigtrap = 0;
> +
> perf_event_set_state(event, state);
>
> if (!is_software_event(event))
> @@ -2463,8 +2467,7 @@ EXPORT_SYMBOL_GPL(perf_event_disable);
>
> void perf_event_disable_inatomic(struct perf_event *event)
> {
> - WRITE_ONCE(event->pending_disable, smp_processor_id());
> - /* can fail, see perf_pending_event_disable() */
> + event->pending_disable = 1;
> irq_work_queue(&event->pending);
> }
>
> @@ -2527,6 +2530,9 @@ event_sched_in(struct perf_event *event,
> if (event->attr.exclusive)
> cpuctx->exclusive = 1;
>
> + if (event->pending_disable || event->pending_sigtrap)
> + irq_work_queue(&event->pending);
> +
> out:
> perf_pmu_enable(event->pmu);
>
> @@ -6440,47 +6446,40 @@ static void perf_sigtrap(struct perf_event *event)
> event->attr.type, event->attr.sig_data);
> }
>
> -static void perf_pending_event_disable(struct perf_event *event)
> +/*
> + * Deliver the pending work in-event-context or follow the context.
> + */
> +static void __perf_pending_event(struct perf_event *event)
> {
> - int cpu = READ_ONCE(event->pending_disable);
> + int cpu = READ_ONCE(event->oncpu);
>
> + /*
> + * If the event isn't running; we done. event_sched_in() will restart
> + * the irq_work when needed.
> + */
> if (cpu < 0)
> return;
>
> + /*
> + * Yay, we hit home and are in the context of the event.
> + */
> if (cpu == smp_processor_id()) {
> - WRITE_ONCE(event->pending_disable, -1);
> -
> - if (event->attr.sigtrap) {
> + if (event->pending_sigtrap) {
> + event->pending_sigtrap = 0;
> perf_sigtrap(event);
> - atomic_set_release(&event->event_limit, 1); /* rearm event */
> - return;
> }
> -
> - perf_event_disable_local(event);
> - return;
> + if (event->pending_disable) {
> + event->pending_disable = 0;
> + perf_event_disable_local(event);
> + }
> }
>
> /*
> - * CPU-A CPU-B
> - *
> - * perf_event_disable_inatomic()
> - * @pending_disable = CPU-A;
> - * irq_work_queue();
> - *
> - * sched-out
> - * @pending_disable = -1;
> - *
> - * sched-in
> - * perf_event_disable_inatomic()
> - * @pending_disable = CPU-B;
> - * irq_work_queue(); // FAILS
> - *
> - * irq_work_run()
> - * perf_pending_event()
> - *
> - * But the event runs on CPU-B and wants disabling there.
> + * Requeue if there's still any pending work left, make sure to follow
> + * where the event went.
> */
> - irq_work_queue_on(&event->pending, cpu);
> + if (event->pending_disable || event->pending_sigtrap)
> + irq_work_queue_on(&event->pending, cpu);

I considered making the irq_work "chase" the right CPU, but it doesn't
work for sigtrap: it makes the signal asynchronous (it should be
synchronous), which is the reason I had to do irq_work_raw_sync().

Thanks,
-- Marco

2022-10-04 17:44:33

by Peter Zijlstra

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, Oct 04, 2022 at 07:09:15PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 28, 2022 at 04:55:46PM +0200, Marco Elver wrote:
> > On Wed, Sep 28, 2022 at 12:06PM +0200, Marco Elver wrote:
> >
> > > My second idea about introducing something like irq_work_raw_sync().
> > > Maybe it's not that crazy if it is actually safe. I expect this case
> > > where we need the irq_work_raw_sync() to be very very rare.
> >
> > The previous irq_work_raw_sync() forgot about irq_work_queue_on(). Alas,
> > I might still be missing something obvious, because "it's never that
> > easy". ;-)
> >
> > And for completeness, the full perf patch of what it would look like
> > together with irq_work_raw_sync() (consider it v1.5). It's already
> > survived some shorter stress tests and fuzzing.
>
> So.... I don't like it. But I cooked up the below, which _almost_ works :-/
>
> For some raisin it sometimes fails with 14999 out of 15000 events
> delivered and I've not yet figured out where it goes sideways. I'm
> currently thinking it's that sigtrap clear on OFF.

Oh Urgh, this is of course the case where an IPI races with a migration
and we lose the race with the return to userspace. Effectively this gives
the signal skid vs the hardware event.

Bah.. I really hate having one CPU wait for another... Let me see if I
can find another way to close that hole.

2022-10-05 08:17:25

by Peter Zijlstra

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Tue, Oct 04, 2022 at 07:33:55PM +0200, Marco Elver wrote:
> It looks reasonable, but obviously needs to pass tests. :-)

Ikr :-)

> Also, see comment below (I think you're still turning signals
> asynchronous, which we shouldn't do).

Indeed so; I tried fixing that this morning, but so far that doesn't
seem to want to actually cure things :/ I'll need to stomp on this
harder.

Current hackery below. The main difference is that instead of trying to
restart the irq_work on sched_in, sched_out will now queue a task-work.

The event scheduling is done from 'regular' IRQ context and as such
there should be a return-to-userspace for the relevant task in the
immediate future (either directly or after scheduling).

Alas, something still isn't right...

---
include/linux/perf_event.h | 9 ++--
kernel/events/core.c | 115 ++++++++++++++++++++++++++++-----------------
2 files changed, 79 insertions(+), 45 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 853f64b6c8c2..f15726a6c127 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -756,11 +756,14 @@ struct perf_event {
struct fasync_struct *fasync;

/* delayed work for NMIs and such */
- int pending_wakeup;
- int pending_kill;
- int pending_disable;
+ unsigned int pending_wakeup :1;
+ unsigned int pending_disable :1;
+ unsigned int pending_sigtrap :1;
+ unsigned int pending_kill :3;
+
unsigned long pending_addr; /* SIGTRAP */
struct irq_work pending;
+ struct callback_head pending_sig;

atomic_t event_limit;

diff --git a/kernel/events/core.c b/kernel/events/core.c
index b981b879bcd8..e28257fb6f00 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -54,6 +54,7 @@
#include <linux/highmem.h>
#include <linux/pgtable.h>
#include <linux/buildid.h>
+#include <linux/task_work.h>

#include "internal.h"

@@ -2276,11 +2277,19 @@ event_sched_out(struct perf_event *event,
event->pmu->del(event, 0);
event->oncpu = -1;

- if (READ_ONCE(event->pending_disable) >= 0) {
- WRITE_ONCE(event->pending_disable, -1);
+ if (event->pending_disable) {
+ event->pending_disable = 0;
perf_cgroup_event_disable(event, ctx);
state = PERF_EVENT_STATE_OFF;
}
+
+ if (event->pending_sigtrap) {
+ if (state != PERF_EVENT_STATE_OFF)
+ task_work_add(current, &event->pending_sig, TWA_NONE);
+ else
+ event->pending_sigtrap = 0;
+ }
+
perf_event_set_state(event, state);

if (!is_software_event(event))
@@ -2471,8 +2480,7 @@ EXPORT_SYMBOL_GPL(perf_event_disable);

void perf_event_disable_inatomic(struct perf_event *event)
{
- WRITE_ONCE(event->pending_disable, smp_processor_id());
- /* can fail, see perf_pending_event_disable() */
+ event->pending_disable = 1;
irq_work_queue(&event->pending);
}

@@ -6448,47 +6456,40 @@ static void perf_sigtrap(struct perf_event *event)
event->attr.type, event->attr.sig_data);
}

-static void perf_pending_event_disable(struct perf_event *event)
+/*
+ * Deliver the pending work in-event-context or follow the context.
+ */
+static void __perf_pending_event(struct perf_event *event)
{
- int cpu = READ_ONCE(event->pending_disable);
+ int cpu = READ_ONCE(event->oncpu);

+ /*
+ * If the event isn't running, we're done. event_sched_in() will restart
+ * the irq_work when needed.
+ */
if (cpu < 0)
return;

+ /*
+ * Yay, we hit home and are in the context of the event.
+ */
if (cpu == smp_processor_id()) {
- WRITE_ONCE(event->pending_disable, -1);
-
- if (event->attr.sigtrap) {
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
perf_sigtrap(event);
- atomic_set_release(&event->event_limit, 1); /* rearm event */
- return;
}
-
- perf_event_disable_local(event);
- return;
+ if (event->pending_disable) {
+ event->pending_disable = 0;
+ perf_event_disable_local(event);
+ }
}

/*
- * CPU-A CPU-B
- *
- * perf_event_disable_inatomic()
- * @pending_disable = CPU-A;
- * irq_work_queue();
- *
- * sched-out
- * @pending_disable = -1;
- *
- * sched-in
- * perf_event_disable_inatomic()
- * @pending_disable = CPU-B;
- * irq_work_queue(); // FAILS
- *
- * irq_work_run()
- * perf_pending_event()
- *
- * But the event runs on CPU-B and wants disabling there.
+ * Requeue if there's still any pending work left, make sure to follow
+ * where the event went.
*/
- irq_work_queue_on(&event->pending, cpu);
+ if (event->pending_disable || event->pending_sigtrap)
+ irq_work_queue_on(&event->pending, cpu);
}

static void perf_pending_event(struct irq_work *entry)
@@ -6496,19 +6497,43 @@ static void perf_pending_event(struct irq_work *entry)
struct perf_event *event = container_of(entry, struct perf_event, pending);
int rctx;

- rctx = perf_swevent_get_recursion_context();
/*
* If we 'fail' here, that's OK, it means recursion is already disabled
* and we won't recurse 'further'.
*/
+ rctx = perf_swevent_get_recursion_context();

- perf_pending_event_disable(event);
-
+ /*
+ * The wakeup isn't bound to the context of the event -- it can happen
+ * irrespective of where the event is.
+ */
if (event->pending_wakeup) {
event->pending_wakeup = 0;
perf_event_wakeup(event);
}

+ __perf_pending_event(event);
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+}
+
+static void perf_pending_sig(struct callback_head *head)
+{
+ struct perf_event *event = container_of(head, struct perf_event, pending_sig);
+ int rctx;
+
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+ */
+ rctx = perf_swevent_get_recursion_context();
+
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
+ perf_sigtrap(event);
+ }
+
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
}
@@ -9227,11 +9252,20 @@ static int __perf_event_overflow(struct perf_event *event,
if (events && atomic_dec_and_test(&event->event_limit)) {
ret = 1;
event->pending_kill = POLL_HUP;
- event->pending_addr = data->addr;
-
perf_event_disable_inatomic(event);
}

+ if (event->attr.sigtrap) {
+ /*
+ * Should not be able to return to user space without processing
+ * pending_sigtrap (kernel events can overflow multiple times).
+ */
+ WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+ event->pending_sigtrap = 1;
+ event->pending_addr = data->addr;
+ irq_work_queue(&event->pending);
+ }
+
READ_ONCE(event->overflow_handler)(event, data, regs);

if (*perf_event_fasync(event) && event->pending_kill) {
@@ -11560,8 +11594,8 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,


init_waitqueue_head(&event->waitq);
- event->pending_disable = -1;
init_irq_work(&event->pending, perf_pending_event);
+ init_task_work(&event->pending_sig, perf_pending_sig);

mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);
@@ -11583,9 +11617,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
if (parent_event)
event->event_caps = parent_event->event_caps;

- if (event->attr.sigtrap)
- atomic_set(&event->event_limit, 1);
-
if (task) {
event->attach_state = PERF_ATTACH_TASK;
/*

2022-10-05 08:19:02

by Marco Elver

Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Wed, 5 Oct 2022 at 09:37, Peter Zijlstra <[email protected]> wrote:
>
> On Tue, Oct 04, 2022 at 07:33:55PM +0200, Marco Elver wrote:
> > It looks reasonable, but obviously needs to pass tests. :-)
>
> Ikr :-)
>
> > Also, see comment below (I think you're still turning signals
> > asynchronous, which we shouldn't do).
>
> Indeed so; I tried fixing that this morning, but so far that doesn't
> seem to want to actually cure things :/ I'll need to stomp on this
> harder.
>
> Current hackery below. The main difference is that instead of trying to
> restart the irq_work on sched_in, sched_out will now queue a task-work.
>
> The event scheduling is done from 'regular' IRQ context and as such
> there should be a return-to-userspace for the relevant task in the
> immediate future (either directly or after scheduling).

Does this work if we get a __perf_event_enable() IPI as described in
the commit message of the patch I sent? I.e. it does a sched-out
immediately followed by a sched-in aka resched; presumably in that
case it should still have the irq_work on the same CPU, but the
task_work will be a noop?

> Alas, something still isn't right...
>
> ---
> include/linux/perf_event.h | 9 ++--
> kernel/events/core.c | 115 ++++++++++++++++++++++++++++-----------------
> 2 files changed, 79 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 853f64b6c8c2..f15726a6c127 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -756,11 +756,14 @@ struct perf_event {
> struct fasync_struct *fasync;
>
> /* delayed work for NMIs and such */
> - int pending_wakeup;
> - int pending_kill;
> - int pending_disable;
> + unsigned int pending_wakeup :1;
> + unsigned int pending_disable :1;
> + unsigned int pending_sigtrap :1;
> + unsigned int pending_kill :3;
> +
> unsigned long pending_addr; /* SIGTRAP */
> struct irq_work pending;
> + struct callback_head pending_sig;
>
> atomic_t event_limit;
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index b981b879bcd8..e28257fb6f00 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -54,6 +54,7 @@
> #include <linux/highmem.h>
> #include <linux/pgtable.h>
> #include <linux/buildid.h>
> +#include <linux/task_work.h>
>
> #include "internal.h"
>
> @@ -2276,11 +2277,19 @@ event_sched_out(struct perf_event *event,
> event->pmu->del(event, 0);
> event->oncpu = -1;
>
> - if (READ_ONCE(event->pending_disable) >= 0) {
> - WRITE_ONCE(event->pending_disable, -1);
> + if (event->pending_disable) {
> + event->pending_disable = 0;
> perf_cgroup_event_disable(event, ctx);
> state = PERF_EVENT_STATE_OFF;
> }
> +
> + if (event->pending_sigtrap) {
> + if (state != PERF_EVENT_STATE_OFF)
> + task_work_add(current, &event->pending_sig, TWA_NONE);
> + else
> + event->pending_sigtrap = 0;
> + }
> +
> perf_event_set_state(event, state);
>
> if (!is_software_event(event))
> @@ -2471,8 +2480,7 @@ EXPORT_SYMBOL_GPL(perf_event_disable);
>
> void perf_event_disable_inatomic(struct perf_event *event)
> {
> - WRITE_ONCE(event->pending_disable, smp_processor_id());
> - /* can fail, see perf_pending_event_disable() */
> + event->pending_disable = 1;
> irq_work_queue(&event->pending);
> }
>
> @@ -6448,47 +6456,40 @@ static void perf_sigtrap(struct perf_event *event)
> event->attr.type, event->attr.sig_data);
> }
>
> -static void perf_pending_event_disable(struct perf_event *event)
> +/*
> + * Deliver the pending work in-event-context or follow the context.
> + */
> +static void __perf_pending_event(struct perf_event *event)
> {
> - int cpu = READ_ONCE(event->pending_disable);
> + int cpu = READ_ONCE(event->oncpu);
>
> + /*
> + * If the event isn't running; we done. event_sched_in() will restart
> + * the irq_work when needed.
> + */
> if (cpu < 0)
> return;
>
> + /*
> + * Yay, we hit home and are in the context of the event.
> + */
> if (cpu == smp_processor_id()) {
> - WRITE_ONCE(event->pending_disable, -1);
> -
> - if (event->attr.sigtrap) {
> + if (event->pending_sigtrap) {
> + event->pending_sigtrap = 0;
> perf_sigtrap(event);
> - atomic_set_release(&event->event_limit, 1); /* rearm event */
> - return;
> }
> -
> - perf_event_disable_local(event);
> - return;
> + if (event->pending_disable) {
> + event->pending_disable = 0;
> + perf_event_disable_local(event);
> + }
> }
>
> /*
> - * CPU-A CPU-B
> - *
> - * perf_event_disable_inatomic()
> - * @pending_disable = CPU-A;
> - * irq_work_queue();
> - *
> - * sched-out
> - * @pending_disable = -1;
> - *
> - * sched-in
> - * perf_event_disable_inatomic()
> - * @pending_disable = CPU-B;
> - * irq_work_queue(); // FAILS
> - *
> - * irq_work_run()
> - * perf_pending_event()
> - *
> - * But the event runs on CPU-B and wants disabling there.
> + * Requeue if there's still any pending work left, make sure to follow
> + * where the event went.
> */
> - irq_work_queue_on(&event->pending, cpu);
> + if (event->pending_disable || event->pending_sigtrap)
> + irq_work_queue_on(&event->pending, cpu);

This probably should not queue an irq_work if pending_sigtrap is set, given
that it just doesn't work. It should probably just be ignored instead?
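
Something like the following, purely as a sketch of that suggestion (not a
tested hunk) -- only the disable case would chase the event to its new CPU,
and a pending sigtrap would simply be left alone here:

	/* hypothetical variant of the requeue above */
	if (event->pending_disable)
		irq_work_queue_on(&event->pending, cpu);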

Thanks,
-- Marco

2022-10-05 08:28:29

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH] perf: Fix missing SIGTRAPs due to pending_disable abuse

On Wed, Oct 05, 2022 at 09:37:06AM +0200, Peter Zijlstra wrote:
> On Tue, Oct 04, 2022 at 07:33:55PM +0200, Marco Elver wrote:
> > It looks reasonable, but obviously needs to pass tests. :-)
>
> Ikr :-)
>
> > Also, see comment below (I think you're still turning signals
> > asynchronous, which we shouldn't do).
>
> Indeed so; I tried fixing that this morning, but so far that doesn't
> seem to want to actually cure things :/ I'll need to stomp on this
> harder.
>
> Current hackery below. The main difference is that instead of trying to
> restart the irq_work on sched_in, sched_out will now queue a task-work.
>
> The event scheduling is done from 'regular' IRQ context and as such
> there should be a return-to-userspace for the relevant task in the
> immediate future (either directly or after scheduling).
>
> Alas, something still isn't right...

Oh, lol, *groan*... this fixes it:

Now to find a sane way to inhibit this while a sig thing is pending :/

diff --git a/kernel/events/core.c b/kernel/events/core.c
index b981b879bcd8..92b6a2f6de1a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3426,7 +3434,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
*/
raw_spin_lock(&ctx->lock);
raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING);
- if (context_equiv(ctx, next_ctx)) {
+ if (0 && context_equiv(ctx, next_ctx)) {

WRITE_ONCE(ctx->task, next);
WRITE_ONCE(next_ctx->task, task);

2022-10-06 14:19:32

by Peter Zijlstra

[permalink] [raw]
Subject: [PATCH] perf: Fix missing SIGTRAPs


OK, so the below seems to pass the concurrent sigtrap_threads test for
me and doesn't have that horrible irq_work_sync hackery.

Does it work for you too?
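
For reference, a rough and purely illustrative sketch of the task_work API
the patch below leans on (the struct and function names here are made up;
only init_task_work(), task_work_add() and TWA_RESUME are the real API):

	#include <linux/kernel.h>
	#include <linux/sched.h>
	#include <linux/task_work.h>

	struct pending_example {
		struct callback_head work;	/* embedded, like event->pending_task below */
	};

	/* Runs in the task's own context, right before returning to user space. */
	static void example_cb(struct callback_head *head)
	{
		struct pending_example *p =
			container_of(head, struct pending_example, work);

		(void)p;	/* e.g. deliver a signal to current */
	}

	static int example_queue(struct pending_example *p)
	{
		init_task_work(&p->work, example_cb);
		/* Only fails if the task is already exiting. */
		return task_work_add(current, &p->work, TWA_RESUME);
	}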

---
Subject: perf: Fix missing SIGTRAPs
From: Peter Zijlstra <[email protected]>
Date: Thu Oct 6 15:00:39 CEST 2022

Marco reported:

Due to the implementation of how SIGTRAP are delivered if
perf_event_attr::sigtrap is set, we've noticed 3 issues:

1. Missing SIGTRAP due to a race with event_sched_out() (more
details below).

2. Hardware PMU events being disabled due to returning 1 from
perf_event_overflow(). The only way to re-enable the event is
for user space to first "properly" disable the event and then
re-enable it.

3. The inability to automatically disable an event after a
specified number of overflows via PERF_EVENT_IOC_REFRESH.

The worst of the 3 issues is problem (1), which occurs when a
pending_disable is "consumed" by a racing event_sched_out(), observed as
follows:

CPU0 | CPU1
--------------------------------+---------------------------
__perf_event_overflow() |
perf_event_disable_inatomic() |
pending_disable = CPU0 | ...
| _perf_event_enable()
| event_function_call()
| task_function_call()
| /* sends IPI to CPU0 */
<IPI> | ...
__perf_event_enable() +---------------------------
ctx_resched()
task_ctx_sched_out()
ctx_sched_out()
group_sched_out()
event_sched_out()
pending_disable = -1
</IPI>
<IRQ-work>
perf_pending_event()
perf_pending_event_disable()
/* Fails to send SIGTRAP because no pending_disable! */
</IRQ-work>

In the above case, not only is that particular SIGTRAP missed, but also
all future SIGTRAPs because 'event_limit' is not reset back to 1.

To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
of a separate 'pending_sigtrap', no longer using 'event_limit' and
'pending_disable' for its delivery.

Additionally; and different to Marco's proposed patch:

- recognise that pending_disable effectively duplicates oncpu for
the case where it is set. As such, change the irq_work handler to
use ->oncpu to target the event and use pending_* as boolean toggles.

- observe that SIGTRAP targets the ctx->task, so the context switch
optimization that carries contexts between tasks is invalid. If
the irq_work were delayed enough to hit after a context switch the
SIGTRAP would be delivered to the wrong task.

- observe that if the event gets scheduled out
(rotation/migration/context-switch/...) the irq-work would be
insufficient to deliver the SIGTRAP when the event gets scheduled
back in (the irq-work might still be pending on the old CPU).

Therefore have event_sched_out() convert the pending sigtrap into a
task_work which will deliver the signal at return_to_user.

Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
Reported-by: Marco Elver <[email protected]>
Debugged-by: Marco Elver <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
include/linux/perf_event.h | 19 ++++-
kernel/events/core.c | 149 ++++++++++++++++++++++++++++++++------------
kernel/events/ring_buffer.c | 2
3 files changed, 127 insertions(+), 43 deletions(-)

--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -736,11 +736,14 @@ struct perf_event {
struct fasync_struct *fasync;

/* delayed work for NMIs and such */
- int pending_wakeup;
- int pending_kill;
- int pending_disable;
+ unsigned int pending_wakeup;
+ unsigned int pending_kill;
+ unsigned int pending_disable;
+ unsigned int pending_sigtrap;
unsigned long pending_addr; /* SIGTRAP */
- struct irq_work pending;
+ struct irq_work pending_irq;
+ struct callback_head pending_task;
+ unsigned int pending_work;

atomic_t event_limit;

@@ -857,6 +860,14 @@ struct perf_event_context {
#endif
void *task_ctx_data; /* pmu specific data */
struct rcu_head rcu_head;
+
+ /*
+ * Sum (event->pending_sigtrap + event->pending_work)
+ *
+ * The SIGTRAP is targeted at ctx->task, as such it won't do changing
+ * that until the signal is delivered.
+ */
+ local_t nr_pending;
};

/*
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -54,6 +54,7 @@
#include <linux/highmem.h>
#include <linux/pgtable.h>
#include <linux/buildid.h>
+#include <linux/task_work.h>

#include "internal.h"

@@ -2268,11 +2269,28 @@ event_sched_out(struct perf_event *event
event->pmu->del(event, 0);
event->oncpu = -1;

- if (READ_ONCE(event->pending_disable) >= 0) {
- WRITE_ONCE(event->pending_disable, -1);
+ if (event->pending_disable) {
+ event->pending_disable = 0;
perf_cgroup_event_disable(event, ctx);
state = PERF_EVENT_STATE_OFF;
}
+
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
+ if (state == PERF_EVENT_STATE_OFF) {
+ /*
+ * If we're racing with disabling the event, consume
+ * the event to avoid it becoming asynchronous by
+ * mistake.
+ */
+ local_dec(&event->ctx->nr_pending);
+ } else {
+ WARN_ON_ONCE(event->pending_work);
+ event->pending_work = 1;
+ task_work_add(current, &event->pending_task, TWA_RESUME);
+ }
+ }
+
perf_event_set_state(event, state);

if (!is_software_event(event))
@@ -2424,7 +2442,7 @@ static void __perf_event_disable(struct
* hold the top-level event's child_mutex, so any descendant that
* goes to exit will block in perf_event_exit_event().
*
- * When called from perf_pending_event it's OK because event->ctx
+ * When called from perf_pending_irq it's OK because event->ctx
* is the current context on this CPU and preemption is disabled,
* hence we can't get into perf_event_task_sched_out for this context.
*/
@@ -2463,9 +2481,8 @@ EXPORT_SYMBOL_GPL(perf_event_disable);

void perf_event_disable_inatomic(struct perf_event *event)
{
- WRITE_ONCE(event->pending_disable, smp_processor_id());
- /* can fail, see perf_pending_event_disable() */
- irq_work_queue(&event->pending);
+ event->pending_disable = 1;
+ irq_work_queue(&event->pending_irq);
}

#define MAX_INTERRUPTS (~0ULL)
@@ -3420,11 +3437,22 @@ static void perf_event_context_sched_out
raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING);
if (context_equiv(ctx, next_ctx)) {

+ perf_pmu_disable(pmu);
+
+ /* PMIs are disabled; ctx->nr_pending is stable. */
+ if (local_read(&ctx->nr_pending)) {
+ /*
+ * Must not swap out ctx when there are pending
+ * events that rely on the ctx->task relation.
+ */
+ raw_spin_unlock(&next_ctx->lock);
+ rcu_read_unlock();
+ goto inside_switch;
+ }
+
WRITE_ONCE(ctx->task, next);
WRITE_ONCE(next_ctx->task, task);

- perf_pmu_disable(pmu);
-
if (cpuctx->sched_cb_usage && pmu->sched_task)
pmu->sched_task(ctx, false);

@@ -3465,6 +3493,7 @@ static void perf_event_context_sched_out
raw_spin_lock(&ctx->lock);
perf_pmu_disable(pmu);

+inside_switch:
if (cpuctx->sched_cb_usage && pmu->sched_task)
pmu->sched_task(ctx, false);
task_ctx_sched_out(cpuctx, ctx, EVENT_ALL);
@@ -4931,7 +4960,7 @@ static void perf_addr_filters_splice(str

static void _free_event(struct perf_event *event)
{
- irq_work_sync(&event->pending);
+ irq_work_sync(&event->pending_irq);

unaccount_event(event);

@@ -6431,7 +6460,7 @@ static void perf_sigtrap(struct perf_eve
return;

/*
- * perf_pending_event() can race with the task exiting.
+ * perf_pending_irq() can race with the task exiting.
*/
if (current->flags & PF_EXITING)
return;
@@ -6440,23 +6469,33 @@ static void perf_sigtrap(struct perf_eve
event->attr.type, event->attr.sig_data);
}

-static void perf_pending_event_disable(struct perf_event *event)
+/*
+ * Deliver the pending work in-event-context or follow the context.
+ */
+static void __perf_pending_irq(struct perf_event *event)
{
- int cpu = READ_ONCE(event->pending_disable);
+ int cpu = READ_ONCE(event->oncpu);

+ /*
+ * If the event isn't running, we're done. event_sched_out() will have
+ * taken care of things.
+ */
if (cpu < 0)
return;

+ /*
+ * Yay, we hit home and are in the context of the event.
+ */
if (cpu == smp_processor_id()) {
- WRITE_ONCE(event->pending_disable, -1);
-
- if (event->attr.sigtrap) {
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
+ local_dec(&event->ctx->nr_pending);
perf_sigtrap(event);
- atomic_set_release(&event->event_limit, 1); /* rearm event */
- return;
}
-
- perf_event_disable_local(event);
+ if (event->pending_disable) {
+ event->pending_disable = 0;
+ perf_event_disable_local(event);
+ }
return;
}

@@ -6476,31 +6515,56 @@ static void perf_pending_event_disable(s
* irq_work_queue(); // FAILS
*
* irq_work_run()
- * perf_pending_event()
+ * perf_pending_irq()
*
* But the event runs on CPU-B and wants disabling there.
*/
- irq_work_queue_on(&event->pending, cpu);
+ irq_work_queue_on(&event->pending_irq, cpu);
}

-static void perf_pending_event(struct irq_work *entry)
+static void perf_pending_irq(struct irq_work *entry)
{
- struct perf_event *event = container_of(entry, struct perf_event, pending);
+ struct perf_event *event = container_of(entry, struct perf_event, pending_irq);
int rctx;

- rctx = perf_swevent_get_recursion_context();
/*
* If we 'fail' here, that's OK, it means recursion is already disabled
* and we won't recurse 'further'.
*/
+ rctx = perf_swevent_get_recursion_context();

- perf_pending_event_disable(event);
-
+ /*
+ * The wakeup isn't bound to the context of the event -- it can happen
+ * irrespective of where the event is.
+ */
if (event->pending_wakeup) {
event->pending_wakeup = 0;
perf_event_wakeup(event);
}

+ __perf_pending_irq(event);
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+}
+
+static void perf_pending_task(struct callback_head *head)
+{
+ struct perf_event *event = container_of(head, struct perf_event, pending_task);
+ int rctx;
+
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+ */
+ rctx = perf_swevent_get_recursion_context();
+
+ if (event->pending_work) {
+ event->pending_work = 0;
+ local_dec(&event->ctx->nr_pending);
+ perf_sigtrap(event);
+ }
+
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
}
@@ -9179,8 +9243,8 @@ int perf_event_account_interrupt(struct
*/

static int __perf_event_overflow(struct perf_event *event,
- int throttle, struct perf_sample_data *data,
- struct pt_regs *regs)
+ int throttle, struct perf_sample_data *data,
+ struct pt_regs *regs)
{
int events = atomic_read(&event->event_limit);
int ret = 0;
@@ -9203,24 +9267,36 @@ static int __perf_event_overflow(struct
if (events && atomic_dec_and_test(&event->event_limit)) {
ret = 1;
event->pending_kill = POLL_HUP;
- event->pending_addr = data->addr;
-
perf_event_disable_inatomic(event);
}

+ if (event->attr.sigtrap) {
+ /*
+ * Should not be able to return to user space without processing
+ * pending_sigtrap (kernel events can overflow multiple times).
+ */
+ WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+ if (!event->pending_sigtrap) {
+ event->pending_sigtrap = 1;
+ local_inc(&event->ctx->nr_pending);
+ }
+ event->pending_addr = data->addr;
+ irq_work_queue(&event->pending_irq);
+ }
+
READ_ONCE(event->overflow_handler)(event, data, regs);

if (*perf_event_fasync(event) && event->pending_kill) {
event->pending_wakeup = 1;
- irq_work_queue(&event->pending);
+ irq_work_queue(&event->pending_irq);
}

return ret;
}

int perf_event_overflow(struct perf_event *event,
- struct perf_sample_data *data,
- struct pt_regs *regs)
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
{
return __perf_event_overflow(event, 1, data, regs);
}
@@ -11528,8 +11604,8 @@ perf_event_alloc(struct perf_event_attr


init_waitqueue_head(&event->waitq);
- event->pending_disable = -1;
- init_irq_work(&event->pending, perf_pending_event);
+ init_irq_work(&event->pending_irq, perf_pending_irq);
+ init_task_work(&event->pending_task, perf_pending_task);

mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);
@@ -11551,9 +11627,6 @@ perf_event_alloc(struct perf_event_attr
if (parent_event)
event->event_caps = parent_event->event_caps;

- if (event->attr.sigtrap)
- atomic_set(&event->event_limit, 1);
-
if (task) {
event->attach_state = PERF_ATTACH_TASK;
/*
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -22,7 +22,7 @@ static void perf_output_wakeup(struct pe
atomic_set(&handle->rb->poll, EPOLLIN);

handle->event->pending_wakeup = 1;
- irq_work_queue(&handle->event->pending);
+ irq_work_queue(&handle->event->pending_irq);
}

/*

2022-10-11 08:30:13

by Peter Zijlstra

[permalink] [raw]
Subject: [PATCH v2] perf: Fix missing SIGTRAPs

Subject: perf: Fix missing SIGTRAPs
From: Peter Zijlstra <[email protected]>
Date: Thu Oct 6 15:00:39 CEST 2022

Marco reported:

Due to the implementation of how SIGTRAP are delivered if
perf_event_attr::sigtrap is set, we've noticed 3 issues:

1. Missing SIGTRAP due to a race with event_sched_out() (more
details below).

2. Hardware PMU events being disabled due to returning 1 from
perf_event_overflow(). The only way to re-enable the event is
for user space to first "properly" disable the event and then
re-enable it.

3. The inability to automatically disable an event after a
specified number of overflows via PERF_EVENT_IOC_REFRESH.

The worst of the 3 issues is problem (1), which occurs when a
pending_disable is "consumed" by a racing event_sched_out(), observed
as follows:

CPU0 | CPU1
--------------------------------+---------------------------
__perf_event_overflow() |
perf_event_disable_inatomic() |
pending_disable = CPU0 | ...
| _perf_event_enable()
| event_function_call()
| task_function_call()
| /* sends IPI to CPU0 */
<IPI> | ...
__perf_event_enable() +---------------------------
ctx_resched()
task_ctx_sched_out()
ctx_sched_out()
group_sched_out()
event_sched_out()
pending_disable = -1
</IPI>
<IRQ-work>
perf_pending_event()
perf_pending_event_disable()
/* Fails to send SIGTRAP because no pending_disable! */
</IRQ-work>

In the above case, not only is that particular SIGTRAP missed, but also
all future SIGTRAPs because 'event_limit' is not reset back to 1.

To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
of a separate 'pending_sigtrap', no longer using 'event_limit' and
'pending_disable' for its delivery.

Additionally; and different to Marco's proposed patch:

- recognise that pending_disable effectively duplicates oncpu for
the case where it is set. As such, change the irq_work handler to
use ->oncpu to target the event and use pending_* as boolean toggles.

- observe that SIGTRAP targets the ctx->task, so the context switch
optimization that carries contexts between tasks is invalid. If
the irq_work were delayed enough to hit after a context switch the
SIGTRAP would be delivered to the wrong task.

- observe that if the event gets scheduled out
(rotation/migration/context-switch/...) the irq-work would be
insufficient to deliver the SIGTRAP when the event gets scheduled
back in (the irq-work might still be pending on the old CPU).

Therefore have event_sched_out() convert the pending sigtrap into a
task_work which will deliver the signal at return_to_user.

Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
Reported-by: Marco Elver <[email protected]>
Debugged-by: Marco Elver <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
include/linux/perf_event.h | 19 ++++-
kernel/events/core.c | 151 ++++++++++++++++++++++++++++++++------------
kernel/events/ring_buffer.c | 2
3 files changed, 129 insertions(+), 43 deletions(-)

--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -736,11 +736,14 @@ struct perf_event {
struct fasync_struct *fasync;

/* delayed work for NMIs and such */
- int pending_wakeup;
- int pending_kill;
- int pending_disable;
+ unsigned int pending_wakeup;
+ unsigned int pending_kill;
+ unsigned int pending_disable;
+ unsigned int pending_sigtrap;
unsigned long pending_addr; /* SIGTRAP */
- struct irq_work pending;
+ struct irq_work pending_irq;
+ struct callback_head pending_task;
+ unsigned int pending_work;

atomic_t event_limit;

@@ -857,6 +860,14 @@ struct perf_event_context {
#endif
void *task_ctx_data; /* pmu specific data */
struct rcu_head rcu_head;
+
+ /*
+ * Sum (event->pending_sigtrap + event->pending_work)
+ *
+ * The SIGTRAP is targeted at ctx->task, as such it won't do changing
+ * that until the signal is delivered.
+ */
+ local_t nr_pending;
};

/*
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -54,6 +54,7 @@
#include <linux/highmem.h>
#include <linux/pgtable.h>
#include <linux/buildid.h>
+#include <linux/task_work.h>

#include "internal.h"

@@ -2268,11 +2269,26 @@ event_sched_out(struct perf_event *event
event->pmu->del(event, 0);
event->oncpu = -1;

- if (READ_ONCE(event->pending_disable) >= 0) {
- WRITE_ONCE(event->pending_disable, -1);
+ if (event->pending_disable) {
+ event->pending_disable = 0;
perf_cgroup_event_disable(event, ctx);
state = PERF_EVENT_STATE_OFF;
}
+
+ if (event->pending_sigtrap) {
+ bool dec = true;
+
+ event->pending_sigtrap = 0;
+ if (state != PERF_EVENT_STATE_OFF &&
+ !event->pending_work) {
+ event->pending_work = 1;
+ dec = false;
+ task_work_add(current, &event->pending_task, TWA_RESUME);
+ }
+ if (dec)
+ local_dec(&event->ctx->nr_pending);
+ }
+
perf_event_set_state(event, state);

if (!is_software_event(event))
@@ -2424,7 +2440,7 @@ static void __perf_event_disable(struct
* hold the top-level event's child_mutex, so any descendant that
* goes to exit will block in perf_event_exit_event().
*
- * When called from perf_pending_event it's OK because event->ctx
+ * When called from perf_pending_irq it's OK because event->ctx
* is the current context on this CPU and preemption is disabled,
* hence we can't get into perf_event_task_sched_out for this context.
*/
@@ -2463,9 +2479,8 @@ EXPORT_SYMBOL_GPL(perf_event_disable);

void perf_event_disable_inatomic(struct perf_event *event)
{
- WRITE_ONCE(event->pending_disable, smp_processor_id());
- /* can fail, see perf_pending_event_disable() */
- irq_work_queue(&event->pending);
+ event->pending_disable = 1;
+ irq_work_queue(&event->pending_irq);
}

#define MAX_INTERRUPTS (~0ULL)
@@ -3420,11 +3435,23 @@ static void perf_event_context_sched_out
raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING);
if (context_equiv(ctx, next_ctx)) {

+ perf_pmu_disable(pmu);
+
+ /* PMIs are disabled; ctx->nr_pending is stable. */
+ if (local_read(&ctx->nr_pending) ||
+ local_read(&next_ctx->nr_pending)) {
+ /*
+ * Must not swap out ctx when there are pending
+ * events that rely on the ctx->task relation.
+ */
+ raw_spin_unlock(&next_ctx->lock);
+ rcu_read_unlock();
+ goto inside_switch;
+ }
+
WRITE_ONCE(ctx->task, next);
WRITE_ONCE(next_ctx->task, task);

- perf_pmu_disable(pmu);
-
if (cpuctx->sched_cb_usage && pmu->sched_task)
pmu->sched_task(ctx, false);

@@ -3465,6 +3492,7 @@ static void perf_event_context_sched_out
raw_spin_lock(&ctx->lock);
perf_pmu_disable(pmu);

+inside_switch:
if (cpuctx->sched_cb_usage && pmu->sched_task)
pmu->sched_task(ctx, false);
task_ctx_sched_out(cpuctx, ctx, EVENT_ALL);
@@ -4931,7 +4959,7 @@ static void perf_addr_filters_splice(str

static void _free_event(struct perf_event *event)
{
- irq_work_sync(&event->pending);
+ irq_work_sync(&event->pending_irq);

unaccount_event(event);

@@ -6431,7 +6459,8 @@ static void perf_sigtrap(struct perf_eve
return;

/*
- * perf_pending_event() can race with the task exiting.
+ * Both perf_pending_task() and perf_pending_irq() can race with the
+ * task exiting.
*/
if (current->flags & PF_EXITING)
return;
@@ -6440,23 +6469,33 @@ static void perf_sigtrap(struct perf_eve
event->attr.type, event->attr.sig_data);
}

-static void perf_pending_event_disable(struct perf_event *event)
+/*
+ * Deliver the pending work in-event-context or follow the context.
+ */
+static void __perf_pending_irq(struct perf_event *event)
{
- int cpu = READ_ONCE(event->pending_disable);
+ int cpu = READ_ONCE(event->oncpu);

+ /*
+ * If the event isn't running, we're done. event_sched_out() will have
+ * taken care of things.
+ */
if (cpu < 0)
return;

+ /*
+ * Yay, we hit home and are in the context of the event.
+ */
if (cpu == smp_processor_id()) {
- WRITE_ONCE(event->pending_disable, -1);
-
- if (event->attr.sigtrap) {
+ if (event->pending_sigtrap) {
+ event->pending_sigtrap = 0;
perf_sigtrap(event);
- atomic_set_release(&event->event_limit, 1); /* rearm event */
- return;
+ local_dec(&event->ctx->nr_pending);
+ }
+ if (event->pending_disable) {
+ event->pending_disable = 0;
+ perf_event_disable_local(event);
}
-
- perf_event_disable_local(event);
return;
}

@@ -6476,35 +6515,62 @@ static void perf_pending_event_disable(s
* irq_work_queue(); // FAILS
*
* irq_work_run()
- * perf_pending_event()
+ * perf_pending_irq()
*
* But the event runs on CPU-B and wants disabling there.
*/
- irq_work_queue_on(&event->pending, cpu);
+ irq_work_queue_on(&event->pending_irq, cpu);
}

-static void perf_pending_event(struct irq_work *entry)
+static void perf_pending_irq(struct irq_work *entry)
{
- struct perf_event *event = container_of(entry, struct perf_event, pending);
+ struct perf_event *event = container_of(entry, struct perf_event, pending_irq);
int rctx;

- rctx = perf_swevent_get_recursion_context();
/*
* If we 'fail' here, that's OK, it means recursion is already disabled
* and we won't recurse 'further'.
*/
+ rctx = perf_swevent_get_recursion_context();

- perf_pending_event_disable(event);
-
+ /*
+ * The wakeup isn't bound to the context of the event -- it can happen
+ * irrespective of where the event is.
+ */
if (event->pending_wakeup) {
event->pending_wakeup = 0;
perf_event_wakeup(event);
}

+ __perf_pending_irq(event);
+
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
}

+static void perf_pending_task(struct callback_head *head)
+{
+ struct perf_event *event = container_of(head, struct perf_event, pending_task);
+ int rctx;
+
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+ */
+ preempt_disable_notrace();
+ rctx = perf_swevent_get_recursion_context();
+
+ if (event->pending_work) {
+ event->pending_work = 0;
+ perf_sigtrap(event);
+ local_dec(&event->ctx->nr_pending);
+ }
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+ preempt_enable_notrace();
+}
+
#ifdef CONFIG_GUEST_PERF_EVENTS
struct perf_guest_info_callbacks __rcu *perf_guest_cbs;

@@ -9188,8 +9254,8 @@ int perf_event_account_interrupt(struct
*/

static int __perf_event_overflow(struct perf_event *event,
- int throttle, struct perf_sample_data *data,
- struct pt_regs *regs)
+ int throttle, struct perf_sample_data *data,
+ struct pt_regs *regs)
{
int events = atomic_read(&event->event_limit);
int ret = 0;
@@ -9212,24 +9278,36 @@ static int __perf_event_overflow(struct
if (events && atomic_dec_and_test(&event->event_limit)) {
ret = 1;
event->pending_kill = POLL_HUP;
- event->pending_addr = data->addr;
-
perf_event_disable_inatomic(event);
}

+ if (event->attr.sigtrap) {
+ /*
+ * Should not be able to return to user space without processing
+ * pending_sigtrap (kernel events can overflow multiple times).
+ */
+ WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+ if (!event->pending_sigtrap) {
+ event->pending_sigtrap = 1;
+ local_inc(&event->ctx->nr_pending);
+ }
+ event->pending_addr = data->addr;
+ irq_work_queue(&event->pending_irq);
+ }
+
READ_ONCE(event->overflow_handler)(event, data, regs);

if (*perf_event_fasync(event) && event->pending_kill) {
event->pending_wakeup = 1;
- irq_work_queue(&event->pending);
+ irq_work_queue(&event->pending_irq);
}

return ret;
}

int perf_event_overflow(struct perf_event *event,
- struct perf_sample_data *data,
- struct pt_regs *regs)
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
{
return __perf_event_overflow(event, 1, data, regs);
}
@@ -11537,8 +11615,8 @@ perf_event_alloc(struct perf_event_attr


init_waitqueue_head(&event->waitq);
- event->pending_disable = -1;
- init_irq_work(&event->pending, perf_pending_event);
+ init_irq_work(&event->pending_irq, perf_pending_irq);
+ init_task_work(&event->pending_task, perf_pending_task);

mutex_init(&event->mmap_mutex);
raw_spin_lock_init(&event->addr_filters.lock);
@@ -11560,9 +11638,6 @@ perf_event_alloc(struct perf_event_attr
if (parent_event)
event->event_caps = parent_event->event_caps;

- if (event->attr.sigtrap)
- atomic_set(&event->event_limit, 1);
-
if (task) {
event->attach_state = PERF_ATTACH_TASK;
/*
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -22,7 +22,7 @@ static void perf_output_wakeup(struct pe
atomic_set(&handle->rb->poll, EPOLLIN);

handle->event->pending_wakeup = 1;
- irq_work_queue(&handle->event->pending);
+ irq_work_queue(&handle->event->pending_irq);
}

/*

2022-10-11 13:16:41

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH v2] perf: Fix missing SIGTRAPs

On Tue, Oct 11, 2022 at 09:44AM +0200, Peter Zijlstra wrote:
> Subject: perf: Fix missing SIGTRAPs
> From: Peter Zijlstra <[email protected]>
> Date: Thu Oct 6 15:00:39 CEST 2022
>
> Marco reported:
>
> Due to the implementation of how SIGTRAP are delivered if
> perf_event_attr::sigtrap is set, we've noticed 3 issues:
>
> 1. Missing SIGTRAP due to a race with event_sched_out() (more
> details below).
>
> 2. Hardware PMU events being disabled due to returning 1 from
> perf_event_overflow(). The only way to re-enable the event is
> for user space to first "properly" disable the event and then
> re-enable it.
>
> 3. The inability to automatically disable an event after a
> specified number of overflows via PERF_EVENT_IOC_REFRESH.
>
> The worst of the 3 issues is problem (1), which occurs when a
> pending_disable is "consumed" by a racing event_sched_out(), observed
> as follows:
>
> CPU0 | CPU1
> --------------------------------+---------------------------
> __perf_event_overflow() |
> perf_event_disable_inatomic() |
> pending_disable = CPU0 | ...
> | _perf_event_enable()
> | event_function_call()
> | task_function_call()
> | /* sends IPI to CPU0 */
> <IPI> | ...
> __perf_event_enable() +---------------------------
> ctx_resched()
> task_ctx_sched_out()
> ctx_sched_out()
> group_sched_out()
> event_sched_out()
> pending_disable = -1
> </IPI>
> <IRQ-work>
> perf_pending_event()
> perf_pending_event_disable()
> /* Fails to send SIGTRAP because no pending_disable! */
> </IRQ-work>
>
> In the above case, not only is that particular SIGTRAP missed, but also
> all future SIGTRAPs because 'event_limit' is not reset back to 1.
>
> To fix, rework pending delivery of SIGTRAP via IRQ-work by introduction
> of a separate 'pending_sigtrap', no longer using 'event_limit' and
> 'pending_disable' for its delivery.
>
> Additionally; and different to Marco's proposed patch:
>
> - recognise that pending_disable effectively duplicates oncpu for
> the case where it is set. As such, change the irq_work handler to
> use ->oncpu to target the event and use pending_* as boolean toggles.
>
> - observe that SIGTRAP targets the ctx->task, so the context switch
> optimization that carries contexts between tasks is invalid. If
> the irq_work were delayed enough to hit after a context switch the
> SIGTRAP would be delivered to the wrong task.
>
> - observe that if the event gets scheduled out
> (rotation/migration/context-switch/...) the irq-work would be
> insufficient to deliver the SIGTRAP when the event gets scheduled
> back in (the irq-work might still be pending on the old CPU).
>
> Therefore have event_sched_out() convert the pending sigtrap into a
> task_work which will deliver the signal at return_to_user.
>
> Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
> Reported-by: Marco Elver <[email protected]>
> Debugged-by: Marco Elver <[email protected]>

Reviewed-by: Marco Elver <[email protected]>
Tested-by: Marco Elver <[email protected]>

.. fuzzing, and lots of concurrent sigtrap_threads with this patch:

https://lore.kernel.org/all/[email protected]/
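
For readers without the selftest handy, a minimal perf_event_attr::sigtrap
user looks roughly like the sketch below (a hypothetical demo, not the
selftest itself; it assumes v5.13+ uapi headers and a libc whose siginfo_t
exposes si_perf_data -- otherwise stick to si_code/si_addr). A hardware
breakpoint on a variable delivers a synchronous SIGTRAP with
si_code == TRAP_PERF on every write:

	#include <linux/hw_breakpoint.h>
	#include <linux/perf_event.h>
	#include <signal.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static volatile int watched;
	static volatile sig_atomic_t traps;
	static volatile unsigned long long sig_data_seen;

	static void trap_handler(int sig, siginfo_t *info, void *uc)
	{
		traps++;
		sig_data_seen = info->si_perf_data;	/* == attr.sig_data */
	}

	int main(void)
	{
		struct sigaction sa = { .sa_sigaction = trap_handler,
					.sa_flags = SA_SIGINFO };
		struct perf_event_attr attr = { 0 };
		int fd;

		sigemptyset(&sa.sa_mask);
		sigaction(SIGTRAP, &sa, NULL);

		attr.type           = PERF_TYPE_BREAKPOINT;
		attr.size           = sizeof(attr);
		attr.sample_period  = 1;                /* every hit overflows */
		attr.bp_type        = HW_BREAKPOINT_W;
		attr.bp_addr        = (unsigned long)&watched;
		attr.bp_len         = HW_BREAKPOINT_LEN_4;
		attr.exclude_kernel = 1;                /* unprivileged-friendly */
		attr.remove_on_exec = 1;                /* required with sigtrap */
		attr.sigtrap        = 1;                /* SIGTRAP on overflow */
		attr.sig_data       = 42;               /* round-trips as si_perf_data */

		fd = syscall(__NR_perf_event_open, &attr, 0 /* this task */,
			     -1 /* any cpu */, -1 /* no group */, PERF_FLAG_FD_CLOEXEC);
		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}

		watched = 1;	/* the write triggers the breakpoint -> SIGTRAP */

		printf("traps=%d sig_data=%llu\n", (int)traps, sig_data_seen);
		close(fd);
		return 0;
	}

With concurrent enable/disable or context switches, a SIGTRAP from this
pattern could be lost pre-fix; with the patch it is delivered via task_work
on the next return to user space.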

> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>

My original patch also attributed Dmitry:

Reported-by: Dmitry Vyukov <[email protected]>
Debugged-by: Dmitry Vyukov <[email protected]>

... we all melted our brains on this one. :-)

Would be good to get the fix into one of the upcoming 6.1-rc.

Thanks!

-- Marco

2022-10-11 13:53:00

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v2] perf: Fix missing SIGTRAPs

On Tue, Oct 11, 2022 at 02:58:36PM +0200, Marco Elver wrote:
> On Tue, Oct 11, 2022 at 09:44AM +0200, Peter Zijlstra wrote:
> > Subject: perf: Fix missing SIGTRAPs
> > From: Peter Zijlstra <[email protected]>
> > Date: Thu Oct 6 15:00:39 CEST 2022
> >
> > Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
> > Reported-by: Marco Elver <[email protected]>
> > Debugged-by: Marco Elver <[email protected]>
>
> Reviewed-by: Marco Elver <[email protected]>
> Tested-by: Marco Elver <[email protected]>
>
> .. fuzzing, and lots of concurrent sigtrap_threads with this patch:
>
> https://lore.kernel.org/all/[email protected]/
>
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
>
> My original patch also attributed Dmitry:
>
> Reported-by: Dmitry Vyukov <[email protected]>
> Debugged-by: Dmitry Vyukov <[email protected]>
>
> ... we all melted our brains on this one. :-)
>
> Would be good to get the fix into one of the upcoming 6.1-rc.

Updated and yes, I'm planning on queueing this in perf/urgent the moment
-rc1 happens.

Thanks!