2021-12-13 23:22:31

by Namhyung Kim

Subject: [PATCH v3] perf/core: Fix cgroup event list management

The active cgroup events are managed in the per-cpu cgrp_cpuctx_list.
This list is only accessed from the current CPU and is not protected
by any lock. But since commit ef54c1a476ae ("perf: Rework
perf_event_exit_event()"), it has become possible to access (and
actually modify) the list from another CPU.
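
For reference, the consumer side looks roughly like this (a simplified
sketch of the perf_cgroup_switch() path, not the exact code):

    struct perf_cpu_context *cpuctx, *tmp;
    struct list_head *list;
    unsigned long flags;

    local_irq_save(flags);
    /* per-cpu list: no lock taken, only local IRQs disabled */
    list = this_cpu_ptr(&cgrp_cpuctx_list);
    list_for_each_entry_safe(cpuctx, tmp, list, cgrp_cpuctx_entry) {
            /* switch cgroup events in/out for this cpuctx */
    }
    local_irq_restore(flags);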

perf_remove_from_context() can remove an event from the context
without an IPI when the context is not active. This is not safe for
cgroup events, which can still be active in the context even while
ctx->is_active is 0. The target CPU might be in the middle of
iterating the list at the same time.

If the event is still enabled when it is about to be closed, it might
call perf_cgroup_event_disable() and list_del() on the
cgrp_cpuctx_list from a different CPU.

This resulted in a crash due to an invalid list pointer access during
the cgroup list traversal on the CPU to which the event belongs.
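
Roughly, the racy interleaving looks like this (simplified, with some
intermediate calls elided):

    CPU A (closing the event)            CPU B (owner of the list)
    -------------------------            -------------------------
    perf_event_release_kernel()
      perf_remove_from_context()
        /* ctx->is_active == 0 -> no IPI */
        __perf_remove_from_context()
          ...
          perf_cgroup_event_disable()
            list_del(&cpuctx->cgrp_cpuctx_entry)
                                         perf_cgroup_switch()
                                           list_for_each_entry_safe(...)
                                             /* stale/poisoned entry */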

Let's fall back to the IPI so that the cgrp_cpuctx_list is only ever
touched from its owning CPU. Similarly, perf_install_in_context()
should use an IPI for cgroup events too.

Cc: Marco Elver <[email protected]>
Signed-off-by: Namhyung Kim <[email protected]>
---
kernel/events/core.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 30d94f68c5bd..f4754e93d4ee 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2388,7 +2388,11 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
* event_function_call() user.
*/
raw_spin_lock_irq(&ctx->lock);
- if (!ctx->is_active) {
+ /*
+ * Cgroup events are per-cpu events, and must IPI because of
+ * cgrp_cpuctx_list.
+ */
+ if (!ctx->is_active && !is_cgroup_event(event)) {
__perf_remove_from_context(event, __get_cpu_context(ctx),
ctx, (void *)flags);
raw_spin_unlock_irq(&ctx->lock);
@@ -2857,11 +2861,14 @@ perf_install_in_context(struct perf_event_context *ctx,
* perf_event_attr::disabled events will not run and can be initialized
* without IPI. Except when this is the first event for the context, in
* that case we need the magic of the IPI to set ctx->is_active.
+	 * Similarly, cgroup events for the context also need the IPI to
+ * manipulate the cgrp_cpuctx_list.
*
* The IOC_ENABLE that is sure to follow the creation of a disabled
* event will issue the IPI and reprogram the hardware.
*/
- if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF && ctx->nr_events) {
+ if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF &&
+ ctx->nr_events && !is_cgroup_event(event)) {
raw_spin_lock_irq(&ctx->lock);
if (ctx->task == TASK_TOMBSTONE) {
raw_spin_unlock_irq(&ctx->lock);

base-commit: 73743c3b092277febbf69b250ce8ebbca0525aa2
--
2.34.1.173.g76aa8bc2d0-goog



2022-01-10 09:00:41

by Marco Elver

Subject: Re: [PATCH v3] perf/core: Fix cgroup event list management

On Tue, 14 Dec 2021 at 00:22, Namhyung Kim <[email protected]> wrote:
>
> The active cgroup events are managed in the per-cpu cgrp_cpuctx_list.
> This list is only accessed from the current CPU and is not protected
> by any lock. But since commit ef54c1a476ae ("perf: Rework
> perf_event_exit_event()"), it has become possible to access (and
> actually modify) the list from another CPU.
>
> perf_remove_from_context() can remove an event from the context
> without an IPI when the context is not active. This is not safe for
> cgroup events, which can still be active in the context even while
> ctx->is_active is 0. The target CPU might be in the middle of
> iterating the list at the same time.
>
> If the event is still enabled when it is about to be closed, it might
> call perf_cgroup_event_disable() and list_del() on the
> cgrp_cpuctx_list from a different CPU.
>
> This resulted in a crash due to an invalid list pointer access during
> the cgroup list traversal on the CPU to which the event belongs.
>
> Let's fall back to the IPI so that the cgrp_cpuctx_list is only ever
> touched from its owning CPU. Similarly, perf_install_in_context()
> should use an IPI for cgroup events too.
>
> Cc: Marco Elver <[email protected]>
> Signed-off-by: Namhyung Kim <[email protected]>

The final version needs:

Fixes: ef54c1a476ae ("perf: Rework perf_event_exit_event()")

so stable kernels will see it, unless this has already been picked up,
in which case we need to email stable.

Thanks,
-- Marco

2022-01-10 19:49:36

by Namhyung Kim

Subject: Re: [PATCH v3] perf/core: Fix cgroup event list management

Hi Marco,

On Mon, Jan 10, 2022 at 12:58 AM Marco Elver <[email protected]> wrote:
>
> On Tue, 14 Dec 2021 at 00:22, Namhyung Kim <[email protected]> wrote:
> >
> > The active cgroup events are managed in the per-cpu cgrp_cpuctx_list.
> > This list is only accessed from the current CPU and is not protected
> > by any lock. But since commit ef54c1a476ae ("perf: Rework
> > perf_event_exit_event()"), it has become possible to access (and
> > actually modify) the list from another CPU.
> >
> > perf_remove_from_context() can remove an event from the context
> > without an IPI when the context is not active. This is not safe for
> > cgroup events, which can still be active in the context even while
> > ctx->is_active is 0. The target CPU might be in the middle of
> > iterating the list at the same time.
> >
> > If the event is still enabled when it is about to be closed, it might
> > call perf_cgroup_event_disable() and list_del() on the
> > cgrp_cpuctx_list from a different CPU.
> >
> > This resulted in a crash due to an invalid list pointer access during
> > the cgroup list traversal on the CPU to which the event belongs.
> >
> > Let's fall back to the IPI so that the cgrp_cpuctx_list is only ever
> > touched from its owning CPU. Similarly, perf_install_in_context()
> > should use an IPI for cgroup events too.
> >
> > Cc: Marco Elver <[email protected]>
> > Signed-off-by: Namhyung Kim <[email protected]>
>
> The final version needs:
>
> Fixes: ef54c1a476ae ("perf: Rework perf_event_exit_event()")
>
> so stable kernels will see it, unless this has already been picked up,
> in which case we need to email stable.

Right, it should go to the stable tree.

Peter, do you want me to resend a new version?

Thanks,
Namhyung