2021-09-24 01:30:00

by Song Liu

Subject: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events

Users of rdpmc rely on the mmapped user page to calculate accurate
time_enabled. Currently, userpage->time_enabled is only updated when the
event is added to the pmu. As a result, an inactive event (due to counter
multiplexing) does not have an accurate userpage->time_enabled. This can
be reproduced with something like:

/* open 20 task perf_event "cycles", to create multiplexing */

fd = perf_event_open(); /* open task perf_event "cycles" */
userpage = mmap(fd); /* use mmap and rdpmc */

while (true) {
	time_enabled_mmap = xxx; /* use logic in perf_event_mmap_page */
	time_enabled_read = read(fd).time_enabled;
	if (time_enabled_mmap > time_enabled_read)
		BUG();
}
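
The "xxx" above corresponds to the mmap-page read sequence documented in
include/uapi/linux/perf_event.h. A minimal sketch of just the
time_enabled side, assuming x86 (__rdtsc for the timestamp counter) and
using __sync_synchronize() for the documented barrier(); the helper name
time_enabled_mmap() is illustrative and error handling is omitted:

#include <linux/perf_event.h>
#include <stdint.h>
#include <x86intrin.h>

/* Read an up-to-date time_enabled from the mmapped user page,
 * following the seqlock-style protocol documented in
 * include/uapi/linux/perf_event.h. */
static uint64_t time_enabled_mmap(volatile struct perf_event_mmap_page *pc)
{
	uint64_t enabled, cyc, quot, rem, delta;
	uint32_t seq;

	do {
		seq = pc->lock;
		__sync_synchronize();		/* barrier() */

		enabled = pc->time_enabled;
		if (pc->cap_user_time) {
			/* extrapolate from the last kernel update to now */
			cyc   = __rdtsc();
			quot  = cyc >> pc->time_shift;
			rem   = cyc & (((uint64_t)1 << pc->time_shift) - 1);
			delta = pc->time_offset + quot * pc->time_mult +
				((rem * pc->time_mult) >> pc->time_shift);
			enabled += delta;
		}

		__sync_synchronize();		/* barrier() */
	} while (pc->lock != seq);		/* page changed mid-read; retry */

	return enabled;
}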

Fix this by updating userpage for inactive events in merge_sched_in.

Suggested-by: Peter Zijlstra (Intel) <[email protected]>
Reported-and-tested-by: Lucian Grijincu <[email protected]>
Signed-off-by: Song Liu <[email protected]>
---
include/linux/perf_event.h | 4 +++-
kernel/events/core.c | 49 ++++++++++++++++++++++++++++++++++----
2 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 2d510ad750edc..4aa52f7a48c16 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -683,7 +683,9 @@ struct perf_event {
 	/*
 	 * timestamp shadows the actual context timing but it can
 	 * be safely used in NMI interrupt context. It reflects the
-	 * context time as it was when the event was last scheduled in.
+	 * context time as it was when the event was last scheduled in,
+	 * or when ctx_sched_in failed to schedule the event because we
+	 * ran out of PMCs.
 	 *
 	 * ctx_time already accounts for ctx->timestamp. Therefore to
 	 * compute ctx_time for a sample, simply add perf_clock().
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1cb1f9b8392e2..d73f986eef7b3 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
 	return 0;
 }

+static inline bool event_update_userpage(struct perf_event *event)
+{
+	/*
+	 * Checking mmap_count to avoid unnecessary work. This does leave a
+	 * corner case: if the event is enabled before mmap(), the first
+	 * time the event gets scheduled is via:
+	 *
+	 *   __perf_event_enable (or __perf_install_in_context)
+	 *     -> ctx_resched
+	 *       -> perf_event_sched_in
+	 *         -> ctx_sched_in
+	 *
+	 * with mmap_count of 0, so we will skip here. As a result,
+	 * userpage->offset is not accurate after mmap and before the
+	 * first rotation.
+	 *
+	 * To avoid this window of discrepancy, user space should mmap
+	 * the event before enabling it.
+	 */
+	if (likely(!atomic_read(&event->mmap_count)))
+		return false;
+
+	perf_event_update_time(event);
+	perf_set_shadow_time(event, event->ctx);
+	perf_event_update_userpage(event);
+
+	return true;
+}
+
+static inline void group_update_userpage(struct perf_event *group_event)
+{
+	struct perf_event *event;
+
+	if (!event_update_userpage(group_event))
+		return;
+
+	for_each_sibling_event(event, group_event)
+		event_update_userpage(event);
+}
+
 static int merge_sched_in(struct perf_event *event, void *data)
 {
 	struct perf_event_context *ctx = event->ctx;
@@ -3725,14 +3765,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
 	}
 
 	if (event->state == PERF_EVENT_STATE_INACTIVE) {
+		*can_add_hw = 0;
 		if (event->attr.pinned) {
 			perf_cgroup_event_disable(event, ctx);
 			perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+		} else {
+			ctx->rotate_necessary = 1;
+			perf_mux_hrtimer_restart(cpuctx);
+			group_update_userpage(event);
 		}
-
-		*can_add_hw = 0;
-		ctx->rotate_necessary = 1;
-		perf_mux_hrtimer_restart(cpuctx);
 	}
 
 	return 0;
--
2.30.2
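
As the comment in event_update_userpage() above notes, the remaining
window only exists when an event is enabled before it is mmapped. The
ordering recommended there (open the event disabled, mmap it, then
enable it) could look like the sketch below. This is a hedged example:
glibc provides no perf_event_open() wrapper, so the raw syscall is used;
the single-page mapping and attribute values are illustrative, and error
handling is omitted.

#include <linux/perf_event.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	void *userpage;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;		/* 1. create the event disabled */

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

	/* 2. mmap the user page first (just the control page here) */
	userpage = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ,
			MAP_SHARED, fd, 0);

	/* 3. only then enable, so mmap_count is already non-zero the
	 * first time the event is scheduled in */
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	(void)userpage;
	return 0;
}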


2021-09-27 16:35:21

by Song Liu

Subject: Re: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events

Hi Peter,

> On Sep 23, 2021, at 6:28 PM, Song Liu <[email protected]> wrote:
>
> Users of rdpmc rely on the mmapped user page to calculate accurate
> time_enabled. Currently, userpage->time_enabled is only updated when the
> event is added to the pmu. As a result, an inactive event (due to counter
> multiplexing) does not have an accurate userpage->time_enabled. This can
> be reproduced with something like:
>
> /* open 20 task perf_event "cycles", to create multiplexing */
>
> fd = perf_event_open(); /* open task perf_event "cycles" */
> userpage = mmap(fd); /* use mmap and rdpmc */
>
> while (true) {
> time_enabled_mmap = xxx; /* use logic in perf_event_mmap_page */
> time_enabled_read = read(fd).time_enabled;
> if (time_enabled_mmap > time_enabled_read)
> BUG();
> }
>
> Fix this by updating userpage for inactive events in merge_sched_in.
>
> Suggested-by: Peter Zijlstra (Intel) <[email protected]>
> Reported-and-tested-by: Lucian Grijincu <[email protected]>
> Signed-off-by: Song Liu <[email protected]>

Could you please share your comments on this version?

Thanks,
Song

> ---
> include/linux/perf_event.h | 4 +++-
> kernel/events/core.c | 49 ++++++++++++++++++++++++++++++++++----
> 2 files changed, 48 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 2d510ad750edc..4aa52f7a48c16 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -683,7 +683,9 @@ struct perf_event {
> /*
> * timestamp shadows the actual context timing but it can
> * be safely used in NMI interrupt context. It reflects the
> - * context time as it was when the event was last scheduled in.
> + * context time as it was when the event was last scheduled in,
> + * or when ctx_sched_in failed to schedule the event because we
> + * ran out of PMCs.
> *
> * ctx_time already accounts for ctx->timestamp. Therefore to
> * compute ctx_time for a sample, simply add perf_clock().
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 1cb1f9b8392e2..d73f986eef7b3 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
> return 0;
> }
>
> +static inline bool event_update_userpage(struct perf_event *event)
> +{
> + /*
> + * Checking mmap_count to avoid unnecessary work. This does leave a
> + * corner case: if the event is enabled before mmap(), the first
> + * time the event gets scheduled is via:
> + *
> + * __perf_event_enable (or __perf_install_in_context)
> + * -> ctx_resched
> + * -> perf_event_sched_in
> + * -> ctx_sched_in
> + *
> + * with mmap_count of 0, so we will skip here. As a result,
> + * userpage->offset is not accurate after mmap and before the
> + * first rotation.
> + *
> + * To avoid this window of discrepancy, user space should mmap
> + * the event before enabling it.
> + */
> + if (likely(!atomic_read(&event->mmap_count)))
> + return false;
> +
> + perf_event_update_time(event);
> + perf_set_shadow_time(event, event->ctx);
> + perf_event_update_userpage(event);
> +
> + return true;
> +}
> +
> +static inline void group_update_userpage(struct perf_event *group_event)
> +{
> + struct perf_event *event;
> +
> + if (!event_update_userpage(group_event))
> + return;
> +
> + for_each_sibling_event(event, group_event)
> + event_update_userpage(event);
> +}
> +
> static int merge_sched_in(struct perf_event *event, void *data)
> {
> struct perf_event_context *ctx = event->ctx;
> @@ -3725,14 +3765,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
> }
>
> if (event->state == PERF_EVENT_STATE_INACTIVE) {
> + *can_add_hw = 0;
> if (event->attr.pinned) {
> perf_cgroup_event_disable(event, ctx);
> perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
> + } else {
> + ctx->rotate_necessary = 1;
> + perf_mux_hrtimer_restart(cpuctx);
> + group_update_userpage(event);
> }
> -
> - *can_add_hw = 0;
> - ctx->rotate_necessary = 1;
> - perf_mux_hrtimer_restart(cpuctx);
> }
>
> return 0;
> --
> 2.30.2
>

2021-09-29 09:21:13

by Peter Zijlstra

Subject: Re: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events

On Thu, Sep 23, 2021 at 06:28:00PM -0700, Song Liu wrote:

> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 1cb1f9b8392e2..d73f986eef7b3 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
> return 0;
> }
>
> +static inline bool event_update_userpage(struct perf_event *event)
> +{
> + /*
> + * Checking mmap_count to avoid unnecessary work. This does leave a
> + * corner case: if the event is enabled before mmap(), the first
> + * time the event gets scheduled is via:
> + *
> + * __perf_event_enable (or __perf_install_in_context)
> + * -> ctx_resched
> + * -> perf_event_sched_in
> + * -> ctx_sched_in
> + *
> + * with mmap_count of 0, so we will skip here. As a result,
> + * userpage->offset is not accurate after mmap and before the
> + * first rotation.
> + *
> + * To avoid this window of discrepancy, user space should mmap
> + * the event before enabling it.
> + */

It seems to me that writing that comment was more work than actually
fixing perf_mmap() to DTRT, no? AFAICT all we need is something like:

diff --git a/kernel/events/core.c b/kernel/events/core.c
index fd2ae70fa6c4..1e33c2e6b0dc 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6324,6 +6324,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 
 		ring_buffer_attach(event, rb);
 
+		perf_event_update_time(event);
+		perf_set_shadow_time(event, event->ctx);
 		perf_event_init_userpage(event);
 		perf_event_update_userpage(event);
 	} else {

2021-09-29 17:51:37

by Athira Rajeev

Subject: Re: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events



> On 24-Sep-2021, at 6:58 AM, Song Liu <[email protected]> wrote:
>
> Users of rdpmc rely on the mmapped user page to calculate accurate
> time_enabled. Currently, userpage->time_enabled is only updated when the
> event is added to the pmu. As a result, an inactive event (due to counter
> multiplexing) does not have an accurate userpage->time_enabled. This can
> be reproduced with something like:
>
> /* open 20 task perf_event "cycles", to create multiplexing */
>
> fd = perf_event_open(); /* open task perf_event "cycles" */
> userpage = mmap(fd); /* use mmap and rdpmc */
>
> while (true) {
> time_enabled_mmap = xxx; /* use logic in perf_event_mmap_page */
> time_enabled_read = read(fd).time_enabled;
> if (time_enabled_mmap > time_enabled_read)
> BUG();
> }
>
> Fix this by updating userpage for inactive events in merge_sched_in.
>
> Suggested-by: Peter Zijlstra (Intel) <[email protected]>
> Reported-and-tested-by: Lucian Grijincu <[email protected]>
> Signed-off-by: Song Liu <[email protected]>
> ---
> include/linux/perf_event.h | 4 +++-
> kernel/events/core.c | 49 ++++++++++++++++++++++++++++++++++----
> 2 files changed, 48 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 2d510ad750edc..4aa52f7a48c16 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -683,7 +683,9 @@ struct perf_event {
> /*
> * timestamp shadows the actual context timing but it can
> * be safely used in NMI interrupt context. It reflects the
> - * context time as it was when the event was last scheduled in.
> + * context time as it was when the event was last scheduled in,
> + * or when ctx_sched_in failed to schedule the event because we
> + * ran out of PMCs.
> *
> * ctx_time already accounts for ctx->timestamp. Therefore to
> * compute ctx_time for a sample, simply add perf_clock().
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 1cb1f9b8392e2..d73f986eef7b3 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
> return 0;
> }
>
> +static inline bool event_update_userpage(struct perf_event *event)
> +{
> + /*
> + * Checking mmap_count to avoid unnecessary work. This does leave a
> + * corner case: if the event is enabled before mmap(), the first
> + * time the event gets scheduled is via:
> + *
> + * __perf_event_enable (or __perf_install_in_context)
> + * -> ctx_resched
> + * -> perf_event_sched_in
> + * -> ctx_sched_in
> + *
> + * with mmap_count of 0, so we will skip here. As a result,
> + * userpage->offset is not accurate after mmap and before the
> + * first rotation.
> + *
> + * To avoid this window of discrepancy, user space should mmap
> + * the event before enabling it.
> + */
> + if (likely(!atomic_read(&event->mmap_count)))
> + return false;
> +
> + perf_event_update_time(event);
> + perf_set_shadow_time(event, event->ctx);
> + perf_event_update_userpage(event);
> +
> + return true;
> +}
> +
> +static inline void group_update_userpage(struct perf_event *group_event)
> +{
> + struct perf_event *event;
> +
> + if (!event_update_userpage(group_event))
> + return;
> +
> + for_each_sibling_event(event, group_event)
> + event_update_userpage(event);
> +}
> +
> static int merge_sched_in(struct perf_event *event, void *data)
> {
> struct perf_event_context *ctx = event->ctx;
> @@ -3725,14 +3765,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
> }
>
> if (event->state == PERF_EVENT_STATE_INACTIVE) {
> + *can_add_hw = 0;
> if (event->attr.pinned) {
> perf_cgroup_event_disable(event, ctx);
> perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
> + } else {
> + ctx->rotate_necessary = 1;
> + perf_mux_hrtimer_restart(cpuctx);
> + group_update_userpage(event);
> }
> -
> - *can_add_hw = 0;
> - ctx->rotate_necessary = 1;
> - perf_mux_hrtimer_restart(cpuctx);
> }

Another optimisation that is possible in merge_sched_in: we can avoid calling "perf_mux_hrtimer_restart" multiple times if rotate_necessary is already set for that context. Even though "perf_mux_hrtimer_restart" just returns if the hrtimer is already active, we could avoid the overhead of calling this function repeatedly when there are many groups.

Something like below:

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 0c000cb01eeb..26eae79bd723 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3731,8 +3731,10 @@ static int merge_sched_in(struct perf_event *event, void *data)
 		}
 
 		*can_add_hw = 0;
-		ctx->rotate_necessary = 1;
-		perf_mux_hrtimer_restart(cpuctx);
+		if (!ctx->rotate_necessary) {
+			ctx->rotate_necessary = 1;
+			perf_mux_hrtimer_restart(cpuctx);
+		}
 	}
 
 	return 0;


Thanks,
Athira Rajeev

>
> return 0;
> --
> 2.30.2
>

2021-09-29 19:54:35

by Song Liu

Subject: Re: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events



> On Sep 29, 2021, at 10:04 AM, Athira Rajeev <[email protected]> wrote:
>
>
>
>> On 24-Sep-2021, at 6:58 AM, Song Liu <[email protected]> wrote:
>>
>> Users of rdpmc rely on the mmapped user page to calculate accurate
>> time_enabled. Currently, userpage->time_enabled is only updated when the
>> event is added to the pmu. As a result, an inactive event (due to counter
>> multiplexing) does not have an accurate userpage->time_enabled. This can
>> be reproduced with something like:
>>
>> /* open 20 task perf_event "cycles", to create multiplexing */
>>
>> fd = perf_event_open(); /* open task perf_event "cycles" */
>> userpage = mmap(fd); /* use mmap and rdpmc */
>>
>> while (true) {
>> time_enabled_mmap = xxx; /* use logic in perf_event_mmap_page */
>> time_enabled_read = read(fd).time_enabled;
>> if (time_enabled_mmap > time_enabled_read)
>> BUG();
>> }
>>
>> Fix this by updating userpage for inactive events in merge_sched_in.
>>
>> Suggested-by: Peter Zijlstra (Intel) <[email protected]>
>> Reported-and-tested-by: Lucian Grijincu <[email protected]>
>> Signed-off-by: Song Liu <[email protected]>
>> ---
>> include/linux/perf_event.h | 4 +++-
>> kernel/events/core.c | 49 ++++++++++++++++++++++++++++++++++----
>> 2 files changed, 48 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 2d510ad750edc..4aa52f7a48c16 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -683,7 +683,9 @@ struct perf_event {
>> /*
>> * timestamp shadows the actual context timing but it can
>> * be safely used in NMI interrupt context. It reflects the
>> - * context time as it was when the event was last scheduled in.
>> + * context time as it was when the event was last scheduled in,
>> + * or when ctx_sched_in failed to schedule the event because we
>> + * ran out of PMCs.
>> *
>> * ctx_time already accounts for ctx->timestamp. Therefore to
>> * compute ctx_time for a sample, simply add perf_clock().
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 1cb1f9b8392e2..d73f986eef7b3 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
>> return 0;
>> }
>>
>> +static inline bool event_update_userpage(struct perf_event *event)
>> +{
>> + /*
>> + * Checking mmap_count to avoid unnecessary work. This does leave a
>> + * corner case: if the event is enabled before mmap(), the first
>> + * time the event gets scheduled is via:
>> + *
>> + * __perf_event_enable (or __perf_install_in_context)
>> + * -> ctx_resched
>> + * -> perf_event_sched_in
>> + * -> ctx_sched_in
>> + *
>> + * with mmap_count of 0, so we will skip here. As a result,
>> + * userpage->offset is not accurate after mmap and before the
>> + * first rotation.
>> + *
>> + * To avoid this window of discrepancy, user space should mmap
>> + * the event before enabling it.
>> + */
>> + if (likely(!atomic_read(&event->mmap_count)))
>> + return false;
>> +
>> + perf_event_update_time(event);
>> + perf_set_shadow_time(event, event->ctx);
>> + perf_event_update_userpage(event);
>> +
>> + return true;
>> +}
>> +
>> +static inline void group_update_userpage(struct perf_event *group_event)
>> +{
>> + struct perf_event *event;
>> +
>> + if (!event_update_userpage(group_event))
>> + return;
>> +
>> + for_each_sibling_event(event, group_event)
>> + event_update_userpage(event);
>> +}
>> +
>> static int merge_sched_in(struct perf_event *event, void *data)
>> {
>> struct perf_event_context *ctx = event->ctx;
>> @@ -3725,14 +3765,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
>> }
>>
>> if (event->state == PERF_EVENT_STATE_INACTIVE) {
>> + *can_add_hw = 0;
>> if (event->attr.pinned) {
>> perf_cgroup_event_disable(event, ctx);
>> perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
>> + } else {
>> + ctx->rotate_necessary = 1;
>> + perf_mux_hrtimer_restart(cpuctx);
>> + group_update_userpage(event);
>> }
>> -
>> - *can_add_hw = 0;
>> - ctx->rotate_necessary = 1;
>> - perf_mux_hrtimer_restart(cpuctx);
>> }
>
> Another optimisation that is possible in merge_sched_in: we can avoid calling "perf_mux_hrtimer_restart" multiple times if rotate_necessary is already set for that context. Even though "perf_mux_hrtimer_restart" just returns if the hrtimer is already active, we could avoid the overhead of calling this function repeatedly when there are many groups.
>
> Something like below:
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 0c000cb01eeb..26eae79bd723 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3731,8 +3731,10 @@ static int merge_sched_in(struct perf_event *event, void *data)
> }
>
> *can_add_hw = 0;
> - ctx->rotate_necessary = 1;
> - perf_mux_hrtimer_restart(cpuctx);
> + if (!ctx->rotate_necessary) {
> + ctx->rotate_necessary = 1;
> + perf_mux_hrtimer_restart(cpuctx);
> + }
> }
>
> return 0;

Yeah, this makes sense. Do you plan to send a patch for this?

Thanks,
Song

2021-09-29 19:54:45

by Song Liu

Subject: Re: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events



> On Sep 29, 2021, at 2:18 AM, Peter Zijlstra <[email protected]> wrote:
>
> On Thu, Sep 23, 2021 at 06:28:00PM -0700, Song Liu wrote:
>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 1cb1f9b8392e2..d73f986eef7b3 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
>> return 0;
>> }
>>
>> +static inline bool event_update_userpage(struct perf_event *event)
>> +{
>> + /*
>> + * Checking mmap_count to avoid unnecessary work. This does leave a
>> + * corner case: if the event is enabled before mmap(), the first
>> + * time the event gets scheduled is via:
>> + *
>> + * __perf_event_enable (or __perf_install_in_context)
>> + * -> ctx_resched
>> + * -> perf_event_sched_in
>> + * -> ctx_sched_in
>> + *
>> + * with mmap_count of 0, so we will skip here. As a result,
>> + * userpage->offset is not accurate after mmap and before the
>> + * first rotation.
>> + *
>> + * To avoid this window of discrepancy, user space should mmap
>> + * the event before enabling it.
>> + */
>
> It seems to me that writing that comment was more work than actually
> fixing perf_mmap() to DTRT, no? AFAICT all we need is something like:
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index fd2ae70fa6c4..1e33c2e6b0dc 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6324,6 +6324,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>
> ring_buffer_attach(event, rb);
>
> + perf_event_update_time(event);
> + perf_set_shadow_time(event, event->ctx);
> perf_event_init_userpage(event);
> perf_event_update_userpage(event);
> } else {

Yeah, this does work. I will fold this in v3.

Thanks,
Song

2021-09-30 04:36:18

by Athira Rajeev

Subject: Re: [PATCH v2] perf/core: fix userpage->time_enabled of inactive events



> On 30-Sep-2021, at 1:11 AM, Song Liu <[email protected]> wrote:
>
>
>
>> On Sep 29, 2021, at 10:04 AM, Athira Rajeev <[email protected]> wrote:
>>
>>
>>
>>> On 24-Sep-2021, at 6:58 AM, Song Liu <[email protected]> wrote:
>>>
>>> Users of rdpmc rely on the mmapped user page to calculate accurate
>>> time_enabled. Currently, userpage->time_enabled is only updated when the
>>> event is added to the pmu. As a result, an inactive event (due to counter
>>> multiplexing) does not have an accurate userpage->time_enabled. This can
>>> be reproduced with something like:
>>>
>>> /* open 20 task perf_event "cycles", to create multiplexing */
>>>
>>> fd = perf_event_open(); /* open task perf_event "cycles" */
>>> userpage = mmap(fd); /* use mmap and rdpmc */
>>>
>>> while (true) {
>>> time_enabled_mmap = xxx; /* use logic in perf_event_mmap_page */
>>> time_enabled_read = read(fd).time_enabled;
>>> if (time_enabled_mmap > time_enabled_read)
>>> BUG();
>>> }
>>>
>>> Fix this by updating userpage for inactive events in merge_sched_in.
>>>
>>> Suggested-by: Peter Zijlstra (Intel) <[email protected]>
>>> Reported-and-tested-by: Lucian Grijincu <[email protected]>
>>> Signed-off-by: Song Liu <[email protected]>
>>> ---
>>> include/linux/perf_event.h | 4 +++-
>>> kernel/events/core.c | 49 ++++++++++++++++++++++++++++++++++----
>>> 2 files changed, 48 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>>> index 2d510ad750edc..4aa52f7a48c16 100644
>>> --- a/include/linux/perf_event.h
>>> +++ b/include/linux/perf_event.h
>>> @@ -683,7 +683,9 @@ struct perf_event {
>>> /*
>>> * timestamp shadows the actual context timing but it can
>>> * be safely used in NMI interrupt context. It reflects the
>>> - * context time as it was when the event was last scheduled in.
>>> + * context time as it was when the event was last scheduled in,
>>> + * or when ctx_sched_in failed to schedule the event because we
>>> + * ran out of PMCs.
>>> *
>>> * ctx_time already accounts for ctx->timestamp. Therefore to
>>> * compute ctx_time for a sample, simply add perf_clock().
>>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>>> index 1cb1f9b8392e2..d73f986eef7b3 100644
>>> --- a/kernel/events/core.c
>>> +++ b/kernel/events/core.c
>>> @@ -3707,6 +3707,46 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
>>> return 0;
>>> }
>>>
>>> +static inline bool event_update_userpage(struct perf_event *event)
>>> +{
>>> + /*
>>> + * Checking mmap_count to avoid unnecessary work. This does leave a
>>> + * corner case: if the event is enabled before mmap(), the first
>>> + * time the event gets scheduled is via:
>>> + *
>>> + * __perf_event_enable (or __perf_install_in_context)
>>> + * -> ctx_resched
>>> + * -> perf_event_sched_in
>>> + * -> ctx_sched_in
>>> + *
>>> + * with mmap_count of 0, so we will skip here. As a result,
>>> + * userpage->offset is not accurate after mmap and before the
>>> + * first rotation.
>>> + *
>>> + * To avoid this window of discrepancy, user space should mmap
>>> + * the event before enabling it.
>>> + */
>>> + if (likely(!atomic_read(&event->mmap_count)))
>>> + return false;
>>> +
>>> + perf_event_update_time(event);
>>> + perf_set_shadow_time(event, event->ctx);
>>> + perf_event_update_userpage(event);
>>> +
>>> + return true;
>>> +}
>>> +
>>> +static inline void group_update_userpage(struct perf_event *group_event)
>>> +{
>>> + struct perf_event *event;
>>> +
>>> + if (!event_update_userpage(group_event))
>>> + return;
>>> +
>>> + for_each_sibling_event(event, group_event)
>>> + event_update_userpage(event);
>>> +}
>>> +
>>> static int merge_sched_in(struct perf_event *event, void *data)
>>> {
>>> struct perf_event_context *ctx = event->ctx;
>>> @@ -3725,14 +3765,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
>>> }
>>>
>>> if (event->state == PERF_EVENT_STATE_INACTIVE) {
>>> + *can_add_hw = 0;
>>> if (event->attr.pinned) {
>>> perf_cgroup_event_disable(event, ctx);
>>> perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
>>> + } else {
>>> + ctx->rotate_necessary = 1;
>>> + perf_mux_hrtimer_restart(cpuctx);
>>> + group_update_userpage(event);
>>> }
>>> -
>>> - *can_add_hw = 0;
>>> - ctx->rotate_necessary = 1;
>>> - perf_mux_hrtimer_restart(cpuctx);
>>> }
>>
>> Another optimisation that is possible in merge_sched_in: we can avoid calling "perf_mux_hrtimer_restart" multiple times if rotate_necessary is already set for that context. Even though "perf_mux_hrtimer_restart" just returns if the hrtimer is already active, we could avoid the overhead of calling this function repeatedly when there are many groups.
>>
>> Something like below:
>>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 0c000cb01eeb..26eae79bd723 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3731,8 +3731,10 @@ static int merge_sched_in(struct perf_event *event, void *data)
>> }
>>
>> *can_add_hw = 0;
>> - ctx->rotate_necessary = 1;
>> - perf_mux_hrtimer_restart(cpuctx);
>> + if (!ctx->rotate_necessary) {
>> + ctx->rotate_necessary = 1;
>> + perf_mux_hrtimer_restart(cpuctx);
>> + }
>> }
>>
>> return 0;
>
> Yeah, this makes sense. Do you plan to send a patch for this?

Yes, I will send a separate patch for this change right away.

Thanks
Athira
>
> Thanks,
> Song