2009-09-22 08:47:54

by Xiao Guangrong

Subject: [PATCH] perf_counter: cleanup for __perf_event_sched_in()

It must be a group leader if event->attr.pinned is "1"

Signed-off-by: Xiao Guangrong <[email protected]>
---
kernel/perf_event.c | 11 +++++------
1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..fdd9c94 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1258,12 +1258,11 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader)
- event_sched_in(event, cpuctx, ctx, cpu);
- else {
- if (group_can_go_on(event, cpuctx, 1))
- group_sched_in(event, cpuctx, ctx, cpu);
- }
+ /* Only a group leader can be pinned */
+ BUG_ON(event != event->group_leader);
+
+ if (group_can_go_on(event, cpuctx, 1))
+ group_sched_in(event, cpuctx, ctx, cpu);

/*
* If this pinned group hasn't been scheduled,
--
1.6.1.2


2009-09-22 09:20:38

by Paul Mackerras

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()

Xiao Guangrong writes:

> It must be a group leader if event->attr.pinned is "1"

True, but you shouldn't use BUG_ON unless there is no sensible way for
the kernel to continue executing, and that's not the case here. Make
it WARN_ON, or better still, WARN_ON_ONCE.

Paul.

2009-09-22 09:28:38

by Xiao Guangrong

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()



Paul Mackerras wrote:
> Xiao Guangrong writes:
>
>> It must be a group leader if event->attr.pinned is "1"
>
> True, but you shouldn't use BUG_ON unless there is no sensible way for
> the kernel to continue executing, and that's not the case here. Make
> it WARN_ON, or better still, WARN_ON_ONCE.
>

Yeah, thanks for pointing that out, I'll fix it soon.

Thanks,
Xiao

2009-09-22 09:33:48

by Xiao Guangrong

Subject: [PATCH v2] perf_counter: cleanup for __perf_event_sched_in()

It must be a group leader if event->attr.pinned is "1"

Changelog:
Use WARN_ON_ONCE() instead of BUG_ON(), per Paul Mackerras's suggestion

Signed-off-by: Xiao Guangrong <[email protected]>
---
kernel/perf_event.c | 11 +++++------
1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..dc3221b 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1258,12 +1258,11 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader)
- event_sched_in(event, cpuctx, ctx, cpu);
- else {
- if (group_can_go_on(event, cpuctx, 1))
- group_sched_in(event, cpuctx, ctx, cpu);
- }
+ /* Only a group leader can be pinned */
+ WARN_ON_ONCE(event != event->group_leader);
+
+ if (group_can_go_on(event, cpuctx, 1))
+ group_sched_in(event, cpuctx, ctx, cpu);

/*
* If this pinned group hasn't been scheduled,
--
1.6.1.2

2009-09-22 09:39:39

by Paul Mackerras

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()

Xiao Guangrong writes:

> It must be a group leader if event->attr.pinned is "1"

Actually, looking at this more closely, it has to be a group leader
anyway since it's at the top level of ctx->group_list. In fact I see
four places where we do:

list_for_each_entry(event, &ctx->group_list, group_entry) {
if (event == event->group_leader)
...

or the equivalent, three of which appear to have been introduced by
afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
May by Peter Z.

As far as I can see the if () is superfluous in each case (a singleton
event will be a group of 1 and will have its group_leader pointing to
itself). Peter, do you agree or have I missed something?
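
As a minimal sketch, taking the pinned sched-in loop as an example, that
reasoning would let each of those sites collapse to the leader-only form
(essentially what the later patches in this thread end up doing):

	list_for_each_entry(event, &ctx->group_list, group_entry) {
		/* every entry on ctx->group_list is a group leader */
		if (group_can_go_on(event, cpuctx, 1))
			group_sched_in(event, cpuctx, ctx, cpu);
	}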

Paul.

2009-09-22 11:12:41

by Peter Zijlstra

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()

On Tue, 2009-09-22 at 16:47 +0800, Xiao Guangrong wrote:
> It must be a group leader if event->attr.pinned is "1"

Since we already enforce that on counter creation, this seems OK.

Thanks

> Signed-off-by: Xiao Guangrong <[email protected]>
> ---
> kernel/perf_event.c | 11 +++++------
> 1 files changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 76ac4db..fdd9c94 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -1258,12 +1258,11 @@ __perf_event_sched_in(struct perf_event_context *ctx,
> if (event->cpu != -1 && event->cpu != cpu)
> continue;
>
> - if (event != event->group_leader)
> - event_sched_in(event, cpuctx, ctx, cpu);
> - else {
> - if (group_can_go_on(event, cpuctx, 1))
> - group_sched_in(event, cpuctx, ctx, cpu);
> - }
> + /* Only a group leader can be pinned */
> + BUG_ON(event != event->group_leader);
> +
> + if (group_can_go_on(event, cpuctx, 1))
> + group_sched_in(event, cpuctx, ctx, cpu);
>
> /*
> * If this pinned group hasn't been scheduled,

2009-09-22 11:17:13

by Peter Zijlstra

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()

On Tue, 2009-09-22 at 19:39 +1000, Paul Mackerras wrote:
> Xiao Guangrong writes:
>
> > It must be a group leader if event->attr.pinned is "1"
>
> Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list. In fact I see
> four places where we do:
>
> list_for_each_entry(event, &ctx->group_list, group_entry) {
> if (event == event->group_leader)
> ...
>
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
>
> As far as I can see the if () is superfluous in each case (a singleton
> event will be a group of 1 and will have its group_leader pointing to
> itself). Peter, do you agree or have I missed something?

/me kicks those neurons back to work..

Ah, yes, I think you're right, the second hunk of afedadf2 is a
pessimisation due to the extra branch.

2009-09-23 02:45:57

by Xiao Guangrong

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()



Paul Mackerras wrote:
> Xiao Guangrong writes:
>
>> It must be a group leader if event->attr.pinned is "1"
>
> Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list. In fact I see
> four places where we do:
>
> list_for_each_entry(event, &ctx->group_list, group_entry) {
> if (event == event->group_leader)
> ...
>
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
>

I can only find three places in __perf_event_sched_in/out; could you tell me
where the fourth place is?

I also noticed that all group leaders are at the top level of ctx->group_list;
unless I've missed something, the perf_event_init_task() function can be
optimized like this:

int perf_event_init_task(struct task_struct *child)
{
......
/* We can only look at parent_ctx->group_list to get group leader */
list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
if (event != event->group_leader)
continue;
......
}
......
}

I'll fix those if you don't mind :-)

Thanks,
Xiao

2009-09-23 03:32:59

by Paul Mackerras

Subject: Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()

Xiao Guangrong writes:

> I can only find three places in __perf_event_sched_in/out; could you tell me
> where the fourth place is?

My mistake, it is just those three. I saw the list_for_each_entry_rcu
followed by if (event != event->group_leader) in perf_event_init_task
and missed the fact that it is iterating parent_ctx->event_list rather
than parent_ctx->group_list. But as you point out:

> I also noticed that all group leaders are at the top level of ctx->group_list;
> unless I've missed something, the perf_event_init_task() function can be
> optimized like this:
>
> int perf_event_init_task(struct task_struct *child)
> {
> ......
> /* We can only look at parent_ctx->group_list to get group leader */
> list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
> if (event != event->group_leader)
> continue;
> ......
> }
> ......
> }

we would in fact be better off using group_list rather than event_list
anyway. That should be safe since we hold parent_ctx->mutex.
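
A minimal sketch of that variant (it is what the follow-up patch ends up doing):

	list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
		/* every entry here is a group leader, so no filter is needed */
		......
	}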

> I'll fix those if you don't mind :-)

Please do. :)

Paul.

2009-09-23 08:11:29

by Xiao Guangrong

Subject: [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*()

Paul Mackerras says:

"Actually, looking at this more closely, it has to be a group leader
anyway since it's at the top level of ctx->group_list. In fact I see
four places where we do:

list_for_each_entry(event, &ctx->group_list, group_entry) {
if (event == event->group_leader)
...

or the equivalent, three of which appear to have been introduced by
afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
May by Peter Z.

As far as I can see the if () is superfluous in each case (a singleton
event will be a group of 1 and will have its group_leader pointing to
itself)."

[Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]

So, this patch fixes it.

Signed-off-by: Xiao Guangrong <[email protected]>
---
kernel/perf_event.c | 41 +++++++++++++++++++++++------------------
1 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..9ca975a 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1032,10 +1032,13 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
perf_disable();
if (ctx->nr_active) {
list_for_each_entry(event, &ctx->group_list, group_entry) {
- if (event != event->group_leader)
- event_sched_out(event, cpuctx, ctx);
- else
- group_sched_out(event, cpuctx, ctx);
+
+ /*
+ * It has to be a group leader since it's at the top
+ * level of ctx->group_list
+ */
+ WARN_ON_ONCE(event != event->group_leader);
+ group_sched_out(event, cpuctx, ctx);
}
}
perf_enable();
@@ -1258,12 +1261,14 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader)
- event_sched_in(event, cpuctx, ctx, cpu);
- else {
- if (group_can_go_on(event, cpuctx, 1))
- group_sched_in(event, cpuctx, ctx, cpu);
- }
+ /*
+ * It has to be a group leader since it's at the top
+ * level of ctx->group_list
+ */
+ WARN_ON_ONCE(event != event->group_leader);
+
+ if (group_can_go_on(event, cpuctx, 1))
+ group_sched_in(event, cpuctx, ctx, cpu);

/*
* If this pinned group hasn't been scheduled,
@@ -1291,15 +1296,15 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader) {
- if (event_sched_in(event, cpuctx, ctx, cpu))
+ /*
+ * It has to be a group leader since it's at the top
+ * level of ctx->group_list
+ */
+ WARN_ON_ONCE(event != event->group_leader);
+
+ if (group_can_go_on(event, cpuctx, can_add_hw))
+ if (group_sched_in(event, cpuctx, ctx, cpu))
can_add_hw = 0;
- } else {
- if (group_can_go_on(event, cpuctx, can_add_hw)) {
- if (group_sched_in(event, cpuctx, ctx, cpu))
- can_add_hw = 0;
- }
- }
}
perf_enable();
out:
--
1.6.1.2

2009-09-23 08:14:30

by Xiao Guangrong

Subject: [PATCH 2/2] perf_counter: optimize for perf_event_init_task()

We can traverse ctx->group_list to get all group leaders; it should be safe
since we hold ctx->mutex.

Signed-off-by: Xiao Guangrong <[email protected]>
---
kernel/perf_event.c | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 9ca975a..4e6e822 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4786,9 +4786,8 @@ int perf_event_init_task(struct task_struct *child)
* We dont have to disable NMIs - we are only looking at
* the list, not manipulating it:
*/
- list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
- if (event != event->group_leader)
- continue;
+ list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
+ WARN_ON_ONCE(event != event->group_leader);

if (!event->attr.inherit) {
inherited_all = 0;
--
1.6.1.2

2009-09-23 08:40:22

by Peter Zijlstra

Subject: Re: [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*()

On Wed, 2009-09-23 at 16:10 +0800, Xiao Guangrong wrote:
> Paul Mackerras says:
>
> "Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list. In fact I see
> four places where we do:
>
> list_for_each_entry(event, &ctx->group_list, group_entry) {
> if (event == event->group_leader)
> ...
>
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
>
> As far as I can see the if () is superfluous in each case (a singleton
> event will be a group of 1 and will have its group_leader pointing to
> itself)."
>
> [Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]
>
> So, this patch fixes it.

Hrm.. I think it's not just a cleanup, but an actual bugfix.

The intent was to call event_sched_{in,out}() for single counter groups
because that's cheaper than group_sched_{in,out}(), however..

- as you noticed, I got the condition wrong, it should have read:

list_empty(&event->sibling_list)

- it failed to call group_can_go_on() which deals with ->exclusive.

- it also doesn't call hw_perf_group_sched_in() which might break
power.
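
Purely as illustration, a rough sketch of the fast path that was intended
(this is not what the eventual fix does, and it still wouldn't route singleton
events through hw_perf_group_sched_in()):

	if (list_empty(&event->sibling_list)) {
		/* singleton group: cheaper path, with ->exclusive checked */
		if (group_can_go_on(event, cpuctx, 1))
			event_sched_in(event, cpuctx, ctx, cpu);
	} else {
		if (group_can_go_on(event, cpuctx, 1))
			group_sched_in(event, cpuctx, ctx, cpu);
	}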

Also, I'm not sure I like the comments and WARN_ON bits, the changelog
should be sufficient.

> Signed-off-by: Xiao Guangrong <[email protected]>
> ---
> kernel/perf_event.c | 41 +++++++++++++++++++++++------------------
> 1 files changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 76ac4db..9ca975a 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -1032,10 +1032,13 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
> perf_disable();
> if (ctx->nr_active) {
> list_for_each_entry(event, &ctx->group_list, group_entry) {
> - if (event != event->group_leader)
> - event_sched_out(event, cpuctx, ctx);
> - else
> - group_sched_out(event, cpuctx, ctx);
> +
> + /*
> + * It has to be a group leader since it's at the top
> + * level of ctx->group_list
> + */
> + WARN_ON_ONCE(event != event->group_leader);
> + group_sched_out(event, cpuctx, ctx);
> }
> }
> perf_enable();
> @@ -1258,12 +1261,14 @@ __perf_event_sched_in(struct perf_event_context *ctx,
> if (event->cpu != -1 && event->cpu != cpu)
> continue;
>
> - if (event != event->group_leader)
> - event_sched_in(event, cpuctx, ctx, cpu);
> - else {
> - if (group_can_go_on(event, cpuctx, 1))
> - group_sched_in(event, cpuctx, ctx, cpu);
> - }
> + /*
> + * It has to be a group leader since it's at the top
> + * level of ctx->group_list
> + */
> + WARN_ON_ONCE(event != event->group_leader);
> +
> + if (group_can_go_on(event, cpuctx, 1))
> + group_sched_in(event, cpuctx, ctx, cpu);
>
> /*
> * If this pinned group hasn't been scheduled,
> @@ -1291,15 +1296,15 @@ __perf_event_sched_in(struct perf_event_context *ctx,
> if (event->cpu != -1 && event->cpu != cpu)
> continue;
>
> - if (event != event->group_leader) {
> - if (event_sched_in(event, cpuctx, ctx, cpu))
> + /*
> + * It has to be a group leader since it's at the top
> + * level of ctx->group_list
> + */
> + WARN_ON_ONCE(event != event->group_leader);
> +
> + if (group_can_go_on(event, cpuctx, can_add_hw))
> + if (group_sched_in(event, cpuctx, ctx, cpu))
> can_add_hw = 0;
> - } else {
> - if (group_can_go_on(event, cpuctx, can_add_hw)) {
> - if (group_sched_in(event, cpuctx, ctx, cpu))
> - can_add_hw = 0;
> - }
> - }
> }
> perf_enable();
> out:

2009-09-23 08:43:01

by Peter Zijlstra

Subject: Re: [PATCH 2/2] perf_counter: optimize for perf_event_init_task()

On Wed, 2009-09-23 at 16:13 +0800, Xiao Guangrong wrote:
> We can traverse ctx->group_list to get all group leaders; it should be safe
> since we hold ctx->mutex.

I don't think we need that WARN_ON_ONCE there.

> Signed-off-by: Xiao Guangrong <[email protected]>
> ---
> kernel/perf_event.c | 5 ++---
> 1 files changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 9ca975a..4e6e822 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -4786,9 +4786,8 @@ int perf_event_init_task(struct task_struct *child)
> * We dont have to disable NMIs - we are only looking at
> * the list, not manipulating it:
> */
> - list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
> - if (event != event->group_leader)
> - continue;
> + list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
> + WARN_ON_ONCE(event != event->group_leader);
>
> if (!event->attr.inherit) {
> inherited_all = 0;

2009-09-25 01:23:23

by Xiao Guangrong

Subject: Re: [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*()



Peter Zijlstra wrote:

>
> Hrm.. I think it's not just a cleanup, but an actual bugfix.
>
> The intent was to call event_sched_{in,out}() for single counter groups
> because that's cheaper than group_sched_{in,out}(), however..
>
> - as you noticed, I got the condition wrong, it should have read:
>
> list_empty(&event->sibling_list)
>
> - it failed to call group_can_go_on() which deals with ->exclusive.
>
> - it also doesn't call hw_perf_group_sched_in() which might break
> power.
>

Yeah, I'll fix the title

> Also, I'm not sure I like the comments and WARN_ON bits, the changelog
> should be sufficient.
>

Um, I'll remove the comments and WARN_ON_ONCE()

Thanks,
Xiao

2009-09-25 01:24:38

by Xiao Guangrong

Subject: Re: [PATCH 2/2] perf_counter: optimize for perf_event_init_task()



Peter Zijlstra wrote:
> On Wed, 2009-09-23 at 16:13 +0800, Xiao Guangrong wrote:
>> We can traverse ctx->group_list to get all group leaders; it should be safe
>> since we hold ctx->mutex.
>
> I don't think we need that WARN_ON_ONCE there.
>

I'll remove it

Thanks,
Xiao

>> Signed-off-by: Xiao Guangrong <[email protected]>
>> ---
>> kernel/perf_event.c | 5 ++---
>> 1 files changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
>> index 9ca975a..4e6e822 100644
>> --- a/kernel/perf_event.c
>> +++ b/kernel/perf_event.c
>> @@ -4786,9 +4786,8 @@ int perf_event_init_task(struct task_struct *child)
>> * We dont have to disable NMIs - we are only looking at
>> * the list, not manipulating it:
>> */
>> - list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
>> - if (event != event->group_leader)
>> - continue;
>> + list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
>> + WARN_ON_ONCE(event != event->group_leader);
>>
>> if (!event->attr.inherit) {
>> inherited_all = 0;
>
>

2009-09-25 05:52:14

by Xiao Guangrong

Subject: [PATCH 1/2 v2] perf_counter: fix for __perf_event_sched_*()

Paul Mackerras says:
"Actually, looking at this more closely, it has to be a group leader
anyway since it's at the top level of ctx->group_list. In fact I see
four places where we do:

list_for_each_entry(event, &ctx->group_list, group_entry) {
if (event == event->group_leader)
...

or the equivalent, three of which appear to have been introduced by
afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
May by Peter Z.

As far as I can see the if () is superfluous in each case (a singleton
event will be a group of 1 and will have its group_leader pointing to
itself)."
[Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]

And Peter Zijlstra points out this is a bugfix:
"The intent was to call event_sched_{in,out}() for single counter groups
because that's cheaper than group_sched_{in,out}(), however..

- as you noticed, I got the condition wrong, it should have read:

list_empty(&event->sibling_list)

- it failed to call group_can_go_on() which deals with ->exclusive.

- it also doesn't call hw_perf_group_sched_in() which might break
power."
[Can be found at http://marc.info/?l=linux-kernel&m=125369523318583&w=2]

Changelog v1->v2:
- fix the title, per Peter Zijlstra's suggestion
- remove the comments and WARN_ON_ONCE(), per Peter Zijlstra's suggestion

Signed-off-by: Xiao Guangrong <[email protected]>
---
kernel/perf_event.c | 30 ++++++++----------------------
1 files changed, 8 insertions(+), 22 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..2a15cd6 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1030,14 +1030,10 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
update_context_time(ctx);

perf_disable();
- if (ctx->nr_active) {
- list_for_each_entry(event, &ctx->group_list, group_entry) {
- if (event != event->group_leader)
- event_sched_out(event, cpuctx, ctx);
- else
- group_sched_out(event, cpuctx, ctx);
- }
- }
+ if (ctx->nr_active)
+ list_for_each_entry(event, &ctx->group_list, group_entry)
+ group_sched_out(event, cpuctx, ctx);
+
perf_enable();
out:
spin_unlock(&ctx->lock);
@@ -1258,12 +1254,8 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader)
- event_sched_in(event, cpuctx, ctx, cpu);
- else {
- if (group_can_go_on(event, cpuctx, 1))
- group_sched_in(event, cpuctx, ctx, cpu);
- }
+ if (group_can_go_on(event, cpuctx, 1))
+ group_sched_in(event, cpuctx, ctx, cpu);

/*
* If this pinned group hasn't been scheduled,
@@ -1291,15 +1283,9 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader) {
- if (event_sched_in(event, cpuctx, ctx, cpu))
+ if (group_can_go_on(event, cpuctx, can_add_hw))
+ if (group_sched_in(event, cpuctx, ctx, cpu))
can_add_hw = 0;
- } else {
- if (group_can_go_on(event, cpuctx, can_add_hw)) {
- if (group_sched_in(event, cpuctx, ctx, cpu))
- can_add_hw = 0;
- }
- }
}
perf_enable();
out:
--
1.6.1.2

2009-09-25 05:54:57

by Xiao Guangrong

Subject: [PATCH 2/2 v2] optimize for perf_event_init_task()

We can traverse ctx->group_list to get all group leaders; it should be safe
since we hold ctx->mutex.

Changelog v1->v2:
- remove WARN_ON_ONCE(), per Peter Zijlstra's suggestion

Signed-off-by: Xiao Guangrong <[email protected]>
---
kernel/perf_event.c | 4 +---
1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 2a15cd6..0276fb4 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4767,9 +4767,7 @@ int perf_event_init_task(struct task_struct *child)
* We dont have to disable NMIs - we are only looking at
* the list, not manipulating it:
*/
- list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
- if (event != event->group_leader)
- continue;
+ list_for_each_entry(event, &parent_ctx->group_list, group_entry) {

if (!event->attr.inherit) {
inherited_all = 0;
--
1.6.1.2

2009-09-25 08:55:01

by Peter Zijlstra

Subject: Re: [PATCH 1/2 v2] perf_counter: fix for __perf_event_sched_*()

On Fri, 2009-09-25 at 13:51 +0800, Xiao Guangrong wrote:
> Paul Mackerras says:
> "Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list. In fact I see
> four places where we do:
>
> list_for_each_entry(event, &ctx->group_list, group_entry) {
> if (event == event->group_leader)
> ...
>
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
>
> As far as I can see the if () is superfluous in each case (a singleton
> event will be a group of 1 and will have its group_leader pointing to
> itself)."
> [Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]
>
> And Peter Zijlstra points out this is a bugfix:
> "The intent was to call event_sched_{in,out}() for single counter groups
> because that's cheaper than group_sched_{in,out}(), however..
>
> - as you noticed, I got the condition wrong, it should have read:
>
> list_empty(&event->sibling_list)
>
> - it failed to call group_can_go_on() which deals with ->exclusive.
>
> - it also doesn't call hw_perf_group_sched_in() which might break
> power."
> [Can be found at http://marc.info/?l=linux-kernel&m=125369523318583&w=2]
>
> Changelog v1->v2:
> - fix the title, per Peter Zijlstra's suggestion
> - remove the comments and WARN_ON_ONCE(), per Peter Zijlstra's suggestion
>
> Signed-off-by: Xiao Guangrong <[email protected]>

Thanks,

Acked-by: Peter Zijlstra <[email protected]>

2009-09-25 08:55:19

by Peter Zijlstra

Subject: Re: [PATCH 2/2 v2] optimize for perf_event_init_task()

On Fri, 2009-09-25 at 13:54 +0800, Xiao Guangrong wrote:
> We can traverse ctx->group_list to get all group leaders; it should be safe
> since we hold ctx->mutex.
>
> Changelog v1->v2:
> - remove WARN_ON_ONCE(), per Peter Zijlstra's suggestion
>
> Signed-off-by: Xiao Guangrong <[email protected]>

Thanks!

Acked-by: Peter Zijlstra <[email protected]>

2009-10-01 07:31:52

by Ingo Molnar

Subject: Re: [PATCH 2/2 v2] optimize for perf_event_init_task()


* Peter Zijlstra <[email protected]> wrote:

> On Fri, 2009-09-25 at 13:54 +0800, Xiao Guangrong wrote:
> > We can traverse ctx->group_list to get all group leaders; it should be safe
> > since we hold ctx->mutex.
> >
> > Changelog v1->v2:
> > - remove WARN_ON_ONCE(), per Peter Zijlstra's suggestion
> >
> > Signed-off-by: Xiao Guangrong <[email protected]>
>
> Thanks!
>
> Acked-by: Peter Zijlstra <[email protected]>

Thanks Xiao and Peter, i've queued up both patches for v2.6.32.

Ingo

2009-10-01 07:47:31

by Xiao Guangrong

Subject: [tip:perf/urgent] perf_event: Fix event group handling in __perf_event_sched_*()

Commit-ID: 8c9ed8e14c342ec5e7f27e7e498f62409a10eb29
Gitweb: http://git.kernel.org/tip/8c9ed8e14c342ec5e7f27e7e498f62409a10eb29
Author: Xiao Guangrong <[email protected]>
AuthorDate: Fri, 25 Sep 2009 13:51:17 +0800
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 1 Oct 2009 09:30:44 +0200

perf_event: Fix event group handling in __perf_event_sched_*()

Paul Mackerras says:

"Actually, looking at this more closely, it has to be a group
leader anyway since it's at the top level of ctx->group_list. In
fact I see four places where we do:

list_for_each_entry(event, &ctx->group_list, group_entry) {
if (event == event->group_leader)
...

or the equivalent, three of which appear to have been introduced
by afedadf2 ("perf_counter: Optimize sched in/out of counters")
back in May by Peter Z.

As far as I can see the if () is superfluous in each case (a
singleton event will be a group of 1 and will have its
group_leader pointing to itself)."

[ See: http://marc.info/?l=linux-kernel&m=125361238901442&w=2 ]

And Peter Zijlstra points out this is a bugfix:

"The intent was to call event_sched_{in,out}() for single event
groups because that's cheaper than group_sched_{in,out}(),
however..

- as you noticed, I got the condition wrong, it should have read:

list_empty(&event->sibling_list)

- it failed to call group_can_go_on() which deals with ->exclusive.

- it also doesn't call hw_perf_group_sched_in() which might break
power."

[ See: http://marc.info/?l=linux-kernel&m=125369523318583&w=2 ]

Changelog v1->v2:

- Fix the title name according to Peter Zijlstra's suggestion

- Remove the comments and WARN_ON_ONCE(), per Peter Zijlstra's
suggestion

Signed-off-by: Xiao Guangrong <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>


---
kernel/perf_event.c | 30 ++++++++----------------------
1 files changed, 8 insertions(+), 22 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 0f86feb..e50543d 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1030,14 +1030,10 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
update_context_time(ctx);

perf_disable();
- if (ctx->nr_active) {
- list_for_each_entry(event, &ctx->group_list, group_entry) {
- if (event != event->group_leader)
- event_sched_out(event, cpuctx, ctx);
- else
- group_sched_out(event, cpuctx, ctx);
- }
- }
+ if (ctx->nr_active)
+ list_for_each_entry(event, &ctx->group_list, group_entry)
+ group_sched_out(event, cpuctx, ctx);
+
perf_enable();
out:
spin_unlock(&ctx->lock);
@@ -1258,12 +1254,8 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader)
- event_sched_in(event, cpuctx, ctx, cpu);
- else {
- if (group_can_go_on(event, cpuctx, 1))
- group_sched_in(event, cpuctx, ctx, cpu);
- }
+ if (group_can_go_on(event, cpuctx, 1))
+ group_sched_in(event, cpuctx, ctx, cpu);

/*
* If this pinned group hasn't been scheduled,
@@ -1291,15 +1283,9 @@ __perf_event_sched_in(struct perf_event_context *ctx,
if (event->cpu != -1 && event->cpu != cpu)
continue;

- if (event != event->group_leader) {
- if (event_sched_in(event, cpuctx, ctx, cpu))
+ if (group_can_go_on(event, cpuctx, can_add_hw))
+ if (group_sched_in(event, cpuctx, ctx, cpu))
can_add_hw = 0;
- } else {
- if (group_can_go_on(event, cpuctx, can_add_hw)) {
- if (group_sched_in(event, cpuctx, ctx, cpu))
- can_add_hw = 0;
- }
- }
}
perf_enable();
out:

2009-10-01 07:47:37

by Xiao Guangrong

Subject: [tip:perf/urgent] perf_event: Clean up perf_event_init_task()

Commit-ID: 27f9994c50e95f3a5a81fe4c7491a9f9cffe6ec0
Gitweb: http://git.kernel.org/tip/27f9994c50e95f3a5a81fe4c7491a9f9cffe6ec0
Author: Xiao Guangrong <[email protected]>
AuthorDate: Fri, 25 Sep 2009 13:54:01 +0800
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 1 Oct 2009 09:30:44 +0200

perf_event: Clean up perf_event_init_task()

While at it: we can traverse ctx->group_list to get all
group leaders; it should be safe since we hold ctx->mutex.

Changelog v1->v2:

- remove WARN_ON_ONCE() according to Peter Zijlstra's suggestion

Signed-off-by: Xiao Guangrong <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>


---
kernel/perf_event.c | 4 +---
1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index e50543d..e491fb0 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4767,9 +4767,7 @@ int perf_event_init_task(struct task_struct *child)
* We dont have to disable NMIs - we are only looking at
* the list, not manipulating it:
*/
- list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
- if (event != event->group_leader)
- continue;
+ list_for_each_entry(event, &parent_ctx->group_list, group_entry) {

if (!event->attr.inherit) {
inherited_all = 0;