> Subject: Enable SPU switch notification to detect currently active SPU tasks.
>
> From: Maynard Johnson <[email protected]>
>
> This patch adds to the capability of spu_switch_event_register so that the
> caller is also notified of currently active SPU tasks. It also exports
> spu_switch_event_register and spu_switch_event_unregister.
Hi Maynard,
It'd be really good if you could convince your mailer to send patches inline :)
> Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c
> ===================================================================
> --- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/sched.c 2006-12-04 10:56:04.730698720 -0600
> +++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c 2007-01-11 09:45:37.918333128 -0600
> @@ -46,6 +46,8 @@
>
> #define SPU_MIN_TIMESLICE (100 * HZ / 1000)
>
> +int notify_active[MAX_NUMNODES];
You're basing the size of the array on MAX_NUMNODES
(1 << CONFIG_NODES_SHIFT), but then indexing it by spu->number.
It's quite possible we'll have a system with MAX_NUMNODES == 1, but > 1
spus, in which case this code is going to break. The PS3 is one such
system.
Instead I think you should have a flag in the spu struct.
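Something along these lines perhaps (just a sketch, the field name is
only illustrative):

	/* include/asm-powerpc/spu.h */
	struct spu {
		...
		int notify_active;	/* "already active" notification pending */
		...
	};

Then run.c can simply test ctx->spu->notify_active rather than indexing
a global array by spu->number.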
> #define SPU_BITMAP_SIZE (((MAX_PRIO+BITS_PER_LONG)/BITS_PER_LONG)+1)
> struct spu_prio_array {
> unsigned long bitmap[SPU_BITMAP_SIZE];
> @@ -81,18 +83,45 @@
> static void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
> {
> blocking_notifier_call_chain(&spu_switch_notifier,
> - ctx ? ctx->object_id : 0, spu);
> + ctx ? ctx->object_id : 0, spu);
> +}
Try not to make whitespace-only changes in the same patch as actual code changes.
> +
> +static void notify_spus_active(void)
> +{
> + int node;
> + /* Wake up the active spu_contexts. When the awakened processes
> + * sees their notify_active flag is set, they will call
> + * spu_notify_already_active().
> + */
> + for (node = 0; node < MAX_NUMNODES; node++) {
> + struct spu *spu;
> + mutex_lock(&spu_prio->active_mutex[node]);
> + list_for_each_entry(spu, &spu_prio->active_list[node], list) {
> + struct spu_context *ctx = spu->ctx;
> + wake_up_all(&ctx->stop_wq);
> + notify_active[ctx->spu->number] = 1;
> + smp_mb();
> + }
I don't understand why you're setting the notify flag after you do the
wake_up_all() ?
You only need a smp_wmb() here.
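i.e. roughly this, assuming a per-spu flag as suggested above (a sketch
only):

	list_for_each_entry(spu, &spu_prio->active_list[node], list) {
		struct spu_context *ctx = spu->ctx;
		spu->notify_active = 1;		/* set the flag first ... */
		smp_wmb();			/* ... make it visible ... */
		wake_up_all(&ctx->stop_wq);	/* ... then wake the waiters */
	}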
Does the scheduler guarantee that ctxs won't swap nodes? Otherwise
between releasing the lock on one node and getting the lock on the next,
a ctx could migrate between them - which would cause either spurious
wake ups, or missing a ctx altogether. Although I'm not sure if it's
that important.
> + mutex_unlock(&spu_prio->active_mutex[node]);
> + }
> + yield();
> }
>
> int spu_switch_event_register(struct notifier_block * n)
> {
> - return blocking_notifier_chain_register(&spu_switch_notifier, n);
> + int ret;
> + ret = blocking_notifier_chain_register(&spu_switch_notifier, n);
> + if (!ret)
> + notify_spus_active();
> + return ret;
> }
> +EXPORT_SYMBOL_GPL(spu_switch_event_register);
>
> int spu_switch_event_unregister(struct notifier_block * n)
> {
> return blocking_notifier_chain_unregister(&spu_switch_notifier, n);
> }
> +EXPORT_SYMBOL_GPL(spu_switch_event_unregister);
>
>
> static inline void bind_context(struct spu *spu, struct spu_context *ctx)
> @@ -250,6 +279,14 @@
> return spu_get_idle(ctx, flags);
> }
>
> +void spu_notify_already_active(struct spu_context *ctx)
> +{
> + struct spu *spu = ctx->spu;
> + if (!spu)
> + return;
> + spu_switch_notify(spu, ctx);
> +}
> +
> /* The three externally callable interfaces
> * for the scheduler begin here.
> *
> Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/spufs.h
> ===================================================================
> --- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/spufs.h 2007-01-08 18:18:40.093354608 -0600
> +++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/spufs.h 2007-01-08 18:31:03.610345792 -0600
> @@ -183,6 +183,7 @@
> void spu_yield(struct spu_context *ctx);
> int __init spu_sched_init(void);
> void __exit spu_sched_exit(void);
> +void spu_notify_already_active(struct spu_context *ctx);
>
> extern char *isolated_loader;
>
> Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/run.c
> ===================================================================
> --- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/run.c 2007-01-08 18:33:51.979311680 -0600
> +++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/run.c 2007-01-11 10:17:20.777344984 -0600
> @@ -10,6 +10,8 @@
>
> #include "spufs.h"
>
> +extern int notify_active[MAX_NUMNODES];
> +
> /* interrupt-level stop callback function. */
> void spufs_stop_callback(struct spu *spu)
> {
> @@ -45,7 +47,9 @@
> u64 pte_fault;
>
> *stat = ctx->ops->status_read(ctx);
> - if (ctx->state != SPU_STATE_RUNNABLE)
> + smp_mb();
And smp_rmb() should be sufficient here.
> + if (ctx->state != SPU_STATE_RUNNABLE || notify_active[ctx->spu->number])
> return 1;
> spu = ctx->spu;
> pte_fault = spu->dsisr &
> @@ -319,6 +323,11 @@
> ret = spufs_wait(ctx->stop_wq, spu_stopped(ctx, &status));
> if (unlikely(ret))
> break;
> + if (unlikely(notify_active[ctx->spu->number])) {
> + notify_active[ctx->spu->number] = 0;
> + if (!(status & SPU_STATUS_STOPPED_BY_STOP))
> + spu_notify_already_active(ctx);
> + }
> if ((status & SPU_STATUS_STOPPED_BY_STOP) &&
> (status >> SPU_STOP_STATUS_SHIFT == 0x2104)) {
> ret = spu_process_callback(ctx);
cheers
--
Michael Ellerman
OzLabs, IBM Australia Development Lab
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person
Michael,
Thanks for your comments! My responses are below.
-Maynard
Michael Ellerman wrote:
>>Subject: Enable SPU switch notification to detect currently active SPU tasks.
>>
>>From: Maynard Johnson <[email protected]>
>>
>>This patch adds to the capability of spu_switch_event_register so that the
>>caller is also notified of currently active SPU tasks. It also exports
>>spu_switch_event_register and spu_switch_event_unregister.
>
>
> Hi Maynard,
>
> It'd be really good if you could convince your mailer to send patches inline :)
Mozilla Mail is my client, and I don't see any option to force patches
inline. Of course, there is an option to display received attachments
as inline, but that doesn't help you.
>
>
>>Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c
>>===================================================================
>>--- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/sched.c 2006-12-04 10:56:04.730698720 -0600
>>+++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c 2007-01-11 09:45:37.918333128 -0600
>>@@ -46,6 +46,8 @@
>>
>> #define SPU_MIN_TIMESLICE (100 * HZ / 1000)
>>
>>+int notify_active[MAX_NUMNODES];
DOH! Right, this is patently wrong. Somehow, I misunderstood what
MAX_NUMNODES was. I'll fix it.
>
>
> You're basing the size of the array on MAX_NUMNODES
> (1 << CONFIG_NODES_SHIFT), but then indexing it by spu->number.
>
> It's quite possible we'll have a system with MAX_NUMNODES == 1, but > 1
> spus, in which case this code is going to break. The PS3 is one such
> system.
>
> Instead I think you should have a flag in the spu struct.
Yes, that's certainly an option I'll look at.
>
>
>> #define SPU_BITMAP_SIZE (((MAX_PRIO+BITS_PER_LONG)/BITS_PER_LONG)+1)
>> struct spu_prio_array {
>> unsigned long bitmap[SPU_BITMAP_SIZE];
>>@@ -81,18 +83,45 @@
>> static void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
>> {
>> blocking_notifier_call_chain(&spu_switch_notifier,
>>- ctx ? ctx->object_id : 0, spu);
>>+ ctx ? ctx->object_id : 0, spu);
>>+}
>
>
> Try not to make whitespace-only changes in the same patch as actual code changes.
>
>
>>+
>>+static void notify_spus_active(void)
>>+{
>>+ int node;
>>+ /* Wake up the active spu_contexts. When the awakened processes
>>+ * sees their notify_active flag is set, they will call
>>+ * spu_notify_already_active().
>>+ */
>>+ for (node = 0; node < MAX_NUMNODES; node++) {
>>+ struct spu *spu;
>>+ mutex_lock(&spu_prio->active_mutex[node]);
>>+ list_for_each_entry(spu, &spu_prio->active_list[node], list) {
>>+ struct spu_context *ctx = spu->ctx;
>>+ wake_up_all(&ctx->stop_wq);
>>+ notify_active[ctx->spu->number] = 1;
>>+ smp_mb();
>>+ }
>
>
> I don't understand why you're setting the notify flag after you do the
> wake_up_all() ?
Right, I'll move it.
>
> You only need a smp_wmb() here.
OK.
>
> Does the scheduler guarantee that ctxs won't swap nodes? Otherwise
Don't know. Arnd would probably know off the top of his head.
> between releasing the lock on one node and getting the lock on the next,
> a ctx could migrate between them - which would cause either spurious
> wake ups, or missing a ctx altogether. Although I'm not sure if it's
> that important.
How important is it? If the SPUs were executing different code or
taking different paths through the same code, then seeing the samples
for each SPU is important. But if this were the case, the user would
easily see the missing data for the missing SPU(s). Since this sounds
like a very small window, re-running the profile would _probably_ yield
a complete profile. On the other hand, if we can avoid it, we should.
When I repost the fixed-up patch, I'll add a comment/question for Arnd
about this issue.
>
>
>>+ mutex_unlock(&spu_prio->active_mutex[node]);
>>+ }
>>+ yield();
>> }
>>
>> int spu_switch_event_register(struct notifier_block * n)
>> {
>>- return blocking_notifier_chain_register(&spu_switch_notifier, n);
>>+ int ret;
>>+ ret = blocking_notifier_chain_register(&spu_switch_notifier, n);
>>+ if (!ret)
>>+ notify_spus_active();
>>+ return ret;
>> }
>>+EXPORT_SYMBOL_GPL(spu_switch_event_register);
>>
>> int spu_switch_event_unregister(struct notifier_block * n)
>> {
>> return blocking_notifier_chain_unregister(&spu_switch_notifier, n);
>> }
>>+EXPORT_SYMBOL_GPL(spu_switch_event_unregister);
>>
>>
>> static inline void bind_context(struct spu *spu, struct spu_context *ctx)
>>@@ -250,6 +279,14 @@
>> return spu_get_idle(ctx, flags);
>> }
>>
>>+void spu_notify_already_active(struct spu_context *ctx)
>>+{
>>+ struct spu *spu = ctx->spu;
>>+ if (!spu)
>>+ return;
>>+ spu_switch_notify(spu, ctx);
>>+}
>>+
>> /* The three externally callable interfaces
>> * for the scheduler begin here.
>> *
>>Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/spufs.h
>>===================================================================
>>--- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/spufs.h 2007-01-08 18:18:40.093354608 -0600
>>+++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/spufs.h 2007-01-08 18:31:03.610345792 -0600
>>@@ -183,6 +183,7 @@
>> void spu_yield(struct spu_context *ctx);
>> int __init spu_sched_init(void);
>> void __exit spu_sched_exit(void);
>>+void spu_notify_already_active(struct spu_context *ctx);
>>
>> extern char *isolated_loader;
>>
>>Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/run.c
>>===================================================================
>>--- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/run.c 2007-01-08 18:33:51.979311680 -0600
>>+++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/run.c 2007-01-11 10:17:20.777344984 -0600
>>@@ -10,6 +10,8 @@
>>
>> #include "spufs.h"
>>
>>+extern int notify_active[MAX_NUMNODES];
>>+
>> /* interrupt-level stop callback function. */
>> void spufs_stop_callback(struct spu *spu)
>> {
>>@@ -45,7 +47,9 @@
>> u64 pte_fault;
>>
>> *stat = ctx->ops->status_read(ctx);
>>- if (ctx->state != SPU_STATE_RUNNABLE)
>>+ smp_mb();
>
>
> And smp_rmb() should be sufficient here.
OK
>
>
>>+ if (ctx->state != SPU_STATE_RUNNABLE || notify_active[ctx->spu->number])
>> return 1;
>> spu = ctx->spu;
>> pte_fault = spu->dsisr &
>>@@ -319,6 +323,11 @@
>> ret = spufs_wait(ctx->stop_wq, spu_stopped(ctx, &status));
>> if (unlikely(ret))
>> break;
>>+ if (unlikely(notify_active[ctx->spu->number])) {
>>+ notify_active[ctx->spu->number] = 0;
>>+ if (!(status & SPU_STATUS_STOPPED_BY_STOP))
>>+ spu_notify_already_active(ctx);
>>+ }
>> if ((status & SPU_STATUS_STOPPED_BY_STOP) &&
>> (status >> SPU_STOP_STATUS_SHIFT == 0x2104)) {
>> ret = spu_process_callback(ctx);
>
>
> cheers
>
Attached is an updated patch that addresses Michael Ellerman's comments.
One comment from Michael has not yet been addressed: it concerns the
for-loop in spufs/sched.c:notify_spus_active(). He wondered whether the
scheduler can swap a context from one node to another. If so, there's a
small window in this loop (where we switch the lock from one node's
active list to the next) in which we might miss waking up a context or
send a spurious wakeup to another.
Arnd . . . can you comment on this question?
Thanks.
-Maynard
Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c
===================================================================
--- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/sched.c 2006-12-04 10:56:04.730698720 -0600
+++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c 2007-01-15 16:22:31.808461448 -0600
@@ -84,15 +84,42 @@
ctx ? ctx->object_id : 0, spu);
}
+static void notify_spus_active(void)
+{
+ int node;
+ /* Wake up the active spu_contexts. When the awakened processes
+ * sees their notify_active flag is set, they will call
+ * spu_notify_already_active().
+ */
+ for (node = 0; node < MAX_NUMNODES; node++) {
+ struct spu *spu;
+ mutex_lock(&spu_prio->active_mutex[node]);
+ list_for_each_entry(spu, &spu_prio->active_list[node], list) {
You seem to have some issues with tabs vs spaces for indentation
here.
+ struct spu_context *ctx = spu->ctx;
+ spu->notify_active = 1;
Please make this a bit in the sched_flags field that's added in
the scheduler patch series I sent out.
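Roughly like this (a sketch only; the exact bit name and number depend
on how that series lands):

	/* spufs.h, alongside the other scheduler state: */
	#define SPU_SCHED_NOTIFY_ACTIVE	3	/* illustrative bit number */

	/* here, instead of spu->notify_active = 1: */
	set_bit(SPU_SCHED_NOTIFY_ACTIVE, &ctx->sched_flags);

	/* and in run.c: */
	if (test_and_clear_bit(SPU_SCHED_NOTIFY_ACTIVE, &ctx->sched_flags)) {
		if (!(status & SPU_STATUS_STOPPED_BY_STOP))
			spu_notify_already_active(ctx);
	}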
+ wake_up_all(&ctx->stop_wq);
+ smp_wmb();
+ }
+ mutex_unlock(&spu_prio->active_mutex[node]);
+ }
+ yield();
+}
Why do you add the yield() here? yield() is pretty much a sign
of a bug.
+void spu_notify_already_active(struct spu_context *ctx)
+{
+ struct spu *spu = ctx->spu;
+ if (!spu)
+ return;
+ spu_switch_notify(spu, ctx);
+}
Please just call spu_switch_notify directly from the only
caller. Also, the check for ctx->spu being there is not
required if you look at the caller.
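i.e. in run.c something like (a sketch; it assumes spu_switch_notify
gets a prototype in spufs.h):

	if (unlikely(ctx->spu->notify_active)) {
		ctx->spu->notify_active = 0;
		if (!(status & SPU_STATUS_STOPPED_BY_STOP))
			spu_switch_notify(ctx->spu, ctx);
	}

The caller already dereferences ctx->spu to test the flag, so the NULL
check in the wrapper doesn't buy you anything.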
*stat = ctx->ops->status_read(ctx);
- if (ctx->state != SPU_STATE_RUNNABLE)
- return 1;
+ smp_rmb();
What do you need the barrier for here?
Christoph Hellwig wrote:
>Index: linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c
>===================================================================
>--- linux-2.6.19-rc6-arnd1+patches.orig/arch/powerpc/platforms/cell/spufs/sched.c 2006-12-04 10:56:04.730698720 -0600
>+++ linux-2.6.19-rc6-arnd1+patches/arch/powerpc/platforms/cell/spufs/sched.c 2007-01-15 16:22:31.808461448 -0600
>@@ -84,15 +84,42 @@
> ctx ? ctx->object_id : 0, spu);
> }
>
>+static void notify_spus_active(void)
>+{
>+ int node;
>+ /* Wake up the active spu_contexts. When the awakened processes
>+ * sees their notify_active flag is set, they will call
>+ * spu_notify_already_active().
>+ */
>+ for (node = 0; node < MAX_NUMNODES; node++) {
>+ struct spu *spu;
>+ mutex_lock(&spu_prio->active_mutex[node]);
>+ list_for_each_entry(spu, &spu_prio->active_list[node], list) {
>
> You seem to have some issues with tabs vs spaces for indentation
> here.
>
>
fixed
>+ struct spu_context *ctx = spu->ctx;
>+ spu->notify_active = 1;
>
>
> Please make this a bit in the sched_flags field that's added in
> the scheduler patch series I sent out.
>
>
I haven't seen that the scheduler patch series got applied yet. This
Cell spu task notification patch is a pre-req for OProfile development
to support profiling SPUs. When the scheduler patch gets applied to a
kernel version that fits our needs for our OProfile development, I don't
see any problem in using the sched_flags field instead of notify_active.
>+ wake_up_all(&ctx->stop_wq);
>+ smp_wmb();
>+ }
>+ mutex_unlock(&spu_prio->active_mutex[node]);
>+ }
>+ yield();
>+}
>
> Why do you add the yield() here? yield() is pretty much a sign
> of a bug.
>
>
Yes, the yield() and the memory barriers were leftovers from an earlier
ill-conceived attempt at solving this problem. They should have been
removed. They're gone now.
>+void spu_notify_already_active(struct spu_context *ctx)
>+{
>+ struct spu *spu = ctx->spu;
>+ if (!spu)
>+ return;
>+ spu_switch_notify(spu, ctx);
>+}
>
> Please just call spu_switch_notify directly from the only
> caller. Also, the check for ctx->spu being there is not
> required if you look at the caller.
>
>
I hesitated doing this since it would entail changing spu_switch_notify
from being static to non-static. I'd like to get Arnd's opinion on this
question before going ahead and making such a change.
> *stat = ctx->ops->status_read(ctx);
>- if (ctx->state != SPU_STATE_RUNNABLE)
>- return 1;
>+ smp_rmb();
>
>
>
> What do you need the barrier for here?
>
>
Removed.
On Wed, Jan 17, 2007 at 09:56:12AM -0600, Maynard Johnson wrote:
> I haven't seen that the scheduler patch series got applied yet. This
> Cell spu task notification patch is a pre-req for OProfile development
> to support profiling SPUs. When the scheduler patch gets applied to a
> kernel version that fits our needs for our OProfile development, I don't
> see any problem in using the sched_flags field instead of notify_active.
I'll hopefully commit these patches this weekend; I'm at a conference
currently so not really able to do a lot of work. If you need to make
more progress until then, just apply the hunk that introduces sched_flags
before doing your patch.
> Yes, the yield() and the memory barriers were leftovers from an earlier
> ill-conceived attempt at solving this problem. They should have been
> removed. They're gone now.
Ok.
> I hesitated doing this since it would entail changing spu_switch_notify
> from being static to non-static. I'd like to get Arnd's opinion on this
> question before going ahead and making such a change.
There is no difference in impact between marking a function non-static
and adding a trivial wrapper around it, only that the latter creates
more bloat. So I don't think there's a good argument against this.
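Concretely, making it non-static is just (sketch):

	/* sched.c */
	-static void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
	+void spu_switch_notify(struct spu *spu, struct spu_context *ctx)

	/* spufs.h */
	+void spu_switch_notify(struct spu *spu, struct spu_context *ctx);

and the spu_notify_already_active() wrapper can then go away entirely.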
This patch makes all the changes suggested by Christoph, with the
exception of using the sched_flags field instead of a new member in the
spu_context struct to signal the need to "notify already active". I
don't have the sched_flags change in my 2.6.20-rc1 tree. I can send
another patch later if/when the sched_flags change appears in the kernel
version we end up picking for final oprofile-spu development.
Comments welcome. Thanks.
-Maynard