2010-11-14 08:54:28

by Dario Faggioli

Subject: Re: [RFC][PATCH 05/22] sched: SCHED_DEADLINE policy implementation

On Fri, 2010-10-29 at 08:30 +0200, Raistlin wrote:
> +static void task_fork_dl(struct task_struct *p)
> +{
> +	/*
> +	 * The child of a -deadline task will be SCHED_DEADLINE, but
> +	 * as a throttled task. This means the parent (or someone else)
> +	 * must call sched_setscheduler_ex() on it, or it won't even
> +	 * start.
> +	 */
> +	p->dl.dl_throttled = 1;
> +	p->dl.dl_new = 0;
> +}
> +
So, this is also something we only discussed once without reaching a
conclusive statement... Are we ok with this behaviour?

I'm not, actually, but I'm not sure how to do better, considering:
- resetting the task to SCHED_OTHER on fork is useless and confusing,
  since RESET_ON_FORK already exists and allows for this;
- cloning the parent's bandwidth and trying to see if it fits might be
  nice, but what if the admission check fails? Does fork fail as well?
  If yes, what should the parent do, lower its own bandwidth (since
  it's being cloned) and try forking again? If yes, lower it by how
  much? Mmm... not sure it would fly... :-(
- splitting the parent's bandwidth in two would make sense to me, but
  then it has to be returned at some point, or a poor -deadline shell
  will reach zero bandwidth after a few `ls'! So, when do we give the
  bandwidth back to the parent? When the child dies? And if the child
  is sched_setscheduler_ex()-ed (either to -deadline or to something
  else), do we return the bandwidth (if the call succeeds) as well?

I was thinking either of forcing RESET_ON_FORK for SCHED_DEADLINE or of
trying to pursue the third solution (provided I figure out what to do on
detaching, parent dying, and such things... :-P).

Comments very very welcome!

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
----------------------------------------------------------------------
Dario Faggioli, ReTiS Lab, Scuola Superiore Sant'Anna, Pisa (Italy)

http://blog.linux.it/raistlin / [email protected] /
[email protected]



2010-11-23 14:25:14

by Peter Zijlstra

Subject: Re: [RFC][PATCH 05/22] sched: SCHED_DEADLINE policy implementation

On Sun, 2010-11-14 at 09:54 +0100, Raistlin wrote:
> On Fri, 2010-10-29 at 08:30 +0200, Raistlin wrote:
> > +static void task_fork_dl(struct task_struct *p)
> > +{
> > +	/*
> > +	 * The child of a -deadline task will be SCHED_DEADLINE, but
> > +	 * as a throttled task. This means the parent (or someone else)
> > +	 * must call sched_setscheduler_ex() on it, or it won't even
> > +	 * start.
> > +	 */
> > +	p->dl.dl_throttled = 1;
> > +	p->dl.dl_new = 0;
> > +}
> > +
> So, this is also something we only discussed once without reaching a
> conclusive statement... Are we ok with this behaviour?
>
> I'm not, actually, but I'm not sure how to do better, considering:
> - resetting the task to SCHED_OTHER on fork is useless and confusing,
> since RESET_ON_FORK already exists and allows for this;
> - cloning the parent's bandwidth and trying to see if it fits might be
> nice, but what if the admission check fails? Does fork fail as well?
> If yes, what should the parent do, lower its own bandwidth (since
> it's being cloned) and try forking again? If yes, lower it by how
> much? Mmm... not sure it would fly... :-(
> - splitting the parent's bandwidth in two would make sense to me, but
> then it has to be returned at some point, or a poor -deadline shell
> will reach zero bandwidth after a few `ls'! So, when do we give the
> bandwidth back to the parent? When the child dies? And if the child
> is sched_setscheduler_ex()-ed (either to -deadline or to something
> else), do we return the bandwidth (if the call succeeds) as well?
>
> I was thinking either of forcing RESET_ON_FORK for SCHED_DEADLINE or of
> trying to pursue the third solution (provided I figure out what to do on
> detaching, parent dying, and such things... :-P).
>
> Comments very very welcome!

Right, so either this, or we could make sched_deadline tasks fail
fork() ;-)

From an RT perspective such tasks shouldn't fork() anyway; fork()
involves a lot of implicit memory allocations, which are definitely not
deterministic (page reclaim etc.).

So yeah, I'm fine with this, it causes some pain, but then, you've
earned this pain by doing this in the first place.