Group, Ingo Molnar, etc,

Why does the rt sched_class contain fewer elements than fair?
Missing is the RT equivalent of .task_new.
.task_new is invoked on fork: do_fork() in kernel/fork.c calls
wake_up_new_task() via

	if (!(clone_flags & CLONE_STOPPED))
		wake_up_new_task(p, clone_flags);

wake_up_new_task() is in kernel/sched.c; it tests

	if (!p->sched_class->task_new ||

and, when the hook is present, calls

	p->sched_class->task_new(rq, p, now);
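When .task_new is NULL, the generic path simply activates the child.
Roughly (a paraphrase of the 2.6.23-era wake_up_new_task() logic with
the extra conditions elided, not a verbatim quote):

	/* sketch of the branch in kernel/sched.c, wake_up_new_task() */
	if (!p->sched_class->task_new /* || ...other conditions... */) {
		/* no class-specific setup: enqueue the child normally */
		activate_task(rq, p, 0);
	} else {
		/* let the class do its new-task startup management */
		p->sched_class->task_new(rq, p, now);
	}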
sched_rt.c:

static struct sched_class rt_sched_class __read_mostly = {
	.enqueue_task = enqueue_task_rt,
	.dequeue_task = dequeue_task_rt,
	.yield_task = yield_task_rt,

	.check_preempt_curr = check_preempt_curr_rt,

	.pick_next_task = pick_next_task_rt,
	.put_prev_task = put_prev_task_rt,

	.load_balance = load_balance_rt,

	.task_tick = task_tick_rt,
};
sched_fair.c:

struct sched_class fair_sched_class __read_mostly = {
	.enqueue_task = enqueue_task_fair,
	.dequeue_task = dequeue_task_fair,
	.yield_task = yield_task_fair,

	.check_preempt_curr = check_preempt_curr_fair,

	.pick_next_task = pick_next_task_fair,
	.put_prev_task = put_prev_task_fair,

	.load_balance = load_balance_fair,

	.set_curr_task = set_curr_task_fair,
	.task_tick = task_tick_fair,
	.task_new = task_new_fair,
};
The fair hook with no rt counterpart is:

/*
 * Share the fairness runtime between parent and child, thus the
 * total amount of pressure for CPU stays equal - new tasks
 * get a chance to run but frequent forkers are not allowed to
 * monopolize the CPU. Note: the parent runqueue is locked,
 * the child is not running yet.
 */
static void task_new_fair(struct rq *rq, struct task_struct *p)
{
Mitchell Erblich
On Tue, 2007-08-14 at 12:28 -0700, Mitchell Erblich wrote:
> Group, Ingo Molnar, etc,
>
> Why does the rt sched_class contain fewer elements than fair?
> Missing is the RT equivalent of .task_new.
No class-specific initialization needs to be done for RT tasks.
-Mike
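(For comparison: an RT task carries no fairness state to set up at
fork. From memory of the 2.6.23-era sched_rt.c, and modulo details,
enqueue is just a priority-array insertion:

	static void enqueue_task_rt(struct rq *rq, struct task_struct *p, int wakeup)
	{
		struct rt_prio_array *array = &rq->rt.active;

		/* tail-insert at the task's fixed priority and flag that
		 * priority in the bitmap; no runtime bookkeeping needed */
		list_add_tail(&p->run_list, array->queue + p->prio);
		__set_bit(p->prio, array->bitmap);
	}

so the generic activate_task() fallback in wake_up_new_task() covers a
forked RT child.)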