From: Kirill Tkhai
To: linux-kernel@vger.kernel.org
Cc: Steven Rostedt, Ingo Molnar, Peter Zijlstra
Subject: [PATCH] sched/rt: Check if rt_se has neighbour in requeue_task_rt()
Date: Thu, 04 Jul 2013 03:02:19 +0400
Message-Id: <722151372892539@web1e.yandex.ru>

1) requeue_task_rt(): check that the entity's next and prev pointers are not
   the same element. This guarantees the entity is queued and is not the only
   element in the prio list. Return 1 if at least one rt_se from the stack was
   actually requeued.

2) Remove the on_rt_rq() check from requeue_rt_entity(), because it is useless
   now. Furthermore, it did not handle the case of a single rt_se on the queue.

3) Make task_tick_rt() prettier.

Signed-off-by: Kirill Tkhai
CC: Steven Rostedt
CC: Ingo Molnar
CC: Peter Zijlstra
---
 kernel/sched/rt.c | 49 +++++++++++++++++++++++--------------------------
 1 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 01970c8..3213503 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1135,29 +1135,37 @@ static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
  * Put task to the head or the end of the run list without the overhead of
  * dequeue followed by enqueue.
  */
-static void
+static inline void
 requeue_rt_entity(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se, int head)
 {
-        if (on_rt_rq(rt_se)) {
-                struct rt_prio_array *array = &rt_rq->active;
-                struct list_head *queue = array->queue + rt_se_prio(rt_se);
+        struct rt_prio_array *array = &rt_rq->active;
+        struct list_head *queue = array->queue + rt_se_prio(rt_se);
 
-                if (head)
-                        list_move(&rt_se->run_list, queue);
-                else
-                        list_move_tail(&rt_se->run_list, queue);
-        }
+        if (head)
+                list_move(&rt_se->run_list, queue);
+        else
+                list_move_tail(&rt_se->run_list, queue);
 }
 
-static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
+static int requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
 {
         struct sched_rt_entity *rt_se = &p->rt;
-        struct rt_rq *rt_rq;
+        int requeued = 0;
 
         for_each_sched_rt_entity(rt_se) {
-                rt_rq = rt_rq_of_se(rt_se);
-                requeue_rt_entity(rt_rq, rt_se, head);
+                /*
+                 * Requeue to the head or tail of prio queue if
+                 * rt_se is queued and it is not the only element
+                 */
+                if (rt_se->run_list.prev != rt_se->run_list.next) {
+                        struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
+
+                        requeue_rt_entity(rt_rq, rt_se, head);
+                        requeued = 1;
+                }
         }
+
+        return requeued;
 }
 
 static void yield_task_rt(struct rq *rq)
@@ -1912,8 +1920,6 @@ static void watchdog(struct rq *rq, struct task_struct *p)
 
 static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
 {
-        struct sched_rt_entity *rt_se = &p->rt;
-
         update_curr_rt(rq);
 
         watchdog(rq, p);
@@ -1930,17 +1936,8 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
 
         p->rt.time_slice = sched_rr_timeslice;
 
-        /*
-         * Requeue to the end of queue if we (and all of our ancestors) are the
-         * only element on the queue
-         */
-        for_each_sched_rt_entity(rt_se) {
-                if (rt_se->run_list.prev != rt_se->run_list.next) {
-                        requeue_task_rt(rq, p, 0);
-                        set_tsk_need_resched(p);
-                        return;
-                }
-        }
+        if (requeue_task_rt(rq, p, 0))
+                set_tsk_need_resched(p);
 }
 
 static void set_curr_task_rt(struct rq *rq)
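(Not part of the patch, just background on the check it relies on.)
rt_se->run_list is a circular, doubly linked list_head: when the entity is not
queued, prev and next both point at the node itself; when it is the only
element on its prio queue, both point at the queue head. So "prev != next"
holds exactly when the entity is queued and has at least one neighbour. Below
is a minimal user-space sketch of that property; the helpers are simplified
stand-ins for the kernel's <linux/list.h> ones, and has_neighbour() is a
hypothetical name for the open-coded test:

/* Simplified user-space model of the kernel's circular doubly-linked list. */
#include <stdio.h>

struct list_head {
        struct list_head *prev, *next;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
        h->prev = h->next = h;          /* unqueued: points at itself */
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
        new->prev = head->prev;
        new->next = head;
        head->prev->next = new;
        head->prev = new;
}

/* True only if @node is linked into a list with at least one other node. */
static int has_neighbour(struct list_head *node)
{
        return node->prev != node->next;
}

int main(void)
{
        struct list_head queue, a, b;

        INIT_LIST_HEAD(&queue);
        INIT_LIST_HEAD(&a);
        INIT_LIST_HEAD(&b);

        printf("unqueued:      %d\n", has_neighbour(&a));  /* 0 */

        list_add_tail(&a, &queue);
        printf("only element:  %d\n", has_neighbour(&a));  /* 0: prev == next == &queue */

        list_add_tail(&b, &queue);
        printf("has neighbour: %d\n", has_neighbour(&a));  /* 1 */

        return 0;
}

This is why requeue_task_rt() can skip the no-op requeue and report whether
anything actually moved, which the simplified task_tick_rt() uses to decide
whether to call set_tsk_need_resched().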