Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756140AbcDLFb1 (ORCPT);
	Tue, 12 Apr 2016 01:31:27 -0400
Received: from mail-ob0-f194.google.com ([209.85.214.194]:35281 "EHLO
	mail-ob0-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755962AbcDLF3g (ORCPT);
	Tue, 12 Apr 2016 01:29:36 -0400
From: "Bill Huey (hui)" <bill.huey@gmail.com>
To: Peter Zijlstra, Steven Rostedt, linux-kernel@vger.kernel.org
Cc: Dario Faggioli, Alessandro Zummo, Thomas Gleixner, KY Srinivasan,
	Amir Frenkel, Bdale Garbee
Subject: [PATCH RFC v0 10/12] Export SCHED_FIFO/RT requeuing functions
Date: Mon, 11 Apr 2016 22:29:18 -0700
Message-Id: <1460438960-32060-11-git-send-email-bill.huey@gmail.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1460438960-32060-1-git-send-email-bill.huey@gmail.com>
References: <1460438960-32060-1-git-send-email-bill.huey@gmail.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2691
Lines: 100

Add SCHED_FIFO/RT tail/head runqueue insertion support and initial
thread-death support via a hook in the scheduler class. Thread death
needs additional semantics so that an admitted task is properly
removed/discharged from the cyclic scheduler on exit.
Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 kernel/sched/rt.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c41ea7a..1d77adc 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -8,6 +8,11 @@
 #include <linux/slab.h>
 #include <linux/irq_work.h>
 
+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+extern int rt_overrun_task_admitted1(struct rq *rq, struct task_struct *p);
+#endif
+
 int sched_rr_timeslice = RR_TIMESLICE;
 
 static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
@@ -1321,8 +1326,18 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 	if (flags & ENQUEUE_WAKEUP)
 		rt_se->timeout = 0;
 
+#ifdef CONFIG_RTC_CYCLIC
+	/* if admitted and the current slot then head, otherwise tail */
+	if (rt_overrun_task_admitted1(rq, p)) {
+		if (rt_overrun_task_active(p)) {
+			flags |= ENQUEUE_HEAD;
+		}
+	}
 	enqueue_rt_entity(rt_se, flags);
+#else
+	enqueue_rt_entity(rt_se, flags & ENQUEUE_HEAD);
+#endif
 
 	if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_task(rq, p);
@@ -1367,6 +1382,18 @@ static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
 	}
 }
 
+#ifdef CONFIG_RTC_CYCLIC
+void dequeue_task_rt2(struct rq *rq, struct task_struct *p, int flags)
+{
+	dequeue_task_rt(rq, p, flags);
+}
+
+void requeue_task_rt2(struct rq *rq, struct task_struct *p, int head)
+{
+	requeue_task_rt(rq, p, head);
+}
+#endif
+
 static void yield_task_rt(struct rq *rq)
 {
 	requeue_task_rt(rq, rq->curr, 0);
@@ -2177,6 +2204,10 @@ void __init init_sched_rt_class(void)
 		zalloc_cpumask_var_node(&per_cpu(local_cpu_mask, i),
 					GFP_KERNEL, cpu_to_node(i));
 	}
+
+#ifdef CONFIG_RTC_CYCLIC
+	init_rt_overrun();
+#endif
 }
 #endif /* CONFIG_SMP */
@@ -2322,6 +2353,13 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
 	return 0;
 }
 
+#ifdef CONFIG_RTC_CYCLIC
+static void task_dead_rt(struct task_struct *p)
+{
+	rt_overrun_entry_delete(p);
+}
+#endif
+
 const struct sched_class rt_sched_class = {
 	.next			= &fair_sched_class,
 	.enqueue_task		= enqueue_task_rt,
@@ -2344,6 +2382,9 @@ const struct sched_class rt_sched_class = {
 #endif
 
 	.set_curr_task		= set_curr_task_rt,
+#ifdef CONFIG_RTC_CYCLIC
+	.task_dead		= task_dead_rt,
+#endif
 	.task_tick		= task_tick_rt,
 
 	.get_rr_interval	= get_rr_interval_rt,
-- 
2.5.0