Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932466AbcDAPSg (ORCPT ); Fri, 1 Apr 2016 11:18:36 -0400
Received: from mail-wm0-f42.google.com ([74.125.82.42]:34174 "EHLO mail-wm0-f42.google.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S932388AbcDAPS2 (ORCPT ); Fri, 1 Apr 2016 11:18:28 -0400
From: Luca Abeni 
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra , Ingo Molnar , Juri Lelli , Luca Abeni 
Subject: [RFC v2 7/7] Do not reclaim the whole CPU bandwidth
Date: Fri, 1 Apr 2016 17:12:33 +0200
Message-Id: <1459523553-29089-8-git-send-email-luca.abeni@unitn.it>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1459523553-29089-1-git-send-email-luca.abeni@unitn.it>
References: <1459523553-29089-1-git-send-email-luca.abeni@unitn.it>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1922
Lines: 71

Original GRUB tends to reclaim 100% of the CPU time, which allows a
CPU hog to starve non-deadline tasks. To address this issue, allow the
scheduler to reclaim only a specified fraction of the CPU time.
Signed-off-by: Luca Abeni 
---
 kernel/sched/core.c     | 4 ++++
 kernel/sched/deadline.c | 7 ++++++-
 kernel/sched/sched.h    | 6 ++++++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3224132..b22fe83 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7941,6 +7941,10 @@ static void sched_dl_do_global(void)
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 		rcu_read_unlock_sched();
+		if (dl_b->bw == -1)
+			cpu_rq(cpu)->dl.non_deadline_bw = 0;
+		else
+			cpu_rq(cpu)->dl.non_deadline_bw = (1 << 20) - new_bw;
 	}
 }

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b56f76f..b6ecec2 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -154,6 +154,11 @@ void init_dl_rq(struct dl_rq *dl_rq)
 #else
 	init_dl_bw(&dl_rq->dl_bw);
 #endif
+	if (global_rt_runtime() == RUNTIME_INF)
+		dl_rq->non_deadline_bw = 0;
+	else
+		dl_rq->non_deadline_bw = (1 << 20) -
+			to_ratio(global_rt_period(), global_rt_runtime());
 }

 #ifdef CONFIG_SMP
@@ -792,7 +797,7 @@ extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
  */
 u64 grub_reclaim(u64 delta, struct rq *rq)
 {
-	return (delta * rq->dl.running_bw) >> 20;
+	return (delta * (rq->dl.non_deadline_bw + rq->dl.running_bw)) >> 20;
 }

 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 22d36b2..9fb3413 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -526,6 +526,12 @@ struct dl_rq {
 	 * task blocks
 	 */
 	s64 running_bw;
+
+	/*
+	 * Fraction of the CPU utilization that cannot be reclaimed
+	 * by the GRUB algorithm.
+	 */
+	s64 non_deadline_bw;
 };

 #ifdef CONFIG_SMP
-- 
2.5.0