From: luca abeni
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Claudio Scordino,
	Steven Rostedt, Tommaso Cucinotta, Daniel Bristot de Oliveira,
	Joel Fernandes, Mathieu Poirier, Luca Abeni
Subject: [RFC v5 5/9] sched/deadline: do not reclaim the whole CPU bandwidth
Date: Fri, 24 Mar 2017 04:52:58 +0100
Message-Id: <1490327582-4376-6-git-send-email-luca.abeni@santannapisa.it>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1490327582-4376-1-git-send-email-luca.abeni@santannapisa.it>
References: <1490327582-4376-1-git-send-email-luca.abeni@santannapisa.it>

From: Luca Abeni

Original GRUB tends to reclaim 100% of the CPU time... And this
allows a CPU hog to starve non-deadline tasks.
To address this issue, allow the scheduler to reclaim only a
specified fraction of CPU time.

Signed-off-by: Luca Abeni
Tested-by: Daniel Bristot de Oliveira
---
 kernel/sched/core.c     | 6 ++++++
 kernel/sched/deadline.c | 7 ++++++-
 kernel/sched/sched.h    | 6 ++++++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 20c62e7..efa88eb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6716,6 +6716,12 @@ static void sched_dl_do_global(void)
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 
 		rcu_read_unlock_sched();
+		if (dl_b->bw == -1)
+			cpu_rq(cpu)->dl.deadline_bw_inv = 1 << 8;
+		else
+			cpu_rq(cpu)->dl.deadline_bw_inv =
+				to_ratio(global_rt_runtime(),
+					 global_rt_period()) >> 12;
 	}
 }
 
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 6035311..e964051 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -212,6 +212,11 @@ void init_dl_rq(struct dl_rq *dl_rq)
 #else
 	init_dl_bw(&dl_rq->dl_bw);
 #endif
+	if (global_rt_runtime() == RUNTIME_INF)
+		dl_rq->deadline_bw_inv = 1 << 8;
+	else
+		dl_rq->deadline_bw_inv =
+			to_ratio(global_rt_runtime(), global_rt_period()) >> 12;
 }
 
 #ifdef CONFIG_SMP
@@ -871,7 +876,7 @@ extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
  */
 u64 grub_reclaim(u64 delta, struct rq *rq)
 {
-	return (delta * rq->dl.running_bw) >> 20;
+	return (delta * rq->dl.running_bw * rq->dl.deadline_bw_inv) >> 20 >> 8;
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 57bb79b..141549b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -565,6 +565,12 @@ struct dl_rq {
 	 * task blocks
 	 */
 	u64 running_bw;
+
+	/*
+	 * Inverse of the fraction of CPU utilization that can be reclaimed
+	 * by the GRUB algorithm.
+	 */
+	u64 deadline_bw_inv;
 };
 
 #ifdef CONFIG_SMP
-- 
2.7.4
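
For readers unfamiliar with the fixed-point conventions, here is a rough
stand-alone sketch of the arithmetic the patch introduces (illustrative
user-space code, not the kernel implementation; bw_inv() and reclaim() are
made-up helpers standing in for to_ratio() and grub_reclaim()): running_bw
is a Q20 (<< 20) fixed-point utilization, deadline_bw_inv is a Q8 (<< 8)
fixed-point inverse of the maximum reclaimable bandwidth, so the patched
grub_reclaim() charges roughly delta * Uact / Umax instead of delta * Uact.

#include <stdint.h>
#include <stdio.h>

/*
 * Q8 inverse of the maximum reclaimable utilization: (period / runtime) << 8.
 * Mirrors "to_ratio(global_rt_runtime(), global_rt_period()) >> 12" in the
 * patch, where to_ratio() returns the ratio in Q20 fixed point.
 */
static uint64_t bw_inv(uint64_t runtime, uint64_t period)
{
	return ((period << 20) / runtime) >> 12;
}

/* Same arithmetic as the patched grub_reclaim(): delta * Uact / Umax. */
static uint64_t reclaim(uint64_t delta, uint64_t running_bw, uint64_t inv)
{
	return (delta * running_bw * inv) >> 20 >> 8;
}

int main(void)
{
	/* Default rt bandwidth: 950000us runtime every 1000000us, Umax = 0.95. */
	uint64_t inv = bw_inv(950000, 1000000);
	/* One active deadline task with utilization 0.25, in Q20 fixed point. */
	uint64_t running_bw = (1ULL << 20) / 4;

	/* 1000000ns of elapsed time is charged as about 1000000 * 0.25 / 0.95 ns. */
	printf("charged: %llu ns\n",
	       (unsigned long long)reclaim(1000000, running_bw, inv));
	return 0;
}

Keeping only 8 fractional bits for deadline_bw_inv (rather than the 20 used
for utilizations) presumably also helps keep the three-way product in
grub_reclaim() from overflowing 64 bits for realistic delta values.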