From: Josef Bacik
To: mingo@redhat.com, peterz@infradead.org, linux-kernel@vger.kernel.org, umgwanakikbuti@gmail.com, tj@kernel.org, kernel-team@fb.com
Cc: Josef Bacik
Subject: [PATCH 2/7] sched/fair: calculate runnable_weight slightly differently
Date: Fri, 14 Jul 2017 13:20:59 +0000
Message-Id: <1500038464-8742-3-git-send-email-josef@toxicpanda.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1500038464-8742-1-git-send-email-josef@toxicpanda.com>
References: <1500038464-8742-1-git-send-email-josef@toxicpanda.com>

From: Josef Bacik

Our runnable_weight currently looks like this

	runnable_weight = shares * runnable_load_avg / load_avg

The goal is to scale the runnable weight for the group based on its
runnable to load_avg ratio.  The problem with this is that it biases us
towards tasks that never go to sleep.  Tasks that go to sleep are going
to have their runnable_load_avg decayed pretty hard, which will
drastically reduce the runnable weight of groups with interactive tasks.

To solve this imbalance, we tweak this slightly, so in the ideal case it
is still the above, but in the interactive case it is

	runnable_weight = shares * runnable_weight / load_weight

which will make the weight distribution fairer between interactive and
non-interactive groups.

Signed-off-by: Josef Bacik
---
 kernel/sched/fair.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 326bc55..5d4489e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2880,9 +2880,15 @@ static void update_cfs_group(struct sched_entity *se)
 	 * Note: we need to deal with very sporadic 'runnable > load' cases
 	 * due to numerical instability.
 	 */
-	runnable = shares * gcfs_rq->avg.runnable_load_avg;
-	if (runnable)
-		runnable /= max(gcfs_rq->avg.load_avg, gcfs_rq->avg.runnable_load_avg);
+	runnable = shares * max(scale_load_down(gcfs_rq->runnable_weight),
+				gcfs_rq->avg.runnable_load_avg);
+	if (runnable) {
+		long divider = max(gcfs_rq->avg.load_avg,
+				   scale_load_down(gcfs_rq->load.weight));
+		divider = max_t(long, 1, divider);
+		runnable /= divider;
+	}
+	runnable = clamp_t(long, runnable, MIN_SHARES, shares);
 
 	reweight_entity(cfs_rq_of(se), se, shares, runnable);
 }
-- 
2.9.3
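
[Editorial illustration, not part of the patch: a minimal userspace C sketch
mirroring the old and new arithmetic for a group whose task sleeps a lot but
has just woken up.  All numbers and the lmax()/lclamp() helpers are
hypothetical stand-ins, not kernel code.]

/*
 * Illustrative sketch only: userspace arithmetic for the old and new
 * runnable_weight formulas with made-up per-group values.
 */
#include <stdio.h>

#define MIN_SHARES 2L

static long lmax(long a, long b) { return a > b ? a : b; }
static long lclamp(long v, long lo, long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	long shares = 1024;		/* group's shares */
	long runnable_load_avg = 128;	/* decayed hard by sleeping */
	long load_avg = 900;		/* decays much more slowly */
	long runnable_weight = 1024;	/* instantaneous: task just woke up */
	long load_weight = 1024;	/* instantaneous load.weight */

	/* Old: shares * runnable_load_avg / max(load_avg, runnable_load_avg) */
	long old = shares * runnable_load_avg /
		   lmax(load_avg, runnable_load_avg);

	/*
	 * New: numerator and divider also consider the instantaneous
	 * weights, and the result is clamped to [MIN_SHARES, shares].
	 */
	long runnable = shares * lmax(runnable_weight, runnable_load_avg);
	long divider = lmax(1, lmax(load_avg, load_weight));
	long new_w = lclamp(runnable / divider, MIN_SHARES, shares);

	printf("old: %ld, new: %ld\n", old, new_w);	/* old: 145, new: 1024 */
	return 0;
}

With these made-up numbers the old formula leaves the freshly woken group
with roughly 145 of its 1024 shares, while the new one keeps the full 1024,
matching the fairness argument in the changelog above.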