Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751352AbdHCPNm (ORCPT );
	Thu, 3 Aug 2017 11:13:42 -0400
Received: from mail-qk0-f196.google.com ([209.85.220.196]:34873 "EHLO
	mail-qk0-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751133AbdHCPNl (ORCPT );
	Thu, 3 Aug 2017 11:13:41 -0400
From: josef@toxicpanda.com
X-Google-Original-From: jbacik@fb.com
To: riel@redhat.com, kernel-team@fb.com, mingo@redhat.com,
	peterz@infradead.org, linux-kernel@vger.kernel.org, tj@kernel.org
Cc: Josef Bacik <jbacik@fb.com>
Subject: [PATCH 1/2] sched/fair: use reweight_entity to reweight tasks
Date: Thu, 3 Aug 2017 11:13:38 -0400
Message-Id: <1501773219-18774-1-git-send-email-jbacik@fb.com>
X-Mailer: git-send-email 2.7.4
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1978
Lines: 63

From: Josef Bacik <jbacik@fb.com>

reweight_task() only accounts for the load average change in the cfs_rq,
but doesn't account for the runnable_average change.  We need to do
everything reweight_entity() does, and then just set the inv_weight
appropriately.

Signed-off-by: Josef Bacik <jbacik@fb.com>
---
 kernel/sched/fair.c | 31 +++++++++++--------------------
 1 file changed, 11 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0cff1b6..c336534 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2809,26 +2809,6 @@ __sub_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	sub_positive(&cfs_rq->avg.load_sum, se_weight(se) * se->avg.load_sum);
 }
 
-void reweight_task(struct task_struct *p, int prio)
-{
-	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	struct load_weight *load = &p->se.load;
-
-	u32 divider = LOAD_AVG_MAX - 1024 + se->avg.period_contrib;
-
-	__sub_load_avg(cfs_rq, se);
-
-	load->weight = scale_load(sched_prio_to_weight[prio]);
-	load->inv_weight = sched_prio_to_wmult[prio];
-
-	se->avg.load_avg = div_u64(se_weight(se) * se->avg.load_sum, divider);
-	se->avg.runnable_load_avg =
-		div_u64(se_runnable(se) * se->avg.runnable_load_sum, divider);
-
-	__add_load_avg(cfs_rq, se);
-}
-
 static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 			    unsigned long weight, unsigned long runnable)
 {
@@ -2858,6 +2838,17 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 	}
 }
 
+void reweight_task(struct task_struct *p, int prio)
+{
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	struct load_weight *load = &se->load;
+	unsigned long weight = scale_load(sched_prio_to_weight[prio]);
+
+	reweight_entity(cfs_rq, se, weight, weight);
+	load->inv_weight = sched_prio_to_wmult[prio];
+}
+
 static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
 
 /*
-- 
2.7.4
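
For reference, a minimal userspace sketch of the bookkeeping issue the changelog
describes: the runqueue keeps both a load sum and a runnable-load sum of its
entities' weighted contributions, so a reweight path that only detaches and
re-attaches the load contribution leaves the runnable sum holding the stale
weight.  Handling both sums in one helper, as reweight_entity() does in the
patch, keeps the aggregates consistent.  All names below are made up for
illustration; this is a toy model, not kernel code.

#include <stdio.h>

struct toy_entity {
	unsigned long weight;
	unsigned long load_avg;	     /* weight-free load average */
	unsigned long runnable_avg;  /* weight-free runnable average */
};

struct toy_rq {
	unsigned long load_sum;      /* sum of weight * load_avg */
	unsigned long runnable_sum;  /* sum of weight * runnable_avg */
};

static void toy_reweight(struct toy_rq *rq, struct toy_entity *se,
			 unsigned long new_weight)
{
	/* detach the old weighted contributions from both aggregates */
	rq->load_sum     -= se->weight * se->load_avg;
	rq->runnable_sum -= se->weight * se->runnable_avg;

	se->weight = new_weight;

	/* re-attach with the new weight */
	rq->load_sum     += se->weight * se->load_avg;
	rq->runnable_sum += se->weight * se->runnable_avg;
}

int main(void)
{
	struct toy_entity se = { .weight = 1024, .load_avg = 1, .runnable_avg = 1 };
	struct toy_rq rq = { .load_sum = 1024, .runnable_sum = 1024 };

	toy_reweight(&rq, &se, 2048);	/* e.g. the task was reniced */
	printf("load_sum=%lu runnable_sum=%lu\n", rq.load_sum, rq.runnable_sum);
	return 0;
}

If only rq->load_sum were updated above, rq->runnable_sum would still reflect
the old weight after the reweight, which is the inconsistency the patch removes.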