From: Xianting Tian <tian.xianting@h3c.com>
Subject: [PATCH] sched/fair: Remove the force parameter of update_tg_load_avg()
Date: Thu, 24 Sep 2020 09:47:55 +0800
Message-ID: <20200924014755.36253-1-tian.xianting@h3c.com>
X-Mailing-List: linux-kernel@vger.kernel.org

In kernel/sched/fair.c, some call sites use update_tg_load_avg(cfs_rq, 0)
while others use update_tg_load_avg(cfs_rq, false). update_tg_load_avg()
takes a force parameter, but no caller ever passes 1 or true, so remove
the parameter and drop the dead force check.

Signed-off-by: Xianting Tian <tian.xianting@h3c.com>
---
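A note for reviewers, not part of the commit message: update_tg_load_avg()
already rate-limits propagation on its own -- it folds a cfs_rq's delta
into the shared tg->load_avg only when the delta exceeds 1/64 (~1.5%) of
the last published contribution, and since no caller ever forces an
update, that filter is the only path taken. Below is a minimal standalone
sketch of the filter with hypothetical values; it is plain userspace C
for illustration, not kernel code, and the names are stand-ins for
tg->load_avg and cfs_rq->tg_load_avg_contrib:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the shared per-task-group sum tg->load_avg. */
    static long tg_load_avg;

    static void sketch_update_tg_load_avg(long *contrib, long load_avg)
    {
            long delta = load_avg - *contrib;

            /* Fold the delta into the shared sum only when it exceeds
             * 1/64 (~1.5%) of the last published contribution. */
            if (labs(delta) > *contrib / 64) {
                    tg_load_avg += delta;
                    *contrib = load_avg;
            }
    }

    int main(void)
    {
            long contrib = 1024;

            sketch_update_tg_load_avg(&contrib, 1030); /* delta 6  <= 16: skipped */
            sketch_update_tg_load_avg(&contrib, 1100); /* delta 76 >  16: folded  */
            printf("tg_load_avg=%ld contrib=%ld\n", tg_load_avg, contrib);
            return 0;
    }
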
 kernel/sched/fair.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a0536..7056fa97f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -831,7 +831,7 @@ void init_entity_runnable_average(struct sched_entity *se)
 void post_init_entity_util_avg(struct task_struct *p)
 {
 }
-static void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
+static void update_tg_load_avg(struct cfs_rq *cfs_rq)
 {
 }
 #endif /* CONFIG_SMP */
@@ -3288,7 +3288,6 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
 /**
  * update_tg_load_avg - update the tg's load avg
  * @cfs_rq: the cfs_rq whose avg changed
- * @force: update regardless of how small the difference
  *
  * This function 'ensures': tg->load_avg := \Sum tg->cfs_rq[]->avg.load.
  * However, because tg->load_avg is a global value there are performance
@@ -3300,7 +3299,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
  *
  * Updating tg's load_avg is necessary before update_cfs_share().
  */
-static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
+static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
 {
 	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
 
@@ -3310,7 +3309,7 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 	if (cfs_rq->tg == &root_task_group)
 		return;
 
-	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
+	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
 		atomic_long_add(delta, &cfs_rq->tg->load_avg);
 		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
 	}
@@ -3612,7 +3611,7 @@ static inline bool skip_blocked_update(struct sched_entity *se)
 
 #else /* CONFIG_FAIR_GROUP_SCHED */
 
-static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force) {}
+static inline void update_tg_load_avg(struct cfs_rq *cfs_rq) {}
 
 static inline int propagate_entity_load_avg(struct sched_entity *se)
 {
@@ -3800,13 +3799,13 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 		 *
 		 * IOW we're enqueueing a task on a new CPU.
 		 */
 		attach_entity_load_avg(cfs_rq, se);
-		update_tg_load_avg(cfs_rq, 0);
+		update_tg_load_avg(cfs_rq);
 
 	} else if (decayed) {
 		cfs_rq_util_change(cfs_rq, 0);
 
 		if (flags & UPDATE_TG)
-			update_tg_load_avg(cfs_rq, 0);
+			update_tg_load_avg(cfs_rq);
 	}
 }
 
@@ -7887,7 +7886,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		struct sched_entity *se;
 
 		if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
-			update_tg_load_avg(cfs_rq, 0);
+			update_tg_load_avg(cfs_rq);
 
 			if (cfs_rq == &rq->cfs)
 				decayed = true;
@@ -10786,7 +10785,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
 	/* Catch up with the cfs_rq and remove our load when we leave */
 	update_load_avg(cfs_rq, se, 0);
 	detach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq, false);
+	update_tg_load_avg(cfs_rq);
 	propagate_entity_cfs_rq(se);
 }
 
@@ -10805,7 +10804,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 	/* Synchronize entity with its cfs_rq */
 	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq, false);
+	update_tg_load_avg(cfs_rq);
 	propagate_entity_cfs_rq(se);
 }
-- 
2.17.1