From: Qais Yousef <qais.yousef@arm.com>
To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
Cc: linux-kernel@vger.kernel.org, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-König, Qais Yousef
Subject: [PATCH 2/7] sched: fair: move helper functions into fair.h
Date: Sun, 5 May 2019 12:57:27 +0100
Message-Id: <20190505115732.9844-3-qais.yousef@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190505115732.9844-1-qais.yousef@arm.com>
References: <20190505115732.9844-1-qais.yousef@arm.com>

Move the small cfs_rq helper functions that are inlined in fair.c into
a new fair.h header.

Later patches need a couple of these functions. Rather than exporting
only the two that are needed, it makes more sense to move the majority
of the inlined helpers into their own header, which keeps the functions
grouped together in the same file.

Always include the new header in sched.h to make the helpers accessible
to all sched subsystem files, like autogroup.h.

find_matching_se() was excluded because it is not inlined.

The two required functions are:

  - cfs_rq_of()
  - group_cfs_rq()

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/fair.c  | 195 ------------------------------------------
 kernel/sched/fair.h  | 197 +++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   1 +
 3 files changed, 198 insertions(+), 195 deletions(-)
 create mode 100644 kernel/sched/fair.h
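[Editor's note: the snippet below is an illustrative sketch, not part of
the patch. It assumes a hypothetical sched subsystem file that includes
sched.h (which now pulls in fair.h); only the helper names come from
this series.]

	/* Hypothetical user of the moved helpers. */
	#include "sched.h"

	static void inspect_entity(struct sched_entity *se)
	{
		/* runqueue this entity is (to be) queued on */
		struct cfs_rq *cfs_rq = cfs_rq_of(se);
		/* runqueue "owned" by a group entity; NULL for a task */
		struct cfs_rq *my_q = group_cfs_rq(se);

		if (!my_q) {
			/* Leaf entity, so mapping back to a task is valid. */
			struct task_struct *p = task_of(se);

			printk(KERN_DEBUG "task %s on cfs_rq %p\n",
			       p->comm, cfs_rq);
		}
	}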
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 35f3ea375084..2b4963bbeab4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -243,151 +243,7 @@ static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight
 
 const struct sched_class fair_sched_class;
 
-/**************************************************************
- * CFS operations on generic schedulable entities:
- */
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static inline struct task_struct *task_of(struct sched_entity *se)
-{
-	SCHED_WARN_ON(!entity_is_task(se));
-	return container_of(se, struct task_struct, se);
-}
-
-/* Walk up scheduling entities hierarchy */
-#define for_each_sched_entity(se) \
-		for (; se; se = se->parent)
-
-static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
-{
-	return p->se.cfs_rq;
-}
-
-/* runqueue on which this entity is (to be) queued */
-static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
-{
-	return se->cfs_rq;
-}
-
-/* runqueue "owned" by this group */
-static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
-{
-	return grp->my_q;
-}
-
-static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	struct rq *rq = rq_of(cfs_rq);
-	int cpu = cpu_of(rq);
-
-	if (cfs_rq->on_list)
-		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
-
-	cfs_rq->on_list = 1;
-
-	/*
-	 * Ensure we either appear before our parent (if already
-	 * enqueued) or force our parent to appear after us when it is
-	 * enqueued. The fact that we always enqueue bottom-up
-	 * reduces this to two cases and a special case for the root
-	 * cfs_rq. Furthermore, it also means that we will always reset
-	 * tmp_alone_branch either when the branch is connected
-	 * to a tree or when we reach the top of the tree
-	 */
-	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
-		/*
-		 * If parent is already on the list, we add the child
-		 * just before. Thanks to circular linked property of
-		 * the list, this means to put the child at the tail
-		 * of the list that starts by parent.
-		 */
-		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
-		/*
-		 * The branch is now connected to its tree so we can
-		 * reset tmp_alone_branch to the beginning of the
-		 * list.
-		 */
-		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		return true;
-	}
-
-	if (!cfs_rq->tg->parent) {
-		/*
-		 * cfs rq without parent should be put
-		 * at the tail of the list.
-		 */
-		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&rq->leaf_cfs_rq_list);
-		/*
-		 * We have reach the top of a tree so we can reset
-		 * tmp_alone_branch to the beginning of the list.
-		 */
-		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		return true;
-	}
-
-	/*
-	 * The parent has not already been added so we want to
-	 * make sure that it will be put after us.
-	 * tmp_alone_branch points to the begin of the branch
-	 * where we will add parent.
-	 */
-	list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
-	/*
-	 * update tmp_alone_branch to points to the new begin
-	 * of the branch
-	 */
-	rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
-	return false;
-}
-
-static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	if (cfs_rq->on_list) {
-		struct rq *rq = rq_of(cfs_rq);
-
-		/*
-		 * With cfs_rq being unthrottled/throttled during an enqueue,
-		 * it can happen the tmp_alone_branch points the a leaf that
-		 * we finally want to del. In this case, tmp_alone_branch moves
-		 * to the prev element but it will point to rq->leaf_cfs_rq_list
-		 * at the end of the enqueue.
-		 */
-		if (rq->tmp_alone_branch == &cfs_rq->leaf_cfs_rq_list)
-			rq->tmp_alone_branch = cfs_rq->leaf_cfs_rq_list.prev;
-
-		list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
-		cfs_rq->on_list = 0;
-	}
-}
-
-static inline void assert_list_leaf_cfs_rq(struct rq *rq)
-{
-	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
-}
-
-/* Iterate thr' all leaf cfs_rq's on a runqueue */
-#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)			\
-	list_for_each_entry_safe(cfs_rq, pos, &rq->leaf_cfs_rq_list,	\
-				 leaf_cfs_rq_list)
-
-/* Do the two (enqueued) entities belong to the same group ? */
-static inline struct cfs_rq *
-is_same_group(struct sched_entity *se, struct sched_entity *pse)
-{
-	if (se->cfs_rq == pse->cfs_rq)
-		return se->cfs_rq;
-
-	return NULL;
-}
-
-static inline struct sched_entity *parent_entity(struct sched_entity *se)
-{
-	return se->parent;
-}
-
 static void
 find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 {
@@ -419,62 +275,11 @@ find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 		*pse = parent_entity(*pse);
 	}
 }
-
 #else /* !CONFIG_FAIR_GROUP_SCHED */
-
-static inline struct task_struct *task_of(struct sched_entity *se)
-{
-	return container_of(se, struct task_struct, se);
-}
-
-#define for_each_sched_entity(se) \
-		for (; se; se = NULL)
-
-static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
-{
-	return &task_rq(p)->cfs;
-}
-
-static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
-{
-	struct task_struct *p = task_of(se);
-	struct rq *rq = task_rq(p);
-
-	return &rq->cfs;
-}
-
-/* runqueue "owned" by this group */
-static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
-{
-	return NULL;
-}
-
-static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	return true;
-}
-
-static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-}
-
-static inline void assert_list_leaf_cfs_rq(struct rq *rq)
-{
-}
-
-#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)	\
-		for (cfs_rq = &rq->cfs, pos = NULL; cfs_rq; cfs_rq = pos)
-
-static inline struct sched_entity *parent_entity(struct sched_entity *se)
-{
-	return NULL;
-}
-
 static inline void
 find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 {
 }
-
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 static __always_inline
diff --git a/kernel/sched/fair.h b/kernel/sched/fair.h
new file mode 100644
index 000000000000..04c5c8c0e477
--- /dev/null
+++ b/kernel/sched/fair.h
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * CFS operations on generic schedulable entities:
+ */
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+static inline struct task_struct *task_of(struct sched_entity *se)
+{
+	SCHED_WARN_ON(!entity_is_task(se));
+	return container_of(se, struct task_struct, se);
+}
+
+/* Walk up scheduling entities hierarchy */
+#define for_each_sched_entity(se) \
+		for (; se; se = se->parent)
+
+static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
+{
+	return p->se.cfs_rq;
+}
+
+/* runqueue on which this entity is (to be) queued */
+static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
+{
+	return se->cfs_rq;
+}
+
+/* runqueue "owned" by this group */
+static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
+{
+	return grp->my_q;
+}
+
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	struct rq *rq = rq_of(cfs_rq);
+	int cpu = cpu_of(rq);
+
+	if (cfs_rq->on_list)
+		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
+
+	cfs_rq->on_list = 1;
+
+	/*
+	 * Ensure we either appear before our parent (if already
+	 * enqueued) or force our parent to appear after us when it is
+	 * enqueued. The fact that we always enqueue bottom-up
+	 * reduces this to two cases and a special case for the root
+	 * cfs_rq. Furthermore, it also means that we will always reset
+	 * tmp_alone_branch either when the branch is connected
+	 * to a tree or when we reach the top of the tree
+	 */
+	if (cfs_rq->tg->parent &&
+	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+		/*
+		 * If parent is already on the list, we add the child
+		 * just before. Thanks to circular linked property of
+		 * the list, this means to put the child at the tail
+		 * of the list that starts by parent.
+		 */
+		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
+			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+		/*
+		 * The branch is now connected to its tree so we can
+		 * reset tmp_alone_branch to the beginning of the
+		 * list.
+		 */
+		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+		return true;
+	}
+
+	if (!cfs_rq->tg->parent) {
+		/*
+		 * cfs rq without parent should be put
+		 * at the tail of the list.
+		 */
+		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
+			&rq->leaf_cfs_rq_list);
+		/*
+		 * We have reach the top of a tree so we can reset
+		 * tmp_alone_branch to the beginning of the list.
+		 */
+		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+		return true;
+	}
+
+	/*
+	 * The parent has not already been added so we want to
+	 * make sure that it will be put after us.
+	 * tmp_alone_branch points to the begin of the branch
+	 * where we will add parent.
+	 */
+	list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
+	/*
+	 * update tmp_alone_branch to points to the new begin
+	 * of the branch
+	 */
+	rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
+	return false;
+}
+
+static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	if (cfs_rq->on_list) {
+		struct rq *rq = rq_of(cfs_rq);
+
+		/*
+		 * With cfs_rq being unthrottled/throttled during an enqueue,
+		 * it can happen the tmp_alone_branch points the a leaf that
+		 * we finally want to del. In this case, tmp_alone_branch moves
+		 * to the prev element but it will point to rq->leaf_cfs_rq_list
+		 * at the end of the enqueue.
+		 */
+		if (rq->tmp_alone_branch == &cfs_rq->leaf_cfs_rq_list)
+			rq->tmp_alone_branch = cfs_rq->leaf_cfs_rq_list.prev;
+
+		list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
+		cfs_rq->on_list = 0;
+	}
+}
+
+static inline void assert_list_leaf_cfs_rq(struct rq *rq)
+{
+	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
+}
+
+/* Iterate thr' all leaf cfs_rq's on a runqueue */
+#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)			\
+	list_for_each_entry_safe(cfs_rq, pos, &rq->leaf_cfs_rq_list,	\
+				 leaf_cfs_rq_list)
+
+/* Do the two (enqueued) entities belong to the same group ? */
+static inline struct cfs_rq *
+is_same_group(struct sched_entity *se, struct sched_entity *pse)
+{
+	if (se->cfs_rq == pse->cfs_rq)
+		return se->cfs_rq;
+
+	return NULL;
+}
+
+static inline struct sched_entity *parent_entity(struct sched_entity *se)
+{
+	return se->parent;
+}
+
+#else /* !CONFIG_FAIR_GROUP_SCHED */
+
+static inline struct task_struct *task_of(struct sched_entity *se)
+{
+	return container_of(se, struct task_struct, se);
+}
+
+#define for_each_sched_entity(se) \
+		for (; se; se = NULL)
+
+static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
+{
+	return &task_rq(p)->cfs;
+}
+
+static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
+{
+	struct task_struct *p = task_of(se);
+	struct rq *rq = task_rq(p);
+
+	return &rq->cfs;
+}
+
+/* runqueue "owned" by this group */
+static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
+{
+	return NULL;
+}
+
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	return true;
+}
+
+static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+}
+
+static inline void assert_list_leaf_cfs_rq(struct rq *rq)
+{
+}
+
+#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)	\
+		for (cfs_rq = &rq->cfs, pos = NULL; cfs_rq; cfs_rq = pos)
+
+static inline struct sched_entity *parent_entity(struct sched_entity *se)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_FAIR_GROUP_SCHED */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index efa686eeff26..509c1dba77fc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1418,6 +1418,7 @@ static inline void sched_ttwu_pending(void) { }
 
 #include "stats.h"
 #include "autogroup.h"
+#include "fair.h"
 
 #ifdef CONFIG_CGROUP_SCHED
-- 
2.17.1