From: Rik van Riel
To: linux-kernel@vger.kernel.org
Cc: kernel-team@fb.com, pjt@google.com, dietmar.eggemann@arm.com,
	peterz@infradead.org, mingo@redhat.com, morten.rasmussen@arm.com,
	tglx@linutronix.de, mgorman@techsingularity.net,
	vincent.guittot@linaro.org, Rik van Riel
Subject: [PATCH 05/10] sched,fair: remove cfs rqs from leaf_cfs_rq_list bottom up
Date: Fri, 28 Jun 2019 16:49:08 -0400
Message-Id: <20190628204913.10287-6-riel@surriel.com>
In-Reply-To: <20190628204913.10287-1-riel@surriel.com>
References: <20190628204913.10287-1-riel@surriel.com>

Reducing the overhead of the CPU controller is achieved by not walking
all the sched_entities every time a task is enqueued or dequeued.
One of the things being checked every single time is whether the cfs_rq
is on the rq->leaf_cfs_rq_list.

By only removing a cfs_rq from the list once it no longer has children
on the list, we can avoid walking the sched_entity hierarchy if the
bottom cfs_rq is on the list, once the runqueues have been flattened.

Signed-off-by: Rik van Riel
---
 kernel/sched/fair.c  | 17 +++++++++++++++++
 kernel/sched/sched.h |  1 +
 2 files changed, 18 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 63cb40253b26..e41feacc45d9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -286,6 +286,13 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	cfs_rq->on_list = 1;
 
+	/*
+	 * If the tmp_alone_branch cursor was moved, it means a child cfs_rq
+	 * is already on the list ahead of us.
+	 */
+	if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list)
+		cfs_rq->children_on_list++;
+
 	/*
 	 * Ensure we either appear before our parent (if already
 	 * enqueued) or force our parent to appear after us when it is
@@ -311,6 +318,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	 * list.
 	 */
 	rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+	cfs_rq->tg->parent->cfs_rq[cpu]->children_on_list++;
 	return true;
 }
 
@@ -359,6 +367,11 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		if (rq->tmp_alone_branch == &cfs_rq->leaf_cfs_rq_list)
 			rq->tmp_alone_branch = cfs_rq->leaf_cfs_rq_list.prev;
 
+		if (cfs_rq->tg->parent) {
+			int cpu = cpu_of(rq);
+			cfs_rq->tg->parent->cfs_rq[cpu]->children_on_list--;
+		}
+
 		list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
 		cfs_rq->on_list = 0;
 	}
@@ -7687,6 +7700,10 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 	if (cfs_rq->avg.util_sum)
 		return false;
 
+	/* Remove decayed parents once their decayed children are gone. */
+	if (cfs_rq->children_on_list)
+		return false;
+
 	return true;
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 32978a8de8ce..4f8acbab0fb2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -557,6 +557,7 @@ struct cfs_rq {
 	 * This list is used during load balance.
 	 */
 	int			on_list;
+	int			children_on_list;
 	struct list_head	leaf_cfs_rq_list;
 	struct task_group	*tg;	/* group that "owns" this runqueue */
 
-- 
2.20.1