Subject: Re: [PATCH 5/8] sched,cfs: use explicit cfs_rq of parent se helper
To: Rik van Riel <riel@surriel.com>, peterz@infradead.org
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, kernel-team@fb.com,
    morten.rasmussen@arm.com, tglx@linutronix.de, dietmar.eggemann@arm.com,
    mgorman@techsingularity.net, vincent.guittot@linaro.org
References: <20190612193227.993-1-riel@surriel.com>
    <20190612193227.993-6-riel@surriel.com>
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Message-ID: <55c0dc01-24b2-bb7f-6ceb-b65c52f7b46b@arm.com>
Date: Thu, 20 Jun 2019 18:23:48 +0200
In-Reply-To: <20190612193227.993-6-riel@surriel.com>

On 6/12/19 9:32 PM, Rik van Riel wrote:
> Use an explicit "cfs_rq of parent sched_entity" helper in a few
> strategic places, where cfs_rq_of(se) may no longer point at the
> right runqueue once we flatten the hierarchical cgroup runqueues.
>
> No functional change.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  kernel/sched/fair.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index dcc521d251e3..c6ede2ecc935 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -275,6 +275,15 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
>  	return grp->my_q;
>  }
>
> +/* runqueue owned by the parent entity */
> +static inline struct cfs_rq *group_cfs_rq_of_parent(struct sched_entity *se)
> +{
> +	if (se->parent)
> +		return group_cfs_rq(se->parent);
> +
> +	return &cfs_rq_of(se)->rq->cfs;
> +}
> +
>  static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
>  {
>  	struct rq *rq = rq_of(cfs_rq);
> @@ -3298,7 +3307,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
>
>  	gcfs_rq->propagate = 0;
>
> -	cfs_rq = cfs_rq_of(se);
> +	cfs_rq = group_cfs_rq_of_parent(se);
>
>  	add_tg_cfs_propagate(cfs_rq, gcfs_rq->prop_runnable_sum);
>
> @@ -7779,7 +7788,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
>
>  	WRITE_ONCE(cfs_rq->h_load_next, NULL);
>  	for_each_sched_entity(se) {
> -		cfs_rq = cfs_rq_of(se);
> +		cfs_rq = group_cfs_rq_of_parent(se);

Why do you change this here?

task_se_h_load() calls update_cfs_rq_h_load() with cfs_rq =
group_cfs_rq_of_parent(se) because the task might not be on the
cfs_rq yet.

But inside update_cfs_rq_h_load() the first se is derived from
cfs_rq->tg->se[cpu_of(rq)], so in the first for_each_sched_entity()
iteration we should still start with group_cfs_rq() (se->my_q)?

The system doesn't barf with these two WARN_ON()s in:

@@ -7663,12 +7673,17 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 	unsigned long now = jiffies;
 	unsigned long load;
 
+	WARN_ON(se && (se != group_cfs_rq(se)->tg->se[cpu_of(rq)]));
+
 	if (cfs_rq->last_h_load_update == now)
 		return;
 
 	WRITE_ONCE(cfs_rq->h_load_next, NULL);
 	for_each_sched_entity(se) {
 		cfs_rq = group_cfs_rq_of_parent(se);
+
+		WARN_ON(se != group_cfs_rq(se)->tg->se[cpu_of(rq)]);
+
 		WRITE_ONCE(cfs_rq->h_load_next, se);
 		if (cfs_rq->last_h_load_update == now)
 			break;

[...]
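
For illustration, here is a minimal userspace model of the three
accessors being discussed (a sketch, not kernel code: the struct
layouts are pared-down stand-ins, and the root runqueue is passed as
an explicit parameter rather than reached via &cfs_rq_of(se)->rq->cfs):

#include <stdio.h>

struct cfs_rq { const char *name; };

struct sched_entity {
	struct sched_entity *parent;	/* NULL for a top-level entity */
	struct cfs_rq *cfs_rq;		/* queue this entity is enqueued on */
	struct cfs_rq *my_q;		/* queue this entity owns (groups only) */
};

/* stand-in for the kernel's cfs_rq_of(se) */
static struct cfs_rq *cfs_rq_of(struct sched_entity *se)
{
	return se->cfs_rq;
}

/* stand-in for the kernel's group_cfs_rq(grp), i.e. grp->my_q */
static struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
{
	return grp->my_q;
}

/*
 * The helper from the patch; the root case takes the root runqueue
 * as a parameter here instead of &cfs_rq_of(se)->rq->cfs.
 */
static struct cfs_rq *group_cfs_rq_of_parent(struct sched_entity *se,
					     struct cfs_rq *root)
{
	if (se->parent)
		return group_cfs_rq(se->parent);

	return root;
}

int main(void)
{
	struct cfs_rq root = { "root cfs_rq" };
	struct cfs_rq tg_q = { "tg cfs_rq" };

	/* group entity: enqueued on the root queue, owns the group queue */
	struct sched_entity gse = { NULL, &root, &tg_q };
	/* task entity: enqueued on the group queue, owns nothing */
	struct sched_entity tse = { &gse, &tg_q, NULL };

	/* group se, i.e. what cfs_rq->tg->se[cpu] hands the loop */
	printf("gse: cfs_rq_of=%s group_cfs_rq=%s group_cfs_rq_of_parent=%s\n",
	       cfs_rq_of(&gse)->name, group_cfs_rq(&gse)->name,
	       group_cfs_rq_of_parent(&gse, &root)->name);

	/* task se */
	printf("tse: cfs_rq_of=%s group_cfs_rq_of_parent=%s\n",
	       cfs_rq_of(&tse)->name,
	       group_cfs_rq_of_parent(&tse, &root)->name);

	return 0;
}

In the unflattened hierarchy a queued entity's cfs_rq always equals
its parent's my_q, so cfs_rq_of(se) and group_cfs_rq_of_parent(se)
return the same queue (the "No functional change" claim), while
group_cfs_rq(se) is the queue the entity itself owns, which is the
distinction behind the question about the first loop iteration above.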