Subject: Re: [PATCH 8/8] sched,fair: flatten hierarchical runqueues
To: Rik van Riel <riel@surriel.com>, peterz@infradead.org
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, kernel-team@fb.com,
    morten.rasmussen@arm.com, tglx@linutronix.de, dietmar.eggemann@arm.com,
    mgorman@techsingularity.com, vincent.guittot@linaro.org
References: <20190612193227.993-1-riel@surriel.com>
 <20190612193227.993-9-riel@surriel.com>
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Message-ID: <1146bbfd-ae1e-27d8-6b62-d68392d8130f@arm.com>
Date: Fri, 28 Jun 2019 12:26:26 +0200
In-Reply-To: <20190612193227.993-9-riel@surriel.com>

On 6/12/19 9:32 PM, Rik van Riel wrote:
> Flatten the hierarchical runqueues into just the per CPU rq.cfs runqueue.
>
> Iteration of the sched_entity hierarchy is rate limited to once per jiffy
> per sched_entity, which is a smaller change than it seems, because load
> average adjustments were already rate limited to once per jiffy before this
> patch series.
>
> This patch breaks CONFIG_CFS_BANDWIDTH. The plan for that is to park tasks
> from throttled cgroups onto their cgroup runqueues, and slowly (using the
> GENTLE_FAIR_SLEEPERS) wake them back up, in vruntime order, once the cgroup
> gets unthrottled, to prevent thundering herd issues.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  include/linux/sched.h |   2 +
>  kernel/sched/fair.c   | 478 +++++++++++++++++-------------------
>  kernel/sched/pelt.c   |   6 +-
>  kernel/sched/pelt.h   |   2 +-
>  kernel/sched/sched.h  |   2 +-
>  5 files changed, 194 insertions(+), 296 deletions(-)

[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c

[...]

> @@ -3491,7 +3544,7 @@ static inline bool update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  	 * track group sched_entity load average for task_h_load calc in migration
>  	 */
>  	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
> -		updated = __update_load_avg_se(now, cfs_rq, se);
> +		updated = __update_load_avg_se(now, cfs_rq, se, curr, curr);

I wonder if task migration is still working correctly.

migrate_task_rq_fair(p, ...) -> remove_entity_load_avg(&p->se) would use
cfs_rq = se->cfs_rq (i.e. root cfs_rq).

So load (and util) will not propagate through the taskgroup hierarchy.

[...]
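To spell out the shape of what I mean, here is a minimal stand-alone toy
model of that path (plain C, not the kernel's types; the struct names and
numbers are made up purely for illustration, and it only mimics the
accounting step, assuming remove_entity_load_avg() charges the removed
load to se->cfs_rq and that this now always resolves to the root cfs_rq):

    #include <stdio.h>

    /* Toy model only -- not the kernel's cfs_rq/sched_entity. */
    struct toy_cfs_rq {
            const char *name;
            long removed_load_avg;      /* load waiting to be subtracted */
    };

    static struct toy_cfs_rq root_cfs  = { "rq->cfs (root)", 0 };
    static struct toy_cfs_rq group_cfs = { "group cfs_rq",   0 };

    struct toy_se {
            long load_avg;
            struct toy_cfs_rq *cfs_rq;  /* what cfs_rq_of(se) would return */
    };

    /* Mimics the accounting step: charge the removed load to se->cfs_rq.
     * With the flattened runqueue a task's se points at the root cfs_rq,
     * so only the root ever sees the removal. */
    static void toy_remove_entity_load_avg(struct toy_se *se)
    {
            se->cfs_rq->removed_load_avg += se->load_avg;
    }

    int main(void)
    {
            /* A task from a cgroup, about to migrate to another CPU. */
            struct toy_se task_se = { .load_avg = 512, .cfs_rq = &root_cfs };

            toy_remove_entity_load_avg(&task_se);

            printf("%s: removed_load_avg = %ld\n",
                   root_cfs.name, root_cfs.removed_load_avg);
            printf("%s: removed_load_avg = %ld   <-- never sees the removal\n",
                   group_cfs.name, group_cfs.removed_load_avg);
            return 0;
    }

In other words, the group cfs_rq (and everything above it in the taskgroup
hierarchy) keeps the departed task's contribution until something else
happens to age it out.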