Subject: Re: [PATCH 5/8] sched,cfs: use explicit cfs_rq of parent se helper
To: Rik van Riel, peterz@infradead.org
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, kernel-team@fb.com,
    morten.rasmussen@arm.com, tglx@linutronix.de, dietmar.eggeman@arm.com,
    mgorman@techsingularity.com, vincent.guittot@linaro.org
References: <20190612193227.993-1-riel@surriel.com>
 <20190612193227.993-6-riel@surriel.com>
 <55c0dc01-24b2-bb7f-6ceb-b65c52f7b46b@arm.com>
From: Dietmar Eggemann
Message-ID: <1f90005d-1130-399b-8642-9e1ad7089ab3@arm.com>
Date: Mon, 24 Jun 2019 13:24:10 +0200
On 6/20/19 6:29 PM, Rik van Riel wrote:
> On Thu, 2019-06-20 at 18:23 +0200, Dietmar Eggemann wrote:
>> On 6/12/19 9:32 PM, Rik van Riel wrote:

[...]

>>> @@ -7779,7 +7788,7 @@ static void update_cfs_rq_h_load(struct
>>> cfs_rq *cfs_rq)
>>>
>>>  	WRITE_ONCE(cfs_rq->h_load_next, NULL);
>>>  	for_each_sched_entity(se) {
>>> -		cfs_rq = cfs_rq_of(se);
>>> +		cfs_rq = group_cfs_rq_of_parent(se);
>>
>> Why do you change this here? task_se_h_load() calls
>> update_cfs_rq_h_load() with cfs_rq = group_cfs_rq_of_parent(se) because
>> the task might not be on the cfs_rq yet.
>
> Because patch 6 points cfs_rq_of(se) at the CPU's top level
> cfs_rq for every task se ...
>
> ... but I guess since I have not changed where the cfs_rq_of
> points for cgroup sched_entities, this change is not necessary
> at this time, and I should be able to go without it, in this
> function.

IMHO, since you only change set_task_rq() (p->se.cfs_rq =
&cpu_rq(cpu)->cfs instead of tg->cfs_rq[cpu] in 8/8) (used for a task)
and not init_tg_cfs_entry() (used for a taskgroup), 'cfs_rq_of(se) ==
se->parent->my_q' should still be true in update_cfs_rq_h_load().

update_cfs_rq_h_load() only deals with se's representing taskgroups. So
cfs_rq_of(se) and group_cfs_rq_of_parent(se) should deliver the same
result for these se's.

>> But inside update_cfs_rq_h_load() the first se is derived from
>> cfs_rq->tg->se[cpu_of(rq)] so in the first for_each_sched_entity() loop
>> we should still start with group_cfs_rq() (se->my_q) ?

Here I was wrong. The first loop did use cfs_rq_of() and not
group_cfs_rq().

[...]
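For illustration, here is a small self-contained userspace sketch of the
invariant argued above. The struct fields and helper names mirror
kernel/sched, but the bodies are assumptions based on this series, not the
actual kernel code; in particular group_cfs_rq_of_parent() presumably
derives the root cfs_rq from the se's runqueue rather than taking it as a
parameter as the mock does:

/*
 * Userspace mock (not kernel code): with init_tg_cfs_entry()-style
 * linking (se->cfs_rq = parent->my_q), cfs_rq_of(se) and a helper like
 * group_cfs_rq_of_parent(se) resolve to the same cfs_rq for group se's.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct cfs_rq { const char *name; };

struct sched_entity {
	struct cfs_rq *cfs_rq;		/* cfs_rq this se is queued on */
	struct cfs_rq *my_q;		/* cfs_rq "owned" by this group se */
	struct sched_entity *parent;
};

static struct cfs_rq *cfs_rq_of(struct sched_entity *se)
{
	return se->cfs_rq;
}

static struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
{
	return grp->my_q;
}

/* assumed shape of the helper from patch 5/8, root passed in for the mock */
static struct cfs_rq *group_cfs_rq_of_parent(struct sched_entity *se,
					     struct cfs_rq *root)
{
	return se->parent ? group_cfs_rq(se->parent) : root;
}

int main(void)
{
	struct cfs_rq root = { "rq->cfs" }, tg_a = { "tg_a->cfs_rq" };
	/* top-level group se of taskgroup A, queued on the root cfs_rq */
	struct sched_entity se_a = { &root, &tg_a, NULL };
	/* child group se, linked as init_tg_cfs_entry() would link it */
	struct sched_entity se_b = { &tg_a, NULL, &se_a };

	/* for group se's the two lookups agree */
	assert(cfs_rq_of(&se_a) == group_cfs_rq_of_parent(&se_a, &root));
	assert(cfs_rq_of(&se_b) == group_cfs_rq_of_parent(&se_b, &root));
	printf("both helpers resolve to the same cfs_rq for group se's\n");
	return 0;
}

Both asserts hold, matching the point that, as long as init_tg_cfs_entry()
is unchanged, swapping cfs_rq_of() for group_cfs_rq_of_parent() in
update_cfs_rq_h_load() should make no functional difference.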