Date: Mon, 10 Jun 2019 11:59:03 +0100
From: Sudeep Holla
To: Ulf Hansson
Cc: Lorenzo Pieralisi, Mark Rutland, Linux ARM, "Rafael J. Wysocki",
	Daniel Lezcano, "Raju P. L. S. S. S. N", Amit Kucheria,
	Bjorn Andersson, Stephen Boyd, Niklas Cassel, Tony Lindgren,
	Kevin Hilman, Lina Iyer, Viresh Kumar, Vincent Guittot,
	Geert Uytterhoeven, Souvik Chakravarty, Sudeep Holla, Linux PM,
	linux-arm-msm, Linux Kernel Mailing List, Lina Iyer
Subject: Re: [PATCH 09/18] drivers: firmware: psci: Add support for PM domains using genpd
Message-ID: <20190610105903.GC26602@e107155-lin>
References: <20190513192300.653-1-ulf.hansson@linaro.org>
	<20190513192300.653-10-ulf.hansson@linaro.org>
	<20190607152751.GH15577@e107155-lin>

On Mon, Jun 10, 2019 at 12:21:41PM +0200, Ulf Hansson wrote:
> On Fri, 7 Jun 2019 at 17:27, Sudeep Holla wrote:
> >
> > On Mon, May 13, 2019 at 09:22:51PM +0200, Ulf Hansson wrote:
> > > When the hierarchical CPU topology layout is used in DT, we need to
> > > set up the corresponding PM domain data structures, to allow a CPU and
> > > a group of CPUs to be power managed accordingly. Let's enable this by
> > > deploying support through the genpd interface.
> > >
> > > Additionally, when the OS initiated mode is supported by the PSCI FW,
> > > let's also parse the domain idle states DT bindings, to make genpd
> > > responsible for the state selection when the states are compatible
> > > with "domain-idle-state". Otherwise, when only Platform Coordinated
> > > mode is supported, we rely solely on the state selection being managed
> > > through the regular cpuidle framework.
> > >
> > > If the initialization of the PM domain data structures succeeds and
> > > the OS initiated mode is supported, we try to switch to it. In case it
> > > fails, let's fall back into a degraded mode, rather than bailing out
> > > and returning an error code.
> > >
> > > Because the OS initiated mode may become enabled, we need to adjust to
> > > maintain backwards compatibility for a kernel started through a kexec
> > > call. Do this by explicitly switching to Platform Coordinated mode
> > > during boot.
> > >
> > > Finally, the actual initialization of the PM domain data structures is
> > > done by calling the new shared function, psci_dt_init_pm_domains().
> > > However, this is implemented by subsequent changes.
> > >
> > > Co-developed-by: Lina Iyer
> > > Signed-off-by: Lina Iyer
> > > Signed-off-by: Ulf Hansson
> > > ---
> > >
> > > Changes:
> > > - Simplify code setting domain_state at power off.
> > > - Use the genpd ->free_state() callback to manage freeing of states.
> > > - Fix up a bogus while loop.
> > >
> > > ---
> > >  drivers/firmware/psci/Makefile         |   2 +-
> > >  drivers/firmware/psci/psci.c           |   7 +-
> > >  drivers/firmware/psci/psci.h           |   5 +
> > >  drivers/firmware/psci/psci_pm_domain.c | 268 +++++++++++++++++++++++++
> > >  4 files changed, 280 insertions(+), 2 deletions(-)
> > >  create mode 100644 drivers/firmware/psci/psci_pm_domain.c
> >
> > [...]
> >
> > > +
> > > +static int psci_pd_parse_states(struct device_node *np,
> > > +			struct genpd_power_state **states, int *state_count)
> > > +{
> > > +	int ret;
> > > +
> > > +	/* Parse the domain idle states. */
> > > +	ret = of_genpd_parse_idle_states(np, states, state_count);
> > > +	if (ret)
> > > +		return ret;
> > > +
> >
> > Lots of things here in this file are not PSCI specific; they could be
> > moved into generic CPU PM domain support.
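
For reference (this is not part of the quoted patch): the PSCI-specific
counterpart of the generic parsing quoted above is psci_pd_parse_state_nodes(),
whose call site is quoted further down in this thread but whose body is not.
A minimal sketch of what such a helper could look like, assuming it reads the
per-state "arm,psci-suspend-param" property and stashes the value through the
genpd_power_state data pointer; the function name and error handling are
illustrative only:

#include <linux/of.h>
#include <linux/pm_domain.h>
#include <linux/slab.h>

static int example_pd_parse_state_nodes(struct genpd_power_state *states,
					int state_count)
{
	int i, ret;
	u32 *param, psci_state;

	for (i = 0; i < state_count; i++) {
		/* Each state's fwnode points at its domain-idle-state node. */
		ret = of_property_read_u32(to_of_node(states[i].fwnode),
					   "arm,psci-suspend-param",
					   &psci_state);
		if (ret)
			goto free_states;

		param = kmalloc(sizeof(*param), GFP_KERNEL);
		if (!param) {
			ret = -ENOMEM;
			goto free_states;
		}

		*param = psci_state;
		states[i].data = param;
	}

	return 0;

free_states:
	/* Undo the allocations done for the states parsed so far. */
	while (--i >= 0)
		kfree(states[i].data);
	return ret;
}

Only the suspend-parameter handling here is firmware specific; the DT walk
itself comes from of_genpd_parse_idle_states() and is not tied to PSCI.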
>
> What exactly do you mean by CPU PM domain support?
>
> The current split is based upon how the generic PM domain (genpd)
> supports CPU devices (see GENPD_FLAG_CPU_DOMAIN), which is already
> available.
>
> I agree that finding the right balance between what can be made generic
> and what should stay driver specific is not always obvious. Often it's
> better to start with having more things in the driver code and then move
> things into a common framework later on, when that turns out to make
> sense.
>

Indeed, I agree. But when reviewing this time, I thought it should be
possible to push the generic parts into the existing dt_idle_states code.
I must admit I haven't thought it through in detail; I just wanted to put
the idea out there and see. Yes, it's difficult to find the balance, but
at the same time we need good reasons to keep these bits in PSCI :)

> >
> > > +	/* Fill out the PSCI specifics for each found state. */
> > > +	ret = psci_pd_parse_state_nodes(*states, *state_count);
> > > +	if (ret)
> > > +		kfree(*states);
> > > +
> >
> > Things like the above are PSCI specific.
> >
> > I am trying to see if we can do something to achieve a partitioning
> > like we have today: psci.c has just the PSCI-specific stuff, and
> > dt_idle_states.c deals with the generic idle stuff.
>
> I am open to any suggestions. Although, I am not sure I understand your
> comment, nor the reason why you want me to change this.
>
> So, what is the problem with having the code that you refer to inside
> drivers/firmware/psci/psci_pm_domain.c? Can't we just start with that
> and see how it plays out?
>

I need to think about how to partition this well. I don't have suggestions
right away, but I need to be convinced that what we have here is the best
we can do, or come up with a better solution. I don't like it as it stands
right now.

--
Regards,
Sudeep
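
As a supplement to the GENPD_FLAG_CPU_DOMAIN reference above, here is a
minimal sketch of the generic side of the split being discussed: registering
a PM domain for a group of CPUs and letting genpd manage the domain idle
states parsed from DT. The function names and the empty power_off callback
are illustrative placeholders, not code from the patch:

#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/pm_domain.h>
#include <linux/slab.h>

static int example_cpu_pd_power_off(struct generic_pm_domain *pd)
{
	/* Firmware-specific handling (e.g. PSCI OS-initiated) would go here. */
	return 0;
}

static int example_cpu_pd_init(struct device_node *np)
{
	struct generic_pm_domain *pd;
	struct genpd_power_state *states = NULL;
	int state_count = 0;
	int ret;

	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
	if (!pd)
		return -ENOMEM;

	/* Generic: parse the domain-idle-state nodes below this domain. */
	ret = of_genpd_parse_idle_states(np, &states, &state_count);
	if (ret)
		goto out_free_pd;

	pd->name = kasprintf(GFP_KERNEL, "%pOFn", np);
	pd->states = states;
	pd->state_count = state_count;
	pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
	pd->power_off = example_cpu_pd_power_off;

	ret = pm_genpd_init(pd, NULL, false);
	if (ret)
		goto out_free_states;

	ret = of_genpd_add_provider_simple(np, pd);
	if (ret)
		goto out_remove;

	return 0;

out_remove:
	pm_genpd_remove(pd);
out_free_states:
	kfree(states);
out_free_pd:
	kfree(pd->name);
	kfree(pd);
	return ret;
}

Everything in this sketch uses existing genpd and OF helpers; the only
firmware-specific pieces a PSCI backend would need to add are the power_off
implementation and the per-state suspend parameters.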