From: "Rafael J. Wysocki"
To: Ulf Hansson
Cc: "Rafael J. Wysocki", Sudeep Holla, Lorenzo Pieralisi, Mark Rutland, Linux PM, Kevin Hilman, Lina Iyer, Lina Iyer, Rob Herring, Daniel Lezcano, Thomas Gleixner, Vincent Guittot, Stephen Boyd, Juri Lelli, Geert Uytterhoeven, Linux ARM, linux-arm-msm, Linux Kernel Mailing List
Subject: Re: [PATCH v8 04/26] PM / Domains: Add support for CPU devices to genpd
Date: Fri, 14 Sep 2018 11:26:08 +0200
Message-ID: <3110675.RLi9TDTF77@aspire.rjw.lan>
References: <20180620172226.15012-1-ulf.hansson@linaro.org>

On Friday, August 24, 2018 8:47:21 AM CEST Ulf Hansson wrote:
> On 6 August 2018 at 11:36, Rafael J. Wysocki wrote:
> > On Fri, Aug 3, 2018 at 1:43 PM, Ulf Hansson wrote:
> >> On 19 July 2018 at 12:25, Rafael J. Wysocki wrote:
> >>> On Wednesday, June 20, 2018 7:22:04 PM CEST Ulf Hansson wrote:
> >>>> To enable a device belonging to a CPU to be attached to a PM domain managed
> >>>> by genpd, let's do a few changes to genpd as to make it convenient to
> >>>> manage the specifics around CPUs.
> >>>>
> >>>> First, as to be able to quickly find out what CPUs that are attached to a
> >>>> genpd, which typically becomes useful from a genpd governor as following
> >>>> changes is about to show, let's add a cpumask 'cpus' to the struct
> >>>> generic_pm_domain.
> >>>>
> >>>> At the point when a device that belongs to a CPU, is attached/detached to
> >>>> its corresponding PM domain via genpd_add_device(), let's update the
> >>>> cpumask in genpd->cpus. Moreover, propagate the update of the cpumask to
> >>>> the master domains, which makes the genpd->cpus to contain a cpumask that
> >>>> hierarchically reflect all CPUs for a genpd, including CPUs attached to
> >>>> subdomains.
> >>>>
> >>>> Second, to unconditionally manage CPUs and the cpumask in genpd->cpus, is
> >>>> unnecessary for cases when only non-CPU devices are parts of a genpd.
> >>>> Let's avoid this by adding a new configuration bit, GENPD_FLAG_CPU_DOMAIN.
> >>>> Clients must set the bit before they call pm_genpd_init(), as to instruct
> >>>> genpd that it shall deal with CPUs and thus manage the cpumask in
> >>>> genpd->cpus.
> >>>>
> >>>> Cc: Lina Iyer
> >>>> Co-developed-by: Lina Iyer
> >>>> Signed-off-by: Ulf Hansson
> >>>> ---
> >>>>  drivers/base/power/domain.c | 69 ++++++++++++++++++++++++++++++++++++-
> >>>>  include/linux/pm_domain.h   |  3 ++
> >>>>  2 files changed, 71 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> >>>> index 21d298e1820b..6149ce0bfa7b 100644
> >>>> --- a/drivers/base/power/domain.c
> >>>> +++ b/drivers/base/power/domain.c
> >>>> @@ -20,6 +20,7 @@
> >>>>  #include
> >>>>  #include
> >>>>  #include
> >>>> +#include
> >>>>
> >>>>  #include "power.h"
> >>>>
> >>>> @@ -126,6 +127,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
> >>>>  #define genpd_is_irq_safe(genpd)     (genpd->flags & GENPD_FLAG_IRQ_SAFE)
> >>>>  #define genpd_is_always_on(genpd)    (genpd->flags & GENPD_FLAG_ALWAYS_ON)
> >>>>  #define genpd_is_active_wakeup(genpd) (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
> >>>> +#define genpd_is_cpu_domain(genpd)   (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
> >>>>
> >>>>  static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
> >>>>                 const struct generic_pm_domain *genpd)
> >>>> @@ -1377,6 +1379,62 @@ static void genpd_free_dev_data(struct device *dev,
> >>>>         dev_pm_put_subsys_data(dev);
> >>>>  }
> >>>>
> >>>> +static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
> >>>> +                                  int cpu, bool set, unsigned int depth)
> >>>> +{
> >>>> +       struct gpd_link *link;
> >>>> +
> >>>> +       if (!genpd_is_cpu_domain(genpd))
> >>>> +               return;
> >>>> +
> >>>> +       list_for_each_entry(link, &genpd->slave_links, slave_node) {
> >>>> +               struct generic_pm_domain *master = link->master;
> >>>> +
> >>>> +               genpd_lock_nested(master, depth + 1);
> >>>> +               __genpd_update_cpumask(master, cpu, set, depth + 1);
> >>>> +               genpd_unlock(master);
> >>>> +       }
> >>>> +
> >>>> +       if (set)
> >>>> +               cpumask_set_cpu(cpu, genpd->cpus);
> >>>> +       else
> >>>> +               cpumask_clear_cpu(cpu, genpd->cpus);
> >>>> +}
> >>>
> >>> As noted elsewhere, there is a concern about the possible weight of this
> >>> cpumask and I think that it would be good to explicitly put a limit on it.
> >>
> >> I have been digesting your comments on the series, but wonder if this
> >> is still a relevant concern?
> >
> > Well, there are systems with very large cpumasks and it is sort of
> > good to have that in mind when designing any code using them.
>
> Right.
>
> So, if I avoid allocating the cpumask for those genpd structures that
> doesn't need it (those not having GENPD_FLAG_CPU_DOMAIN set), would
> that be sufficient to deal with your concern?

Yes, it would, if I understand you correctly.

Thanks,
Rafael