From: "Rafael J. Wysocki"
Date: Mon, 6 Aug 2018 11:36:02 +0200
Subject: Re: [PATCH v8 04/26] PM / Domains: Add support for CPU devices to genpd
To: Ulf Hansson
Cc: "Rafael J. Wysocki", Sudeep Holla, Lorenzo Pieralisi, Mark Rutland,
    Linux PM, Kevin Hilman, Lina Iyer, Lina Iyer, Rob Herring,
    Daniel Lezcano, Thomas Gleixner, Vincent Guittot, Stephen Boyd,
    Juri Lelli, Geert Uytterhoeven, Linux ARM, linux-arm-msm,
    Linux Kernel Mailing List
References: <20180620172226.15012-1-ulf.hansson@linaro.org>
    <20180620172226.15012-5-ulf.hansson@linaro.org>
    <1678369.6pMUSOvdlV@aspire.rjw.lan>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Aug 3, 2018 at 1:43 PM, Ulf Hansson wrote:
> On 19 July 2018 at 12:25, Rafael J. Wysocki wrote:
>> On Wednesday, June 20, 2018 7:22:04 PM CEST Ulf Hansson wrote:
>>> To enable a device belonging to a CPU to be attached to a PM domain
>>> managed by genpd, let's make a few changes to genpd so as to make it
>>> convenient to manage the specifics around CPUs.
>>>
>>> First, to be able to quickly find out which CPUs are attached to a
>>> genpd, which typically becomes useful from a genpd governor as the
>>> following changes are about to show, let's add a cpumask 'cpus' to
>>> the struct generic_pm_domain.
>>>
>>> At the point when a device that belongs to a CPU is attached to or
>>> detached from its corresponding PM domain via genpd_add_device(),
>>> let's update the cpumask in genpd->cpus. Moreover, propagate the
>>> update of the cpumask to the master domains, which makes genpd->cpus
>>> contain a cpumask that hierarchically reflects all CPUs for a genpd,
>>> including CPUs attached to subdomains.
>>>
>>> Second, unconditionally managing CPUs and the cpumask in genpd->cpus
>>> is unnecessary for cases when only non-CPU devices are part of a
>>> genpd. Let's avoid this by adding a new configuration bit,
>>> GENPD_FLAG_CPU_DOMAIN. Clients must set the bit before they call
>>> pm_genpd_init(), to instruct genpd that it shall deal with CPUs and
>>> thus manage the cpumask in genpd->cpus.
>>>
>>> Cc: Lina Iyer
>>> Co-developed-by: Lina Iyer
>>> Signed-off-by: Ulf Hansson
>>> ---
>>>  drivers/base/power/domain.c | 69 ++++++++++++++++++++++++++++++++++++-
>>>  include/linux/pm_domain.h   |  3 ++
>>>  2 files changed, 71 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
>>> index 21d298e1820b..6149ce0bfa7b 100644
>>> --- a/drivers/base/power/domain.c
>>> +++ b/drivers/base/power/domain.c
>>> @@ -20,6 +20,7 @@
>>>  #include
>>>  #include
>>>  #include
>>> +#include
>>>
>>>  #include "power.h"
>>>
>>> @@ -126,6 +127,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
>>>  #define genpd_is_irq_safe(genpd)       (genpd->flags & GENPD_FLAG_IRQ_SAFE)
>>>  #define genpd_is_always_on(genpd)      (genpd->flags & GENPD_FLAG_ALWAYS_ON)
>>>  #define genpd_is_active_wakeup(genpd)  (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
>>> +#define genpd_is_cpu_domain(genpd)     (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
>>>
>>>  static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
>>>                                 const struct generic_pm_domain *genpd)
>>> @@ -1377,6 +1379,62 @@ static void genpd_free_dev_data(struct device *dev,
>>>         dev_pm_put_subsys_data(dev);
>>>  }
>>>
>>> +static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
>>> +                                  int cpu, bool set, unsigned int depth)
>>> +{
>>> +       struct gpd_link *link;
>>> +
>>> +       if (!genpd_is_cpu_domain(genpd))
>>> +               return;
>>> +
>>> +       list_for_each_entry(link, &genpd->slave_links, slave_node) {
>>> +               struct generic_pm_domain *master = link->master;
>>> +
>>> +               genpd_lock_nested(master, depth + 1);
>>> +               __genpd_update_cpumask(master, cpu, set, depth + 1);
>>> +               genpd_unlock(master);
>>> +       }
>>> +
>>> +       if (set)
>>> +               cpumask_set_cpu(cpu, genpd->cpus);
>>> +       else
>>> +               cpumask_clear_cpu(cpu, genpd->cpus);
>>> +}
>>
>> As noted elsewhere, there is a concern about the possible weight of this
>> cpumask and I think that it would be good to explicitly put a limit on it.
>
> I have been digesting your comments on the series, but wonder if this
> is still a relevant concern?

Well, there are systems with very large cpumasks and it is sort of good
to have that in mind when designing any code using them.
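
For completeness, this is roughly how I read the provider side of the
changelog above: the flag has to be in genpd->flags before pm_genpd_init()
runs. The domain, callbacks and names below are made up purely for
illustration; only GENPD_FLAG_CPU_DOMAIN and pm_genpd_init() come from the
patch and the existing genpd code:

#include <linux/pm_domain.h>

/* Made-up power_on/power_off callbacks, just to have a complete domain. */
static int example_cpu_pd_power_on(struct generic_pm_domain *pd)
{
        return 0;
}

static int example_cpu_pd_power_off(struct generic_pm_domain *pd)
{
        /* Platform-specific "all CPUs in the domain are idle" handling. */
        return 0;
}

static struct generic_pm_domain example_cpu_pd = {
        .name = "example-cpu-cluster",
        .power_on = example_cpu_pd_power_on,
        .power_off = example_cpu_pd_power_off,
        /*
         * Opt in to CPU handling before pm_genpd_init(); depending on the
         * platform, GENPD_FLAG_IRQ_SAFE may be needed here as well.
         */
        .flags = GENPD_FLAG_CPU_DOMAIN,
};

static int example_cpu_pd_setup(void)
{
        /* Last argument 'false' means the domain starts in the on state. */
        return pm_genpd_init(&example_cpu_pd, NULL, false);
}

CPU devices attached to such a domain via genpd_add_device() would then set
their bits in genpd->cpus (and in the masters' masks), which is exactly the
mask whose weight we are discussing above.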