From: Ulf Hansson
Date: Fri, 24 Aug 2018 08:47:21 +0200
Subject: Re: [PATCH v8 04/26] PM / Domains: Add support for CPU devices to genpd
To: "Rafael J. Wysocki"
Cc: "Rafael J. Wysocki", Sudeep Holla, Lorenzo Pieralisi, Mark Rutland,
    Linux PM, Kevin Hilman, Lina Iyer, Lina Iyer, Rob Herring,
    Daniel Lezcano, Thomas Gleixner, Vincent Guittot, Stephen Boyd,
    Juri Lelli, Geert Uytterhoeven, Linux ARM, linux-arm-msm,
    Linux Kernel Mailing List
References: <20180620172226.15012-1-ulf.hansson@linaro.org>
    <20180620172226.15012-5-ulf.hansson@linaro.org>
    <1678369.6pMUSOvdlV@aspire.rjw.lan>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 6 August 2018 at 11:36, Rafael J. Wysocki wrote:
> On Fri, Aug 3, 2018 at 1:43 PM, Ulf Hansson wrote:
>> On 19 July 2018 at 12:25, Rafael J. Wysocki wrote:
>>> On Wednesday, June 20, 2018 7:22:04 PM CEST Ulf Hansson wrote:
>>>> To enable a device belonging to a CPU to be attached to a PM domain managed
>>>> by genpd, let's do a few changes to genpd as to make it convenient to
>>>> manage the specifics around CPUs.
>>>>
>>>> First, as to be able to quickly find out what CPUs that are attached to a
>>>> genpd, which typically becomes useful from a genpd governor as following
>>>> changes is about to show, let's add a cpumask 'cpus' to the struct
>>>> generic_pm_domain.
>>>>
>>>> At the point when a device that belongs to a CPU, is attached/detached to
>>>> its corresponding PM domain via genpd_add_device(), let's update the
>>>> cpumask in genpd->cpus. Moreover, propagate the update of the cpumask to
>>>> the master domains, which makes the genpd->cpus to contain a cpumask that
>>>> hierarchically reflect all CPUs for a genpd, including CPUs attached to
>>>> subdomains.
>>>>
>>>> Second, to unconditionally manage CPUs and the cpumask in genpd->cpus, is
>>>> unnecessary for cases when only non-CPU devices are parts of a genpd.
>>>> Let's avoid this by adding a new configuration bit, GENPD_FLAG_CPU_DOMAIN.
>>>> Clients must set the bit before they call pm_genpd_init(), as to instruct
>>>> genpd that it shall deal with CPUs and thus manage the cpumask in
>>>> genpd->cpus.
>>>>
>>>> Cc: Lina Iyer
>>>> Co-developed-by: Lina Iyer
>>>> Signed-off-by: Ulf Hansson
>>>> ---
>>>>  drivers/base/power/domain.c | 69 ++++++++++++++++++++++++++++++++++++-
>>>>  include/linux/pm_domain.h   |  3 ++
>>>>  2 files changed, 71 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
>>>> index 21d298e1820b..6149ce0bfa7b 100644
>>>> --- a/drivers/base/power/domain.c
>>>> +++ b/drivers/base/power/domain.c
>>>> @@ -20,6 +20,7 @@
>>>>  #include
>>>>  #include
>>>>  #include
>>>> +#include
>>>>
>>>>  #include "power.h"
>>>>
>>>> @@ -126,6 +127,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
>>>>  #define genpd_is_irq_safe(genpd)       (genpd->flags & GENPD_FLAG_IRQ_SAFE)
>>>>  #define genpd_is_always_on(genpd)      (genpd->flags & GENPD_FLAG_ALWAYS_ON)
>>>>  #define genpd_is_active_wakeup(genpd)  (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
>>>> +#define genpd_is_cpu_domain(genpd)     (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
>>>>
>>>>  static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
>>>>                                 const struct generic_pm_domain *genpd)
>>>> @@ -1377,6 +1379,62 @@ static void genpd_free_dev_data(struct device *dev,
>>>>         dev_pm_put_subsys_data(dev);
>>>>  }
>>>>
>>>> +static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
>>>> +                                  int cpu, bool set, unsigned int depth)
>>>> +{
>>>> +       struct gpd_link *link;
>>>> +
>>>> +       if (!genpd_is_cpu_domain(genpd))
>>>> +               return;
>>>> +
>>>> +       list_for_each_entry(link, &genpd->slave_links, slave_node) {
>>>> +               struct generic_pm_domain *master = link->master;
>>>> +
>>>> +               genpd_lock_nested(master, depth + 1);
>>>> +               __genpd_update_cpumask(master, cpu, set, depth + 1);
>>>> +               genpd_unlock(master);
>>>> +       }
>>>> +
>>>> +       if (set)
>>>> +               cpumask_set_cpu(cpu, genpd->cpus);
>>>> +       else
>>>> +               cpumask_clear_cpu(cpu, genpd->cpus);
>>>> +}
>>>
>>> As noted elsewhere, there is a concern about the possible weight of this
>>> cpumask and I think that it would be good to explicitly put a limit on it.
>>
>> I have been digesting your comments on the series, but wonder if this
>> is still a relevant concern?
>
> Well, there are systems with very large cpumasks and it is sort of
> good to have that in mind when designing any code using them.

Right. So, if I avoid allocating the cpumask for those genpd structures
that don't need it (those not having GENPD_FLAG_CPU_DOMAIN set), would
that be sufficient to deal with your concern?

Kind regards
Uffe