From: Ulf Hansson
To: "Rafael J. Wysocki", Sudeep Holla, Lorenzo Pieralisi, Mark Rutland,
	linux-pm@vger.kernel.org
Cc: Kevin Hilman, Lina Iyer, Lina Iyer, Ulf Hansson, Rob Herring,
	Daniel Lezcano, Thomas Gleixner, Vincent Guittot, Stephen Boyd,
	Juri Lelli, Geert Uytterhoeven, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v8 04/26] PM / Domains: Add support for CPU devices to genpd
Date: Wed, 20 Jun 2018 19:22:04 +0200
Message-Id: <20180620172226.15012-5-ulf.hansson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180620172226.15012-1-ulf.hansson@linaro.org>
References: <20180620172226.15012-1-ulf.hansson@linaro.org>

To enable a device belonging to a CPU to be attached to a PM domain managed
by genpd, let's make a few changes to genpd, to make it convenient to manage
the specifics around CPUs.

First, to be able to quickly find out which CPUs are attached to a genpd,
which typically becomes useful from a genpd governor as the following
changes will show, let's add a cpumask 'cpus' to struct generic_pm_domain.
When a device that belongs to a CPU is attached to/detached from its
corresponding PM domain via genpd_add_device(), let's update the cpumask in
genpd->cpus.
Moreover, propagate the update of the cpumask to the master domains, so that
genpd->cpus contains a cpumask that hierarchically reflects all CPUs for a
genpd, including CPUs attached to subdomains.

Second, unconditionally managing CPUs and the cpumask in genpd->cpus is
unnecessary when only non-CPU devices are part of a genpd. Let's avoid this
by adding a new configuration bit, GENPD_FLAG_CPU_DOMAIN. Clients must set
the bit before calling pm_genpd_init(), to instruct genpd that it shall deal
with CPUs and thus manage the cpumask in genpd->cpus.

Cc: Lina Iyer
Co-developed-by: Lina Iyer
Signed-off-by: Ulf Hansson
---
 drivers/base/power/domain.c | 69 ++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   |  3 ++
 2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 21d298e1820b..6149ce0bfa7b 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -20,6 +20,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/cpu.h>
 
 #include "power.h"
 
@@ -126,6 +127,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 #define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd)	(genpd->flags & GENPD_FLAG_ALWAYS_ON)
 #define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
+#define genpd_is_cpu_domain(genpd)	(genpd->flags & GENPD_FLAG_CPU_DOMAIN)
 
 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
 		const struct generic_pm_domain *genpd)
@@ -1377,6 +1379,62 @@ static void genpd_free_dev_data(struct device *dev,
 	dev_pm_put_subsys_data(dev);
 }
 
+static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
+				   int cpu, bool set, unsigned int depth)
+{
+	struct gpd_link *link;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	list_for_each_entry(link, &genpd->slave_links, slave_node) {
+		struct generic_pm_domain *master = link->master;
+
+		genpd_lock_nested(master, depth + 1);
+		__genpd_update_cpumask(master, cpu, set, depth + 1);
+		genpd_unlock(master);
+	}
+
+	if (set)
+		cpumask_set_cpu(cpu, genpd->cpus);
+	else
+		cpumask_clear_cpu(cpu, genpd->cpus);
+}
+
+static void genpd_update_cpumask(struct generic_pm_domain *genpd,
+				 struct device *dev, bool set)
+{
+	bool is_cpu = false;
+	int cpu;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	for_each_possible_cpu(cpu) {
+		if (get_cpu_device(cpu) == dev) {
+			is_cpu = true;
+			break;
+		}
+	}
+
+	if (!is_cpu)
+		return;
+
+	__genpd_update_cpumask(genpd, cpu, set, 0);
+}
+
+static void genpd_set_cpumask(struct generic_pm_domain *genpd,
+			      struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, true);
+}
+
+static void genpd_clear_cpumask(struct generic_pm_domain *genpd,
+				struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, false);
+}
+
 static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 			    struct gpd_timing_data *td)
 {
@@ -1398,6 +1456,8 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (ret)
 		goto out;
 
+	genpd_set_cpumask(genpd, dev);
+
 	dev_pm_domain_set(dev, &genpd->domain);
 
 	genpd->device_count++;
@@ -1459,6 +1519,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
 	if (genpd->detach_dev)
 		genpd->detach_dev(genpd, dev);
 
+	genpd_clear_cpumask(genpd, dev);
 	dev_pm_domain_set(dev, NULL);
 
 	list_del_init(&pdd->list_node);
@@ -1686,11 +1747,16 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	if (genpd_is_always_on(genpd) && !genpd_status_on(genpd))
 		return -EINVAL;
 
+	if (!zalloc_cpumask_var(&genpd->cpus, GFP_KERNEL))
+		return -ENOMEM;
+
 	/* Use only one "off" state if there were no states declared */
 	if (genpd->state_count == 0) {
 		ret = genpd_set_default_power_state(genpd);
-		if (ret)
+		if (ret) {
+			free_cpumask_var(genpd->cpus);
 			return ret;
+		}
 	} else if (!gov) {
 		pr_warn("%s : no governor for states\n", genpd->name);
 	}
@@ -1736,6 +1802,7 @@ static int genpd_remove(struct generic_pm_domain *genpd)
 	list_del(&genpd->gpd_list_node);
 	genpd_unlock(genpd);
 	cancel_work_sync(&genpd->power_off_work);
+	free_cpumask_var(genpd->cpus);
 	kfree(genpd->free);
 
 	pr_debug("%s: removed %s\n", __func__, genpd->name);
 
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 27fca748344a..3f67ff0c1c69 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -16,12 +16,14 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/cpumask.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	 (1U << 0) /* PM domain uses PM clk */
 #define GENPD_FLAG_IRQ_SAFE	 (1U << 1) /* PM domain operates in atomic */
 #define GENPD_FLAG_ALWAYS_ON	 (1U << 2) /* PM domain is always powered on */
 #define GENPD_FLAG_ACTIVE_WAKEUP (1U << 3) /* Keep devices active if wakeup */
+#define GENPD_FLAG_CPU_DOMAIN	 (1U << 4) /* PM domain manages CPUs */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -68,6 +70,7 @@ struct generic_pm_domain {
 	unsigned int suspended_count;	/* System suspend device counter */
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
 	unsigned int performance_state;	/* Aggregated max performance state */
+	cpumask_var_t cpus;		/* A cpumask of the attached CPUs */
 	int (*power_off)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
 	unsigned int (*opp_to_performance_state)(struct generic_pm_domain *genpd,
-- 
2.17.1