From: Ulf Hansson <ulf.hansson@linaro.org>
To: "Rafael J. Wysocki", Sudeep Holla, Lorenzo Pieralisi, Mark Rutland, linux-pm@vger.kernel.org
Cc: Kevin Hilman, Lina Iyer, Ulf Hansson, Rob Herring, Daniel Lezcano, Thomas Gleixner, Vincent Guittot, Stephen Boyd, Juri Lelli, Geert Uytterhoeven, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 04/26] PM / Domains: Add support for CPU devices to genpd
Date: Thu, 12 Apr 2018 13:14:09 +0200
Message-Id: <1523531671-27491-5-git-send-email-ulf.hansson@linaro.org>
In-Reply-To: <1523531671-27491-1-git-send-email-ulf.hansson@linaro.org>
References: <1523531671-27491-1-git-send-email-ulf.hansson@linaro.org>

To enable a device belonging to a CPU to be attached to a PM domain managed by genpd, let's make a few changes to genpd so as to make it convenient to manage the specifics around CPUs.

First, to be able to quickly find out which CPUs are attached to a genpd, which typically becomes useful from a genpd governor (as subsequent changes will show), let's add a cpumask 'cpus' to struct generic_pm_domain.
When a device that belongs to a CPU is attached to (or detached from) its corresponding PM domain via genpd_add_device(), let's update the cpumask in genpd->cpus. Moreover, let's propagate the update of the cpumask to the master domains, so that genpd->cpus contains a cpumask that hierarchically reflects all CPUs for a genpd, including CPUs attached to subdomains.

Second, unconditionally managing CPUs and the cpumask in genpd->cpus is unnecessary for cases when only non-CPU devices are part of a genpd. Let's avoid this by adding a new configuration bit, GENPD_FLAG_CPU_DOMAIN. Clients must set the bit before calling pm_genpd_init(), to instruct genpd that it shall deal with CPUs and thus manage the cpumask in genpd->cpus.

Cc: Lina Iyer
Co-developed-by: Lina Iyer
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/base/power/domain.c | 69 ++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   |  3 ++
 2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 9aff79d..e178521 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include <linux/cpu.h>

 #include "power.h"

@@ -125,6 +126,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 #define genpd_is_irq_safe(genpd) (genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd) (genpd->flags & GENPD_FLAG_ALWAYS_ON)
 #define genpd_is_active_wakeup(genpd) (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
+#define genpd_is_cpu_domain(genpd) (genpd->flags & GENPD_FLAG_CPU_DOMAIN)

 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
 		const struct generic_pm_domain *genpd)
@@ -1377,6 +1379,62 @@ static void genpd_free_dev_data(struct device *dev,
 	dev_pm_put_subsys_data(dev);
 }

+static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
+				   int cpu, bool set, unsigned int depth)
+{
+	struct gpd_link *link;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	list_for_each_entry(link, &genpd->slave_links, slave_node) {
+		struct generic_pm_domain *master = link->master;
+
+		genpd_lock_nested(master, depth + 1);
+		__genpd_update_cpumask(master, cpu, set, depth + 1);
+		genpd_unlock(master);
+	}
+
+	if (set)
+		cpumask_set_cpu(cpu, genpd->cpus);
+	else
+		cpumask_clear_cpu(cpu, genpd->cpus);
+}
+
+static void genpd_update_cpumask(struct generic_pm_domain *genpd,
+				 struct device *dev, bool set)
+{
+	bool is_cpu = false;
+	int cpu;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	for_each_possible_cpu(cpu) {
+		if (get_cpu_device(cpu) == dev) {
+			is_cpu = true;
+			break;
+		}
+	}
+
+	if (!is_cpu)
+		return;
+
+	__genpd_update_cpumask(genpd, cpu, set, 0);
+}
+
+static void genpd_set_cpumask(struct generic_pm_domain *genpd,
+			      struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, true);
+}
+
+static void genpd_clear_cpumask(struct generic_pm_domain *genpd,
+				struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, false);
+}
+
 static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 			    struct gpd_timing_data *td)
 {
@@ -1403,6 +1461,8 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (ret)
 		goto out;

+	genpd_set_cpumask(genpd, dev);
+
 	dev_pm_domain_set(dev, &genpd->domain);

 	genpd->device_count++;
@@ -1466,6 +1526,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
 	if (genpd->detach_dev)
 		genpd->detach_dev(genpd, dev);

+	genpd_clear_cpumask(genpd, dev);
 	dev_pm_domain_set(dev, NULL);

 	list_del_init(&pdd->list_node);
@@ -1693,11 +1754,16 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	if (genpd_is_always_on(genpd) && !genpd_status_on(genpd))
 		return -EINVAL;

+	if (!zalloc_cpumask_var(&genpd->cpus, GFP_KERNEL))
+		return -ENOMEM;
+
 	/* Use only one "off" state if there were no states declared */
 	if (genpd->state_count == 0) {
 		ret = genpd_set_default_power_state(genpd);
-		if (ret)
+		if (ret) {
+			free_cpumask_var(genpd->cpus);
 			return ret;
+		}
 	} else if (!gov) {
 		pr_warn("%s : no governor for states\n", genpd->name);
 	}
@@ -1740,6 +1806,7 @@ static int genpd_remove(struct generic_pm_domain *genpd)
 	list_del(&genpd->gpd_list_node);
 	genpd_unlock(genpd);
 	cancel_work_sync(&genpd->power_off_work);
+	free_cpumask_var(genpd->cpus);
 	kfree(genpd->free);

 	pr_debug("%s: removed %s\n", __func__, genpd->name);
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 55ad34d..29ab00c 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -16,12 +16,14 @@
 #include
 #include
 #include
+#include <linux/cpumask.h>

 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	 (1U << 0) /* PM domain uses PM clk */
 #define GENPD_FLAG_IRQ_SAFE	 (1U << 1) /* PM domain operates in atomic */
 #define GENPD_FLAG_ALWAYS_ON	 (1U << 2) /* PM domain is always powered on */
 #define GENPD_FLAG_ACTIVE_WAKEUP (1U << 3) /* Keep devices active if wakeup */
+#define GENPD_FLAG_CPU_DOMAIN	 (1U << 4) /* PM domain manages CPUs */

 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -66,6 +68,7 @@ struct generic_pm_domain {
 	unsigned int suspended_count;	/* System suspend device counter */
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
 	unsigned int performance_state;	/* Aggregated max performance state */
+	cpumask_var_t cpus;		/* A cpumask of the attached CPUs */
 	int (*power_off)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
 	int (*set_performance_state)(struct generic_pm_domain *genpd,
-- 
2.7.4