2014-11-18 07:33:00

by Lina Iyer

Subject: [PATCH v4/RFC 0/4] per-cpu PM QoS

PM QoS constraints such as CPU_DMA_LATENCY, when set, apply to all cpus. The
QoS request guarantees performance at the expense of power. There is an
opportunity to save power on the cpus if only a subset of the cpus needs to
honor the QoS request.

The patches do the following -

- Add a "type" member to the QoS request data structure. Drivers requesting PM
QoS can qualify the type of the QoS request. A request can apply to all cpus
(the default), to a cpumask, or to the cpus associated by smp-affinity with a
device IRQ.

- QoS requests can supply a cpumask or an IRQ.

- Each constraint has a per-cpu target variable, to hold the QoS value for the
constraint.

- When updating the QoS constraint target value, update the per-cpu target
value of the constraint.

- Export the IRQ smp-affinity information from the IRQ framework.

- When the IRQ smp-affinity changes, notify the PM QoS framework, which updates
the target value for each of the constraints affected by the change in the
smp-affinity of the IRQ.
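
As an illustration, a driver using the proposed interface could look roughly
like the sketch below (a hypothetical example; the my_drv names and the 100us
value are made up, only the pm_qos fields and calls come from this series):

	/*
	 * Restrict a CPU_DMA_LATENCY request to cpus 0-1; the remaining
	 * cpus stay free to enter deep idle states.
	 */
	static struct pm_qos_request my_drv_qos;

	static void my_drv_set_latency(void)
	{
		my_drv_qos.type = PM_QOS_REQ_AFFINE_CORES;
		cpumask_clear(&my_drv_qos.cpus_affine);
		cpumask_set_cpu(0, &my_drv_qos.cpus_affine);
		cpumask_set_cpu(1, &my_drv_qos.cpus_affine);

		pm_qos_add_request(&my_drv_qos, PM_QOS_CPU_DMA_LATENCY, 100);
	}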

TODO:

- Update the QoS constraint when the IRQ is enabled/disabled.

- The IRQ affinity is an expected affinity; the actual affinity is
architecture dependent. Explore possible optimizations.

- Update cpuidle to use the per-cpu PM QoS to query the QoS value for the cpus
of interest.

Thanks,
Lina

Lina Iyer (4):
QoS: Modify data structures and function arguments for scalability.
QoS: Enhance PM QoS framework to support per-cpu QoS request
irq: Add irq_get_affinity() api
QoS: Enable PM QoS requests to apply only on smp_affinity of an IRQ

Documentation/power/pm_qos_interface.txt | 18 +++
drivers/base/power/qos.c | 14 +--
include/linux/interrupt.h | 8 ++
include/linux/pm_qos.h | 22 +++-
kernel/irq/manage.c | 21 ++++
kernel/power/qos.c | 183 +++++++++++++++++++++++++++++--
6 files changed, 249 insertions(+), 17 deletions(-)

--
2.1.0


2014-11-18 07:33:07

by Lina Iyer

Subject: [PATCH v4/RFC 2/4] QoS: Enhance PM QoS framework to support per-cpu QoS request

A QoS request can be better optimized if it can be set for only the
required cpus rather than all cpus. This helps save power on the other
cores, while still guaranteeing the quality of service on the desired
cores.

Add a new enumeration to specify the PM QoS request type. The enum
specifies the intended target cpus of the request.

Enhance the QoS constraints data structures to support a target value
for each core. Requests specify whether the QoS is applicable to all
cores (the default) or to a selected subset of the cores.

Idle drivers and other interested drivers can request a PM QoS value
for a constraint across all cpus, for a specific cpu, or for a set of
cpus. Separate APIs have been added to query the value for an
individual cpu or a cpumask. The default behaviour of PM QoS is
maintained, i.e. requests that do not specify a type will continue to
apply to all cores.

The userspace sysfs interface does not support setting the cpumask of a
PM QoS request.
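
As a hypothetical usage sketch (cpu and cluster_mask are placeholders;
only the two query functions come from this patch), an interested caller
such as an idle governor could do:

	s32 cpu_val, mask_val;

	/* aggregated CPU_DMA_LATENCY constraint for one cpu ... */
	cpu_val = pm_qos_request_for_cpu(PM_QOS_CPU_DMA_LATENCY, cpu);

	/* ... and the effective value across a cluster's cpumask */
	mask_val = pm_qos_request_for_cpumask(PM_QOS_CPU_DMA_LATENCY,
					      &cluster_mask);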

Signed-off-by: Lina Iyer <[email protected]>
Based on work by: Praveen Chidambaram <[email protected]>
https://www.codeaurora.org/cgit/quic/la/kernel/msm-3.10/tree/kernel/power?h=LNX.LA.3.7
---
Documentation/power/pm_qos_interface.txt | 16 ++++
include/linux/pm_qos.h | 12 +++
kernel/power/qos.c | 130 ++++++++++++++++++++++++++++++-
3 files changed, 157 insertions(+), 1 deletion(-)

diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt
index 129f7c0..7f7a774 100644
--- a/Documentation/power/pm_qos_interface.txt
+++ b/Documentation/power/pm_qos_interface.txt
@@ -43,6 +43,15 @@ registered notifiers are called only if the target value is now different.
Clients of pm_qos need to save the returned handle for future use in other
pm_qos API functions.

+The handle is a pm_qos_request object. By default the request object sets the
+request type to PM_QOS_REQ_ALL_CORES, in which case, the PM QoS request
+applies to all cores. However, the driver can also specify a request type to
+be either of
+ PM_QOS_REQ_ALL_CORES,
+ PM_QOS_REQ_AFFINE_CORES,
+
+Specify the cpumask when type is set to PM_QOS_REQ_AFFINE_CORES.
+
void pm_qos_update_request(handle, new_target_value):
Will update the list element pointed to by the handle with the new target value
and recompute the new aggregated target, calling the notification tree if the
@@ -56,6 +65,13 @@ the request.
int pm_qos_request(param_class):
Returns the aggregated value for a given PM QoS class.

+int pm_qos_request_for_cpu(param_class, cpu):
+Returns the aggregated value for a given PM QoS class for the specified cpu.
+
+int pm_qos_request_for_cpumask(param_class, cpumask):
+Returns the aggregated value for a given PM QoS class for the specified
+cpumask.
+
int pm_qos_request_active(handle):
Returns if the request is still active, i.e. it has not been removed from a
PM QoS class constraints list.
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index c4d859e..de9b04b 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -9,6 +9,7 @@
#include <linux/miscdevice.h>
#include <linux/device.h>
#include <linux/workqueue.h>
+#include <linux/cpumask.h>

enum {
PM_QOS_RESERVED = 0,
@@ -42,7 +43,15 @@ enum pm_qos_flags_status {
#define PM_QOS_FLAG_NO_POWER_OFF (1 << 0)
#define PM_QOS_FLAG_REMOTE_WAKEUP (1 << 1)

+enum pm_qos_req_type {
+ PM_QOS_REQ_ALL_CORES = 0,
+ PM_QOS_REQ_AFFINE_CORES,
+};
+
struct pm_qos_request {
+ enum pm_qos_req_type type;
+ struct cpumask cpus_affine;
+ /* Internal structure members */
struct plist_node node;
int pm_qos_class;
struct delayed_work work; /* for pm_qos_update_request_timeout */
@@ -83,6 +92,7 @@ enum pm_qos_type {
struct pm_qos_constraints {
struct plist_head list;
s32 target_value; /* Do not change to 64 bit */
+ s32 __percpu *target_per_cpu;
s32 default_value;
s32 no_constraint_value;
enum pm_qos_type type;
@@ -130,6 +140,8 @@ void pm_qos_update_request_timeout(struct pm_qos_request *req,
void pm_qos_remove_request(struct pm_qos_request *req);

int pm_qos_request(int pm_qos_class);
+int pm_qos_request_for_cpu(int pm_qos_class, int cpu);
+int pm_qos_request_for_cpumask(int pm_qos_class, struct cpumask *mask);
int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
int pm_qos_request_active(struct pm_qos_request *req);
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 602f5cb..36b4414 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -41,6 +41,7 @@
#include <linux/platform_device.h>
#include <linux/init.h>
#include <linux/kernel.h>
+#include <linux/cpumask.h>

#include <linux/uaccess.h>
#include <linux/export.h>
@@ -182,6 +183,49 @@ static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
c->target_value = value;
}

+static inline int pm_qos_set_value_for_cpus(struct pm_qos_constraints *c)
+{
+ struct pm_qos_request *req;
+ int cpu;
+ s32 *qos_val;
+
+ if (!c->target_per_cpu) {
+ c->target_per_cpu = alloc_percpu_gfp(s32, GFP_ATOMIC);
+ if (!c->target_per_cpu)
+ return -ENOMEM;
+ }
+
+ for_each_possible_cpu(cpu)
+ *per_cpu_ptr(c->target_per_cpu, cpu) = c->no_constraint_value;
+
+ if (plist_head_empty(&c->list))
+ return 0;
+
+ plist_for_each_entry(req, &c->list, node) {
+ for_each_cpu(cpu, &req->cpus_affine) {
+ qos_val = per_cpu_ptr(c->target_per_cpu, cpu);
+ switch (c->type) {
+ case PM_QOS_MIN:
+ if (*qos_val > req->node.prio)
+ *qos_val = req->node.prio;
+ break;
+ case PM_QOS_MAX:
+ if (req->node.prio > *qos_val)
+ *qos_val = req->node.prio;
+ break;
+ case PM_QOS_SUM:
+ *qos_val += req->node.prio;
+ break;
+ default:
+ BUG();
+ break;
+ }
+ }
+ }
+
+ return 0;
+}
+
/**
* pm_qos_update_target - manages the constraints list and calls the notifiers
* if needed
@@ -231,9 +275,12 @@ int pm_qos_update_target(struct pm_qos_constraints *c,

curr_value = pm_qos_get_value(c);
pm_qos_set_value(c, curr_value);
-
+ ret = pm_qos_set_value_for_cpus(c);
spin_unlock_irqrestore(&pm_qos_lock, flags);

+ if (ret)
+ return ret;
+
trace_pm_qos_update_target(action, prev_value, curr_value);
if (prev_value != curr_value) {
ret = 1;
@@ -323,6 +370,64 @@ int pm_qos_request(int pm_qos_class)
}
EXPORT_SYMBOL_GPL(pm_qos_request);

+int pm_qos_request_for_cpu(int pm_qos_class, int cpu)
+{
+ s32 qos_val;
+ unsigned long flags;
+ struct pm_qos_constraints *c;
+
+ spin_lock_irqsave(&pm_qos_lock, flags);
+ c = pm_qos_array[pm_qos_class]->constraints;
+ if (c->target_per_cpu)
+ qos_val = per_cpu(*c->target_per_cpu, cpu);
+ else
+ qos_val = c->no_constraint_value;
+ spin_unlock_irqrestore(&pm_qos_lock, flags);
+
+ return qos_val;
+}
+EXPORT_SYMBOL(pm_qos_request_for_cpu);
+
+int pm_qos_request_for_cpumask(int pm_qos_class, struct cpumask *mask)
+{
+ unsigned long irqflags;
+ int cpu;
+ struct pm_qos_constraints *c;
+ s32 val, qos_val;
+
+ spin_lock_irqsave(&pm_qos_lock, irqflags);
+ c = pm_qos_array[pm_qos_class]->constraints;
+ val = c->no_constraint_value;
+ if (!c->target_per_cpu)
+ goto skip_loop;
+
+ for_each_cpu(cpu, mask) {
+ qos_val = *per_cpu_ptr(c->target_per_cpu, cpu);
+ switch (c->type) {
+ case PM_QOS_MIN:
+ if (qos_val < val)
+ val = qos_val;
+ break;
+ case PM_QOS_MAX:
+ if (qos_val > val)
+ val = qos_val;
+ break;
+ case PM_QOS_SUM:
+ val += qos_val;
+ break;
+ default:
+ BUG();
+ break;
+ }
+ }
+
+skip_loop:
+ spin_unlock_irqrestore(&pm_qos_lock, irqflags);
+
+ return val;
+}
+EXPORT_SYMBOL(pm_qos_request_for_cpumask);
+
int pm_qos_request_active(struct pm_qos_request *req)
{
return req->pm_qos_class != 0;
@@ -378,6 +483,27 @@ void pm_qos_add_request(struct pm_qos_request *req,
WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
return;
}
+
+ switch (req->type) {
+ case PM_QOS_REQ_AFFINE_CORES:
+ if (cpumask_empty(&req->cpus_affine))
+ req->type = PM_QOS_REQ_ALL_CORES;
+ else
+ cpumask_and(&req->cpus_affine, &req->cpus_affine,
+ cpu_possible_mask);
+ break;
+
+ default:
+ req->type = PM_QOS_REQ_ALL_CORES;
+ break;
+
+ case PM_QOS_REQ_ALL_CORES:
+ break;
+ }
+
+ if (req->type == PM_QOS_REQ_ALL_CORES)
+ cpumask_copy(&req->cpus_affine, cpu_possible_mask);
+
req->pm_qos_class = pm_qos_class;
INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
trace_pm_qos_add_request(pm_qos_class, value);
@@ -451,6 +577,7 @@ void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
*/
void pm_qos_remove_request(struct pm_qos_request *req)
{
+
if (!req) /*guard against callers passing in null */
return;
/* silent return to keep pcm code cleaner */
@@ -466,6 +593,7 @@ void pm_qos_remove_request(struct pm_qos_request *req)
pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
req, PM_QOS_REMOVE_REQ,
PM_QOS_DEFAULT_VALUE);
+
memset(req, 0, sizeof(*req));
}
EXPORT_SYMBOL_GPL(pm_qos_remove_request);
--
2.1.0

2014-11-18 07:33:05

by Lina Iyer

Subject: [PATCH v4/RFC 1/4] QoS: Modify data structures and function arguments for scalability.

From: Praveen Chidambaram <[email protected]>

pm_qos_add_request() takes a handle to the priority list node that is
used internally to track the request, but this does not extend well.
Also, the dev_pm_qos request definition seems to use the list object
directly, so the 'derived from pm_qos_request' relationship is broken.

Use pm_qos_request objects instead of passing around the internally
protected priority list objects.
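
In outline, the device PM QoS latency request now embeds a full
pm_qos_request rather than a bare plist_node (abridged from the
include/linux/pm_qos.h hunk below):

	struct dev_pm_qos_request {
		enum dev_pm_qos_req_type type;
		union {
			struct pm_qos_request lat; /* was: struct plist_node pnode */
			struct pm_qos_flags_request flr;
		} data;
		struct device *dev;
	};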

Signed-off-by: Lina Iyer <[email protected]>
Acked-by: Kevin Hilman <[email protected]>
---
drivers/base/power/qos.c | 14 +++++++-------
include/linux/pm_qos.h | 7 ++++---
kernel/power/qos.c | 14 ++++++++------
3 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 36b9eb4..67a66b1 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -143,7 +143,7 @@ static int apply_constraint(struct dev_pm_qos_request *req,
switch(req->type) {
case DEV_PM_QOS_RESUME_LATENCY:
ret = pm_qos_update_target(&qos->resume_latency,
- &req->data.pnode, action, value);
+ &req->data.lat, action, value);
if (ret) {
value = pm_qos_read_value(&qos->resume_latency);
blocking_notifier_call_chain(&dev_pm_notifiers,
@@ -153,7 +153,7 @@ static int apply_constraint(struct dev_pm_qos_request *req,
break;
case DEV_PM_QOS_LATENCY_TOLERANCE:
ret = pm_qos_update_target(&qos->latency_tolerance,
- &req->data.pnode, action, value);
+ &req->data.lat, action, value);
if (ret) {
value = pm_qos_read_value(&qos->latency_tolerance);
req->dev->power.set_latency_tolerance(req->dev, value);
@@ -254,7 +254,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev)

/* Flush the constraints lists for the device. */
c = &qos->resume_latency;
- plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
+ plist_for_each_entry_safe(req, tmp, &c->list, data.lat.node) {
/*
* Update constraints list and call the notification
* callbacks if needed
@@ -263,7 +263,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
memset(req, 0, sizeof(*req));
}
c = &qos->latency_tolerance;
- plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
+ plist_for_each_entry_safe(req, tmp, &c->list, data.lat.node) {
apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
memset(req, 0, sizeof(*req));
}
@@ -378,7 +378,7 @@ static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req,
switch(req->type) {
case DEV_PM_QOS_RESUME_LATENCY:
case DEV_PM_QOS_LATENCY_TOLERANCE:
- curr_value = req->data.pnode.prio;
+ curr_value = req->data.lat.node.prio;
break;
case DEV_PM_QOS_FLAGS:
curr_value = req->data.flr.flags;
@@ -831,8 +831,8 @@ s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
mutex_lock(&dev_pm_qos_mtx);
ret = IS_ERR_OR_NULL(dev->power.qos)
|| !dev->power.qos->latency_tolerance_req ?
- PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT :
- dev->power.qos->latency_tolerance_req->data.pnode.prio;
+ PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT :
+ dev->power.qos->latency_tolerance_req->data.lat.node.prio;
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 636e828..c4d859e 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -62,7 +62,7 @@ enum dev_pm_qos_req_type {
struct dev_pm_qos_request {
enum dev_pm_qos_req_type type;
union {
- struct plist_node pnode;
+ struct pm_qos_request lat;
struct pm_qos_flags_request flr;
} data;
struct device *dev;
@@ -115,7 +115,8 @@ static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
return req->dev != NULL;
}

-int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
+int pm_qos_update_target(struct pm_qos_constraints *c,
+ struct pm_qos_request *req,
enum pm_qos_req_action action, int value);
bool pm_qos_update_flags(struct pm_qos_flags *pqf,
struct pm_qos_flags_request *req,
@@ -213,7 +214,7 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val);

static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev)
{
- return dev->power.qos->resume_latency_req->data.pnode.prio;
+ return dev->power.qos->resume_latency_req->data.lat.node.prio;
}

static inline s32 dev_pm_qos_requested_flags(struct device *dev)
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 5f4c006..602f5cb 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -186,19 +186,21 @@ static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
* pm_qos_update_target - manages the constraints list and calls the notifiers
* if needed
* @c: constraints data struct
- * @node: request to add to the list, to update or to remove
+ * @req: request to add to the list, to update or to remove
* @action: action to take on the constraints list
* @value: value of the request to add or update
*
* This function returns 1 if the aggregated constraint value has changed, 0
* otherwise.
*/
-int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
+int pm_qos_update_target(struct pm_qos_constraints *c,
+ struct pm_qos_request *req,
enum pm_qos_req_action action, int value)
{
unsigned long flags;
int prev_value, curr_value, new_value;
int ret;
+ struct plist_node *node = &req->node;

spin_lock_irqsave(&pm_qos_lock, flags);
prev_value = pm_qos_get_value(c);
@@ -335,7 +337,7 @@ static void __pm_qos_update_request(struct pm_qos_request *req,
if (new_value != req->node.prio)
pm_qos_update_target(
pm_qos_array[req->pm_qos_class]->constraints,
- &req->node, PM_QOS_UPDATE_REQ, new_value);
+ req, PM_QOS_UPDATE_REQ, new_value);
}

/**
@@ -380,7 +382,7 @@ void pm_qos_add_request(struct pm_qos_request *req,
INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
trace_pm_qos_add_request(pm_qos_class, value);
pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
- &req->node, PM_QOS_ADD_REQ, value);
+ req, PM_QOS_ADD_REQ, value);
}
EXPORT_SYMBOL_GPL(pm_qos_add_request);

@@ -434,7 +436,7 @@ void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
if (new_value != req->node.prio)
pm_qos_update_target(
pm_qos_array[req->pm_qos_class]->constraints,
- &req->node, PM_QOS_UPDATE_REQ, new_value);
+ req, PM_QOS_UPDATE_REQ, new_value);

schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
}
@@ -462,7 +464,7 @@ void pm_qos_remove_request(struct pm_qos_request *req)

trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
- &req->node, PM_QOS_REMOVE_REQ,
+ req, PM_QOS_REMOVE_REQ,
PM_QOS_DEFAULT_VALUE);
memset(req, 0, sizeof(*req));
}
--
2.1.0

2014-11-18 07:33:27

by Lina Iyer

Subject: [PATCH v4/RFC 4/4] QoS: Enable PM QoS requests to apply only on smp_affinity of an IRQ

QoS requests that need to track an IRQ can be set to apply only to the
cpus to which the IRQ's smp_affinity attribute is set. The PM QoS
framework will automatically track IRQ migration between the cores, and
the QoS is updated to apply only to the core(s) that the IRQ has been
migrated to.

The userspace sysfs interface does not support IRQ affinity.
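
A hypothetical usage sketch (the my_drv names and the 100us value are
made up; the type and irq fields and the request call are the interface
added in this series):

	/*
	 * Tie a latency request to an IRQ; the request then follows the
	 * IRQ's smp_affinity as the IRQ migrates between cpus.
	 */
	static struct pm_qos_request my_drv_qos;

	static void my_drv_init_qos(int irq)
	{
		my_drv_qos.type = PM_QOS_REQ_AFFINE_IRQ;
		my_drv_qos.irq = irq;
		pm_qos_add_request(&my_drv_qos, PM_QOS_CPU_DMA_LATENCY, 100);
	}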

Signed-off-by: Lina Iyer <[email protected]>
Based on work by: Praveen Chidambaram <[email protected]>
---
Documentation/power/pm_qos_interface.txt | 4 +++-
include/linux/pm_qos.h | 3 +++
kernel/irq/manage.c | 3 +++
kernel/power/qos.c | 41 +++++++++++++++++++++++++++++++-
4 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt
index 7f7a774..73bfa16 100644
--- a/Documentation/power/pm_qos_interface.txt
+++ b/Documentation/power/pm_qos_interface.txt
@@ -49,8 +49,10 @@ applies to all cores. However, the driver can also specify a request type to
be either of
PM_QOS_REQ_ALL_CORES,
PM_QOS_REQ_AFFINE_CORES,
+ PM_QOS_REQ_AFFINE_IRQ,

-Specify the cpumask when type is set to PM_QOS_REQ_AFFINE_CORES.
+Specify the cpumask when type is set to PM_QOS_REQ_AFFINE_CORES and specify
+the IRQ number with PM_QOS_REQ_AFFINE_IRQ.

void pm_qos_update_request(handle, new_target_value):
Will update the list element pointed to by the handle with the new target value
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index de9b04b..e0b80af 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -46,11 +46,13 @@ enum pm_qos_flags_status {
enum pm_qos_req_type {
PM_QOS_REQ_ALL_CORES = 0,
PM_QOS_REQ_AFFINE_CORES,
+ PM_QOS_REQ_AFFINE_IRQ,
};

struct pm_qos_request {
enum pm_qos_req_type type;
struct cpumask cpus_affine;
+ uint32_t irq;
/* Internal structure members */
struct plist_node node;
int pm_qos_class;
@@ -146,6 +148,7 @@ int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
int pm_qos_request_active(struct pm_qos_request *req);
s32 pm_qos_read_value(struct pm_qos_constraints *c);
+void pm_qos_irq_affinity_change(u32 irq, const struct cpumask *mask);

#ifdef CONFIG_PM
enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 2d17098..8790f71 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -18,6 +18,7 @@
#include <linux/sched.h>
#include <linux/sched/rt.h>
#include <linux/task_work.h>
+#include <linux/pm_qos.h>

#include "internals.h"

@@ -209,6 +210,8 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
irq_copy_pending(desc, mask);
}

+ pm_qos_irq_affinity_change(data->irq, mask);
+
if (desc->affinity_notify) {
kref_get(&desc->affinity_notify->kref);
schedule_work(&desc->affinity_notify->work);
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 36b4414..43de784 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -42,6 +42,7 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/cpumask.h>
+#include <linux/interrupt.h>

#include <linux/uaccess.h>
#include <linux/export.h>
@@ -460,6 +461,39 @@ static void pm_qos_work_fn(struct work_struct *work)
__pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
}

+void pm_qos_irq_affinity_change(u32 irq, const struct cpumask *mask)
+{
+ struct pm_qos_constraints *c;
+ unsigned long flags;
+ struct pm_qos_request *req;
+ s32 curr_value;
+ int i;
+ bool needs_update;
+
+ for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
+ c = pm_qos_array[i]->constraints;
+ if (plist_head_empty(&c->list))
+ continue;
+ needs_update = false;
+ spin_lock_irqsave(&pm_qos_lock, flags);
+ plist_for_each_entry(req, &c->list, node) {
+ if (req->type == PM_QOS_REQ_AFFINE_IRQ && req->irq == irq) {
+ cpumask_copy(&req->cpus_affine, mask);
+ needs_update = true;
+ }
+ }
+ if (needs_update) {
+ pm_qos_set_value_for_cpus(c);
+ curr_value = pm_qos_get_value(c);
+ }
+ spin_unlock_irqrestore(&pm_qos_lock, flags);
+ if (needs_update && c->notifiers)
+ blocking_notifier_call_chain(c->notifiers,
+ (unsigned long)curr_value,
+ NULL);
+ }
+}
+
/**
* pm_qos_add_request - inserts new qos request into the list
* @req: pointer to a preallocated handle
@@ -493,6 +527,12 @@ void pm_qos_add_request(struct pm_qos_request *req,
cpu_possible_mask);
break;

+ case PM_QOS_REQ_AFFINE_IRQ:
+ if (!irq_can_set_affinity(req->irq) ||
+ irq_get_affinity(req->irq, &req->cpus_affine))
+ req->type = PM_QOS_REQ_ALL_CORES;
+ break;
+
default:
req->type = PM_QOS_REQ_ALL_CORES;
break;
@@ -577,7 +617,6 @@ void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
*/
void pm_qos_remove_request(struct pm_qos_request *req)
{
-
if (!req) /*guard against callers passing in null */
return;
/* silent return to keep pcm code cleaner */
--
2.1.0

2014-11-18 07:33:52

by Lina Iyer

[permalink] [raw]
Subject: [PATCH v4/RFC 3/4] irq: Add irq_get_affinity() api

Add and export an irq_get_affinity() API so that drivers can read the
smp affinity of an IRQ safely.
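
A minimal usage sketch (hypothetical caller; assumes a valid irq number,
and note the mask comes back empty when the IRQ is disabled):

	struct cpumask mask;

	if (!irq_get_affinity(irq, &mask) && !cpumask_empty(&mask))
		pr_info("irq %u: first affine cpu is %d\n",
			irq, cpumask_first(&mask));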

Signed-off-by: Lina Iyer <[email protected]>
---
include/linux/interrupt.h | 8 ++++++++
kernel/irq/manage.c | 18 ++++++++++++++++++
2 files changed, 26 insertions(+)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 69517a2..fff619c 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -260,6 +260,8 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
extern int
irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);

+extern int irq_get_affinity(unsigned int irq, struct cpumask *mask);
+
#else /* CONFIG_SMP */

static inline int irq_set_affinity(unsigned int irq, const struct cpumask *m)
@@ -290,6 +292,12 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
{
return 0;
}
+
+static inline int irq_get_affinity(unsigned int irq, struct cpumask *mask)
+{
+ return -EINVAL;
+}
+
#endif /* CONFIG_SMP */

/*
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 0a9104b..2d17098 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -246,6 +246,24 @@ int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m)
}
EXPORT_SYMBOL_GPL(irq_set_affinity_hint);

+int irq_get_affinity(unsigned int irq, struct cpumask *mask)
+{
+ struct irq_desc *desc = irq_to_desc(irq);
+ unsigned long flags;
+
+ if (!desc || !mask)
+ return -EINVAL;
+
+ raw_spin_lock_irqsave(&desc->lock, flags);
+ if (!irqd_irq_disabled(&desc->irq_data))
+ cpumask_copy(mask, desc->irq_data.affinity);
+ else
+ cpumask_clear(mask);
+ raw_spin_unlock_irqrestore(&desc->lock, flags);
+
+ return 0;
+}
+
static void irq_affinity_notify(struct work_struct *work)
{
struct irq_affinity_notify *notify =
--
2.1.0