Changes in v2:
- Added sleep/wake, async and batch requests support
- Addressed Bjorn's comments
- Private FTRACE for drivers/soc/qcom as suggested by Steven
- Sparse checked on these patches
- Use SPDX license commenting style
This set of patches adds the ability for platform drivers to make use of shared
resources in newer Qualcomm SoCs like SDM845. Resources that are shared between
multiple processors in a SoC are generally controlled by a dedicated remote
processor. The remote processor (Resource Power Manager or RPM in previous QCOM
SoCs) receives requests for resource state from other processors using the
shared resource, aggregates the request and applies the result on the shared
resource. SDM845 advances this concept and uses hardware (hardened IP) blocks for
aggregating requests and applying the result on the resource. The resources
could be clocks, regulators or bandwidth requests for buses. This new
architecture is called RPM-hardened or RPMH in short.
Since this communication mechanism is completely hardware driven without
processor intervention on the remote end, existing mechanisms like RPM-SMD are
no longer useful. Also, in this new format data is neither serialized nor
written to shared memory. The data format is different as well: unsigned 32-bit
values represent an address, data and a header. Each resource's property is a
unique u32 address and has a pre-defined set of property-specific valid values.
A request comprising <header, addr, data> is sent by
writing to a set of registers from Linux and transmitted to the remote slave
through an internal bus. The remote end aggregates this request along with
requests from other processors for the <addr> and applies the result.
The hardware block that houses this functionality is called Resource State
Coordinator or RSC. Inside the RSC is a set of slots for sending RPMH requests,
called Trigger Command Sets (TCS). These patches write the requests into the
TCSes and send them to the hardened IP blocks.
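As a rough illustration of the <header, addr, data> triple described above, a
request element could be sketched as below. The struct and field names are
hypothetical, for illustration only; they are not the kernel's actual
struct tcs_cmd layout.

```c
#include <stdint.h>

/*
 * Hypothetical sketch of one RPMH request element: three unsigned
 * 32-bit values for the header, the resource property address and
 * the data. Names are illustrative only.
 */
struct rpmh_req_sketch {
	uint32_t header;
	uint32_t addr;	/* unique u32 address of a resource property */
	uint32_t data;	/* property-specific value being requested */
};
```

Each such triple is written into a TCS slot from Linux and transmitted over the
internal bus to the remote end.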
The driver design is split into two components. The RSC driver housed in
rpmh-rsc.c and the set of library functions in rpmh.c that frame the request and
transmit it using the controller. This first set of patches allows a simple
synchronous request to be made by the platform drivers. Future patches will add
more functionality that cater to complex drivers and use cases.
Please consider reviewing this patchset.
v1: https://www.spinics.net/lists/devicetree/msg210980.html
Lina Iyer (10):
drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs
dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs
drivers: qcom: rpmh-rsc: log RPMH requests in FTRACE
drivers: qcom: rpmh: add RPMH helper functions
drivers: qcom: rpmh-rsc: write sleep/wake requests to TCS
drivers: qcom: rpmh-rsc: allow invalidation of sleep/wake TCS
drivers: qcom: rpmh: cache sleep/wake state requests
drivers: qcom: rpmh: allow requests to be sent asynchronously
drivers: qcom: rpmh: add support for batch RPMH request
drivers: qcom: rpmh-rsc: allow active requests from wake TCS
.../devicetree/bindings/arm/msm/rpmh-rsc.txt | 146 ++++
drivers/soc/qcom/Kconfig | 10 +
drivers/soc/qcom/Makefile | 4 +
drivers/soc/qcom/rpmh-internal.h | 96 +++
drivers/soc/qcom/rpmh-rsc.c | 794 +++++++++++++++++++++
drivers/soc/qcom/rpmh.c | 670 +++++++++++++++++
drivers/soc/qcom/trace-rpmh.h | 89 +++
include/dt-bindings/soc/qcom,rpmh-rsc.h | 14 +
include/soc/qcom/rpmh.h | 60 ++
include/soc/qcom/tcs.h | 56 ++
10 files changed, 1939 insertions(+)
create mode 100644 Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
create mode 100644 drivers/soc/qcom/rpmh-internal.h
create mode 100644 drivers/soc/qcom/rpmh-rsc.c
create mode 100644 drivers/soc/qcom/rpmh.c
create mode 100644 drivers/soc/qcom/trace-rpmh.h
create mode 100644 include/dt-bindings/soc/qcom,rpmh-rsc.h
create mode 100644 include/soc/qcom/rpmh.h
create mode 100644 include/soc/qcom/tcs.h
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
Some RSCs may only have sleep and wake TCS, i.e., there is no dedicated
TCS for active mode request, but drivers may still want to make active
requests from these RSCs. In such cases re-purpose the wake TCS to send
active state requests.
This requires that the driver be aware that the wake TCS is being
re-purposed to send active requests, and hence that the sleep and wake
TCSes must be invalidated before the active request is sent.
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh-rsc.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 951b797a8cc7..e13abf9ad8fd 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -220,6 +220,7 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
struct tcs_request *msg)
{
int type;
+ struct tcs_group *tcs;
switch (msg->state) {
case RPMH_ACTIVE_ONLY_STATE:
@@ -235,7 +236,22 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
return ERR_PTR(-EINVAL);
}
- return get_tcs_of_type(drv, type);
+ /*
+ * If we are making an active request on an RSC that does not have a
+ * dedicated TCS for active state use, then re-purpose a wake TCS to
+ * send active votes.
+ * NOTE: The driver must be aware that this RSC does not have a
+ * dedicated AMC, and therefore would invalidate the sleep and wake
+ * TCSes before making an active state request.
+ */
+ tcs = get_tcs_of_type(drv, type);
+ if (msg->state == RPMH_ACTIVE_ONLY_STATE && IS_ERR(tcs)) {
+ tcs = get_tcs_of_type(drv, WAKE_TCS);
+ if (!IS_ERR(tcs))
+ rpmh_rsc_invalidate(drv);
+ }
+
+ return tcs;
}
static void send_tcs_response(struct tcs_response *resp)
--
Sending RPMH requests and waiting for response from the controller
through a callback is common functionality across all platform drivers.
To simplify drivers, add library functions to create an RPMH client and
send resource state requests.
rpmh_write() is a synchronous blocking call that can be used to send
active state requests.
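A sketch of how a platform driver might call these helpers; this builds against
the API added in include/soc/qcom/rpmh.h by this patch, but the resource
address and data value here are made up for illustration.

```c
#include <soc/qcom/rpmh.h>

static int foo_request_resource(struct platform_device *pdev)
{
	struct rpmh_client *rc;
	/* Illustrative address/value only; real values are resource-specific */
	struct tcs_cmd cmd = { .addr = 0x50000, .data = 0x1 };
	int ret;

	rc = rpmh_get_client(pdev);
	if (IS_ERR(rc))
		return PTR_ERR(rc);

	/* Blocks until the controller signals tx_done (or times out) */
	ret = rpmh_write(rc, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);

	rpmh_release(rc);
	return ret;
}
```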
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/Makefile | 4 +-
drivers/soc/qcom/rpmh-internal.h | 2 +
drivers/soc/qcom/rpmh-rsc.c | 7 ++
drivers/soc/qcom/rpmh.c | 257 +++++++++++++++++++++++++++++++++++++++
include/soc/qcom/rpmh.h | 34 ++++++
5 files changed, 303 insertions(+), 1 deletion(-)
create mode 100644 drivers/soc/qcom/rpmh.c
create mode 100644 include/soc/qcom/rpmh.h
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 094b639ece2f..e5a980284f89 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -11,4 +11,6 @@ obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
obj-$(CONFIG_QCOM_SMP2P) += smp2p.o
obj-$(CONFIG_QCOM_SMSM) += smsm.o
obj-$(CONFIG_QCOM_WCNSS_CTRL) += wcnss_ctrl.o
-obj-$(CONFIG_QCOM_RPMH) += rpmh-rsc.o
+obj-$(CONFIG_QCOM_RPMH) += qcom_rpmh.o
+qcom_rpmh-y += rpmh-rsc.o
+qcom_rpmh-y += rpmh.o
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index dccf9de1a75f..9c6f8f5faa6a 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -83,4 +83,6 @@ struct rsc_drv {
int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg);
+void rpmh_tx_done(struct tcs_request *msg, int r);
+
#endif /* __RPM_INTERNAL_H__ */
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 58a3c53d651e..5d022363be33 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -263,6 +263,8 @@ static void tcs_notify_tx_done(unsigned long data)
struct rsc_drv *drv = (struct rsc_drv *)data;
struct tcs_response *resp;
unsigned long flags;
+ struct tcs_request *msg;
+ int err;
for (;;) {
spin_lock_irqsave(&drv->drv_lock, flags);
@@ -275,7 +277,10 @@ static void tcs_notify_tx_done(unsigned long data)
list_del(&resp->list);
spin_unlock_irqrestore(&drv->drv_lock, flags);
trace_rpmh_notify_tx_done(drv, resp);
+ msg = resp->msg;
+ err = resp->err;
free_response(resp);
+ rpmh_tx_done(msg, err);
}
}
@@ -575,6 +580,8 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0,
drv->tcs[ACTIVE_TCS].tcs_mask);
+ dev_set_drvdata(&pdev->dev, drv);
+
return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
}
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
new file mode 100644
index 000000000000..d95ea3fa8b67
--- /dev/null
+++ b/drivers/soc/qcom/rpmh.c
@@ -0,0 +1,257 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/atomic.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/mailbox_client.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include <soc/qcom/rpmh.h>
+
+#include "rpmh-internal.h"
+
+#define RPMH_MAX_MBOXES 2
+#define RPMH_TIMEOUT (10 * HZ)
+
+#define DEFINE_RPMH_MSG_ONSTACK(rc, s, q, c, name) \
+ struct rpmh_request name = { \
+ .msg = { \
+ .state = s, \
+ .payload = name.cmd, \
+ .num_payload = 0, \
+ .is_complete = true, \
+ }, \
+ .cmd = { { 0 } }, \
+ .completion = q, \
+ .wait_count = c, \
+ .rc = rc, \
+ }
+
+/**
+ * rpmh_request: the message to be sent to rpmh-rsc
+ *
+ * @msg: the request
+ * @cmd: the payload that will be part of the @msg
+ * @completion: triggered when request is done
+ * @wait_count: count of waiters for this completion
+ * @err: err return from the controller
+ */
+struct rpmh_request {
+ struct tcs_request msg;
+ struct tcs_cmd cmd[MAX_RPMH_PAYLOAD];
+ struct completion *completion;
+ atomic_t *wait_count;
+ struct rpmh_client *rc;
+ int err;
+};
+
+/**
+ * rpmh_ctrlr: our representation of the controller
+ *
+ * @drv: the controller instance
+ */
+struct rpmh_ctrlr {
+ struct rsc_drv *drv;
+};
+
+/**
+ * rpmh_client: the client object
+ *
+ * @dev: the platform device that is the owner
+ * @ctrlr: the controller associated with this client.
+ */
+struct rpmh_client {
+ struct device *dev;
+ struct rpmh_ctrlr *ctrlr;
+};
+
+static struct rpmh_ctrlr rpmh_rsc[RPMH_MAX_MBOXES];
+static DEFINE_MUTEX(rpmh_ctrlr_mutex);
+
+void rpmh_tx_done(struct tcs_request *msg, int r)
+{
+ struct rpmh_request *rpm_msg = container_of(msg,
+ struct rpmh_request, msg);
+ atomic_t *wc = rpm_msg->wait_count;
+ struct completion *compl = rpm_msg->completion;
+
+ rpm_msg->err = r;
+
+ if (r)
+ dev_err(rpm_msg->rc->dev,
+ "RPMH TX fail in msg addr 0x%x, err=%d\n",
+ rpm_msg->msg.payload[0].addr, r);
+
+ /* Signal the blocking thread we are done */
+ if (wc && atomic_dec_and_test(wc) && compl)
+ complete(compl);
+}
+EXPORT_SYMBOL(rpmh_tx_done);
+
+/**
+ * wait_for_tx_done: Wait until the response is received.
+ *
+ * @rc: The RPMH client
+ * @compl: The completion object
+ * @addr: An addr that we sent in that request
+ * @data: The data for the address in that request
+ *
+ */
+static int wait_for_tx_done(struct rpmh_client *rc,
+ struct completion *compl, u32 addr, u32 data)
+{
+ int ret;
+
+ ret = wait_for_completion_timeout(compl, RPMH_TIMEOUT);
+ if (ret)
+ dev_dbg(rc->dev,
+ "RPMH response received addr=0x%x data=0x%x\n",
+ addr, data);
+ else
+ dev_err(rc->dev,
+ "RPMH response timeout addr=0x%x data=0x%x\n",
+ addr, data);
+
+ return (ret > 0) ? 0 : -ETIMEDOUT;
+}
+
+/**
+ * __rpmh_write: send the RPMH request
+ *
+ * @rc: The RPMH client
+ * @state: Active/Sleep request type
+ * @rpm_msg: The data that needs to be sent (payload).
+ */
+static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ struct rpmh_request *rpm_msg)
+{
+ int ret = -EFAULT;
+
+ rpm_msg->msg.state = state;
+
+ if (state == RPMH_ACTIVE_ONLY_STATE) {
+ WARN_ON(irqs_disabled());
+ ret = rpmh_rsc_send_data(rc->ctrlr->drv, &rpm_msg->msg);
+ if (!ret)
+ dev_dbg(rc->dev,
+ "RPMH request sent addr=0x%x, data=0x%x\n",
+ rpm_msg->msg.payload[0].addr,
+ rpm_msg->msg.payload[0].data);
+ else
+ dev_warn(rc->dev,
+ "Error in RPMH request addr=0x%x, data=0x%x\n",
+ rpm_msg->msg.payload[0].addr,
+ rpm_msg->msg.payload[0].data);
+ }
+
+ return ret;
+}
+
+/**
+ * rpmh_write: Write a set of RPMH commands and block until response
+ *
+ * @rc: The RPMH handle obtained from rpmh_get_client()
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The number of elements in payload
+ *
+ * May sleep. Do not call from atomic contexts.
+ */
+int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int n)
+{
+ DECLARE_COMPLETION_ONSTACK(compl);
+ atomic_t wait_count = ATOMIC_INIT(1);
+ DEFINE_RPMH_MSG_ONSTACK(rc, state, &compl, &wait_count, rpm_msg);
+ int ret;
+
+ if (IS_ERR_OR_NULL(rc) || !cmd || n <= 0 || n > MAX_RPMH_PAYLOAD)
+ return -EINVAL;
+
+ might_sleep();
+
+ memcpy(rpm_msg.cmd, cmd, n * sizeof(*cmd));
+ rpm_msg.msg.num_payload = n;
+
+ ret = __rpmh_write(rc, state, &rpm_msg);
+ if (ret)
+ return ret;
+
+ return wait_for_tx_done(rc, &compl, cmd[0].addr, cmd[0].data);
+}
+EXPORT_SYMBOL(rpmh_write);
+
+static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
+{
+ int i;
+ struct rsc_drv *drv = dev_get_drvdata(pdev->dev.parent);
+ struct rpmh_ctrlr *ctrlr = ERR_PTR(-EFAULT);
+
+ if (!drv)
+ return ctrlr;
+
+ mutex_lock(&rpmh_ctrlr_mutex);
+ for (i = 0; i < RPMH_MAX_MBOXES; i++) {
+ if (rpmh_rsc[i].drv == drv) {
+ ctrlr = &rpmh_rsc[i];
+ goto unlock;
+ }
+ }
+
+ for (i = 0; i < RPMH_MAX_MBOXES; i++) {
+ if (rpmh_rsc[i].drv == NULL) {
+ ctrlr = &rpmh_rsc[i];
+ ctrlr->drv = drv;
+ break;
+ }
+ }
+ WARN_ON(i == RPMH_MAX_MBOXES);
+unlock:
+ mutex_unlock(&rpmh_ctrlr_mutex);
+ return ctrlr;
+}
+
+/**
+ * rpmh_get_client: Get the RPMh handle
+ *
+ * @pdev: the platform device which needs to communicate with RPM
+ * accelerators
+ * May sleep.
+ */
+struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
+{
+ struct rpmh_client *rc;
+
+ rc = kzalloc(sizeof(*rc), GFP_KERNEL);
+ if (!rc)
+ return ERR_PTR(-ENOMEM);
+
+ rc->dev = &pdev->dev;
+ rc->ctrlr = get_rpmh_ctrlr(pdev);
+ if (IS_ERR(rc->ctrlr)) {
+ kfree(rc);
+ return ERR_PTR(-EFAULT);
+ }
+
+ return rc;
+}
+EXPORT_SYMBOL(rpmh_get_client);
+
+/**
+ * rpmh_release: Release the RPMH client
+ *
+ * @rc: The RPMh handle to be freed.
+ */
+void rpmh_release(struct rpmh_client *rc)
+{
+ kfree(rc);
+}
+EXPORT_SYMBOL(rpmh_release);
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
new file mode 100644
index 000000000000..53cc145990c2
--- /dev/null
+++ b/include/soc/qcom/rpmh.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __SOC_QCOM_RPMH_H__
+#define __SOC_QCOM_RPMH_H__
+
+#include <soc/qcom/tcs.h>
+#include <linux/platform_device.h>
+
+struct rpmh_client;
+
+#if IS_ENABLED(CONFIG_QCOM_RPMH)
+int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int n);
+
+struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
+
+void rpmh_release(struct rpmh_client *rc);
+
+#else
+
+static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int n)
+{ return -ENODEV; }
+
+static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
+{ return ERR_PTR(-ENODEV); }
+
+static inline void rpmh_release(struct rpmh_client *rc) { }
+#endif /* CONFIG_QCOM_RPMH */
+
+#endif /* __SOC_QCOM_RPMH_H__ */
--
Sleep and wake requests are sent when the application processor
subsystem of the SoC is entering deep sleep states like in suspend.
These requests help lower the system power requirements when the
resources are not in use.
Sleep and wake requests are written to the TCS slots but are not
triggered at the time of writing. The TCS are triggered by the firmware
after the last of the CPUs has executed its WFI. Since these requests
may come in different batches of requests, it is the job of this controller
driver to arrange the requests into the available TCSes.
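The patch below tracks sleep/wake commands in a flattened per-type slot bitmap;
a slot index then maps back to the m-th TCS and the n-th command position
within it. A userspace model of that index math (the ncpt and tcs_offset
values in the test are illustrative, not tied to any SoC):

```c
/*
 * Model of the index math in find_slots() below: ncpt is the number of
 * commands per TCS and tcs_offset is where this TCS type starts in the
 * DRV. A flattened slot index maps to (m, n): the TCS number and the
 * command offset inside that TCS.
 */
static void slot_to_tcs_pos(int slot, int ncpt, int tcs_offset,
			    int *m, int *n)
{
	*m = tcs_offset + slot / ncpt;
	*n = slot % ncpt;
}
```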
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh-internal.h | 7 +++
drivers/soc/qcom/rpmh-rsc.c | 126 +++++++++++++++++++++++++++++++++++++++
2 files changed, 133 insertions(+)
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 9c6f8f5faa6a..957b1a79c4c8 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -13,6 +13,7 @@
#define MAX_CMDS_PER_TCS 16
#define MAX_TCS_PER_TYPE 3
#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
+#define MAX_TCS_SLOTS (MAX_CMDS_PER_TCS * MAX_TCS_PER_TYPE)
struct rsc_drv;
@@ -43,6 +44,8 @@ struct tcs_response {
* @ncpt: number of commands in each TCS
* @tcs_lock: lock for synchronizing this TCS writes
* @responses: response objects for requests sent from each TCS
+ * @cmd_addr: flattened cache of cmds in sleep/wake TCS
+ * @slots: indicates which of @cmd_addr are occupied
*/
struct tcs_group {
struct rsc_drv *drv;
@@ -53,6 +56,9 @@ struct tcs_group {
int ncpt;
spinlock_t tcs_lock;
struct tcs_response *responses[MAX_TCS_PER_TYPE];
+ u32 *cmd_addr;
+ DECLARE_BITMAP(slots, MAX_TCS_SLOTS);
+
};
/**
@@ -82,6 +88,7 @@ struct rsc_drv {
int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg);
+int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, struct tcs_request *msg);
void rpmh_tx_done(struct tcs_request *msg, int r);
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 5d022363be33..ae555fdc3ab7 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -6,6 +6,7 @@
#define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
#include <linux/atomic.h>
+#include <linux/bitmap.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
@@ -178,6 +179,12 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
case RPMH_ACTIVE_ONLY_STATE:
type = ACTIVE_TCS;
break;
+ case RPMH_WAKE_ONLY_STATE:
+ type = WAKE_TCS;
+ break;
+ case RPMH_SLEEP_STATE:
+ type = SLEEP_TCS;
+ break;
default:
return ERR_PTR(-EINVAL);
}
@@ -450,6 +457,112 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg)
}
EXPORT_SYMBOL(rpmh_rsc_send_data);
+static int find_match(struct tcs_group *tcs, struct tcs_cmd *cmd, int len)
+{
+ bool found = false;
+ int i = 0, j;
+
+ /* Check for already cached commands */
+ while ((i = find_next_bit(tcs->slots, MAX_TCS_SLOTS, i)) <
+ MAX_TCS_SLOTS) {
+ if (tcs->cmd_addr[i] != cmd[0].addr) {
+ i++;
+ continue;
+ }
+ /* sanity check to ensure the seq is same */
+ for (j = 1; j < len; j++) {
+ WARN((tcs->cmd_addr[i + j] != cmd[j].addr),
+ "Message does not match previous sequence.\n");
+ return -EINVAL;
+ }
+ found = true;
+ break;
+ }
+
+ return found ? i : -1;
+}
+
+static int find_slots(struct tcs_group *tcs, struct tcs_request *msg,
+ int *m, int *n)
+{
+ int slot, offset;
+ int i = 0;
+
+ /* Find if we already have the msg in our TCS */
+ slot = find_match(tcs, msg->payload, msg->num_payload);
+ if (slot >= 0)
+ goto copy_data;
+
+ /* Do over, until we can fit the full payload in a TCS */
+ do {
+ slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
+ i, msg->num_payload, 0);
+ if (slot == MAX_TCS_SLOTS)
+ break;
+ i += tcs->ncpt;
+ } while (slot + msg->num_payload - 1 >= i);
+
+ if (slot == MAX_TCS_SLOTS)
+ return -ENOMEM;
+
+copy_data:
+ bitmap_set(tcs->slots, slot, msg->num_payload);
+ /* Copy the addresses of the resources over to the slots */
+ for (i = 0; tcs->cmd_addr && i < msg->num_payload; i++)
+ tcs->cmd_addr[slot + i] = msg->payload[i].addr;
+
+ offset = slot / tcs->ncpt;
+ *m = offset + tcs->tcs_offset;
+ *n = slot % tcs->ncpt;
+
+ return 0;
+}
+
+static int tcs_ctrl_write(struct rsc_drv *drv, struct tcs_request *msg)
+{
+ struct tcs_group *tcs;
+ int m = 0, n = 0;
+ unsigned long flags;
+ int ret = 0;
+
+ tcs = get_tcs_for_msg(drv, msg);
+ if (IS_ERR(tcs))
+ return PTR_ERR(tcs);
+
+ spin_lock_irqsave(&tcs->tcs_lock, flags);
+ /* find the m-th TCS and the n-th position in the TCS to write to */
+ ret = find_slots(tcs, msg, &m, &n);
+ if (!ret)
+ __tcs_buffer_write(drv, m, n, msg);
+ spin_unlock_irqrestore(&tcs->tcs_lock, flags);
+
+ return ret;
+}
+
+/**
+ * rpmh_rsc_write_ctrl_data: Write request to the controller
+ *
+ * @drv: the controller
+ * @msg: the data to be written to the controller
+ *
+ * There is no response returned for writing the request to the controller.
+ */
+int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, struct tcs_request *msg)
+{
+ if (!msg || !msg->payload || !msg->num_payload ||
+ msg->num_payload > MAX_RPMH_PAYLOAD) {
+ pr_err("Payload error\n");
+ return -EINVAL;
+ }
+
+ /* Data sent to this API will not be sent immediately */
+ if (msg->state == RPMH_ACTIVE_ONLY_STATE)
+ return -EINVAL;
+
+ return tcs_ctrl_write(drv, msg);
+}
+EXPORT_SYMBOL(rpmh_rsc_write_ctrl_data);
+
static int rpmh_probe_tcs_config(struct platform_device *pdev,
struct rsc_drv *drv)
{
@@ -530,6 +643,19 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
tcs->tcs_mask = ((1 << tcs->num_tcs) - 1) << st;
tcs->tcs_offset = st;
st += tcs->num_tcs;
+
+ /*
+ * Allocate memory to cache sleep and wake requests to
+ * avoid reading TCS register memory.
+ */
+ if (tcs->type == ACTIVE_TCS)
+ continue;
+
+ tcs->cmd_addr = devm_kzalloc(&pdev->dev,
+ sizeof(u32) * tcs->num_tcs * ncpt,
+ GFP_KERNEL);
+ if (!tcs->cmd_addr)
+ return -ENOMEM;
}
drv->num_tcs = st;
--
Platform drivers may need to make many resource state requests at the same
time, say, at the start or end of a use case. It can be quite
inefficient to send each request separately. Instead they can give the
RPMH library a batch of requests to be sent and wait on the whole
transaction to be complete.
rpmh_write_batch() is a blocking call that can be used to send multiple
RPMH command sets. Each RPMH command set is sent asynchronously and the
API blocks until all the command sets are complete and receive their
tx_done callbacks.
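rpmh_write_batch() takes a 0-terminated array of per-batch command counts; a
small userspace model of how the batch count is derived from that array:

```c
/*
 * Model of the count parsing in rpmh_write_batch() below: n[i] holds
 * the number of commands in batch i, terminated by a 0 entry.
 */
static int count_batches(const int *n)
{
	int count = 0;

	while (n[count] > 0)
		count++;
	return count;
}
```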
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh.c | 150 ++++++++++++++++++++++++++++++++++++++++++++++++
include/soc/qcom/rpmh.h | 8 +++
2 files changed, 158 insertions(+)
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index dff4c46be3af..6f60bb9a4dfa 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -22,6 +22,7 @@
#define RPMH_MAX_MBOXES 2
#define RPMH_TIMEOUT (10 * HZ)
+#define RPMH_MAX_REQ_IN_BATCH 10
#define DEFINE_RPMH_MSG_ONSTACK(rc, s, q, c, name) \
struct rpmh_request name = { \
@@ -81,12 +82,14 @@ struct rpmh_request {
* @cache: the list of cached requests
* @lock: synchronize access to the controller data
* @dirty: was the cache updated since flush
+ * @batch_cache: Cache sleep and wake requests sent as batch
*/
struct rpmh_ctrlr {
struct rsc_drv *drv;
struct list_head cache;
spinlock_t lock;
bool dirty;
+ struct rpmh_request *batch_cache[2 * RPMH_MAX_REQ_IN_BATCH];
};
/**
@@ -343,6 +346,146 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
}
EXPORT_SYMBOL(rpmh_write);
+static int cache_batch(struct rpmh_client *rc,
+ struct rpmh_request **rpm_msg, int count)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ unsigned long flags;
+ int ret = 0;
+ int index = 0;
+ int i;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ while (rpm->batch_cache[index])
+ index++;
+ if (index + count >= 2 * RPMH_MAX_REQ_IN_BATCH) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ for (i = 0; i < count; i++)
+ rpm->batch_cache[index + i] = rpm_msg[i];
+fail:
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return ret;
+}
+
+static int flush_batch(struct rpmh_client *rc)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ struct rpmh_request *rpm_msg;
+ unsigned long flags;
+ int ret = 0;
+ int i;
+
+ /* Send Sleep/Wake requests to the controller, expect no response */
+ spin_lock_irqsave(&rpm->lock, flags);
+ for (i = 0; rpm->batch_cache[i]; i++) {
+ rpm_msg = rpm->batch_cache[i];
+ ret = rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg->msg);
+ if (ret)
+ break;
+ }
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return ret;
+}
+
+static void invalidate_batch(struct rpmh_client *rc)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ unsigned long flags;
+ int index = 0;
+ int i;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ while (rpm->batch_cache[index])
+ index++;
+ for (i = 0; i < index; i++) {
+ kfree(rpm->batch_cache[i]->free);
+ rpm->batch_cache[i] = NULL;
+ }
+ spin_unlock_irqrestore(&rpm->lock, flags);
+}
+
+/**
+ * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
+ * batch to finish.
+ *
+ * @rc: The RPMH handle obtained from rpmh_get_client()
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The array of counts of elements in each batch, 0-terminated.
+ *
+ * Write a request to the mailbox controller without caching. If the request
+ * state is ACTIVE, then the requests are treated as completion requests
+ * and sent to the controller immediately. The function waits until all the
+ * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
+ * request is sent as fire-n-forget and no ack is expected.
+ *
+ * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
+ */
+int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int *n)
+{
+ struct rpmh_request *rpm_msg[RPMH_MAX_REQ_IN_BATCH] = { NULL };
+ DECLARE_COMPLETION_ONSTACK(compl);
+ atomic_t wait_count = ATOMIC_INIT(0); /* overwritten */
+ int count = 0;
+ int ret, i, j;
+
+ if (IS_ERR_OR_NULL(rc) || !cmd || !n)
+ return -EINVAL;
+
+ while (n[count++] > 0)
+ ;
+ count--;
+ if (!count || count > RPMH_MAX_REQ_IN_BATCH)
+ return -EINVAL;
+
+ /* Create async request batches */
+ for (i = 0; i < count; i++) {
+ rpm_msg[i] = __get_rpmh_msg_async(rc, state, cmd, n[i]);
+ if (IS_ERR_OR_NULL(rpm_msg[i])) {
+ for (j = 0 ; j < i; j++)
+ kfree(rpm_msg[j]->free);
+ return PTR_ERR(rpm_msg[i]);
+ }
+ cmd += n[i];
+ }
+
+ /* Send if Active and wait for the whole set to complete */
+ if (state == RPMH_ACTIVE_ONLY_STATE) {
+ might_sleep();
+ atomic_set(&wait_count, count);
+ for (i = 0; i < count; i++) {
+ rpm_msg[i]->completion = &compl;
+ rpm_msg[i]->wait_count = &wait_count;
+ /* Bypass caching and write to mailbox directly */
+ ret = rpmh_rsc_send_data(rc->ctrlr->drv,
+ &rpm_msg[i]->msg);
+ if (ret < 0) {
+ pr_err(
+ "Error(%d) sending RPMH message addr=0x%x\n",
+ ret, rpm_msg[i]->msg.payload[0].addr);
+ break;
+ }
+ }
+ /* For those unsent requests, spoof tx_done */
+ for (j = i; j < count; j++)
+ rpmh_tx_done(&rpm_msg[j]->msg, ret);
+ return wait_for_tx_done(rc, &compl, cmd[0].addr, cmd[0].data);
+ }
+
+ /*
+ * Cache sleep/wake data in store.
+ * But flush batch first before flushing all other data.
+ */
+ return cache_batch(rc, rpm_msg, count);
+}
+EXPORT_SYMBOL(rpmh_write_batch);
+
static int is_req_valid(struct cache_req *req)
{
return (req->sleep_val != UINT_MAX &&
@@ -394,6 +537,11 @@ int rpmh_flush(struct rpmh_client *rc)
}
spin_unlock_irqrestore(&rpm->lock, flags);
+ /* First flush the cached batch requests */
+ ret = flush_batch(rc);
+ if (ret)
+ return ret;
+
/*
* Nobody else should be calling this function other than system PM,
* hence we can run without locks.
@@ -438,6 +586,8 @@ int rpmh_invalidate(struct rpmh_client *rc)
if (IS_ERR_OR_NULL(rc))
return -EINVAL;
+ invalidate_batch(rc);
+
spin_lock_irqsave(&rpm->lock, flags);
rpm->dirty = true;
spin_unlock_irqrestore(&rpm->lock, flags);
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index 172a649f1a1c..6dc1042534d6 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -18,6 +18,9 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
struct tcs_cmd *cmd, int n);
+int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int *n);
+
struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
int rpmh_flush(struct rpmh_client *rc);
@@ -40,6 +43,11 @@ static inline int rpmh_write_async(struct rpmh_client *rc,
struct tcs_cmd *cmd, int n)
{ return -ENODEV; }
+static inline int rpmh_write_batch(struct rpmh_client *rc,
+ enum rpmh_state state,
+ struct tcs_cmd *cmd, int *n)
+{ return -ENODEV; }
+
static inline int rpmh_flush(struct rpmh_client *rc)
{ return -ENODEV; }
--
Add device binding documentation for Qualcomm Technology Inc's RPMH RSC
driver. The hardware block is used for communicating resource state
requests for shared resources.
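The binding's examples derive the two reg tuples from the RSC base address,
the DRV id and a per-SoC TCS offset. That arithmetic can be modeled as below;
the 0x10000 per-DRV stride comes from the examples in the binding itself.

```c
#include <stdint.h>

/*
 * Model of the register address arithmetic used in the binding examples:
 * each DRV occupies a 0x10000 region above the RSC base, and the TCS
 * registers sit at a SoC-specific offset within that region.
 */
static uint32_t drv_base(uint32_t rsc_base, uint32_t drv_id)
{
	return rsc_base + 0x10000 * drv_id;
}

static uint32_t tcs_base(uint32_t rsc_base, uint32_t drv_id,
			 uint32_t tcs_offset)
{
	return drv_base(rsc_base, drv_id) + tcs_offset;
}
```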
Cc: [email protected]
Signed-off-by: Lina Iyer <[email protected]>
---
.../devicetree/bindings/arm/msm/rpmh-rsc.txt | 146 +++++++++++++++++++++
1 file changed, 146 insertions(+)
create mode 100644 Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
diff --git a/Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt b/Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
new file mode 100644
index 000000000000..6cb36ae98bf9
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
@@ -0,0 +1,146 @@
+RPMH RSC:
+------------
+
+RPMH is the mechanism for communicating with the hardened resource
+accelerators. Requests to the resources can be written to the TCS mailbox
+registers using an (addr, val) pair and triggered. Messages in the TCS are
+then sent in sequence over an internal bus.
+
+The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity
+(Resource State Coordinator a.k.a. RSC) that can handle multiple sleep and
+active/wake resource requests. Multiple such DRVs can exist in a SoC and can
+be written to from Linux. The structure of each DRV follows the same template
+with a few variations that are captured by the properties here.
+
+Each DRV could have 'm' TCS instances. Each TCS could have 'n' slots. Each
+slot has a header (u32), address (u32), data (u32), status (u32) and a
+read-response (u32). A TCS when triggered will send all the enabled commands
+of the 'n' commands in that slot in sequence.
+
+A TCS may be triggered from Linux or triggered by the F/W after all the CPUs
+have powered off to facilitate idle power saving. TCS could be classified as -
+
+ SLEEP, /* Triggered by F/W */
+ WAKE, /* Triggered by F/W */
+ ACTIVE, /* Triggered by Linux */
+ CONTROL /* Triggered by F/W */
+
+The order in which they are described in the DT should match the hardware
+configuration.
+
+Requests can be made for the state of a resource, when the subsystem is active
+or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state
+will be an aggregate of the sleep votes from each of those subsystems. Drivers
+may request a sleep value for their shared resources in addition to the active
+mode requests.
+
+Control requests are instance specific requests that may or may not reach an
+accelerator. Only one platform device in Linux can request a control channel
+on a DRV.
+
+CONTROLLER:
+----------
+
+PROPERTIES:
+
+- compatible:
+ Usage: required
+ Value type: <string>
+ Definition: Should be "qcom,rpmh-rsc".
+
+- reg:
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: The first register specifies the base address of the DRV.
+ The second register specifies the start address of the
+ TCS.
+
+- reg-names:
+ Usage: required
+ Value type: <string>
+ Definition: Maps the register specified in the reg property. Must be
+ "drv" and "tcs".
+
+- interrupts:
+ Usage: required
+ Value type: <prop-encoded-interrupt>
+ Definition: The interrupt that trips when a message complete/response
+ is received for this DRV from the accelerators.
+
+- qcom,drv-id:
+ Usage: required
+ Value type: <u32>
+ Definition: the id of the DRV in the RSC block.
+
+- qcom,tcs-config:
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: the tuple defining the configuration of TCS.
+ Must have 2 cells which describe each TCS type.
+ <type number_of_tcs>.
+ The order of the TCS must match the hardware
+ configuration.
+ - Cell #1 (TCS Type): TCS types to be specified -
+ SLEEP_TCS
+ WAKE_TCS
+ ACTIVE_TCS
+ CONTROL_TCS
+ - Cell #2 (Number of TCS): <u32>
+
+- label:
+ Usage: optional
+ Value type: <string>
+	Definition: Name for the RSC. The name is used in trace logs.
+
+Platform drivers that want to use the RSC to communicate with RPMH must
+specify their bindings as children of the corresponding RSC controller.
+
+EXAMPLE 1:
+
+For a TCS whose RSC base address is 0x179C0000 and is at a DRV id of 2, the
+register offsets for DRV2 start at 0xD00, the register calculations are like
+this -
+First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
+Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
+
+	apps_rsc: rsc@179e0000 {
+ label = "apps_rsc";
+ compatible = "qcom,rpmh-rsc";
+ reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
+ reg-names = "drv", "tcs";
+		interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,drv-id = <2>;
+ qcom,tcs-config = <SLEEP_TCS 3>,
+ <WAKE_TCS 3>,
+ <ACTIVE_TCS 2>,
+ <CONTROL_TCS 1>;
+
+ foo-clk {
+ compatible = "foo-clk";
+ };
+ };
+
+EXAMPLE 2:
+
+For a TCS whose RSC base address is 0xAF20000 and is at a DRV id of 0, the
+register offsets for DRV0 start at 0x1C00, the register calculations are like
+this -
+First tuple: 0xAF20000
+Second tuple: 0xAF20000 + 0x1C00 = 0xAF21C00
+
+ disp_rsc: rsc@af20000 {
+ label = "disp_rsc";
+ compatible = "qcom,rpmh-rsc";
+ reg = <0xaf20000 0x10000>, <0xaf21c00 0x3000>;
+ reg-names = "drv", "tcs";
+ interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,drv-id = <0>;
+ qcom,tcs-config = <SLEEP_TCS 1>,
+ <WAKE_TCS 1>,
+ <ACTIVE_TCS 0>,
+ <CONTROL_TCS 0>;
+
+ foo-clk {
+ compatible = "foo-clk";
+ };
+ };
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
Allow sleep and wake commands to be cleared from the respective TCSes,
so that they can be re-populated.
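The retry contract this implies for callers can be sketched in a small userspace mock (names here are made up for illustration; the real driver operates on TCS registers and a slot bitmap):

```c
/* Illustrative sketch of the invalidate contract: clearing fails with
 * -EAGAIN while any TCS of the group is still in use, and the caller is
 * expected to retry until the group is idle. */
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define MOCK_NUM_TCS 3

/* stands in for rpmh_rsc_invalidate() acting on one TCS group */
static int mock_invalidate(const int in_use[MOCK_NUM_TCS], uint32_t *slots)
{
	int m;

	for (m = 0; m < MOCK_NUM_TCS; m++)
		if (in_use[m])
			return -EAGAIN;	/* busy: caller retries */

	*slots = 0;	/* all slots marked free, ready to re-populate */
	return 0;
}
```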
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh-internal.h | 1 +
drivers/soc/qcom/rpmh-rsc.c | 46 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 47 insertions(+)
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 957b1a79c4c8..fa1d36f796b0 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -89,6 +89,7 @@ struct rsc_drv {
int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg);
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, struct tcs_request *msg);
+int rpmh_rsc_invalidate(struct rsc_drv *drv);
void rpmh_tx_done(struct tcs_request *msg, int r);
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index ae555fdc3ab7..951b797a8cc7 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -170,6 +170,52 @@ static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
return tcs;
}
+/**
+ * rpmh_rsc_invalidate - Invalidate sleep and wake TCSes
+ *
+ * @drv: the RSC controller
+ */
+int rpmh_rsc_invalidate(struct rsc_drv *drv)
+{
+ struct tcs_group *tcs;
+ int m, type, ret = 0;
+ int inv_types[] = { WAKE_TCS, SLEEP_TCS };
+ unsigned long drv_flags, flags;
+
+ /* Lock the DRV and clear sleep and wake TCSes */
+ spin_lock_irqsave(&drv->drv_lock, drv_flags);
+ for (type = 0; type < ARRAY_SIZE(inv_types); type++) {
+ tcs = get_tcs_of_type(drv, inv_types[type]);
+ if (IS_ERR(tcs))
+ continue;
+
+ spin_lock_irqsave(&tcs->tcs_lock, flags);
+ if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) {
+ spin_unlock_irqrestore(&tcs->tcs_lock, flags);
+ continue;
+ }
+
+ /* Clear the enable register for each TCS of the type */
+ for (m = tcs->tcs_offset;
+ m < tcs->tcs_offset + tcs->num_tcs; m++) {
+ if (!tcs_is_free(drv, m)) {
+ spin_unlock_irqrestore(&tcs->tcs_lock, flags);
+ ret = -EAGAIN;
+ goto drv_unlock;
+ }
+			write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
+		}
+		/* Mark the TCS slots as free */
+		bitmap_zero(tcs->slots, MAX_TCS_SLOTS);
+ spin_unlock_irqrestore(&tcs->tcs_lock, flags);
+ }
+drv_unlock:
+ spin_unlock_irqrestore(&drv->drv_lock, drv_flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(rpmh_rsc_invalidate);
+
static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
struct tcs_request *msg)
{
--
Log sent RPMH requests and interrupt responses in FTRACE.
Cc: Steven Rostedt <[email protected]>
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/Makefile | 1 +
drivers/soc/qcom/rpmh-rsc.c | 6 +++
drivers/soc/qcom/trace-rpmh.h | 89 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 96 insertions(+)
create mode 100644 drivers/soc/qcom/trace-rpmh.h
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 7426979ea9fb..094b639ece2f 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
+CFLAGS_trace-rpmh.o := -I$(src)
obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index ceb8aea237d6..58a3c53d651e 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -24,6 +24,9 @@
#include "rpmh-internal.h"
+#define CREATE_TRACE_POINTS
+#include "trace-rpmh.h"
+
#define RSC_DRV_TCS_OFFSET 672
#define RSC_DRV_CMD_OFFSET 20
@@ -239,6 +242,7 @@ static irqreturn_t tcs_irq_handler(int irq, void *p)
/* Reclaim the TCS */
write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
+ trace_rpmh_notify_irq(drv, resp);
atomic_set(&drv->tcs_in_use[m], 0);
send_tcs_response(resp);
}
@@ -270,6 +274,7 @@ static void tcs_notify_tx_done(unsigned long data)
struct tcs_response, list);
list_del(&resp->list);
spin_unlock_irqrestore(&drv->drv_lock, flags);
+ trace_rpmh_notify_tx_done(drv, resp);
free_response(resp);
}
}
@@ -298,6 +303,7 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, n + i, msgid);
write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, n + i, cmd->addr);
write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, n + i, cmd->data);
+ trace_rpmh_send_msg(drv, m, n + i, msgid, cmd);
}
write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
diff --git a/drivers/soc/qcom/trace-rpmh.h b/drivers/soc/qcom/trace-rpmh.h
new file mode 100644
index 000000000000..734de0d54a9f
--- /dev/null
+++ b/drivers/soc/qcom/trace-rpmh.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#if !defined(_TRACE_RPMH_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_RPMH_H
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM rpmh
+
+#include <linux/tracepoint.h>
+#include "rpmh-internal.h"
+
+DECLARE_EVENT_CLASS(rpmh_notify,
+
+ TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
+
+ TP_ARGS(d, r),
+
+ TP_STRUCT__entry(
+		__field(const char *, name)
+ __field(int, m)
+ __field(u32, addr)
+ __field(int, errno)
+ ),
+
+ TP_fast_assign(
+ __entry->name = d->name;
+ __entry->m = r->m;
+ __entry->addr = r->msg->payload[0].addr;
+ __entry->errno = r->err;
+ ),
+
+ TP_printk("%s: ack: tcs-m:%d addr: 0x%08x errno: %d",
+ __entry->name, __entry->m, __entry->addr, __entry->errno)
+);
+
+DEFINE_EVENT(rpmh_notify, rpmh_notify_irq,
+ TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
+ TP_ARGS(d, r)
+);
+
+DEFINE_EVENT(rpmh_notify, rpmh_notify_tx_done,
+ TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
+ TP_ARGS(d, r)
+);
+
+
+TRACE_EVENT(rpmh_send_msg,
+
+ TP_PROTO(struct rsc_drv *d, int m, int n, u32 h, struct tcs_cmd *c),
+
+ TP_ARGS(d, m, n, h, c),
+
+ TP_STRUCT__entry(
+		__field(const char *, name)
+ __field(int, m)
+ __field(int, n)
+ __field(u32, hdr)
+ __field(u32, addr)
+ __field(u32, data)
+ __field(bool, complete)
+ ),
+
+ TP_fast_assign(
+		__entry->name = d->name;
+ __entry->m = m;
+ __entry->n = n;
+ __entry->hdr = h;
+ __entry->addr = c->addr;
+ __entry->data = c->data;
+ __entry->complete = c->complete;
+ ),
+
+ TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
+ __entry->name, __entry->m, __entry->n, __entry->hdr,
+ __entry->addr, __entry->data, __entry->complete)
+);
+
+#endif /* _TRACE_RPMH_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../drivers/soc/qcom
+
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace-rpmh
+
+#include <trace/define_trace.h>
--
Platform drivers that want to send a request but do not want to block
until the RPMH request completes have now a new API -
rpmh_write_async().
The API allocates memory, sends the request and returns control
back to the platform driver. The tx_done callback from the controller is
handled in the context of the controller's thread and frees the
allocated memory. This API allows RPMH requests from atomic contexts as
well.
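The ownership model described above can be sketched in userspace (the struct and function names below are illustrative stand-ins, not the driver's API): async requests carry a self-pointer in ->free so tx_done can release them, while blocking requests live on the stack with ->free == NULL.

```c
/* Minimal sketch of the request-ownership pattern (not kernel code). */
#include <assert.h>
#include <stdlib.h>

struct mock_request {
	int data;
	struct mock_request *free;	/* freed by tx_done when non-NULL */
};

/* stands in for rpmh_tx_done(): releases async requests only */
static void mock_tx_done(struct mock_request *req)
{
	free(req->free);	/* free(NULL) is a no-op for sync requests */
}

/* stands in for __get_rpmh_msg_async(): heap-allocate, mark for freeing */
static struct mock_request *mock_msg_async(int data)
{
	struct mock_request *req = calloc(1, sizeof(*req));

	if (!req)
		return NULL;
	req->data = data;
	req->free = req;	/* tx_done will free this request */
	return req;
}
```

A sync request would be declared on the stack with .free = NULL, so the same tx_done path safely handles both cases.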
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++++
include/soc/qcom/rpmh.h | 8 ++++++++
2 files changed, 60 insertions(+)
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 671bc03ad77a..dff4c46be3af 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -35,6 +35,7 @@
.completion = q, \
.wait_count = c, \
.rc = rc, \
+ .free = NULL, \
}
@@ -61,6 +62,7 @@ struct cache_req {
* @completion: triggered when request is done
* @wait_count: count of waiters for this completion
* @err: err return from the controller
+ * @free: the request object to be freed at tx_done
*/
struct rpmh_request {
struct tcs_request msg;
@@ -69,6 +71,7 @@ struct rpmh_request {
atomic_t *wait_count;
struct rpmh_client *rc;
int err;
+ struct rpmh_request *free;
};
/**
@@ -114,6 +117,8 @@ void rpmh_tx_done(struct tcs_request *msg, int r)
"RPMH TX fail in msg addr 0x%x, err=%d\n",
rpm_msg->msg.payload[0].addr, r);
+ kfree(rpm_msg->free);
+
/* Signal the blocking thread we are done */
if (wc && atomic_dec_and_test(wc) && compl)
complete(compl);
@@ -257,6 +262,53 @@ static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
return ret;
}
+static struct rpmh_request *__get_rpmh_msg_async(struct rpmh_client *rc,
+ enum rpmh_state state,
+ struct tcs_cmd *cmd, int n)
+{
+ struct rpmh_request *req;
+
+ if (IS_ERR_OR_NULL(rc) || !cmd || n <= 0 || n > MAX_RPMH_PAYLOAD)
+ return ERR_PTR(-EINVAL);
+
+	req = kzalloc(sizeof(*req), GFP_ATOMIC);
+ if (!req)
+ return ERR_PTR(-ENOMEM);
+
+ memcpy(req->cmd, cmd, n * sizeof(*cmd));
+
+ req->msg.state = state;
+ req->msg.payload = req->cmd;
+ req->msg.num_payload = n;
+ req->free = req;
+
+ return req;
+}
+
+/**
+ * rpmh_write_async: Write a set of RPMH commands
+ *
+ * @rc: The RPMH handle obtained from rpmh_get_client()
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The number of elements in payload
+ *
+ * Write a set of RPMH commands; the order of the commands is maintained
+ * and they are sent as a single batch without blocking for completion.
+ */
+int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int n)
+{
+ struct rpmh_request *rpm_msg;
+
+ rpm_msg = __get_rpmh_msg_async(rc, state, cmd, n);
+ if (IS_ERR(rpm_msg))
+ return PTR_ERR(rpm_msg);
+
+ return __rpmh_write(rc, state, rpm_msg);
+}
+EXPORT_SYMBOL(rpmh_write_async);
+
/**
* rpmh_write: Write a set of RPMH commands and block until response
*
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index a3f1246ce49a..172a649f1a1c 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -15,6 +15,9 @@ struct rpmh_client;
int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
struct tcs_cmd *cmd, int n);
+int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
+ struct tcs_cmd *cmd, int n);
+
struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
int rpmh_flush(struct rpmh_client *rc);
@@ -32,6 +35,11 @@ static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
{ return ERR_PTR(-ENODEV); }
+static inline int rpmh_write_async(struct rpmh_client *rc,
+ enum rpmh_state state,
+ struct tcs_cmd *cmd, int n)
+{ return -ENODEV; }
+
static inline int rpmh_flush(struct rpmh_client *rc)
{ return -ENODEV; }
--
Active state requests are sent immediately to the mailbox controller,
while sleep and wake state requests are cached in this driver to avoid
taxing the mailbox controller repeatedly. The cached values will be sent
to the controller when rpmh_flush() is called.
Generally, flushing is a system PM activity and may be called from the
system PM drivers when the system is entering suspend or deeper sleep
modes during cpuidle.
Also allow invalidating the cached requests, so they may be
re-populated.
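The aggregation rule implemented by this cache can be sketched in userspace (struct and function names below are illustrative, mirroring the cache_rpm_request()/is_req_valid() logic in the patch): each address caches a sleep and a wake vote, UINT_MAX marks "no vote yet", and a flush only sends entries where both votes exist and differ.

```c
/* Userspace sketch of the sleep/wake vote cache (not the driver code). */
#include <assert.h>
#include <limits.h>
#include <stdint.h>

struct mock_cache_req {
	uint32_t addr;
	uint32_t sleep_val;	/* UINT_MAX: no sleep vote cached */
	uint32_t wake_val;	/* UINT_MAX: no wake vote cached */
};

enum mock_state { MOCK_ACTIVE, MOCK_WAKE, MOCK_SLEEP };

/* an active vote doubles as the wake vote once a sleep vote exists */
static void mock_cache(struct mock_cache_req *req, enum mock_state state,
		       uint32_t data)
{
	switch (state) {
	case MOCK_ACTIVE:
		if (req->sleep_val != UINT_MAX)
			req->wake_val = data;
		break;
	case MOCK_WAKE:
		req->wake_val = data;
		break;
	case MOCK_SLEEP:
		req->sleep_val = data;
		break;
	}
}

/* flush an entry only when both votes exist and differ */
static int mock_req_valid(const struct mock_cache_req *req)
{
	return req->sleep_val != UINT_MAX &&
	       req->wake_val != UINT_MAX &&
	       req->sleep_val != req->wake_val;
}
```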
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh.c | 213 +++++++++++++++++++++++++++++++++++++++++++++++-
include/soc/qcom/rpmh.h | 10 +++
2 files changed, 222 insertions(+), 1 deletion(-)
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index d95ea3fa8b67..671bc03ad77a 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -6,11 +6,13 @@
#include <linux/atomic.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
+#include <linux/list.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
+#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>
@@ -35,6 +37,22 @@
.rc = rc, \
}
+
+/**
+ * cache_req: the request object for caching
+ *
+ * @addr: the address of the resource
+ * @sleep_val: the sleep vote
+ * @wake_val: the wake vote
+ * @list: linked list obj
+ */
+struct cache_req {
+ u32 addr;
+ u32 sleep_val;
+ u32 wake_val;
+ struct list_head list;
+};
+
/**
* rpmh_request: the message to be sent to rpmh-rsc
*
@@ -57,9 +75,15 @@ struct rpmh_request {
* rpmh_ctrlr: our representation of the controller
*
* @drv: the controller instance
+ * @cache: the list of cached requests
+ * @lock: synchronize access to the controller data
+ * @dirty: was the cache updated since flush
*/
struct rpmh_ctrlr {
struct rsc_drv *drv;
+ struct list_head cache;
+ spinlock_t lock;
+ bool dirty;
};
/**
@@ -123,17 +147,91 @@ static int wait_for_tx_done(struct rpmh_client *rc,
return (ret > 0) ? 0 : -ETIMEDOUT;
}
+static struct cache_req *__find_req(struct rpmh_client *rc, u32 addr)
+{
+ struct cache_req *p, *req = NULL;
+
+ list_for_each_entry(p, &rc->ctrlr->cache, list) {
+ if (p->addr == addr) {
+ req = p;
+ break;
+ }
+ }
+
+ return req;
+}
+
+static struct cache_req *cache_rpm_request(struct rpmh_client *rc,
+ enum rpmh_state state,
+ struct tcs_cmd *cmd)
+{
+ struct cache_req *req;
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ req = __find_req(rc, cmd->addr);
+ if (req)
+ goto existing;
+
+ req = kzalloc(sizeof(*req), GFP_ATOMIC);
+ if (!req) {
+ req = ERR_PTR(-ENOMEM);
+ goto unlock;
+ }
+
+ req->addr = cmd->addr;
+ req->sleep_val = req->wake_val = UINT_MAX;
+ INIT_LIST_HEAD(&req->list);
+ list_add_tail(&req->list, &rpm->cache);
+
+existing:
+ switch (state) {
+ case RPMH_ACTIVE_ONLY_STATE:
+ if (req->sleep_val != UINT_MAX)
+ req->wake_val = cmd->data;
+ break;
+ case RPMH_WAKE_ONLY_STATE:
+ req->wake_val = cmd->data;
+ break;
+ case RPMH_SLEEP_STATE:
+ req->sleep_val = cmd->data;
+ break;
+ default:
+ break;
+	}
+
+ rpm->dirty = true;
+unlock:
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return req;
+}
+
/**
- * __rpmh_write: send the RPMH request
+ * __rpmh_write: Cache and send the RPMH request
*
* @rc: The RPMH client
* @state: Active/Sleep request type
* @rpm_msg: The data that needs to be sent (payload).
+ *
+ * Cache the RPMH request and send if the state is ACTIVE_ONLY.
+ * SLEEP/WAKE_ONLY requests are not sent to the controller at
+ * this time. Use rpmh_flush() to send them to the controller.
*/
static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
struct rpmh_request *rpm_msg)
{
int ret = -EFAULT;
+ struct cache_req *req;
+ int i;
+
+ /* Cache the request in our store and link the payload */
+ for (i = 0; i < rpm_msg->msg.num_payload; i++) {
+ req = cache_rpm_request(rc, state, &rpm_msg->msg.payload[i]);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+ }
rpm_msg->msg.state = state;
@@ -150,6 +248,10 @@ static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
"Error in RPMH request addr=0x%x, data=0x%x\n",
rpm_msg->msg.payload[0].addr,
rpm_msg->msg.payload[0].data);
+ } else {
+ ret = rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg->msg);
+ /* Clean up our call by spoofing tx_done */
+ rpmh_tx_done(&rpm_msg->msg, ret);
}
return ret;
@@ -189,6 +291,113 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
}
EXPORT_SYMBOL(rpmh_write);
+static int is_req_valid(struct cache_req *req)
+{
+ return (req->sleep_val != UINT_MAX &&
+ req->wake_val != UINT_MAX &&
+ req->sleep_val != req->wake_val);
+}
+
+static int send_single(struct rpmh_client *rc, enum rpmh_state state,
+ u32 addr, u32 data)
+{
+ DEFINE_RPMH_MSG_ONSTACK(rc, state, NULL, NULL, rpm_msg);
+
+ /* Wake sets are always complete and sleep sets are not */
+	rpm_msg.msg.is_complete = (state == RPMH_WAKE_ONLY_STATE);
+	rpm_msg.cmd[0].addr = addr;
+	rpm_msg.cmd[0].data = data;
+	rpm_msg.msg.num_payload = 1;
+
+ return rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg.msg);
+}
+
+/**
+ * rpmh_flush: Flushes the buffered sleep and wake sets to TCSes
+ *
+ * @rc: The RPMH handle obtained from rpmh_get_client()
+ *
+ * This function is generally called from the sleep code from the last CPU
+ * that is powering down the entire system.
+ *
+ * Returns -EBUSY if the controller is busy, probably waiting on a response
+ * to a RPMH request sent earlier.
+ */
+int rpmh_flush(struct rpmh_client *rc)
+{
+	struct cache_req *p;
+	struct rpmh_ctrlr *rpm;
+	int ret;
+	unsigned long flags;
+
+	if (IS_ERR_OR_NULL(rc))
+		return -EINVAL;
+
+	rpm = rc->ctrlr;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ if (!rpm->dirty) {
+ pr_debug("Skipping flush, TCS has latest data.\n");
+ spin_unlock_irqrestore(&rpm->lock, flags);
+ return 0;
+ }
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+	/*
+	 * Nobody else should be calling this function other than system PM,
+	 * hence we can run without locks.
+	 */
+ list_for_each_entry(p, &rc->ctrlr->cache, list) {
+ if (!is_req_valid(p)) {
+ pr_debug("%s: skipping RPMH req: a:0x%x s:0x%x w:0x%x",
+ __func__, p->addr, p->sleep_val, p->wake_val);
+ continue;
+ }
+ ret = send_single(rc, RPMH_SLEEP_STATE, p->addr, p->sleep_val);
+ if (ret)
+ return ret;
+ ret = send_single(rc, RPMH_WAKE_ONLY_STATE, p->addr,
+ p->wake_val);
+ if (ret)
+ return ret;
+ }
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ rpm->dirty = false;
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(rpmh_flush);
+
+/**
+ * rpmh_invalidate: Invalidate all sleep and wake sets
+ *
+ * @rc: The RPMH handle obtained from rpmh_get_client()
+ *
+ * Invalidate the sleep and wake values in the TCS blocks.
+ */
+int rpmh_invalidate(struct rpmh_client *rc)
+{
+	struct rpmh_ctrlr *rpm;
+	int ret;
+	unsigned long flags;
+
+	if (IS_ERR_OR_NULL(rc))
+		return -EINVAL;
+
+	rpm = rc->ctrlr;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ rpm->dirty = true;
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ do {
+ ret = rpmh_rsc_invalidate(rc->ctrlr->drv);
+ } while (ret == -EAGAIN);
+
+ return ret;
+}
+EXPORT_SYMBOL(rpmh_invalidate);
+
static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
{
int i;
@@ -210,6 +419,8 @@ static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
if (rpmh_rsc[i].drv == NULL) {
ctrlr = &rpmh_rsc[i];
ctrlr->drv = drv;
+ spin_lock_init(&ctrlr->lock);
+ INIT_LIST_HEAD(&ctrlr->cache);
break;
}
}
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index 53cc145990c2..a3f1246ce49a 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -17,6 +17,10 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
+int rpmh_flush(struct rpmh_client *rc);
+
+int rpmh_invalidate(struct rpmh_client *rc);
+
void rpmh_release(struct rpmh_client *rc);
#else
@@ -28,6 +32,12 @@ static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
{ return ERR_PTR(-ENODEV); }
+static inline int rpmh_flush(struct rpmh_client *rc)
+{ return -ENODEV; }
+
+static inline int rpmh_invalidate(struct rpmh_client *rc)
+{ return -ENODEV; }
+
static inline void rpmh_release(struct rpmh_client *rc) { }
#endif /* CONFIG_QCOM_RPMH */
--
Add a controller driver for QCOM SoCs that have hardware based shared
resource management. The hardware IP known as RSC (Resource State
Coordinator) houses multiple Direct Resource Voters (DRVs) for different
execution levels. A DRV is a unique voter on the state of a shared
resource. A Trigger Command Set (TCS) is a set of slots that can house
multiple resource state requests, which when triggered will issue those
requests through an internal bus to the Resource Power Manager Hardened
(RPMH) blocks. These hardware blocks are capable of adjusting clocks,
voltages etc. The resource state requests from a DRV are aggregated along
with state requests from other processors in the SoC and the aggregate
value is applied on the resource.
Some important aspects of the RPMH communication -
- Requests are <addr, value> with some header information
- Multiple requests (up to 16) may be sent through a TCS at a time
- Requests in a TCS are sent in sequence
- Requests may be fire-n-forget or completion (response expected)
- Multiple TCS from the same DRV may be triggered simultaneously
- Cannot send a request if another request for the same addr is in
progress from the same DRV
- When all the requests from a TCS are complete, an IRQ is raised
- The IRQ handler needs to clear the TCS before it is available for
reuse
- TCS configuration is specific to a DRV
- Platform drivers may use DRV from different RSCs to make requests
Resource state requests made when the CPUs are active are called 'active'
state requests. Requests made when all the CPUs are powered down (idle
state) are called 'sleep' state requests. They are matched by
corresponding 'wake' state requests, which put the resources back into
the previously requested active state before any CPU resumes. TCSes are
dedicated to each type of request. Control TCSes are used to provide
specific information to the controller.
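The TCS register addressing used by the driver below is simple strided arithmetic: each TCS is a 672-byte region and each command slot inside it is 20 bytes (RSC_DRV_TCS_OFFSET and RSC_DRV_CMD_OFFSET in the patch), so a register for command n of TCS m lives at reg + 672 * m + 20 * n from the TCS base. A small userspace sketch of that calculation (the mock_ names are illustrative):

```c
/* Sketch of the read_tcs_reg()/write_tcs_reg() offset math (not kernel code). */
#include <assert.h>
#include <stdint.h>

#define MOCK_TCS_OFFSET	672	/* RSC_DRV_TCS_OFFSET */
#define MOCK_CMD_OFFSET	20	/* RSC_DRV_CMD_OFFSET */
#define MOCK_CMD_ADDR	0x34	/* RSC_DRV_CMD_ADDR */

/* byte offset of register 'reg' for command slot n of TCS m */
static uint32_t mock_tcs_reg(int reg, int m, int n)
{
	return (uint32_t)(reg + MOCK_TCS_OFFSET * m + MOCK_CMD_OFFSET * n);
}
```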
Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/Kconfig | 10 +
drivers/soc/qcom/Makefile | 1 +
drivers/soc/qcom/rpmh-internal.h | 86 +++++
drivers/soc/qcom/rpmh-rsc.c | 593 ++++++++++++++++++++++++++++++++
include/dt-bindings/soc/qcom,rpmh-rsc.h | 14 +
include/soc/qcom/tcs.h | 56 +++
6 files changed, 760 insertions(+)
create mode 100644 drivers/soc/qcom/rpmh-internal.h
create mode 100644 drivers/soc/qcom/rpmh-rsc.c
create mode 100644 include/dt-bindings/soc/qcom,rpmh-rsc.h
create mode 100644 include/soc/qcom/tcs.h
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index b81374bb6713..90b038bded0b 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -46,6 +46,16 @@ config QCOM_RMTFS_MEM
Say y here if you intend to boot the modem remoteproc.
+config QCOM_RPMH
+ bool "Qualcomm RPM-Hardened (RPMH) Communication"
+ depends on ARCH_QCOM && OF
+ help
+ Support for communication with the hardened-RPM blocks in
+ Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
+ internal bus to transmit state requests for shared resources. A set
+ of hardware components aggregate requests for these resources and
+ help apply the aggregated state on the resource.
+
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 40c56f67e94a..7426979ea9fb 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -10,3 +10,4 @@ obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
obj-$(CONFIG_QCOM_SMP2P) += smp2p.o
obj-$(CONFIG_QCOM_SMSM) += smsm.o
obj-$(CONFIG_QCOM_WCNSS_CTRL) += wcnss_ctrl.o
+obj-$(CONFIG_QCOM_RPMH) += rpmh-rsc.o
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
new file mode 100644
index 000000000000..dccf9de1a75f
--- /dev/null
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+
+#ifndef __RPM_INTERNAL_H__
+#define __RPM_INTERNAL_H__
+
+#include <soc/qcom/tcs.h>
+
+#define TCS_TYPE_NR 4
+#define MAX_CMDS_PER_TCS 16
+#define MAX_TCS_PER_TYPE 3
+#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
+
+struct rsc_drv;
+
+/**
+ * tcs_response: Response object for a request
+ *
+ * @drv: the controller
+ * @msg: the request for this response
+ * @m: the tcs identifier
+ * @err: error reported in the response
+ * @list: linked list object
+ */
+struct tcs_response {
+ struct rsc_drv *drv;
+ struct tcs_request *msg;
+ u32 m;
+ int err;
+ struct list_head list;
+};
+
+/**
+ * tcs_group: group of TCSes for a request state
+ *
+ * @drv: the controller
+ * @type: type of the TCS in this group - active, sleep, wake
+ * @tcs_mask: mask of the TCSes relative to all the TCSes in the RSC
+ * @tcs_offset: start of the TCS group relative to the TCSes in the RSC
+ * @num_tcs: number of TCSes in this type
+ * @ncpt: number of commands in each TCS
+ * @tcs_lock: lock for synchronizing this TCS writes
+ * @responses: response objects for requests sent from each TCS
+ */
+struct tcs_group {
+ struct rsc_drv *drv;
+ int type;
+ u32 tcs_mask;
+ u32 tcs_offset;
+ int num_tcs;
+ int ncpt;
+ spinlock_t tcs_lock;
+ struct tcs_response *responses[MAX_TCS_PER_TYPE];
+};
+
+/**
+ * rsc_drv: the RSC controller
+ *
+ * @name: controller identifier
+ * @tcs_base: start address of the TCS registers in this controller
+ * @drv_id: instance id in the controller (DRV)
+ * @num_tcs: number of TCSes in this DRV
+ * @tasklet: handle responses, off-load work from IRQ handler
+ * @response_pending: list of responses that need to be sent to caller
+ * @tcs: TCS groups
+ * @tcs_in_use: s/w state of the TCS
+ * @drv_lock: synchronize state of the controller
+ */
+struct rsc_drv {
+ const char *name;
+ void __iomem *tcs_base;
+ int drv_id;
+ int num_tcs;
+ struct tasklet_struct tasklet;
+ struct list_head response_pending;
+ struct tcs_group tcs[TCS_TYPE_NR];
+ atomic_t tcs_in_use[MAX_TCS_NR];
+ spinlock_t drv_lock;
+};
+
+
+int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg);
+
+#endif /* __RPM_INTERNAL_H__ */
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
new file mode 100644
index 000000000000..ceb8aea237d6
--- /dev/null
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -0,0 +1,593 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
+
+#include <linux/atomic.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include <linux/io.h>
+#include <soc/qcom/tcs.h>
+#include <dt-bindings/soc/qcom,rpmh-rsc.h>
+
+#include "rpmh-internal.h"
+
+#define RSC_DRV_TCS_OFFSET 672
+#define RSC_DRV_CMD_OFFSET 20
+
+/* DRV Configuration Information Register */
+#define DRV_PRNT_CHLD_CONFIG 0x0C
+#define DRV_NUM_TCS_MASK 0x3F
+#define DRV_NUM_TCS_SHIFT 6
+#define DRV_NCPT_MASK 0x1F
+#define DRV_NCPT_SHIFT 27
+
+/* Register offsets */
+#define RSC_DRV_IRQ_ENABLE 0x00
+#define RSC_DRV_IRQ_STATUS 0x04
+#define RSC_DRV_IRQ_CLEAR 0x08
+#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10
+#define RSC_DRV_CONTROL 0x14
+#define RSC_DRV_STATUS 0x18
+#define RSC_DRV_CMD_ENABLE 0x1C
+#define RSC_DRV_CMD_MSGID 0x30
+#define RSC_DRV_CMD_ADDR 0x34
+#define RSC_DRV_CMD_DATA 0x38
+#define RSC_DRV_CMD_STATUS 0x3C
+#define RSC_DRV_CMD_RESP_DATA 0x40
+
+#define TCS_AMC_MODE_ENABLE BIT(16)
+#define TCS_AMC_MODE_TRIGGER BIT(24)
+
+/* TCS CMD register bit mask */
+#define CMD_MSGID_LEN 8
+#define CMD_MSGID_RESP_REQ BIT(8)
+#define CMD_MSGID_WRITE BIT(16)
+#define CMD_STATUS_ISSUED BIT(8)
+#define CMD_STATUS_COMPL BIT(16)
+
+static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
+{
+ struct tcs_group *tcs = NULL;
+ int i;
+
+ for (i = 0; i < drv->num_tcs; i++) {
+ tcs = &drv->tcs[i];
+ if (tcs->tcs_mask & BIT(m))
+ break;
+ }
+
+ if (i == drv->num_tcs) {
+ WARN(1, "Incorrect TCS index %d", m);
+ tcs = NULL;
+ }
+
+ return tcs;
+}
+
+static struct tcs_response *setup_response(struct rsc_drv *drv,
+ struct tcs_request *msg, int m)
+{
+ struct tcs_response *resp;
+ struct tcs_group *tcs;
+
+	resp = kzalloc(sizeof(*resp), GFP_ATOMIC);
+ if (!resp)
+ return ERR_PTR(-ENOMEM);
+
+ resp->drv = drv;
+ resp->msg = msg;
+ resp->err = 0;
+
+	tcs = get_tcs_from_index(drv, m);
+	if (!tcs) {
+		kfree(resp);
+		return ERR_PTR(-EINVAL);
+	}
+
+ /*
+ * We should have been called from a TCS-type locked context, and
+ * we overwrite the previously saved response.
+ */
+ tcs->responses[m - tcs->tcs_offset] = resp;
+
+ return resp;
+}
+
+static void free_response(struct tcs_response *resp)
+{
+ kfree(resp);
+}
+
+static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
+{
+ struct tcs_group *tcs = get_tcs_from_index(drv, m);
+
+ return tcs->responses[m - tcs->tcs_offset];
+}
+
+static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
+{
+ return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
+ RSC_DRV_CMD_OFFSET * n);
+}
+
+static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n,
+ u32 data)
+{
+ writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
+ RSC_DRV_CMD_OFFSET * n);
+}
+
+static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
+ u32 data)
+{
+ write_tcs_reg(drv, reg, m, n, data);
+ for (;;) {
+ if (data == read_tcs_reg(drv, reg, m, n))
+ break;
+ udelay(1);
+ }
+}
+
+static bool tcs_is_free(struct rsc_drv *drv, int m)
+{
+ return !atomic_read(&drv->tcs_in_use[m]) &&
+ read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
+}
+
+static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
+{
+ int i;
+ struct tcs_group *tcs;
+
+ for (i = 0; i < TCS_TYPE_NR; i++) {
+ if (type == drv->tcs[i].type)
+ break;
+ }
+
+ if (i == TCS_TYPE_NR)
+ return ERR_PTR(-EINVAL);
+
+ tcs = &drv->tcs[i];
+ if (!tcs->num_tcs)
+ return ERR_PTR(-EINVAL);
+
+ return tcs;
+}
+
+static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
+ struct tcs_request *msg)
+{
+ int type;
+
+ switch (msg->state) {
+ case RPMH_ACTIVE_ONLY_STATE:
+ type = ACTIVE_TCS;
+ break;
+ default:
+ return ERR_PTR(-EINVAL);
+ }
+
+ return get_tcs_of_type(drv, type);
+}
+
+static void send_tcs_response(struct tcs_response *resp)
+{
+	struct rsc_drv *drv;
+	unsigned long flags;
+
+	if (!resp)
+		return;
+
+	drv = resp->drv;
+
+ spin_lock_irqsave(&drv->drv_lock, flags);
+ INIT_LIST_HEAD(&resp->list);
+ list_add_tail(&resp->list, &drv->response_pending);
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+
+ tasklet_schedule(&drv->tasklet);
+}
+
+/**
+ * tcs_irq_handler: TX Done interrupt handler
+ *
+ * @irq: the IRQ number (unused)
+ * @p: the &struct rsc_drv for this controller
+ */
+static irqreturn_t tcs_irq_handler(int irq, void *p)
+{
+ struct rsc_drv *drv = p;
+ int m, i;
+ u32 irq_status, sts;
+ struct tcs_response *resp;
+ struct tcs_cmd *cmd;
+ int err;
+
+ irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
+
+ for (m = 0; m < drv->num_tcs; m++) {
+ if (!(irq_status & (u32)BIT(m)))
+ continue;
+
+ err = 0;
+ resp = get_response(drv, m);
+ if (!resp) {
+ WARN_ON(1);
+ goto skip_resp;
+ }
+
+ for (i = 0; i < resp->msg->num_payload; i++) {
+ cmd = &resp->msg->payload[i];
+ sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
+ if ((!(sts & CMD_STATUS_ISSUED)) ||
+ ((resp->msg->is_complete || cmd->complete) &&
+ (!(sts & CMD_STATUS_COMPL)))) {
+ err = -EIO;
+ break;
+ }
+ }
+
+ resp->err = err;
+skip_resp:
+ /* Reclaim the TCS */
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
+ write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
+ atomic_set(&drv->tcs_in_use[m], 0);
+ send_tcs_response(resp);
+ }
+
+ return IRQ_HANDLED;
+}
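The per-command status check in the handler reduces to a small predicate, sketched here in plain C. The CMD_STATUS_* bit values below are stand-ins, not the driver's actual register layout.

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

#define CMD_STATUS_ISSUED 0x1 /* stand-in bit values */
#define CMD_STATUS_COMPL  0x2

/*
 * A command failed if it was never issued, or if a completion was
 * requested but the completion bit never got set.
 */
static bool cmd_failed(uint32_t sts, bool want_completion)
{
	if (!(sts & CMD_STATUS_ISSUED))
		return true;
	return want_completion && !(sts & CMD_STATUS_COMPL);
}
```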
+
+/**
+ * tcs_notify_tx_done: TX Done for requests that got a response
+ *
+ * @data: the tasklet argument
+ *
+ * Tasklet function to notify MBOX that we are done with the request.
+ * Handles all pending responses whenever run.
+ */
+static void tcs_notify_tx_done(unsigned long data)
+{
+ struct rsc_drv *drv = (struct rsc_drv *)data;
+ struct tcs_response *resp;
+ unsigned long flags;
+
+ for (;;) {
+ spin_lock_irqsave(&drv->drv_lock, flags);
+ if (list_empty(&drv->response_pending)) {
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+ break;
+ }
+ resp = list_first_entry(&drv->response_pending,
+ struct tcs_response, list);
+ list_del(&resp->list);
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+ free_response(resp);
+ }
+}
+
+static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
+ struct tcs_request *msg)
+{
+ u32 msgid, cmd_msgid = 0;
+ u32 cmd_enable = 0;
+ u32 cmd_complete;
+ struct tcs_cmd *cmd;
+ int i;
+
+ cmd_msgid = CMD_MSGID_LEN;
+ cmd_msgid |= (msg->is_complete) ? CMD_MSGID_RESP_REQ : 0;
+ cmd_msgid |= CMD_MSGID_WRITE;
+
+ cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
+
+ for (i = 0; i < msg->num_payload; i++) {
+ cmd = &msg->payload[i];
+ cmd_enable |= BIT(n + i);
+ cmd_complete |= cmd->complete << (n + i);
+ msgid = cmd_msgid;
+ msgid |= (cmd->complete) ? CMD_MSGID_RESP_REQ : 0;
+ write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, n + i, msgid);
+ write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, n + i, cmd->addr);
+ write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, n + i, cmd->data);
+ }
+
+ write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
+ cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
+}
+
+static void __tcs_trigger(struct rsc_drv *drv, int m)
+{
+ u32 enable;
+
+ /*
+ * HW req: Clear the DRV_CONTROL and enable TCS again
+ * While clearing ensure that the AMC mode trigger is cleared
+ * and then the mode enable is cleared.
+ */
+ enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
+ enable &= ~TCS_AMC_MODE_TRIGGER;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+ enable &= ~TCS_AMC_MODE_ENABLE;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+
+ /* Enable the AMC mode on the TCS and then trigger the TCS */
+ enable = TCS_AMC_MODE_ENABLE;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+ enable |= TCS_AMC_MODE_TRIGGER;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+}
+
+static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
+ struct tcs_request *msg)
+{
+ unsigned long curr_enabled;
+ u32 addr;
+ int i, j, k;
+ int m = tcs->tcs_offset;
+
+ for (i = 0; i < tcs->num_tcs; i++, m++) {
+ if (tcs_is_free(drv, m))
+ continue;
+
+ curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
+
+ for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
+ addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j);
+ for (k = 0; k < msg->num_payload; k++) {
+ if (addr == msg->payload[k].addr)
+ return -EBUSY;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int find_free_tcs(struct tcs_group *tcs)
+{
+ int m;
+
+ for (m = 0; m < tcs->num_tcs; m++) {
+ if (tcs_is_free(tcs->drv, tcs->tcs_offset + m))
+ break;
+ }
+
+ return (m != tcs->num_tcs) ? m : -EBUSY;
+}
+
+static int tcs_mbox_write(struct rsc_drv *drv, struct tcs_request *msg)
+{
+ struct tcs_group *tcs;
+ int m;
+ struct tcs_response *resp = NULL;
+ unsigned long flags;
+ int ret = 0;
+
+ tcs = get_tcs_for_msg(drv, msg);
+ if (IS_ERR(tcs))
+ return PTR_ERR(tcs);
+
+ spin_lock_irqsave(&tcs->tcs_lock, flags);
+ m = find_free_tcs(tcs);
+ if (m < 0) {
+ ret = m;
+ goto done_write;
+ }
+
+ /*
+ * The h/w does not like it if we send a request to the same address
+ * while another request to that address is in-flight or being processed.
+ */
+ ret = check_for_req_inflight(drv, tcs, msg);
+ if (ret)
+ goto done_write;
+
+ resp = setup_response(drv, msg, m);
+ if (IS_ERR(resp)) {
+ ret = PTR_ERR(resp);
+ goto done_write;
+ }
+ resp->m = m;
+
+ atomic_set(&drv->tcs_in_use[m], 1);
+ __tcs_buffer_write(drv, m, 0, msg);
+ __tcs_trigger(drv, m);
+
+done_write:
+ spin_unlock_irqrestore(&tcs->tcs_lock, flags);
+ return ret;
+}
+
+/**
+ * rpmh_rsc_send_data: Validate the incoming message and write to the
+ * appropriate TCS block.
+ *
+ * @drv: the controller
+ * @msg: the data to be sent
+ *
+ * Return: 0 on success, -EINVAL on error.
+ * Note: This call blocks until the data is written to a TCS.
+ */
+int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg)
+{
+ int ret = 0;
+
+ if (!msg || !msg->payload || !msg->num_payload ||
+ msg->num_payload > MAX_RPMH_PAYLOAD)
+ return -EINVAL;
+
+ do {
+ ret = tcs_mbox_write(drv, msg);
+ if (ret == -EBUSY) {
+ pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=0x%x\n",
+ msg->payload[0].addr);
+ udelay(10);
+ }
+ } while (ret == -EBUSY);
+
+ return ret;
+}
+EXPORT_SYMBOL(rpmh_rsc_send_data);
+
+static int rpmh_probe_tcs_config(struct platform_device *pdev,
+ struct rsc_drv *drv)
+{
+ struct tcs_type_config {
+ u32 type;
+ u32 n;
+ } tcs_cfg[TCS_TYPE_NR] = { { 0 } };
+ struct device_node *dn = pdev->dev.of_node;
+ u32 config, max_tcs, ncpt;
+ int i, ret, n, st = 0;
+ struct tcs_group *tcs;
+ struct resource *res;
+ void __iomem *base;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "drv");
+ if (!res)
+ return -EINVAL;
+
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "tcs");
+ if (!res)
+ return -EINVAL;
+
+ drv->tcs_base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(drv->tcs_base))
+ return PTR_ERR(drv->tcs_base);
+
+ config = readl_relaxed(base + DRV_PRNT_CHLD_CONFIG);
+
+ max_tcs = config;
+ max_tcs &= (DRV_NUM_TCS_MASK << (DRV_NUM_TCS_SHIFT * drv->drv_id));
+ max_tcs = max_tcs >> (DRV_NUM_TCS_SHIFT * drv->drv_id);
+
+ ncpt = config & (DRV_NCPT_MASK << DRV_NCPT_SHIFT);
+ ncpt = ncpt >> DRV_NCPT_SHIFT;
+
+ n = of_property_count_elems_of_size(dn, "qcom,tcs-config",
+ sizeof(u32));
+ if (n != 2 * TCS_TYPE_NR)
+ return -EINVAL;
+
+ for (i = 0; i < TCS_TYPE_NR; i++) {
+ ret = of_property_read_u32_index(dn, "qcom,tcs-config",
+ i * 2, &tcs_cfg[i].type);
+ if (ret)
+ return ret;
+ if (tcs_cfg[i].type >= TCS_TYPE_NR)
+ return -EINVAL;
+
+ ret = of_property_read_u32_index(dn, "qcom,tcs-config",
+ i * 2 + 1, &tcs_cfg[i].n);
+ if (ret)
+ return ret;
+ if (tcs_cfg[i].n > MAX_TCS_PER_TYPE)
+ return -EINVAL;
+ }
+
+ for (i = 0; i < TCS_TYPE_NR; i++) {
+ tcs = &drv->tcs[tcs_cfg[i].type];
+ if (tcs->drv)
+ return -EINVAL;
+ tcs->drv = drv;
+ tcs->type = tcs_cfg[i].type;
+ tcs->num_tcs = tcs_cfg[i].n;
+ tcs->ncpt = ncpt;
+ spin_lock_init(&tcs->tcs_lock);
+
+ if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
+ continue;
+
+ if (st + tcs->num_tcs > max_tcs ||
+ st + tcs->num_tcs >= 8 * sizeof(tcs->tcs_mask))
+ return -EINVAL;
+
+ tcs->tcs_mask = ((1 << tcs->num_tcs) - 1) << st;
+ tcs->tcs_offset = st;
+ st += tcs->num_tcs;
+ }
+
+ drv->num_tcs = st;
+
+ return 0;
+}
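The per-type mask/offset bookkeeping at the end of rpmh_probe_tcs_config() can be illustrated in isolation. A userspace sketch, with made-up slot counts: each TCS type claims num_tcs contiguous slots starting at the running offset, and records a mask of the slots it owns relative to all TCSes in the DRV.

```c
#include <stdint.h>
#include <assert.h>

struct tcs_slots {
	uint32_t mask;   /* bits of the slots this type owns */
	uint32_t offset; /* first slot owned by this type */
};

/* Assign num_tcs slots starting at st; return the new running offset. */
static uint32_t assign_slots(struct tcs_slots *s, uint32_t num_tcs,
			     uint32_t st)
{
	s->mask = ((1u << num_tcs) - 1) << st;
	s->offset = st;
	return st + num_tcs;
}
```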
+
+static int rpmh_rsc_probe(struct platform_device *pdev)
+{
+ struct device_node *dn = pdev->dev.of_node;
+ struct rsc_drv *drv;
+ int i, ret, irq;
+
+ drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
+ if (!drv)
+ return -ENOMEM;
+
+ ret = of_property_read_u32(dn, "qcom,drv-id", &drv->drv_id);
+ if (ret)
+ return ret;
+
+ drv->name = of_get_property(pdev->dev.of_node, "label", NULL);
+ if (!drv->name)
+ drv->name = dev_name(&pdev->dev);
+
+ ret = rpmh_probe_tcs_config(pdev, drv);
+ if (ret)
+ return ret;
+
+ INIT_LIST_HEAD(&drv->response_pending);
+ spin_lock_init(&drv->drv_lock);
+ tasklet_init(&drv->tasklet, tcs_notify_tx_done, (unsigned long)drv);
+
+ for (i = 0; i < ARRAY_SIZE(drv->tcs_in_use); i++)
+ atomic_set(&drv->tcs_in_use[i], 0);
+
+ irq = of_irq_get(dn, 0);
+ if (irq < 0)
+ return irq;
+
+ ret = devm_request_irq(&pdev->dev, irq, tcs_irq_handler,
+ IRQF_TRIGGER_HIGH | IRQF_NO_SUSPEND,
+ drv->name, drv);
+ if (ret)
+ return ret;
+
+ /* Enable the active TCS to send requests immediately */
+ write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0,
+ drv->tcs[ACTIVE_TCS].tcs_mask);
+
+ return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+}
+
+static const struct of_device_id rpmh_drv_match[] = {
+ { .compatible = "qcom,rpmh-rsc", },
+ { }
+};
+MODULE_DEVICE_TABLE(of, rpmh_drv_match);
+
+static struct platform_driver rpmh_driver = {
+ .probe = rpmh_rsc_probe,
+ .driver = {
+ .name = KBUILD_MODNAME,
+ .of_match_table = rpmh_drv_match,
+ },
+};
+
+static int __init rpmh_driver_init(void)
+{
+ return platform_driver_register(&rpmh_driver);
+}
+arch_initcall(rpmh_driver_init);
diff --git a/include/dt-bindings/soc/qcom,rpmh-rsc.h b/include/dt-bindings/soc/qcom,rpmh-rsc.h
new file mode 100644
index 000000000000..868f998ea998
--- /dev/null
+++ b/include/dt-bindings/soc/qcom,rpmh-rsc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __DT_QCOM_RPMH_RSC_H__
+#define __DT_QCOM_RPMH_RSC_H__
+
+#define SLEEP_TCS 0
+#define WAKE_TCS 1
+#define ACTIVE_TCS 2
+#define CONTROL_TCS 3
+
+#endif /* __DT_QCOM_RPMH_RSC_H__ */
diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
new file mode 100644
index 000000000000..9465f0560f7a
--- /dev/null
+++ b/include/soc/qcom/tcs.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __SOC_QCOM_TCS_H__
+#define __SOC_QCOM_TCS_H__
+
+#define MAX_RPMH_PAYLOAD 16
+
+/**
+ * rpmh_state: state for the request
+ *
+ * RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
+ * requested before the processor was powered down.
+ * RPMH_SLEEP_STATE: State of the resource when the processor subsystem
+ * is powered down. There is no client using the
+ * resource actively.
+ * RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
+ * is aggregated immediately.
+ */
+enum rpmh_state {
+ RPMH_SLEEP_STATE,
+ RPMH_WAKE_ONLY_STATE,
+ RPMH_ACTIVE_ONLY_STATE,
+};
+
+/**
+ * tcs_cmd: an individual request to RPMH.
+ *
+ * @addr: the address of the resource slv_id:18:16 | offset:0:15
+ * @data: the resource state request
+ * @complete: wait for this request to be complete before sending the next
+ */
+struct tcs_cmd {
+ u32 addr;
+ u32 data;
+ bool complete;
+};
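The @addr layout documented above (slv_id in bits 18:16, register offset in bits 15:0) can be shown with a small helper. The helper name is hypothetical; the driver itself only ever receives pre-encoded addresses.

```c
#include <stdint.h>
#include <assert.h>

/*
 * Compose a TCS command address from its documented fields:
 * slave id in bits 18:16, register offset in bits 15:0.
 */
static uint32_t tcs_cmd_addr(uint32_t slv_id, uint32_t offset)
{
	return ((slv_id & 0x7) << 16) | (offset & 0xffff);
}
```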
+
+/**
+ * tcs_request: A set of tcs_cmds sent together in a TCS
+ *
+ * @state: state for the request.
+ * @is_complete: expect a response from the h/w accelerator
+ * @num_payload: the number of tcs_cmds in this payload
+ * @payload: an array of tcs_cmds
+ */
+struct tcs_request {
+ enum rpmh_state state;
+ bool is_complete;
+ u32 num_payload;
+ struct tcs_cmd *payload;
+};
+
+#endif /* __SOC_QCOM_TCS_H__ */
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
On Thu, 15 Feb 2018 20:41:18 +0000
Lina Iyer <[email protected]> wrote:
> >--- a/drivers/soc/qcom/Makefile
> >+++ b/drivers/soc/qcom/Makefile
> >@@ -1,4 +1,5 @@
> > # SPDX-License-Identifier: GPL-2.0
> >+CFLAGS_trace-rpmh.o := -I$(src)
> I did this here..
Needs to be:
CFLAGS_rpmh-rsc.o
You need to use the C file that has the CREATE_TRACE_POINTS define.
-- Steve
>
> ...
>
> >+#undef TRACE_INCLUDE_PATH
> >+#define TRACE_INCLUDE_PATH ../../drivers/soc/qcom
> >+
> Yet, I had to define the whole path here for it to compile. Per comments
> I should just need to do this -
>
> #define TRACE_INCLUDE_PATH .
>
> But it doesn't work. What am I missing?
>
> >+#undef TRACE_INCLUDE_FILE
> >+#define TRACE_INCLUDE_FILE trace-rpmh
> >+
> >+#include <trace/define_trace.h>
> >--
>
> Thanks,
> Lina
On Thu, Feb 15 2018 at 20:51 +0000, Steven Rostedt wrote:
>On Thu, 15 Feb 2018 20:41:18 +0000
>Lina Iyer <[email protected]> wrote:
>
>
>> >--- a/drivers/soc/qcom/Makefile
>> >+++ b/drivers/soc/qcom/Makefile
>> >@@ -1,4 +1,5 @@
>> > # SPDX-License-Identifier: GPL-2.0
>> >+CFLAGS_trace-rpmh.o := -I$(src)
>> I did this here..
>
>Needs to be:
>
> CFLAGS_rpmh-rsc.o
>
>You need to use the C file that has the CREATE_TRACE_POINTS define.
>
Thank you so much.
-- Lina
>-- Steve
>
>>
>> ...
>>
>> >+#undef TRACE_INCLUDE_PATH
>> >+#define TRACE_INCLUDE_PATH ../../drivers/soc/qcom
>> >+
>> Yet, I had to define the whole path here for it to compile. Per comments
>> I should just need to do this -
>>
>> #define TRACE_INCLUDE_PATH .
>>
>> But it doesn't work. What am I missing?
>>
>> >+#undef TRACE_INCLUDE_FILE
>> >+#define TRACE_INCLUDE_FILE trace-rpmh
>> >+
>> >+#include <trace/define_trace.h>
>> >--
>>
>> Thanks,
>> Lina
>
On Thu, 15 Feb 2018 10:35:00 -0700
Lina Iyer <[email protected]> wrote:
> @@ -298,6 +303,7 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
> write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, n + i, msgid);
> write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, n + i, cmd->addr);
> write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, n + i, cmd->data);
> + trace_rpmh_send_msg(drv, m, n + i, msgid, cmd);
No biggy, but I'm curious to why you didn't do something this:
+static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
+ struct tcs_request *msg)
+{
+ u32 msgid, cmd_msgid = 0;
+ u32 cmd_enable = 0;
+ u32 cmd_complete;
+ struct tcs_cmd *cmd;
+ int i;
+
+ cmd_msgid = CMD_MSGID_LEN;
+ cmd_msgid |= (msg->is_complete) ? CMD_MSGID_RESP_REQ : 0;
+ cmd_msgid |= CMD_MSGID_WRITE;
+
+ cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
+
+ for (i = 0; i < msg->num_payload; i++) {
int bit = n + i;
+ cmd = &msg->payload[i];
+ cmd_enable |= BIT(bit);
+ cmd_complete |= cmd->complete << (n + i);
+ msgid = cmd_msgid;
+ msgid |= (cmd->complete) ? CMD_MSGID_RESP_REQ : 0;
+ write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, bit, msgid);
+ write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, bit, cmd->addr);
+ write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, bit, cmd->data);
trace_rpmh_send_msg(drv, m, bit, msgid, cmd);
The compiler should optimize that, so this isn't really a big deal, but
I was just curious.
+ }
+
+ write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
+ cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
+}
> }
>
> write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
[..]
> +TRACE_EVENT(rpmh_send_msg,
> +
> + TP_PROTO(struct rsc_drv *d, int m, int n, u32 h, struct tcs_cmd *c),
> +
> + TP_ARGS(d, m, n, h, c),
> +
> + TP_STRUCT__entry(
> + __field(const char*, d->name)
> + __field(int, m)
> + __field(int, n)
> + __field(u32, hdr)
> + __field(u32, addr)
> + __field(u32, data)
> + __field(bool, complete)
> + ),
> +
> + TP_fast_assign(
> + __entry->name = s;
> + __entry->m = m;
> + __entry->n = n;
> + __entry->hdr = h;
> + __entry->addr = c->addr;
> + __entry->data = c->data;
> + __entry->complete = c->complete;
> + ),
> +
> + TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
> + __entry->name, __entry->m, __entry->n, __entry->hdr,
I'm sorry I didn't catch this in my other reviews, but please don't use
direct strings in TP_printk(). In trace-cmd and perf, it has no access
to that information when reading this trace event. Not to mention, if
drv is freed between the time it is recorded, and the time it is read
in the trace buffer, you are now referencing random memory.
The way to do this in a trace event is to use the string functionality:
TP_STRUCT__entry(
__string(name, d->name)
[..]
TP_fast_assign(
 __assign_str(name, d->name)
[..]
TP_printk("%s: ...",
__get_str(name), ...
Then the name is recorded in the ring buffer at the time of execution
of the trace event, and trace-cmd and perf can read it, and there's no
worries about it being freed between recording and reading the tracing
buffer.
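Putting those pieces together, the corrected event would look roughly like this (a sketch against the patch's existing fields, not a tested replacement):

```c
TRACE_EVENT(rpmh_send_msg,

	TP_PROTO(struct rsc_drv *d, int m, int n, u32 h, struct tcs_cmd *c),

	TP_ARGS(d, m, n, h, c),

	TP_STRUCT__entry(
		__string(name, d->name)
		__field(int, m)
		__field(int, n)
		__field(u32, hdr)
		__field(u32, addr)
		__field(u32, data)
		__field(bool, complete)
	),

	TP_fast_assign(
		__assign_str(name, d->name);
		__entry->m = m;
		__entry->n = n;
		__entry->hdr = h;
		__entry->addr = c->addr;
		__entry->data = c->data;
		__entry->complete = c->complete;
	),

	TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
		  __get_str(name), __entry->m, __entry->n, __entry->hdr,
		  __entry->addr, __entry->data, __entry->complete)
);
```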
-- Steve
> + __entry->addr, __entry->data, __entry->complete)
> +);
> +
On Thu, Feb 15 2018 at 19:57 +0000, Steven Rostedt wrote:
>On Thu, 15 Feb 2018 10:35:00 -0700
>Lina Iyer <[email protected]> wrote:
>
>> @@ -298,6 +303,7 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
>> write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, n + i, msgid);
>> write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, n + i, cmd->addr);
>> write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, n + i, cmd->data);
>> + trace_rpmh_send_msg(drv, m, n + i, msgid, cmd);
>
>No biggy, but I'm curious to why you didn't do something this:
>
>+static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
>+ struct tcs_request *msg)
>+{
>+ u32 msgid, cmd_msgid = 0;
>+ u32 cmd_enable = 0;
>+ u32 cmd_complete;
>+ struct tcs_cmd *cmd;
>+ int i;
>+
>+ cmd_msgid = CMD_MSGID_LEN;
>+ cmd_msgid |= (msg->is_complete) ? CMD_MSGID_RESP_REQ : 0;
>+ cmd_msgid |= CMD_MSGID_WRITE;
>+
>+ cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
>+
>+ for (i = 0; i < msg->num_payload; i++) {
>
> int bit = n + i;
>
>+ cmd = &msg->payload[i];
>+ cmd_enable |= BIT(bit);
>+ cmd_complete |= cmd->complete << (n + i);
>+ msgid = cmd_msgid;
>+ msgid |= (cmd->complete) ? CMD_MSGID_RESP_REQ : 0;
>+ write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, bit, msgid);
>+ write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, bit, cmd->addr);
>+ write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, bit, cmd->data);
>
> trace_rpmh_send_msg(drv, m, bit, msgid, cmd);
>
>The compiler should optimize that, so this isn't really a big deal, but
>I was just curious.
>
>
No particular reason. Think I just went with the logic at that time and
didn't look back deeply again on the code to tidy it up. Thanks for the
suggestion.
>+ }
>+
>+ write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
>+ cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
>+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
>+}
>
>> }
>>
>> write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
>
>[..]
>
>> +TRACE_EVENT(rpmh_send_msg,
>> +
>> + TP_PROTO(struct rsc_drv *d, int m, int n, u32 h, struct tcs_cmd *c),
>> +
>> + TP_ARGS(d, m, n, h, c),
>> +
>> + TP_STRUCT__entry(
>> + __field(const char*, d->name)
>> + __field(int, m)
>> + __field(int, n)
>> + __field(u32, hdr)
>> + __field(u32, addr)
>> + __field(u32, data)
>> + __field(bool, complete)
>> + ),
>> +
>> + TP_fast_assign(
>> + __entry->name = s;
>> + __entry->m = m;
>> + __entry->n = n;
>> + __entry->hdr = h;
>> + __entry->addr = c->addr;
>> + __entry->data = c->data;
>> + __entry->complete = c->complete;
>> + ),
>> +
>> + TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
>> + __entry->name, __entry->m, __entry->n, __entry->hdr,
>
>I'm sorry I didn't catch this in my other reviews, but please don't use
>direct strings in TP_printk(). In trace-cmd and perf, it has no access
>to that information when reading this trace event. Not to mention, if
>drv is freed between the time it is recorded, and the time it is read
>in the trace buffer, you are now referencing random memory.
>
>The way to do this in a trace event is to use the string functionality:
>
> TP_STRUCT__entry(
> __string(name, d->name)
> [..]
> TP_fast_assign(
> __assign_str(name, d->name)
> [..]
> TP_printk("%s: ...",
> __get_str(name), ...
>
>Then the name is recorded in the ring buffer at the time of execution
>of the trace event, and trace-cmd and perf can read it, and there's no
>worries about it being freed between recording and reading the tracing
>buffer.
>
The drv would not be freed. But that said, I will use this in my
patches.
Thanks Steve.
-- Lina
Hi Steve,
On Thu, Feb 15 2018 at 17:35 +0000, Lina Iyer wrote:
>Log sent RPMH requests and interrupt responses in FTRACE.
>
>Cc: Steven Rostedt <[email protected]>
>Signed-off-by: Lina Iyer <[email protected]>
>---
> drivers/soc/qcom/Makefile | 1 +
> drivers/soc/qcom/rpmh-rsc.c | 6 +++
> drivers/soc/qcom/trace-rpmh.h | 89 +++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 96 insertions(+)
> create mode 100644 drivers/soc/qcom/trace-rpmh.h
>
>diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
>index 7426979ea9fb..094b639ece2f 100644
>--- a/drivers/soc/qcom/Makefile
>+++ b/drivers/soc/qcom/Makefile
>@@ -1,4 +1,5 @@
> # SPDX-License-Identifier: GPL-2.0
>+CFLAGS_trace-rpmh.o := -I$(src)
I did this here..
...
>+#undef TRACE_INCLUDE_PATH
>+#define TRACE_INCLUDE_PATH ../../drivers/soc/qcom
>+
Yet, I had to define the whole path here for it to compile. Per comments
I should just need to do this -
#define TRACE_INCLUDE_PATH .
But it doesn't work. What am I missing?
>+#undef TRACE_INCLUDE_FILE
>+#define TRACE_INCLUDE_FILE trace-rpmh
>+
>+#include <trace/define_trace.h>
>--
Thanks,
Lina
Hi Lina,
On Thu, Feb 15, 2018 at 9:34 AM, Lina Iyer <[email protected]> wrote:
> Add controller driver for QCOM SoCs that have hardware based shared
> resource management. The hardware IP known as RSC (Resource State
> Coordinator) houses multiple Direct Resource Voter (DRV) for different
> execution levels. A DRV is a unique voter on the state of a shared
> resource. A Trigger Control Set (TCS) is a bunch of slots that can house
> multiple resource state requests, that when triggered will issue those
> requests through an internal bus to the Resource Power Manager Hardened
> (RPMH) blocks. These hardware blocks are capable of adjusting clocks,
> voltages etc. The resource state request from a DRV are aggregated along
> with state requests from other processors in the SoC and the aggregate
> value is applied on the resource.
>
> Some important aspects of the RPMH communication -
> - Requests are <addr, value> with some header information
> - Multiple requests (up to 16) may be sent through a TCS at a time
> - Requests in a TCS are sent in sequence
> - Requests may be fire-n-forget or completion (response expected)
> - Multiple TCS from the same DRV may be triggered simultaneously
> - Cannot send a request if another request for the same addr is in
> progress from the same DRV
> - When all the requests from a TCS are complete, an IRQ is raised
> - The IRQ handler needs to clear the TCS before it is available for
> reuse
> - TCS configuration is specific to a DRV
> - Platform drivers may use DRV from different RSCs to make requests
>
> Resource state requests made when CPUs are active are called 'active'
> state requests. Requests made when all the CPUs are powered down (idle
> state) are called 'sleep' state requests. They are matched by a
> corresponding 'wake' state requests which puts the resources back in to
> previously requested active state before resuming any CPU. TCSes are
> dedicated for each type of requests. Control TCS are used to provide
> specific information to the controller.
>
> Signed-off-by: Lina Iyer <[email protected]>
> ---
> drivers/soc/qcom/Kconfig | 10 +
> drivers/soc/qcom/Makefile | 1 +
> drivers/soc/qcom/rpmh-internal.h | 86 +++++
> drivers/soc/qcom/rpmh-rsc.c | 593 ++++++++++++++++++++++++++++++++
> include/dt-bindings/soc/qcom,rpmh-rsc.h | 14 +
> include/soc/qcom/tcs.h | 56 +++
> 6 files changed, 760 insertions(+)
> create mode 100644 drivers/soc/qcom/rpmh-internal.h
> create mode 100644 drivers/soc/qcom/rpmh-rsc.c
> create mode 100644 include/dt-bindings/soc/qcom,rpmh-rsc.h
> create mode 100644 include/soc/qcom/tcs.h
>
[...]
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> new file mode 100644
> index 000000000000..dccf9de1a75f
> --- /dev/null
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -0,0 +1,86 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +
> +#ifndef __RPM_INTERNAL_H__
> +#define __RPM_INTERNAL_H__
> +
> +#include <soc/qcom/tcs.h>
> +
> +#define TCS_TYPE_NR 4
> +#define MAX_CMDS_PER_TCS 16
> +#define MAX_TCS_PER_TYPE 3
> +#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
> +
> +struct rsc_drv;
> +
> +/**
> + * tcs_response: Response object for a request
Can you embed the acronym definition, ie: tcs_response: Responses for
a Trigger Command Set.
> + *
> + * @drv: the controller
> + * @msg: the request for this response
> + * @m: the tcs identifier
> + * @err: error reported in the response
> + * @list: link list object.
> + */
> +struct tcs_response {
> + struct rsc_drv *drv;
> + struct tcs_request *msg;
> + u32 m;
> + int err;
> + struct list_head list;
> +};
> +
> +/**
> + * tcs_group: group of TCSes for a request state
> + *
Document @drv.
> + * @type: type of the TCS in this group - active, sleep, wake
> + * @tcs_mask: mask of the TCSes relative to all the TCSes in the RSC
> + * @tcs_offset: start of the TCS group relative to the TCSes in the RSC
> + * @num_tcs: number of TCSes in this type
> + * @ncpt: number of commands in each TCS
> + * @tcs_lock: lock for synchronizing this TCS writes
> + * @responses: response objects for requests sent from each TCS
> + */
> +struct tcs_group {
> + struct rsc_drv *drv;
> + int type;
> + u32 tcs_mask;
> + u32 tcs_offset;
> + int num_tcs;
> + int ncpt;
> + spinlock_t tcs_lock;
> + struct tcs_response *responses[MAX_TCS_PER_TYPE];
> +};
> +
> +/**
> + * rsc_drv: the RSC controller
Would be more helpfully described as Resource State Coordinator controller.
> + *
> + * @name: controller identifier
> + * @tcs_base: start address of the TCS registers in this controller
> + * @drv_id: instance id in the controller (DRV)
> + * @num_tcs: number of TCSes in this DRV
> + * @tasklet: handle responses, off-load work from IRQ handler
> + * @response_pending: list of responses that needs to be sent to caller
> + * @tcs: TCS groups
> + * @tcs_in_use: s/w state of the TCS
> + * @drv_lock: synchronize state of the controller
> + */
> +struct rsc_drv {
> + const char *name;
> + void __iomem *tcs_base;
> + int drv_id;
> + int num_tcs;
> + struct tasklet_struct tasklet;
> + struct list_head response_pending;
> + struct tcs_group tcs[TCS_TYPE_NR];
> + atomic_t tcs_in_use[MAX_TCS_NR];
> + spinlock_t drv_lock;
> +};
> +
> +
> +int rpmh_rsc_send_data(struct rsc_drv *drv, struct tcs_request *msg);
> +
> +#endif /* __RPM_INTERNAL_H__ */
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> new file mode 100644
> index 000000000000..ceb8aea237d6
> --- /dev/null
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -0,0 +1,593 @@
> +// SPDX-License-Identifier: GPL-2.0
[...]
> +
> +static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
> +{
> + struct tcs_group *tcs = NULL;
> + int i;
> +
> + for (i = 0; i < drv->num_tcs; i++) {
> + tcs = &drv->tcs[i];
> + if (tcs->tcs_mask & BIT(m))
> + break;
> + }
> +
> + if (i == drv->num_tcs) {
> + WARN(1, "Incorrect TCS index %d", m);
> + tcs = NULL;
> + }
> +
> + return tcs;
> +}
> +
> +static struct tcs_response *setup_response(struct rsc_drv *drv,
> + struct tcs_request *msg, int m)
> +{
> + struct tcs_response *resp;
> + struct tcs_group *tcs;
> +
> + resp = kcalloc(1, sizeof(*resp), GFP_ATOMIC);
> + if (!resp)
> + return ERR_PTR(-ENOMEM);
> +
> + resp->drv = drv;
> + resp->msg = msg;
> + resp->err = 0;
> +
> + tcs = get_tcs_from_index(drv, m);
> + if (!tcs)
> + return ERR_PTR(-EINVAL);
> +
> + /*
> + * We should have been called from a TCS-type locked context, and
> + * we overwrite the previously saved response.
> + */
> + tcs->responses[m - tcs->tcs_offset] = resp;
> +
> + return resp;
> +}
> +
> +static void free_response(struct tcs_response *resp)
> +{
> + kfree(resp);
> +}
> +
> +static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
> +{
> + struct tcs_group *tcs = get_tcs_from_index(drv, m);
> +
> + return tcs->responses[m - tcs->tcs_offset];
> +}
> +
> +static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
> +{
> + return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> + RSC_DRV_CMD_OFFSET * n);
> +}
> +
> +static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n,
> + u32 data)
> +{
> + writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> + RSC_DRV_CMD_OFFSET * n);
> +}
> +
> +static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
> + u32 data)
> +{
> + write_tcs_reg(drv, reg, m, n, data);
> + for (;;) {
> + if (data == read_tcs_reg(drv, reg, m, n))
> + break;
> + udelay(1);
Should this time out and return a failure?
> + }
> +}
> +
> +static bool tcs_is_free(struct rsc_drv *drv, int m)
> +{
> + return !atomic_read(&drv->tcs_in_use[m]) &&
> + read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
> +}
> +
> +static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
> +{
> + int i;
> + struct tcs_group *tcs;
> +
> + for (i = 0; i < TCS_TYPE_NR; i++) {
> + if (type == drv->tcs[i].type)
> + break;
> + }
> +
> + if (i == TCS_TYPE_NR)
> + return ERR_PTR(-EINVAL);
> +
> + tcs = &drv->tcs[i];
> + if (!tcs->num_tcs)
> + return ERR_PTR(-EINVAL);
> +
> + return tcs;
> +}
> +
> +static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
> + struct tcs_request *msg)
> +{
> + int type;
> +
> + switch (msg->state) {
> + case RPMH_ACTIVE_ONLY_STATE:
> + type = ACTIVE_TCS;
> + break;
> + default:
> + return ERR_PTR(-EINVAL);
> + }
> +
> + return get_tcs_of_type(drv, type);
> +}
> +
> +static void send_tcs_response(struct tcs_response *resp)
> +{
> + struct rsc_drv *drv = resp->drv;
> + unsigned long flags;
> +
> + if (!resp)
> + return;
Does this ever happen? Ah, I see that it might in the irq handler. But
get_response already assumes that there is a response, and reaches
through it. So I don't think you need this here nor the check+label in
the irq handler.
Is requesting an index out of range purely a developer error, or could
it happen in some sort of runtime scarcity situation? If it's a
developer error, I'd get rid of all the null checking. If it's
something that might really happen under the right circumstances, then
my comment above doesn't stand and you'd want to fix the null
dereference in get_response instead.
> +
> + spin_lock_irqsave(&drv->drv_lock, flags);
> + INIT_LIST_HEAD(&resp->list);
> + list_add_tail(&resp->list, &drv->response_pending);
> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> +
> + tasklet_schedule(&drv->tasklet);
> +}
> +
> +/**
> + * tcs_irq_handler: TX Done interrupt handler
> + */
> +static irqreturn_t tcs_irq_handler(int irq, void *p)
> +{
> + struct rsc_drv *drv = p;
> + int m, i;
> + u32 irq_status, sts;
> + struct tcs_response *resp;
> + struct tcs_cmd *cmd;
> + int err;
> +
> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
> +
> + for (m = 0; m < drv->num_tcs; m++) {
> + if (!(irq_status & (u32)BIT(m)))
> + continue;
> +
> + err = 0;
> + resp = get_response(drv, m);
> + if (!resp) {
I mention this above, but I don't think get_response can gracefully
return null, so is this needed?
> + WARN_ON(1);
> + goto skip_resp;
> + }
> +
> + for (i = 0; i < resp->msg->num_payload; i++) {
> + cmd = &resp->msg->payload[i];
> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
> + if ((!(sts & CMD_STATUS_ISSUED)) ||
> + ((resp->msg->is_complete || cmd->complete) &&
> + (!(sts & CMD_STATUS_COMPL)))) {
> + resp->err = -EIO;
You're trying to set this to -EIO, but then you clobber it to err
(which is zero) below
> + break;
> + }
> + }
> +
> + resp->err = err;
> +skip_resp:
> + /* Reclaim the TCS */
> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
> + atomic_set(&drv->tcs_in_use[m], 0);
> + send_tcs_response(resp);
> + }
> +
> + return IRQ_HANDLED;
> +}
> +
[...]
> +
> +static void __tcs_trigger(struct rsc_drv *drv, int m)
> +{
> + u32 enable;
> +
> + /*
> + * HW req: Clear the DRV_CONTROL and enable TCS again
> + * While clearing ensure that the AMC mode trigger is cleared
> + * and then the mode enable is cleared.
> + */
> + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
> + enable &= ~TCS_AMC_MODE_TRIGGER;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
If write_tcs_reg_sync can fail as I suggest, you'd have to fail out of here too.
[...]
> diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
> new file mode 100644
> index 000000000000..9465f0560f7a
> --- /dev/null
> +++ b/include/soc/qcom/tcs.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#ifndef __SOC_QCOM_TCS_H__
> +#define __SOC_QCOM_TCS_H__
> +
> +#define MAX_RPMH_PAYLOAD 16
> +
> +/**
> + * rpmh_state: state for the request
> + *
> + * RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
> + * requested before the processor was powered down.
> + * RPMH_SLEEP_STATE: State of the resource when the processor subsystem
> + * is powered down. There is no client using the
> + * resource actively.
> + * RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
> + * is aggregated immediately.
> + */
> +enum rpmh_state {
> + RPMH_SLEEP_STATE,
> + RPMH_WAKE_ONLY_STATE,
> + RPMH_ACTIVE_ONLY_STATE,
> +};
> +
> +/**
> + * tcs_cmd: an individual request to RPMH.
> + *
> + * @addr: the address of the resource slv_id:18:16 | offset:0:15
> + * @data: the resource state request
> + * @complete: wait for this request to be complete before sending the next
> + */
> +struct tcs_cmd {
> + u32 addr;
> + u32 data;
> + bool complete;
> +};
> +
> +/**
> + * tcs_request: A set of tcs_cmds sent together in a TCS
> + *
> + * @state: state for the request.
> + * @is_complete: expect a response from the h/w accelerator
> + * @num_payload: the number of tcs_cmds in this payload
> + * @payload: an array of tcs_cmds
> + */
> +struct tcs_request {
> + enum rpmh_state state;
> + bool is_complete;
is_complete is kind of a misleading name. I would expect this to
indicate "the request is complete", not "expecting a completion
interrupt". Maybe everybody knows this and I'm just the confused
newbie.
Thanks,
Evan
Hello Lina,
On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
> Sleep and wake requests are sent when the application processor
> subsystem of the SoC is entering deep sleep states like in suspend.
> These requests help lower the system power requirements when the
> resources are not in use.
>
> Sleep and wake requests are written to the TCS slots but are not
> triggered at the time of writing. The TCS are triggered by the firmware
> after the last of the CPUs has executed its WFI. Since these requests
> may come in different batches of requests, it is the job of this controller
> driver to arrange the requests into the available TCSes.
>
> Signed-off-by: Lina Iyer <[email protected]>
> ---
> drivers/soc/qcom/rpmh-internal.h | 7 +++
> drivers/soc/qcom/rpmh-rsc.c | 126 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 133 insertions(+)
>
[...]
> +static int find_slots(struct tcs_group *tcs, struct tcs_request *msg,
> + int *m, int *n)
> +{
> + int slot, offset;
> + int i = 0;
> +
> + /* Find if we already have the msg in our TCS */
> + slot = find_match(tcs, msg->payload, msg->num_payload);
> + if (slot >= 0)
> + goto copy_data;
> +
> + /* Do over, until we can fit the full payload in a TCS */
> + do {
> + slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
> + i, msg->num_payload, 0);
> + if (slot == MAX_TCS_SLOTS)
> + break;
> + i += tcs->ncpt;
> + } while (slot + msg->num_payload - 1 >= i);
> +
> + if (slot == MAX_TCS_SLOTS)
> + return -ENOMEM;
> +
> +copy_data:
> + bitmap_set(tcs->slots, slot, msg->num_payload);
> + /* Copy the addresses of the resources over to the slots */
> + for (i = 0; tcs->cmd_addr && i < msg->num_payload; i++)
I don't think tcs->cmd_addr can be null, can it? Above, find_match()
is already reaching through cmd_addr with enthusiasm. If kept, it
could at least be moved outside of the loop.
Evan
Thanks Evan for your review.
On Fri, Feb 16 2018 at 21:30 +0000, Evan Green wrote:
>Hi Lina,
>
>On Thu, Feb 15, 2018 at 9:34 AM, Lina Iyer <[email protected]> wrote:
>> +
>> +/**
>> + * tcs_response: Response object for a request
>
>Can you embed the acronym definition, ie: tcs_response: Responses for
>a Trigger Command Set.
>
Sure.
>> + *
>> + * @drv: the controller
>> + * @msg: the request for this response
>> + * @m: the tcs identifier
>> + * @err: error reported in the response
>> + * @list: linked list object.
>> + */
>> +struct tcs_response {
>> + struct rsc_drv *drv;
>> + struct tcs_request *msg;
>> + u32 m;
>> + int err;
>> + struct list_head list;
>> +};
>> +
>> +/**
>> + * tcs_group: group of TCSes for a request state
>> + *
>
>Document @drv.
>
OK
>> + * @type: type of the TCS in this group - active, sleep, wake
>> + * @tcs_mask: mask of the TCSes relative to all the TCSes in the RSC
>> + * @tcs_offset: start of the TCS group relative to the TCSes in the RSC
>> + * @num_tcs: number of TCSes in this type
>> + * @ncpt: number of commands in each TCS
>> + * @tcs_lock: lock for synchronizing writes to this TCS group
>> + * @responses: response objects for requests sent from each TCS
>> + */
>> +struct tcs_group {
>> + struct rsc_drv *drv;
>> + int type;
>> + u32 tcs_mask;
>> + u32 tcs_offset;
>> + int num_tcs;
>> + int ncpt;
>> + spinlock_t tcs_lock;
>> + struct tcs_response *responses[MAX_TCS_PER_TYPE];
>> +};
>> +
>> +/**
>> + * rsc_drv: the RSC controller
>
>Would be more helpfully described as Resource State Coordinator controller.
>
OK
>> +static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
>> + u32 data)
>> +{
>> + write_tcs_reg(drv, reg, m, n, data);
>> + for (;;) {
>> + if (data == read_tcs_reg(drv, reg, m, n))
>> + break;
>> + udelay(1);
>
>Should this time out and return a failure?
>
There is no reason for this to fail. We just need to ensure that the data is
written. Writes going through the bus sometimes take time to complete. When we
exit this function, we are assured that the write has taken effect.
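If a bounded wait were ever wanted anyway, the loop could be capped with a retry budget. A minimal sketch, modeling the register as a plain variable (the real driver uses MMIO accessors and udelay(); the retry count here is illustrative):

```c
#include <stdint.h>

#define SYNC_RETRIES 1000		/* illustrative budget, not from the driver */

static uint32_t fake_reg;		/* stand-in for the TCS register */

static void fake_write(uint32_t v) { fake_reg = v; }
static uint32_t fake_read(void) { return fake_reg; }

/* Hypothetical bounded variant of write_tcs_reg_sync() */
static int write_reg_sync_bounded(uint32_t data)
{
	int i;

	fake_write(data);
	for (i = 0; i < SYNC_RETRIES; i++) {
		if (fake_read() == data)
			return 0;	/* write observed on readback */
		/* udelay(1) would go here in the real driver */
	}
	return -110;			/* -ETIMEDOUT */
}
```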
>> + }
>> +}
>> +
>> +static bool tcs_is_free(struct rsc_drv *drv, int m)
>> +{
>> + return !atomic_read(&drv->tcs_in_use[m]) &&
>> + read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
>> +}
>> +
>> +static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
>> +{
>> + int i;
>> + struct tcs_group *tcs;
>> +
>> + for (i = 0; i < TCS_TYPE_NR; i++) {
>> + if (type == drv->tcs[i].type)
>> + break;
>> + }
>> +
>> + if (i == TCS_TYPE_NR)
>> + return ERR_PTR(-EINVAL);
>> +
>> + tcs = &drv->tcs[i];
>> + if (!tcs->num_tcs)
>> + return ERR_PTR(-EINVAL);
>> +
>> + return tcs;
>> +}
>> +
>> +static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>> + struct tcs_request *msg)
>> +{
>> + int type;
>> +
>> + switch (msg->state) {
>> + case RPMH_ACTIVE_ONLY_STATE:
>> + type = ACTIVE_TCS;
>> + break;
>> + default:
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + return get_tcs_of_type(drv, type);
>> +}
>> +
>> +static void send_tcs_response(struct tcs_response *resp)
>> +{
>> + struct rsc_drv *drv = resp->drv;
>> + unsigned long flags;
>> +
>> + if (!resp)
>> + return;
>
>Does this ever happen? Ah, I see that it might in the irq handler. But
>get_response already assumes that there is a response, and reaches
>through it. So I don't think you need this here nor the check+label in
>the irq handler.
>
>Is requesting an index out of range purely a developer error, or could
>it happen in some sort of runtime scarcity situation? If it's a
>developer error, I'd get rid of all the null checking. If it's
>something that might really happen under the right circumstances, then
>my comment above doesn't stand and you'd want to fix the null
>dereference in get_response instead.
>
I added the check so that I don't have a confusing goto in the IRQ handler.
>> +
>> + spin_lock_irqsave(&drv->drv_lock, flags);
>> + INIT_LIST_HEAD(&resp->list);
>> + list_add_tail(&resp->list, &drv->response_pending);
>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> +
>> + tasklet_schedule(&drv->tasklet);
>> +}
>> +
>> +/**
>> + * tcs_irq_handler: TX Done interrupt handler
>> + */
>> +static irqreturn_t tcs_irq_handler(int irq, void *p)
>> +{
>> + struct rsc_drv *drv = p;
>> + int m, i;
>> + u32 irq_status, sts;
>> + struct tcs_response *resp;
>> + struct tcs_cmd *cmd;
>> + int err;
>> +
>> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
>> +
>> + for (m = 0; m < drv->num_tcs; m++) {
>> + if (!(irq_status & (u32)BIT(m)))
>> + continue;
>> +
>> + err = 0;
>> + resp = get_response(drv, m);
>> + if (!resp) {
>
>I mention this above, but I don't think get_response can gracefully
>return null, so is this needed?
>
>> + WARN_ON(1);
>> + goto skip_resp;
>> + }
>> +
>> + for (i = 0; i < resp->msg->num_payload; i++) {
>> + cmd = &resp->msg->payload[i];
>> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
>> + if ((!(sts & CMD_STATUS_ISSUED)) ||
>> + ((resp->msg->is_complete || cmd->complete) &&
>> + (!(sts & CMD_STATUS_COMPL)))) {
>> + resp->err = -EIO;
>
>You're trying to set this to -EIO, but then you clobber it to err
>(which is zero) below
>
Good catch. Will fix.
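One possible shape for the fix is to accumulate the error in a local and store it into resp->err exactly once after the loop, so the -EIO is not clobbered. A rough sketch with illustrative status bits (not the driver's actual register values):

```c
#include <stdint.h>

#define CMD_STATUS_ISSUED (1u << 0)	/* illustrative bit positions */
#define CMD_STATUS_COMPL  (1u << 1)

/* Check each command's status word; return 0 on success or -EIO on the
 * first command that was not issued (or not completed when a completion
 * was requested). The caller stores the result in resp->err once. */
static int check_cmds(const uint32_t *sts, int n, int want_completion)
{
	int err = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (!(sts[i] & CMD_STATUS_ISSUED) ||
		    (want_completion && !(sts[i] & CMD_STATUS_COMPL))) {
			err = -5;	/* -EIO */
			break;
		}
	}
	return err;
}
```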
>> + break;
>> + }
>> + }
>> +
>> + resp->err = err;
>> +skip_resp:
>> + /* Reclaim the TCS */
>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
>> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
>> + atomic_set(&drv->tcs_in_use[m], 0);
>> + send_tcs_response(resp);
>> + }
>> +
>> + return IRQ_HANDLED;
>> +}
>> +
>[...]
>> +
>> +static void __tcs_trigger(struct rsc_drv *drv, int m)
>> +{
>> + u32 enable;
>> +
>> + /*
>> + * HW req: Clear the DRV_CONTROL and enable TCS again
>> + * While clearing ensure that the AMC mode trigger is cleared
>> + * and then the mode enable is cleared.
>> + */
>> + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
>> + enable &= ~TCS_AMC_MODE_TRIGGER;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>
>If write_tcs_reg_sync can fail as I suggest, you'd have to fail out of here too.
>
Explained above.
>[...]
>> diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
>> new file mode 100644
>> index 000000000000..9465f0560f7a
>> --- /dev/null
>> +++ b/include/soc/qcom/tcs.h
>> @@ -0,0 +1,56 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#ifndef __SOC_QCOM_TCS_H__
>> +#define __SOC_QCOM_TCS_H__
>> +
>> +#define MAX_RPMH_PAYLOAD 16
>> +
>> +/**
>> + * rpmh_state: state for the request
>> + *
>> + * RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
>> + * requested before the processor was powered down.
>> + * RPMH_SLEEP_STATE: State of the resource when the processor subsystem
>> + * is powered down. There is no client using the
>> + * resource actively.
>> + * RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
>> + * is aggregated immediately.
>> + */
>> +enum rpmh_state {
>> + RPMH_SLEEP_STATE,
>> + RPMH_WAKE_ONLY_STATE,
>> + RPMH_ACTIVE_ONLY_STATE,
>> +};
>> +
>> +/**
>> + * tcs_cmd: an individual request to RPMH.
>> + *
>> + * @addr: the address of the resource slv_id:18:16 | offset:0:15
>> + * @data: the resource state request
>> + * @complete: wait for this request to be complete before sending the next
>> + */
>> +struct tcs_cmd {
>> + u32 addr;
>> + u32 data;
>> + bool complete;
>> +};
>> +
>> +/**
>> + * tcs_request: A set of tcs_cmds sent together in a TCS
>> + *
>> + * @state: state for the request.
>> + * @is_complete: expect a response from the h/w accelerator
>> + * @num_payload: the number of tcs_cmds in this payload
>> + * @payload: an array of tcs_cmds
>> + */
>> +struct tcs_request {
>> + enum rpmh_state state;
>> + bool is_complete;
>
>is_complete is kind of a misleading name. I would expect this to
>indicate "the request is complete", not "expecting a completion
>interrupt". Maybe everybody knows this and I'm just the confused
>newbie.
>
It is used to distinguish between a fire-and-forget request and a completion
request, where you wait for a response from the accelerator. It might be a
bit of a misnomer though.
Thanks,
Lina
Hi Lina,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc1 next-20180216]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Lina-Iyer/drivers-qcom-add-RPMH-communication-support/20180218-075659
config: arm-allmodconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=arm
All error/warnings (new ones prefixed by >>):
In file included from include/trace/define_trace.h:96:0,
from drivers/soc/qcom/trace-rpmh.h:89,
from drivers/soc/qcom/rpmh-rsc.c:28:
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:22:26: error: expected ':', ',', ';', '}' or '__attribute__' before '->' token
__field(const char *, d->name)
^
include/trace/trace_events.h:114:3: note: in definition of macro 'DECLARE_EVENT_CLASS'
tstruct \
^~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:21:2: note: in expansion of macro 'TP_STRUCT__entry'
TP_STRUCT__entry(
^~~~~~~~~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:22:3: note: in expansion of macro '__field'
__field(const char *, d->name)
^~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:57:25: error: expected ':', ',', ';', '}' or '__attribute__' before '->' token
__field(const char*, d->name)
^
include/trace/trace_events.h:114:3: note: in definition of macro 'DECLARE_EVENT_CLASS'
tstruct \
^~~~~~~
include/trace/trace_events.h:77:9: note: in expansion of macro 'PARAMS'
PARAMS(tstruct), \
^~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:50:1: note: in expansion of macro 'TRACE_EVENT'
TRACE_EVENT(rpmh_send_msg,
^~~~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:56:2: note: in expansion of macro 'TP_STRUCT__entry'
TP_STRUCT__entry(
^~~~~~~~~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:57:3: note: in expansion of macro '__field'
__field(const char*, d->name)
^~~~~~~
In file included from include/trace/define_trace.h:96:0,
from drivers/soc/qcom/trace-rpmh.h:89,
from drivers/soc/qcom/rpmh-rsc.c:28:
include/trace/../../drivers/soc/qcom/trace-rpmh.h: In function 'trace_raw_output_rpmh_notify':
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:36:11: error: 'struct trace_event_raw_rpmh_notify' has no member named 'name'
__entry->name, __entry->m, __entry->addr, __entry->errno)
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:35:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: ack: tcs-m:%d addr: 0x%08x errno: %d",
^~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:36:26: error: 'struct trace_event_raw_rpmh_notify' has no member named 'm'
__entry->name, __entry->m, __entry->addr, __entry->errno)
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:35:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: ack: tcs-m:%d addr: 0x%08x errno: %d",
^~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:36:38: error: 'struct trace_event_raw_rpmh_notify' has no member named 'addr'
__entry->name, __entry->m, __entry->addr, __entry->errno)
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:35:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: ack: tcs-m:%d addr: 0x%08x errno: %d",
^~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:36:53: error: 'struct trace_event_raw_rpmh_notify' has no member named 'errno'
__entry->name, __entry->m, __entry->addr, __entry->errno)
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:35:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: ack: tcs-m:%d addr: 0x%08x errno: %d",
^~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h: In function 'trace_raw_output_rpmh_send_msg':
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:77:11: error: 'struct trace_event_raw_rpmh_send_msg' has no member named 'name'
__entry->name, __entry->m, __entry->n, __entry->hdr,
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
include/trace/trace_events.h:79:9: note: in expansion of macro 'PARAMS'
PARAMS(print)); \
^~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:50:1: note: in expansion of macro 'TRACE_EVENT'
TRACE_EVENT(rpmh_send_msg,
^~~~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:76:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
^~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:77:26: error: 'struct trace_event_raw_rpmh_send_msg' has no member named 'm'
__entry->name, __entry->m, __entry->n, __entry->hdr,
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
include/trace/trace_events.h:79:9: note: in expansion of macro 'PARAMS'
PARAMS(print)); \
^~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:50:1: note: in expansion of macro 'TRACE_EVENT'
TRACE_EVENT(rpmh_send_msg,
^~~~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:76:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
^~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:77:38: error: 'struct trace_event_raw_rpmh_send_msg' has no member named 'n'
__entry->name, __entry->m, __entry->n, __entry->hdr,
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
include/trace/trace_events.h:79:9: note: in expansion of macro 'PARAMS'
PARAMS(print)); \
^~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:50:1: note: in expansion of macro 'TRACE_EVENT'
TRACE_EVENT(rpmh_send_msg,
^~~~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:76:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
^~~~~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:77:50: error: 'struct trace_event_raw_rpmh_send_msg' has no member named 'hdr'
__entry->name, __entry->m, __entry->n, __entry->hdr,
^
include/trace/trace_events.h:360:22: note: in definition of macro 'DECLARE_EVENT_CLASS'
trace_seq_printf(s, print); \
^~~~~
include/trace/trace_events.h:79:9: note: in expansion of macro 'PARAMS'
PARAMS(print)); \
^~~~~~
>> include/trace/../../drivers/soc/qcom/trace-rpmh.h:50:1: note: in expansion of macro 'TRACE_EVENT'
TRACE_EVENT(rpmh_send_msg,
^~~~~~~~~~~
include/trace/../../drivers/soc/qcom/trace-rpmh.h:76:2: note: in expansion of macro 'TP_printk'
TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
^~~~~~~~~
vim +22 include/trace/../../drivers/soc/qcom/trace-rpmh.h
16
17 TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
18
19 TP_ARGS(d, r),
20
> 21 TP_STRUCT__entry(
> 22 __field(const char *, d->name)
23 __field(int, m)
24 __field(u32, addr)
25 __field(int, errno)
26 ),
27
> 28 TP_fast_assign(
> 29 __entry->name = d->name;
30 __entry->m = r->m;
31 __entry->addr = r->msg->payload[0].addr;
32 __entry->errno = r->err;
33 ),
34
> 35 TP_printk("%s: ack: tcs-m:%d addr: 0x%08x errno: %d",
> 36 __entry->name, __entry->m, __entry->addr, __entry->errno)
37 );
38
39 DEFINE_EVENT(rpmh_notify, rpmh_notify_irq,
40 TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
41 TP_ARGS(d, r)
42 );
43
44 DEFINE_EVENT(rpmh_notify, rpmh_notify_tx_done,
45 TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
46 TP_ARGS(d, r)
47 );
48
49
> 50 TRACE_EVENT(rpmh_send_msg,
51
52 TP_PROTO(struct rsc_drv *d, int m, int n, u32 h, struct tcs_cmd *c),
53
54 TP_ARGS(d, m, n, h, c),
55
56 TP_STRUCT__entry(
> 57 __field(const char*, d->name)
58 __field(int, m)
59 __field(int, n)
60 __field(u32, hdr)
61 __field(u32, addr)
62 __field(u32, data)
63 __field(bool, complete)
64 ),
65
66 TP_fast_assign(
> 67 __entry->name = s;
68 __entry->m = m;
69 __entry->n = n;
70 __entry->hdr = h;
71 __entry->addr = c->addr;
72 __entry->data = c->data;
73 __entry->complete = c->complete;
74 ),
75
76 TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: 0x%08x addr: 0x%08x data: 0x%08x complete: %d",
> 77 __entry->name, __entry->m, __entry->n, __entry->hdr,
> 78 __entry->addr, __entry->data, __entry->complete)
79 );
80
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
On Thu, Feb 15, 2018 at 10:34:59AM -0700, Lina Iyer wrote:
> Add device binding documentation for Qualcomm Technology Inc's RPMH RSC
> driver. The hardware block is used for communicating resource state
> requests for shared resources.
>
> Cc: [email protected]
> Signed-off-by: Lina Iyer <[email protected]>
> ---
Changes in v2?
> .../devicetree/bindings/arm/msm/rpmh-rsc.txt | 146 +++++++++++++++++++++
> 1 file changed, 146 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
>
> diff --git a/Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt b/Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
> new file mode 100644
> index 000000000000..6cb36ae98bf9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/msm/rpmh-rsc.txt
> @@ -0,0 +1,146 @@
> +RPMH RSC:
> +------------
> +
> +RPMH is the mechanism for communicating with the hardened resource
> +accelerators. Requests to the resources can be written to the TCS mailbox
> +registers using an (addr, val) pair and then triggered. Messages in the TCS
> +are then sent in sequence over an internal bus.
> +
> +The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity
> +(Resource State Coordinator a.k.a RSC) that can handle multiple sleep and
> +active/wake resource requests. Multiple such DRVs can exist in a SoC and can
> +be written to from Linux. The structure of each DRV follows the same template
> +with a few variations that are captured by the properties here.
> +
> +Each DRV could have 'm' TCS instances. Each TCS could have 'n' slots. Each
> +slot has a header (u32), address (u32), data (u32), status (u32) and a
> +read-response (u32). A TCS when triggered will send all the enabled commands
> +of the 'n' commands in that slot in sequence.
> +
> +A TCS may be triggered from Linux or triggered by the F/W after all the CPUs
> +have powered off to facilitate idle power saving. TCS could be classified as -
> +
> + SLEEP, /* Triggered by F/W */
> + WAKE, /* Triggered by F/W */
> + ACTIVE, /* Triggered by Linux */
> + CONTROL /* Triggered by F/W */
> +
> +The order in which they are described in the DT should match the hardware
> +configuration.
> +
> +Requests can be made for the state of a resource, when the subsystem is active
> +or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state
> +will be an aggregate of the sleep votes from each of those subsystem. Drivers
> +may request a sleep value for their shared resources in addition to the active
> +mode requests.
> +
> +Control requests are instance specific requests that may or may not reach an
> +accelerator. Only one platform device in Linux can request a control channel
> +on a DRV.
> +
> +CONTROLLER:
> +----------
> +
> +PROPERTIES:
> +
> +- compatible:
> + Usage: required
> + Value type: <string>
> + Definition: Should be "qcom,rpmh-rsc".
> +
> +- reg:
> + Usage: required
> + Value type: <prop-encoded-array>
> + Definition: The first register specifies the base address of the DRV.
> + The second register specifies the start address of the
> + TCS.
> +
> +- reg-names:
> + Usage: required
> + Value type: <string>
> + Definition: Maps the register specified in the reg property. Must be
> + "drv" and "tcs".
> +
> +- interrupts:
> + Usage: required
> + Value type: <prop-encoded-interrupt>
> + Definition: The interrupt that trips when a message complete/response
> + is received for this DRV from the accelerators.
> +
> +- qcom,drv-id:
> + Usage: required
> + Value type: <u32>
> + Definition: the id of the DRV in the RSC block.
> +
> +- qcom,tcs-config:
> + Usage: required
> + Value type: <prop-encoded-array>
> + Definition: the tuple defining the configuration of TCS.
> + Must have 2 cells which describe each TCS type.
> + <type number_of_tcs>.
> + The order of the TCS must match the hardware
> + configuration.
> + - Cell #1 (TCS Type): TCS types to be specified -
> + SLEEP_TCS
> + WAKE_TCS
> + ACTIVE_TCS
> + CONTROL_TCS
> + - Cell #2 (Number of TCS): <u32>
> +
> +- label:
> + Usage: optional
> + Value type: <string>
> + Definition: Name for the RSC. The name would be used in trace logs.
> +
> +Platform drivers that want to use the RSC to communicate with RPMH must
Platform driver is a Linux term. DT is not a platform driver
instantiation language.
> +specify their bindings as child of the corresponding RSC controllers.
> +
> +EXAMPLE 1:
> +
> +For a TCS whose RSC base address is 0x179C0000 and which is at a DRV id of 2,
> +with the register offsets for DRV2 starting at 0xD00, the register
> +calculations are like this -
> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
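The address arithmetic of the example can be captured as a small helper. This is a sketch of the documented layout only (the 0x10000 stride and 0xD00 TCS offset are taken from the example above), not driver code:

```c
#include <stdint.h>

/* DRV base = RSC base + 0x10000 * drv_id; the TCS window for this DRV
 * starts at a fixed 0xD00 offset from the DRV base (per the example). */
static uint32_t drv_base(uint32_t rsc_base, uint32_t drv_id)
{
	return rsc_base + 0x10000 * drv_id;
}

static uint32_t tcs_base(uint32_t rsc_base, uint32_t drv_id)
{
	return drv_base(rsc_base, drv_id) + 0xd00;
}
```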
> +
> + apps_rsc: rsc@179e000 {
> + label = "apps_rsc";
> + compatible = "qcom,rpmh-rsc";
> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
> + reg-names = "drv", "tcs";
> + interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>
> + qcom,drv-id = <2>;
> + qcom,tcs-config = <SLEEP_TCS 3>,
> + <WAKE_TCS 3>,
> + <ACTIVE_TCS 2>,
> + <CONTROL_TCS 1>;
> +
> + foo-clk {
> + compatible = "foo-clk";
What is this? Child nodes need to be documented too.
> + };
> + };
> +
Hi Evan,
Thanks for your review.
On Fri, Feb 16 2018 at 23:51 +0000, Evan Green wrote:
>Hello Lina,
>
>On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
>> Sleep and wake requests are sent when the application processor
>> subsystem of the SoC is entering deep sleep states like in suspend.
>> These requests help lower the system power requirements when the
>> resources are not in use.
>>
>> Sleep and wake requests are written to the TCS slots but are not
>> triggered at the time of writing. The TCS are triggered by the firmware
>> after the last of the CPUs has executed its WFI. Since these requests
>> may come in different batches of requests, it is the job of this controller
>> driver to arrange the requests into the available TCSes.
>>
>> Signed-off-by: Lina Iyer <[email protected]>
>> ---
>> drivers/soc/qcom/rpmh-internal.h | 7 +++
>> drivers/soc/qcom/rpmh-rsc.c | 126 +++++++++++++++++++++++++++++++++++++++
>> 2 files changed, 133 insertions(+)
>>
>[...]
>> +static int find_slots(struct tcs_group *tcs, struct tcs_request *msg,
>> + int *m, int *n)
>> +{
>> + int slot, offset;
>> + int i = 0;
>> +
>> + /* Find if we already have the msg in our TCS */
>> + slot = find_match(tcs, msg->payload, msg->num_payload);
>> + if (slot >= 0)
>> + goto copy_data;
>> +
>> + /* Do over, until we can fit the full payload in a TCS */
>> + do {
>> + slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
>> + i, msg->num_payload, 0);
>> + if (slot == MAX_TCS_SLOTS)
>> + break;
>> + i += tcs->ncpt;
>> + } while (slot + msg->num_payload - 1 >= i);
>> +
>> + if (slot == MAX_TCS_SLOTS)
>> + return -ENOMEM;
>> +
>> +copy_data:
>> + bitmap_set(tcs->slots, slot, msg->num_payload);
>> + /* Copy the addresses of the resources over to the slots */
>> + for (i = 0; tcs->cmd_addr && i < msg->num_payload; i++)
>
>I don't think tcs->cmd_addr can be null, can it? Above, find_match()
>is already reaching through cmd_addr with enthusiasm. If kept, it
>could at least be moved outside of the loop.
>
It would be NULL for ACTIVE_TCS. I can move the check outside the loop.
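The hoisted form could look like this sketch (simplified types standing in for the actual driver structures):

```c
#include <stdint.h>

struct cmd { uint32_t addr; };

/* Copy resource addresses into the slot bookkeeping, with the cmd_addr
 * check done once up front; a NULL cmd_addr means ACTIVE_TCS, where no
 * addresses need to be recorded. Returns the number of entries copied. */
static int copy_addrs(uint32_t *cmd_addr, const struct cmd *payload,
		      int num_payload, int slot)
{
	int i;

	if (!cmd_addr)
		return 0;

	for (i = 0; i < num_payload; i++)
		cmd_addr[slot + i] = payload[i].addr;

	return num_payload;
}
```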
Thanks,
Lina
Hi Lina,
On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
> Active state requests are sent immediately to the mailbox controller,
> while sleep and wake state requests are cached in this driver to avoid
> taxing the mailbox controller repeatedly. The cached values will be sent
> to the controller when the rpmh_flush() is called.
>
> Generally, flushing is a system PM activity and may be called from the
> system PM drivers when the system is entering suspend or deeper sleep
> modes during cpuidle.
>
> Also allow invalidating the cached requests, so they may be re-populated
> again.
>
> Signed-off-by: Lina Iyer <[email protected]>
> ---
> drivers/soc/qcom/rpmh.c | 213 +++++++++++++++++++++++++++++++++++++++++++++++-
> include/soc/qcom/rpmh.h | 10 +++
> 2 files changed, 222 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index d95ea3fa8b67..671bc03ad77a 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -6,11 +6,13 @@
> #include <linux/atomic.h>
> #include <linux/interrupt.h>
> #include <linux/kernel.h>
> +#include <linux/list.h>
> #include <linux/mailbox_client.h>
> #include <linux/module.h>
> #include <linux/of.h>
> #include <linux/platform_device.h>
> #include <linux/slab.h>
> +#include <linux/spinlock.h>
> #include <linux/types.h>
> #include <linux/wait.h>
>
> @@ -35,6 +37,22 @@
> .rc = rc, \
> }
>
> +
> +/**
> + * cache_req : the request object for caching
> + *
> + * @addr: the address of the resource
> + * @sleep_val: the sleep vote
> + * @wake_val: the wake vote
> + * @list: linked list obj
> + */
> +struct cache_req {
> + u32 addr;
> + u32 sleep_val;
> + u32 wake_val;
> + struct list_head list;
> +};
> +
> /**
> * rpmh_request: the message to be sent to rpmh-rsc
> *
> @@ -57,9 +75,15 @@ struct rpmh_request {
> * rpmh_ctrlr: our representation of the controller
> *
> * @drv: the controller instance
> + * @cache: the list of cached requests
> + * @lock: synchronize access to the controller data
> + * @dirty: was the cache updated since flush
> */
> struct rpmh_ctrlr {
> struct rsc_drv *drv;
> + struct list_head cache;
> + spinlock_t lock;
> + bool dirty;
> };
>
> /**
> @@ -123,17 +147,91 @@ static int wait_for_tx_done(struct rpmh_client *rc,
> return (ret > 0) ? 0 : -ETIMEDOUT;
> }
>
> +static struct cache_req *__find_req(struct rpmh_client *rc, u32 addr)
> +{
> + struct cache_req *p, *req = NULL;
> +
> + list_for_each_entry(p, &rc->ctrlr->cache, list) {
> + if (p->addr == addr) {
> + req = p;
> + break;
> + }
> + }
> +
> + return req;
> +}
> +
> +static struct cache_req *cache_rpm_request(struct rpmh_client *rc,
> + enum rpmh_state state,
> + struct tcs_cmd *cmd)
> +{
> + struct cache_req *req;
> + struct rpmh_ctrlr *rpm = rc->ctrlr;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&rpm->lock, flags);
> + req = __find_req(rc, cmd->addr);
> + if (req)
> + goto existing;
> +
> + req = kzalloc(sizeof(*req), GFP_ATOMIC);
> + if (!req) {
> + req = ERR_PTR(-ENOMEM);
> + goto unlock;
> + }
> +
> + req->addr = cmd->addr;
> + req->sleep_val = req->wake_val = UINT_MAX;
So UINT_MAX is really never a valid value to write? Maybe it would be
good to at least print some sort of complaint if somebody sends down a
request with this value. Otherwise the request is silently ignored and
would be quite challenging to track down.
> + INIT_LIST_HEAD(&req->list);
> + list_add_tail(&req->list, &rpm->cache);
> +
> +existing:
> + switch (state) {
> + case RPMH_ACTIVE_ONLY_STATE:
> + if (req->sleep_val != UINT_MAX)
> + req->wake_val = cmd->data;
> + break;
> + case RPMH_WAKE_ONLY_STATE:
> + req->wake_val = cmd->data;
> + break;
> + case RPMH_SLEEP_STATE:
> + req->sleep_val = cmd->data;
> + break;
> + default:
> + break;
> + };
> +
> + rpm->dirty = true;
> +unlock:
> + spin_unlock_irqrestore(&rpm->lock, flags);
> +
> + return req;
> +}
> +
> /**
> - * __rpmh_write: send the RPMH request
> + * __rpmh_write: Cache and send the RPMH request
> *
> * @rc: The RPMH client
> * @state: Active/Sleep request type
> * @rpm_msg: The data that needs to be sent (payload).
> + *
> + * Cache the RPMH request and send if the state is ACTIVE_ONLY.
> + * SLEEP/WAKE_ONLY requests are not sent to the controller at
> + * this time. Use rpmh_flush() to send them to the controller.
> */
> static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> struct rpmh_request *rpm_msg)
> {
> int ret = -EFAULT;
> + struct cache_req *req;
> + int i;
> +
> + /* Cache the request in our store and link the payload */
> + for (i = 0; i < rpm_msg->msg.num_payload; i++) {
> + req = cache_rpm_request(rc, state, &rpm_msg->msg.payload[i]);
> + if (IS_ERR(req))
> + return PTR_ERR(req);
> + }
>
> rpm_msg->msg.state = state;
>
> @@ -150,6 +248,10 @@ static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> "Error in RPMH request addr=0x%x, data=0x%x\n",
> rpm_msg->msg.payload[0].addr,
> rpm_msg->msg.payload[0].data);
> + } else {
> + ret = rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg->msg);
> + /* Clean up our call by spoofing tx_done */
> + rpmh_tx_done(&rpm_msg->msg, ret);
> }
>
> return ret;
> @@ -189,6 +291,113 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> }
> EXPORT_SYMBOL(rpmh_write);
>
> +static int is_req_valid(struct cache_req *req)
> +{
> + return (req->sleep_val != UINT_MAX &&
> + req->wake_val != UINT_MAX &&
> + req->sleep_val != req->wake_val);
> +}
> +
> +static int send_single(struct rpmh_client *rc, enum rpmh_state state,
> + u32 addr, u32 data)
> +{
> + DEFINE_RPMH_MSG_ONSTACK(rc, state, NULL, NULL, rpm_msg);
> +
> + /* Wake sets are always complete and sleep sets are not */
> + rpm_msg.msg.is_complete = (state == RPMH_WAKE_ONLY_STATE);
> + rpm_msg.cmd[0].addr = addr;
> + rpm_msg.cmd[0].data = data;
> + rpm_msg.msg.num_payload = 1;
> + rpm_msg.msg.is_complete = false;
> +
> + return rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg.msg);
> +}
> +
> +/**
> + * rpmh_flush: Flushes the buffered active and sleep sets to TCS
> + *
> + * @rc: The RPMh handle got from rpmh_get_dev_channel
> + *
> + * This function is generally called from the sleep code from the last CPU
> + * that is powering down the entire system.
> + *
> + * Returns -EBUSY if the controller is busy, probably waiting on a response
> + * to a RPMH request sent earlier.
> + */
> +int rpmh_flush(struct rpmh_client *rc)
> +{
> + struct cache_req *p;
> + struct rpmh_ctrlr *rpm = rc->ctrlr;
> + int ret;
> + unsigned long flags;
> +
> + if (IS_ERR_OR_NULL(rc))
> + return -EINVAL;
> +
> + spin_lock_irqsave(&rpm->lock, flags);
> + if (!rpm->dirty) {
> + pr_debug("Skipping flush, TCS has latest data.\n");
> + spin_unlock_irqrestore(&rpm->lock, flags);
> + return 0;
> + }
> + spin_unlock_irqrestore(&rpm->lock, flags);
> +
> + /*
> + * Nobody else should be calling this function other than system PM,
> + * hence we can run without locks.
> + */
> + list_for_each_entry(p, &rc->ctrlr->cache, list) {
> + if (!is_req_valid(p)) {
> + pr_debug("%s: skipping RPMH req: a:0x%x s:0x%x w:0x%x",
> + __func__, p->addr, p->sleep_val, p->wake_val);
> + continue;
> + }
> + ret = send_single(rc, RPMH_SLEEP_STATE, p->addr, p->sleep_val);
> + if (ret)
> + return ret;
> + ret = send_single(rc, RPMH_WAKE_ONLY_STATE, p->addr,
> + p->wake_val);
> + if (ret)
> + return ret;
> + }
> +
> + spin_lock_irqsave(&rpm->lock, flags);
> + rpm->dirty = false;
> + spin_unlock_irqrestore(&rpm->lock, flags);
> +
I've got some questions on the locking in this function.
I understand that the lock protects the list, and I'm surmising that
you don't want to hold the lock across send_single (even though
there's another lock in there that's held for most of that time, so I
think you could). I'm still a newbie to Linux in general, so I'll pose
this as a question: is it generally okay in Linux to traverse across a
list that may have items concurrently added to it? You're never
removing items from this list, so I think there are no actual bugs,
but it does seem like it relies on the implementation details of the
list. And if you ever did remove items from the list, this would bite
you.
Also, why do you need to acquire the lock just to set dirty to false?
Right now it looks like there's a race where someone could add an
element to this list just after you've terminated this loop (but
before you have the lock), but then the dirty = false here clobbers
their dirty = true, and the item is never sent during future flushes.
I think it would be safer and faster to set dirty = false before
iterating through the list (either within the lock or outside of it
given that this is the only place that reads or clears dirty). That
way if new elements sneak in you know that they will either be flushed
already or dirty will be true for them on a subsequent flush.
> + return 0;
> +}
> +EXPORT_SYMBOL(rpmh_flush);
> +
> +/**
> + * rpmh_invalidate: Invalidate all sleep and active sets
> + *
> + * @rc: The RPMh handle got from rpmh_get_dev_channel
> + *
> + * Invalidate the sleep and active values in the TCS blocks.
> + */
> +int rpmh_invalidate(struct rpmh_client *rc)
> +{
> + struct rpmh_ctrlr *rpm = rc->ctrlr;
> + int ret;
> + unsigned long flags;
> +
> + if (IS_ERR_OR_NULL(rc))
> + return -EINVAL;
> +
> + spin_lock_irqsave(&rpm->lock, flags);
> + rpm->dirty = true;
> + spin_unlock_irqrestore(&rpm->lock, flags);
I don't think the lock acquire/release provides anything here, can't
you just set dirty = true?
So rpmh_invalidate clears any pending requests in the hardware, but
all the cached address/data pairs are all still in the cache, right?
As soon as someone else adds a new request and sets dirty to true, all
of these old ones get resent as well at flush, right? Is that the
desired behavior? Does anyone ever need to remove an address/data pair
from the RPMh's to-do list?
> +
> + do {
> + ret = rpmh_rsc_invalidate(rc->ctrlr->drv);
> + } while (ret == -EAGAIN);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(rpmh_invalidate);
> +
> static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
> {
> int i;
> @@ -210,6 +419,8 @@ static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
> if (rpmh_rsc[i].drv == NULL) {
> ctrlr = &rpmh_rsc[i];
> ctrlr->drv = drv;
> + spin_lock_init(&ctrlr->lock);
> + INIT_LIST_HEAD(&ctrlr->cache);
> break;
> }
> }
> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
> index 53cc145990c2..a3f1246ce49a 100644
> --- a/include/soc/qcom/rpmh.h
> +++ b/include/soc/qcom/rpmh.h
> @@ -17,6 +17,10 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
>
> struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
>
> +int rpmh_flush(struct rpmh_client *rc);
> +
> +int rpmh_invalidate(struct rpmh_client *rc);
> +
> void rpmh_release(struct rpmh_client *rc);
>
> #else
> @@ -28,6 +32,12 @@ static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
> { return ERR_PTR(-ENODEV); }
>
> +static inline int rpmh_flush(struct rpmh_client *rc)
> +{ return -ENODEV; }
> +
> +static inline int rpmh_invalidate(struct rpmh_client *rc)
> +{ return -ENODEV; }
> +
> static inline void rpmh_release(struct rpmh_client *rc) { }
> #endif /* CONFIG_QCOM_RPMH */
>
> --
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
>
Hello Lina,
On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
> Platform drivers need to make a lot of resource state requests at the
> same time, say, at the start or end of a use case. It can be quite
> inefficient to send each request separately. Instead they can give the
> RPMH library a batch of requests to be sent and wait on the whole
> transaction to be complete.
>
> rpmh_write_batch() is a blocking call that can be used to send multiple
> RPMH command sets. Each RPMH command set is sent asynchronously, and the
> API blocks until all the command sets are complete and receive their
> tx_done callbacks.
>
> Signed-off-by: Lina Iyer <[email protected]>
> ---
> drivers/soc/qcom/rpmh.c | 150 ++++++++++++++++++++++++++++++++++++++++++++++++
> include/soc/qcom/rpmh.h | 8 +++
> 2 files changed, 158 insertions(+)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index dff4c46be3af..6f60bb9a4dfa 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
[...]
> @@ -394,6 +537,11 @@ int rpmh_flush(struct rpmh_client *rc)
> }
> spin_unlock_irqrestore(&rpm->lock, flags);
>
> + /* First flush the cached batch requests */
> + ret = flush_batch(rc);
> + if (ret)
> + return ret;
> +
> /*
> * Nobody else should be calling this function other than system PM,
> * hence we can run without locks.
> @@ -438,6 +586,8 @@ int rpmh_invalidate(struct rpmh_client *rc)
> if (IS_ERR_OR_NULL(rc))
> return -EINVAL;
>
> + invalidate_batch(rc);
> +
Similarly to my comments in patch 7, aren't there races here with
adding new elements? After flush_batch, but before invalidate_batch,
somebody could call cache_batch, which would add new things on the end
of the array. These new items would be clobbered by invalidate_batch,
without having been flushed first.
-Evan
On Wed, Feb 21 2018 at 22:07 +0000, Evan Green wrote:
>Hi Lina,
>
>On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
[...]
>> +static struct cache_req *cache_rpm_request(struct rpmh_client *rc,
>> + enum rpmh_state state,
>> + struct tcs_cmd *cmd)
>> +{
>> + struct cache_req *req;
>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&rpm->lock, flags);
>> + req = __find_req(rc, cmd->addr);
>> + if (req)
>> + goto existing;
>> +
>> + req = kzalloc(sizeof(*req), GFP_ATOMIC);
>> + if (!req) {
>> + req = ERR_PTR(-ENOMEM);
>> + goto unlock;
>> + }
>> +
>> + req->addr = cmd->addr;
>> + req->sleep_val = req->wake_val = UINT_MAX;
>
>So UINT_MAX is really never a valid value to write? Maybe it would be
>good to at least print some sort of complaint if somebody sends down a
>request with this value. Otherwise the request is silently ignored and
>would be quite challenging to track down.
>
Yes, UINT_MAX is an invalid value. I have not encountered anything in the
spec that treats UINT_MAX as a valid value.
It would also be quite inefficient to validate each input. Drivers
generally know what they are doing. These values are sent to the hardware,
and there is a general understanding that incorrect data may fail
silently.
>> +/**
>> + * rpmh_flush: Flushes the buffered active and sleep sets to TCS
>> + *
>> + * @rc: The RPMh handle got from rpmh_get_dev_channel
>> + *
>> + * This function is generally called from the sleep code from the last CPU
>> + * that is powering down the entire system.
>> + *
>> + * Returns -EBUSY if the controller is busy, probably waiting on a response
>> + * to a RPMH request sent earlier.
>> + */
>> +int rpmh_flush(struct rpmh_client *rc)
>> +{
>> + struct cache_req *p;
>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>> + int ret;
>> + unsigned long flags;
>> +
>> + if (IS_ERR_OR_NULL(rc))
>> + return -EINVAL;
>> +
>> + spin_lock_irqsave(&rpm->lock, flags);
>> + if (!rpm->dirty) {
>> + pr_debug("Skipping flush, TCS has latest data.\n");
>> + spin_unlock_irqrestore(&rpm->lock, flags);
>> + return 0;
>> + }
>> + spin_unlock_irqrestore(&rpm->lock, flags);
>> +
>> + /*
>> + * Nobody else should be calling this function other than system PM,
>> + * hence we can run without locks.
>> + */
>> + list_for_each_entry(p, &rc->ctrlr->cache, list) {
>> + if (!is_req_valid(p)) {
>> + pr_debug("%s: skipping RPMH req: a:0x%x s:0x%x w:0x%x",
>> + __func__, p->addr, p->sleep_val, p->wake_val);
>> + continue;
>> + }
>> + ret = send_single(rc, RPMH_SLEEP_STATE, p->addr, p->sleep_val);
>> + if (ret)
>> + return ret;
>> + ret = send_single(rc, RPMH_WAKE_ONLY_STATE, p->addr,
>> + p->wake_val);
>> + if (ret)
>> + return ret;
>> + }
>> +
>> + spin_lock_irqsave(&rpm->lock, flags);
>> + rpm->dirty = false;
>> + spin_unlock_irqrestore(&rpm->lock, flags);
>> +
>
>I've got some questions on the locking in this function.
>
>I understand that the lock protects the list, and I'm surmising that
>you don't want to hold the lock across send_single (even though
>there's another lock in there that's held for most of that time, so I
>think you could). I'm still a newbie to Linux in general, so I'll pose
>this as a question: is it generally okay in Linux to traverse across a
>list that may have items concurrently added to it? You're never
>removing items from this list, so I think there are no actual bugs,
>but it does seem like it relies on the implementation details of the
>list. And if you ever did remove items from the list, this would bite
>you.
>
It is generally not advisable to traverse a list while it is being
modified externally. The reason it is okay here is the context of
this function call. The only caller of this function will be a system PM
driver. As noted in the comments, it is called when the last CPU is
powering down, and in that context there are no other active users of
this library.
>Also, why do you need to acquire the lock just to set dirty to false?
>Right now it looks like there's a race where someone could add an
>element to this list just after you've terminated this loop (but
>before you have the lock), but then the dirty = false here clobbers
>their dirty = true, and the item is never sent during future flushes.
>
>I think it would be safer and faster to set dirty = false before
>iterating through the list (either within the lock or outside of it
>given that this is the only place that reads or clears dirty). That
>way if new elements sneak in you know that they will either be flushed
>already or dirty will be true for them on a subsequent flush.
>
->dirty is accessed as a u32 and so does not require a lock per se.
The lock here guarantees a certain system state when the flag is
changed. In this function it is not needed, since there will be no
active threads changing the system state. I could possibly remove the
lock in this function.
>> + return 0;
>> +}
>> +EXPORT_SYMBOL(rpmh_flush);
>> +
>> +/**
>> + * rpmh_invalidate: Invalidate all sleep and active sets
>> + *
>> + * @rc: The RPMh handle got from rpmh_get_dev_channel
>> + *
>> + * Invalidate the sleep and active values in the TCS blocks.
>> + */
>> +int rpmh_invalidate(struct rpmh_client *rc)
>> +{
>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>> + int ret;
>> + unsigned long flags;
>> +
>> + if (IS_ERR_OR_NULL(rc))
>> + return -EINVAL;
>> +
>> + spin_lock_irqsave(&rpm->lock, flags);
>> + rpm->dirty = true;
>> + spin_unlock_irqrestore(&rpm->lock, flags);
>
>I don't think the lock acquire/release provides anything here, can't
>you just set dirty = true?
>
It helps synchronize with the state of the controller.
>So rpmh_invalidate clears any pending requests in the hardware, but
>all the cached address/data pairs are all still in the cache, right?
>As soon as someone else adds a new request and sets dirty to true, all
>of these old ones get resent as well at flush, right? Is that the
>desired behavior? Does anyone ever need to remove an address/data pair
>from the RPMh's to-do list?
>
Drivers overwrite their requests, and I have not had a requirement to
remove their previous votes. Removing the data from the controller's
registers is good enough to keep it from being sent. If a driver calls
invalidate, it is generally followed by a new sleep/wake request.
Thanks,
Lina
>> +
>> + do {
>> + ret = rpmh_rsc_invalidate(rc->ctrlr->drv);
>> + } while (ret == -EAGAIN);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL(rpmh_invalidate);
>> +
On Wed, Feb 21 2018 at 23:25 +0000, Evan Green wrote:
>Hello Lina,
>
>On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
>> Platform drivers need to make a lot of resource state requests at the
>> same time, say, at the start or end of a use case. It can be quite
>> inefficient to send each request separately. Instead they can give the
>> RPMH library a batch of requests to be sent and wait on the whole
>> transaction to be complete.
>>
>> rpmh_write_batch() is a blocking call that can be used to send multiple
>> RPMH command sets. Each RPMH command set is sent asynchronously, and the
>> API blocks until all the command sets are complete and receive their
>> tx_done callbacks.
>>
>> Signed-off-by: Lina Iyer <[email protected]>
>> ---
>> drivers/soc/qcom/rpmh.c | 150 ++++++++++++++++++++++++++++++++++++++++++++++++
>> include/soc/qcom/rpmh.h | 8 +++
>> 2 files changed, 158 insertions(+)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index dff4c46be3af..6f60bb9a4dfa 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>[...]
>> @@ -394,6 +537,11 @@ int rpmh_flush(struct rpmh_client *rc)
>> }
>> spin_unlock_irqrestore(&rpm->lock, flags);
>>
>> + /* First flush the cached batch requests */
>> + ret = flush_batch(rc);
>> + if (ret)
>> + return ret;
>> +
>> /*
>> * Nobody else should be calling this function other than system PM,,
>> * hence we can run without locks.
This comment, and the comment in the header of this function, answer that
concern.
>> @@ -438,6 +586,8 @@ int rpmh_invalidate(struct rpmh_client *rc)
>> if (IS_ERR_OR_NULL(rc))
>> return -EINVAL;
>>
>> + invalidate_batch(rc);
>> +
>
>Similarly to my comments in patch 7, aren't there races here with
>adding new elements? After flush_batch, but before invalidate_batch,
>somebody could call cache_batch, which would add new things on the end
>of the array. These new items would be clobbered by invalidate_batch,
>without having been flushed first.
>
Flush is a system PM function; it is not called by drivers and is not
expected to run concurrently with other threads using the RPMH library.
Thanks,
Lina
Hi Lina,
On Thu, Feb 22, 2018 at 8:58 AM, Lina Iyer <[email protected]> wrote:
> On Wed, Feb 21 2018 at 22:07 +0000, Evan Green wrote:
>>
>> Hi Lina,
>>
>> On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
>
> [...]
>
>>> +static struct cache_req *cache_rpm_request(struct rpmh_client *rc,
>>> + enum rpmh_state state,
>>> + struct tcs_cmd *cmd)
>>> +{
>>> + struct cache_req *req;
>>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>>> + unsigned long flags;
>>> +
>>> + spin_lock_irqsave(&rpm->lock, flags);
>>> + req = __find_req(rc, cmd->addr);
>>> + if (req)
>>> + goto existing;
>>> +
>>> + req = kzalloc(sizeof(*req), GFP_ATOMIC);
>>> + if (!req) {
>>> + req = ERR_PTR(-ENOMEM);
>>> + goto unlock;
>>> + }
>>> +
>>> + req->addr = cmd->addr;
>>> + req->sleep_val = req->wake_val = UINT_MAX;
>>
>>
>> So UINT_MAX is really never a valid value to write? Maybe it would be
>> good to at least print some sort of complaint if somebody sends down a
>> request with this value. Otherwise the request is silently ignored and
>> would be quite challenging to track down.
>>
> Yes, UINT_MAX is an invalid value. I have not encountered anything in the
> spec that treats UINT_MAX as a valid value.
> It would also be quite inefficient to validate each input. Drivers
> generally know what they are doing. These values are sent to the hardware,
> and there is a general understanding that incorrect data may fail
> silently.
>
>
I see. I added this suggestion because it seemed worth calling out a
software-based assumption about an unknown hardware-specific value. On
the off chance that UINT_MAX were to become a valid value in the future,
this would certainly surprise a driver writer who essentially did
everything right and wasn't expecting their request to be dropped
before reaching the hardware. But if you're confident this won't ever
happen, I'm willing to let this go.
>>> +/**
>>> + * rpmh_flush: Flushes the buffered active and sleep sets to TCS
>>> + *
>>> + * @rc: The RPMh handle got from rpmh_get_dev_channel
>>> + *
>>> + * This function is generally called from the sleep code from the last
>>> CPU
>>> + * that is powering down the entire system.
>>> + *
>>> + * Returns -EBUSY if the controller is busy, probably waiting on a
>>> response
>>> + * to a RPMH request sent earlier.
>>> + */
>>> +int rpmh_flush(struct rpmh_client *rc)
>>> +{
>>> + struct cache_req *p;
>>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>>> + int ret;
>>> + unsigned long flags;
>>> +
>>> + if (IS_ERR_OR_NULL(rc))
>>> + return -EINVAL;
>>> +
>>> + spin_lock_irqsave(&rpm->lock, flags);
>>> + if (!rpm->dirty) {
>>> + pr_debug("Skipping flush, TCS has latest data.\n");
>>> + spin_unlock_irqrestore(&rpm->lock, flags);
>>> + return 0;
>>> + }
>>> + spin_unlock_irqrestore(&rpm->lock, flags);
>>> +
>>> + /*
>>> + * Nobody else should be calling this function other than system
>>> PM,
>>> + * hence we can run without locks.
>>> + */
>>> + list_for_each_entry(p, &rc->ctrlr->cache, list) {
>>> + if (!is_req_valid(p)) {
>>> + pr_debug("%s: skipping RPMH req: a:0x%x s:0x%x
>>> w:0x%x",
>>> + __func__, p->addr, p->sleep_val,
>>> p->wake_val);
>>> + continue;
>>> + }
>>> + ret = send_single(rc, RPMH_SLEEP_STATE, p->addr,
>>> p->sleep_val);
>>> + if (ret)
>>> + return ret;
>>> + ret = send_single(rc, RPMH_WAKE_ONLY_STATE, p->addr,
>>> + p->wake_val);
>>> + if (ret)
>>> + return ret;
>>> + }
>>> +
>>> + spin_lock_irqsave(&rpm->lock, flags);
>>> + rpm->dirty = false;
>>> + spin_unlock_irqrestore(&rpm->lock, flags);
>>> +
>>
>>
>> I've got some questions on the locking in this function.
>>
>> I understand that the lock protects the list, and I'm surmising that
>> you don't want to hold the lock across send_single (even though
>> there's another lock in there that's held for most of that time, so I
>> think you could). I'm still a newbie to Linux in general, so I'll pose
>> this as a question: is it generally okay in Linux to traverse across a
>> list that may have items concurrently added to it? You're never
>> removing items from this list, so I think there are no actual bugs,
>> but it does seem like it relies on the implementation details of the
>> list. And if you ever did remove items from the list, this would bite
>> you.
>>
> It is generally not advisable to traverse a list if the list is being
> modified externally. The reason why it is okay here, is the context of
> this function call. The only caller of this function will be a system PM
> driver. As noted in the comments, this is called when the last CPU is
> powering down and in such a context, there are no other active users of
> this library.
>
Response below.
>> Also, why do you need to acquire the lock just to set dirty to false?
>> Right now it looks like there's a race where someone could add an
>> element to this list just after you've terminated this loop (but
>> before you have the lock), but then the dirty = false here clobbers
>> their dirty = true, and the item is never sent during future flushes.
>>
>> I think it would be safer and faster to set dirty = false before
>> iterating through the list (either within the lock or outside of it
>> given that this is the only place that reads or clears dirty). That
>> way if new elements sneak in you know that they will either be flushed
>> already or dirty will be true for them on a subsequent flush.
>>
> ->dirty will be accessed as a u32 and so does not require a lock per-se.
> The lock here guarantees a certain system state when the flag is
> changed. While in this function it is not needed since, there will be no
> active threads changing the system state. I could possibly remove the
> lock in this function.
>
Yes, please remove all the locking in this function, as it doesn't
successfully synchronize anything. I suppose you still need locking in
cache_rpm_request to prevent multiple threads calling that function
concurrently, but you can remove it everywhere else. Adding broken
locking only invites someone in the future to assume that it works.
>
>>> + return 0;
>>> +}
>>> +EXPORT_SYMBOL(rpmh_flush);
>>> +
>>> +/**
>>> + * rpmh_invalidate: Invalidate all sleep and active sets
>>> + *
>>> + * @rc: The RPMh handle got from rpmh_get_dev_channel
>>> + *
>>> + * Invalidate the sleep and active values in the TCS blocks.
>>> + */
>>> +int rpmh_invalidate(struct rpmh_client *rc)
>>> +{
>>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>>> + int ret;
>>> + unsigned long flags;
>>> +
>>> + if (IS_ERR_OR_NULL(rc))
>>> + return -EINVAL;
>>> +
>>> + spin_lock_irqsave(&rpm->lock, flags);
>>> + rpm->dirty = true;
>>> + spin_unlock_irqrestore(&rpm->lock, flags);
>>
>>
>> I don't think the lock acquire/release provides anything here, can't
>> you just set dirty = true?
>>
> It helps synchronize with the state of the controller.
No, I don't think it does. Please remove the locking here. If you need
a wmb after setting dirty, then add it (but I don't think you do).
>
>> So rpmh_invalidate clears any pending requests in the hardware, but
>> all the cached address/data pairs are all still in the cache, right?
>> As soon as someone else adds a new request and sets dirty to true, all
>> of these old ones get resent as well at flush, right? Is that the
>> desired behavior? Does anyone ever need to remove an address/data pair
>> from the RPMh's to-do list?
>>
> Drivers overwrite their requests and I have not had a requirement to
> remove their previous votes. Removing from the controller's registers is
> good enough to not send the data. If a driver calls invalidate, it is
> generally followed by a new sleep/wake request.
>
OK. There's an implicit assumption there that drivers need to update
_all_ of the address/data pairs they've ever submitted, or accept
stale ones being applied even after rpmh_invalidate. This is not true
for your invalidate_cache patch later, which does remove all the saved
entries. I wonder if that should at least be called out somewhere? Up
to you.
> Thanks,
> Lina
>
>>> +
>>> + do {
>>> + ret = rpmh_rsc_invalidate(rc->ctrlr->drv);
>>> + } while (ret == -EAGAIN);
>>> +
>>> + return ret;
>>> +}
>>> +EXPORT_SYMBOL(rpmh_invalidate);
>>> +
Hi Lina,
On Thu, Feb 22, 2018 at 9:04 AM, Lina Iyer <[email protected]> wrote:
> On Wed, Feb 21 2018 at 23:25 +0000, Evan Green wrote:
>>
>> Hello Lina,
>>
>> On Thu, Feb 15, 2018 at 9:35 AM, Lina Iyer <[email protected]> wrote:
>>>
>>> Platform drivers need to make a lot of resource state requests at the
>>> same time, say, at the start or end of a use case. It can be quite
>>> inefficient to send each request separately. Instead they can give the
>>> RPMH library a batch of requests to be sent and wait on the whole
>>> transaction to be complete.
>>>
>>> rpmh_write_batch() is a blocking call that can be used to send multiple
>>> RPMH command sets. Each RPMH command set is sent asynchronously, and the
>>> API blocks until all the command sets are complete and receive their
>>> tx_done callbacks.
>>>
>>> Signed-off-by: Lina Iyer <[email protected]>
>>> ---
>>> drivers/soc/qcom/rpmh.c | 150
>>> ++++++++++++++++++++++++++++++++++++++++++++++++
>>> include/soc/qcom/rpmh.h | 8 +++
>>> 2 files changed, 158 insertions(+)
>>>
>>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>>> index dff4c46be3af..6f60bb9a4dfa 100644
>>> --- a/drivers/soc/qcom/rpmh.c
>>> +++ b/drivers/soc/qcom/rpmh.c
>>
>> [...]
>>>
>>> @@ -394,6 +537,11 @@ int rpmh_flush(struct rpmh_client *rc)
>>> }
>>> spin_unlock_irqrestore(&rpm->lock, flags);
>>>
>>> + /* First flush the cached batch requests */
>>> + ret = flush_batch(rc);
>>> + if (ret)
>>> + return ret;
>>> +
>>> /*
>>> * Nobody else should be calling this function other than system
>>> PM,
>>> * hence we can run without locks.
>
> This comment and the comment in the header of this function.
>
>>> @@ -438,6 +586,8 @@ int rpmh_invalidate(struct rpmh_client *rc)
>>> if (IS_ERR_OR_NULL(rc))
>>> return -EINVAL;
>>>
>>> + invalidate_batch(rc);
>>> +
>>
>>
>> Similarly to my comments in patch 7, aren't there races here with
>> adding new elements? After flush_batch, but before invalidate_batch,
>> somebody could call cache_batch, which would add new things on the end
>> of the array. These new items would be clobbered by invalidate_batch,
>> without having been flushed first.
>>
> Flush is a system PM function; it is not called by drivers and is not
> expected to run concurrently with other threads using the RPMH library.
My comment above was a little off because I was reading those two
hunks (flush_batch and invalidate_batch) as being in the same
function. They're not.
I'm okay with the locking here, though you could remove the locking
from flush_batch, since that's only run in single-threaded PM
contexts.
>
> Thanks,
> Lina
>