2018-04-05 16:21:34

by Lina Iyer

Subject: [PATCH v5 00/10] drivers/qcom: add RPMH communication support

Changes in v5:
- Add Reviewed-by tags
- Rebase on top of 4.16

Changes in v4:
- Rename variables as suggested by Stephen and Evan
- Lots of minor syntax and style fixes
- Fix FTRACE compilation error
- Improve doc comments and DT description

Changes in v3:
- Address Steven's comments in FTRACE
- Fix DT documentation as suggested by Rob H
- Fix error handling in IRQ handler as suggested by Evan
- Remove locks in rpmh_flush()
- Improve comments

Changes in v2:
- Added sleep/wake, async and batch requests support
- Addressed Bjorn's comments
- Private FTRACE for drivers/soc/qcom as suggested by Steven
- Sparse checked on these patches
- Use SPDX license commenting style

This set of patches adds the ability for platform drivers to make use of shared
resources in newer Qualcomm SoCs like SDM845. Resources that are shared between
multiple processors in a SoC are generally controlled by a dedicated remote
processor. The remote processor (Resource Power Manager, or RPM, in previous
QCOM SoCs) receives resource state requests from the processors using the
shared resource, aggregates the requests and applies the result on the shared
resource. SDM845 advances this concept and uses hardware (hardened IP) blocks
for aggregating requests and applying the result on the resource. The resources
could be clocks, regulators or bandwidth votes for buses. This new architecture
is called RPM-hardened, or RPMH for short.

Since this communication mechanism is completely hardware driven, without
processor intervention on the remote end, existing mechanisms like RPM-SMD are
no longer useful. Also, in this new format data is neither serialized nor
written to shared memory. The data format is different as well: unsigned 32-bit
values represent the address, the data and the header. Each resource property
is a unique u32 address with a pre-defined set of property-specific valid
values. A request comprising <header, addr, data> is sent by writing to a set
of registers from Linux and is transmitted to the remote slave through an
internal bus. The remote end aggregates this request along with requests from
other processors for the same <addr> and applies the result.
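
For a rough illustration of the request format, the tcs_cmd and tcs_request
structures introduced later in this series carry those u32 <addr, data> pairs;
the sketch below uses made-up resource addresses and values, not real SDM845
ones:

  struct tcs_cmd cmds[2] = {
          { .addr = 0x30000, .data = 0x1, .wait = false },
          { .addr = 0x30004, .data = 0x2, .wait = true },
  };
  struct tcs_request req = {
          .state = RPMH_ACTIVE_ONLY_STATE,
          .wait_for_compl = true,
          .num_cmds = 2,
          .cmds = cmds,
  };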

The hardware block that houses this functionality is called the Resource State
Coordinator, or RSC. Inside the RSC is a set of slots for sending RPMH requests,
called Trigger Command Sets (TCS). This set of patches writes the requests into
these TCSes and sends them to the hardened IP blocks.

The driver design is split into two components: the RSC driver housed in
rpmh-rsc.c, and the set of library functions in rpmh.c that frame the request
and transmit it using the controller. This first set of patches allows a simple
synchronous request to be made by platform drivers. Future patches will add
more functionality catering to complex drivers and use cases.
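
As a reference point, a minimal (hypothetical) platform driver using the
synchronous API added by this series might look like the sketch below; the
resource address and data value are placeholders:

  #include <linux/err.h>
  #include <linux/platform_device.h>
  #include <soc/qcom/rpmh.h>
  #include <soc/qcom/tcs.h>

  static int example_probe(struct platform_device *pdev)
  {
          struct tcs_cmd cmd = { .addr = 0x30000, .data = 0x1 };
          struct rpmh_client *rc;
          int ret;

          rc = rpmh_get_client(pdev);     /* bind to the parent RSC */
          if (IS_ERR(rc))
                  return PTR_ERR(rc);

          /* Blocking active-state request; returns once RPMH acks */
          ret = rpmh_write(rc, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);

          rpmh_release(rc);
          return ret;
  }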

Please consider reviewing this patchset.

v1: https://www.spinics.net/lists/devicetree/msg210980.html
v2: https://lkml.org/lkml/2018/2/15/852
v3: https://lkml.org/lkml/2018/3/2/801
v4: https://lkml.org/lkml/2018/3/9/979

Lina Iyer (10):
drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs
dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs
drivers: qcom: rpmh-rsc: log RPMH requests in FTRACE
drivers: qcom: rpmh: add RPMH helper functions
drivers: qcom: rpmh-rsc: write sleep/wake requests to TCS
drivers: qcom: rpmh-rsc: allow invalidation of sleep/wake TCS
drivers: qcom: rpmh: cache sleep/wake state requests
drivers: qcom: rpmh: allow requests to be sent asynchronously
drivers: qcom: rpmh: add support for batch RPMH request
drivers: qcom: rpmh-rsc: allow active requests from wake TCS

.../devicetree/bindings/soc/qcom/rpmh-rsc.txt | 127 ++++
drivers/soc/qcom/Kconfig | 10 +
drivers/soc/qcom/Makefile | 4 +
drivers/soc/qcom/rpmh-internal.h | 99 +++
drivers/soc/qcom/rpmh-rsc.c | 768 +++++++++++++++++++++
drivers/soc/qcom/rpmh.c | 659 ++++++++++++++++++
drivers/soc/qcom/trace-rpmh.h | 89 +++
include/dt-bindings/soc/qcom,rpmh-rsc.h | 14 +
include/soc/qcom/rpmh.h | 60 ++
include/soc/qcom/tcs.h | 56 ++
10 files changed, 1886 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
create mode 100644 drivers/soc/qcom/rpmh-internal.h
create mode 100644 drivers/soc/qcom/rpmh-rsc.c
create mode 100644 drivers/soc/qcom/rpmh.c
create mode 100644 drivers/soc/qcom/trace-rpmh.h
create mode 100644 include/dt-bindings/soc/qcom,rpmh-rsc.h
create mode 100644 include/soc/qcom/rpmh.h
create mode 100644 include/soc/qcom/tcs.h

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



2018-04-05 16:20:45

by Lina Iyer

Subject: [PATCH v5 07/10] drivers: qcom: rpmh: cache sleep/wake state requests

Active state requests are sent immediately to the mailbox controller,
while sleep and wake state requests are cached in this driver to avoid
taxing the mailbox controller repeatedly. The cached values will be sent
to the controller when rpmh_flush() is called.

Generally, flushing is a system PM activity and may be called from the
system PM drivers when the system is entering suspend or deeper sleep
modes during cpuidle.

Also allow invalidating the cached requests, so they may be
re-populated.
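
A rough, hypothetical usage flow for the caching and flush paths added here
(the resource address and data values are placeholders, and rc is a handle
obtained from rpmh_get_client()):

  struct tcs_cmd cmd = { .addr = 0x30000 };

  /* Sleep and wake votes are cached by the library */
  cmd.data = 0x0;
  rpmh_write(rc, RPMH_SLEEP_STATE, &cmd, 1);
  cmd.data = 0x1;
  rpmh_write(rc, RPMH_WAKE_ONLY_STATE, &cmd, 1);

  /* From system PM code, when the last CPU is powering down */
  rpmh_flush(rc);

  /* Or drop the cached votes so they can be re-populated */
  rpmh_invalidate(rc);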

Signed-off-by: Lina Iyer <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---

Changes in v4:
- remove locking for ->dirty in invalidate
- fix send_single
Changes in v3:
- Remove locking for flush function
- Improve comments
---
drivers/soc/qcom/rpmh.c | 203 +++++++++++++++++++++++++++++++++++++++++++++++-
include/soc/qcom/rpmh.h | 10 +++
2 files changed, 212 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index e3c7491e7baf..b5468ef082c1 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -7,11 +7,13 @@
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
+#include <linux/list.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
+#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>

@@ -35,6 +37,21 @@
.rc = rc, \
}

+/**
+ * struct cache_req: the request object for caching
+ *
+ * @addr: the address of the resource
+ * @sleep_val: the sleep vote
+ * @wake_val: the wake vote
+ * @list: linked list obj
+ */
+struct cache_req {
+ u32 addr;
+ u32 sleep_val;
+ u32 wake_val;
+ struct list_head list;
+};
+
/**
* struct rpmh_request: the message to be sent to rpmh-rsc
*
@@ -55,9 +72,15 @@ struct rpmh_request {
* struct rpmh_ctrlr: our representation of the controller
*
* @drv: the controller instance
+ * @cache: the list of cached requests
+ * @lock: synchronize access to the controller data
+ * @dirty: was the cache updated since flush
*/
struct rpmh_ctrlr {
struct rsc_drv *drv;
+ struct list_head cache;
+ spinlock_t lock;
+ bool dirty;
};

/**
@@ -122,17 +145,91 @@ static int wait_for_tx_done(struct rpmh_client *rc,
return (ret > 0) ? 0 : -ETIMEDOUT;
}

+static struct cache_req *__find_req(struct rpmh_client *rc, u32 addr)
+{
+ struct cache_req *p, *req = NULL;
+
+ list_for_each_entry(p, &rc->ctrlr->cache, list) {
+ if (p->addr == addr) {
+ req = p;
+ break;
+ }
+ }
+
+ return req;
+}
+
+static struct cache_req *cache_rpm_request(struct rpmh_client *rc,
+ enum rpmh_state state,
+ struct tcs_cmd *cmd)
+{
+ struct cache_req *req;
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ req = __find_req(rc, cmd->addr);
+ if (req)
+ goto existing;
+
+ req = kzalloc(sizeof(*req), GFP_ATOMIC);
+ if (!req) {
+ req = ERR_PTR(-ENOMEM);
+ goto unlock;
+ }
+
+ req->addr = cmd->addr;
+ req->sleep_val = req->wake_val = UINT_MAX;
+ INIT_LIST_HEAD(&req->list);
+ list_add_tail(&req->list, &rpm->cache);
+
+existing:
+ switch (state) {
+ case RPMH_ACTIVE_ONLY_STATE:
+ if (req->sleep_val != UINT_MAX)
+ req->wake_val = cmd->data;
+ break;
+ case RPMH_WAKE_ONLY_STATE:
+ req->wake_val = cmd->data;
+ break;
+ case RPMH_SLEEP_STATE:
+ req->sleep_val = cmd->data;
+ break;
+ default:
+ break;
+ };
+
+ rpm->dirty = true;
+unlock:
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return req;
+}
+
/**
- * __rpmh_write: send the RPMH request
+ * __rpmh_write: Cache and send the RPMH request
*
* @rc: The RPMH client
* @state: Active/Sleep request type
* @rpm_msg: The data that needs to be sent (cmds).
+ *
+ * Cache the RPMH request and send if the state is ACTIVE_ONLY.
+ * SLEEP/WAKE_ONLY requests are not sent to the controller at
+ * this time. Use rpmh_flush() to send them to the controller.
*/
static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
struct rpmh_request *rpm_msg)
{
int ret = -EINVAL;
+ struct cache_req *req;
+ int i;
+
+ /* Cache the request in our store and link the payload */
+ for (i = 0; i < rpm_msg->msg.num_cmds; i++) {
+ req = cache_rpm_request(rc, state, &rpm_msg->msg.cmds[i]);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+ }

rpm_msg->msg.state = state;

@@ -149,6 +246,10 @@ static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
"Error in RPMH request addr=%#x, data=%#x\n",
rpm_msg->msg.cmds[0].addr,
rpm_msg->msg.cmds[0].data);
+ } else {
+ ret = rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg->msg);
+ /* Clean up our call by spoofing tx_done */
+ rpmh_tx_done(&rpm_msg->msg, ret);
}

return ret;
@@ -185,6 +286,104 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
}
EXPORT_SYMBOL(rpmh_write);

+static int is_req_valid(struct cache_req *req)
+{
+ return (req->sleep_val != UINT_MAX &&
+ req->wake_val != UINT_MAX &&
+ req->sleep_val != req->wake_val);
+}
+
+static int send_single(struct rpmh_client *rc, enum rpmh_state state,
+ u32 addr, u32 data)
+{
+ DEFINE_RPMH_MSG_ONSTACK(rc, state, NULL, rpm_msg);
+
+ /* Wake sets are always complete and sleep sets are not */
+ rpm_msg.msg.wait_for_compl = (state == RPMH_WAKE_ONLY_STATE);
+ rpm_msg.cmd[0].addr = addr;
+ rpm_msg.cmd[0].data = data;
+ rpm_msg.msg.num_cmds = 1;
+
+ return rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg.msg);
+}
+
+/**
+ * rpmh_flush: Flushes the buffered active and sleep sets to TCS
+ *
+ * @rc: The RPMh handle got from rpmh_get_client
+ *
+ * Return: -EBUSY if the controller is busy, probably waiting on a response
+ * to a RPMH request sent earlier.
+ *
+ * This function is generally called from the sleep code from the last CPU
+ * that is powering down the entire system. Since no other RPMH API would be
+ * executing at this time, it is safe to run lockless.
+ */
+int rpmh_flush(struct rpmh_client *rc)
+{
+ struct cache_req *p;
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ int ret;
+
+ if (IS_ERR_OR_NULL(rc))
+ return -EINVAL;
+
+ if (!rpm->dirty) {
+ pr_debug("Skipping flush, TCS has latest data.\n");
+ return 0;
+ }
+
+ /*
+ * Nobody else should be calling this function other than system PM,
+ * hence we can run without locks.
+ */
+ list_for_each_entry(p, &rc->ctrlr->cache, list) {
+ if (!is_req_valid(p)) {
+ pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
+ __func__, p->addr, p->sleep_val, p->wake_val);
+ continue;
+ }
+ ret = send_single(rc, RPMH_SLEEP_STATE, p->addr, p->sleep_val);
+ if (ret)
+ return ret;
+ ret = send_single(rc, RPMH_WAKE_ONLY_STATE,
+ p->addr, p->wake_val);
+ if (ret)
+ return ret;
+ }
+
+ rpm->dirty = false;
+
+ return 0;
+}
+EXPORT_SYMBOL(rpmh_flush);
+
+/**
+ * rpmh_invalidate: Invalidate all sleep and active sets
+ * sets.
+ *
+ * @rc: The RPMh handle got from rpmh_get_client
+ *
+ * Invalidate the sleep and active values in the TCS blocks.
+ */
+int rpmh_invalidate(struct rpmh_client *rc)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ int ret;
+
+ if (IS_ERR_OR_NULL(rc))
+ return -EINVAL;
+
+ rpm->dirty = true;
+
+ do {
+ ret = rpmh_rsc_invalidate(rc->ctrlr->drv);
+ } while (ret == -EAGAIN);
+
+ return ret;
+}
+EXPORT_SYMBOL(rpmh_invalidate);
+
static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
{
int i;
@@ -206,6 +405,8 @@ static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
if (rpmh_rsc[i].drv == NULL) {
ctrlr = &rpmh_rsc[i];
ctrlr->drv = drv;
+ spin_lock_init(&ctrlr->lock);
+ INIT_LIST_HEAD(&ctrlr->cache);
break;
}
}
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index 95334d4c1ede..41a2518c46a5 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -17,6 +17,10 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,

struct rpmh_client *rpmh_get_client(struct platform_device *pdev);

+int rpmh_flush(struct rpmh_client *rc);
+
+int rpmh_invalidate(struct rpmh_client *rc);
+
void rpmh_release(struct rpmh_client *rc);

#else
@@ -28,6 +32,12 @@ static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
{ return ERR_PTR(-ENODEV); }

+static inline int rpmh_flush(struct rpmh_client *rc)
+{ return -ENODEV; }
+
+static inline int rpmh_invalidate(struct rpmh_client *rc)
+{ return -ENODEV; }
+
static inline void rpmh_release(struct rpmh_client *rc) { }
#endif /* CONFIG_QCOM_RPMH */

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:21:08

by Lina Iyer

Subject: [PATCH v5 09/10] drivers: qcom: rpmh: add support for batch RPMH request

Platform drivers may need to make a lot of resource state requests at the
same time, say, at the start or end of a use case. It can be quite
inefficient to send each request separately. Instead they can give the
RPMH library a batch of requests to be sent and wait on the whole
transaction to complete.

rpmh_write_batch() is a blocking call that can be used to send multiple
RPMH command sets. Each RPMH command set is sent asynchronously and the
API blocks until all the command sets complete and receive their
tx_done callbacks.
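
For illustration, a hypothetical call could look like the sketch below; the
addresses, data values and batch sizes are placeholders, rc is a handle from
rpmh_get_client(), and the count array is zero-terminated:

  struct tcs_cmd cmds[3] = {
          { .addr = 0x30000, .data = 0x1 },
          { .addr = 0x30004, .data = 0x2 },
          { .addr = 0x30008, .data = 0x3 },
  };
  /* Two command sets: the first two cmds, then the last one */
  u32 n[] = { 2, 1, 0 };
  int ret;

  ret = rpmh_write_batch(rc, RPMH_ACTIVE_ONLY_STATE, cmds, n);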

Signed-off-by: Lina Iyer <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---

Changes in v4:
- reorganize rpmh_write_batch()
- introduce wait_count here, instead of patch#4
---
drivers/soc/qcom/rpmh.c | 156 +++++++++++++++++++++++++++++++++++++++++++++++-
include/soc/qcom/rpmh.h | 8 +++
2 files changed, 162 insertions(+), 2 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 3a96e5f58302..daadbb3f0494 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -23,6 +23,7 @@

#define RPMH_MAX_MBOXES 2
#define RPMH_TIMEOUT_MS 10000
+#define RPMH_MAX_REQ_IN_BATCH 10

#define DEFINE_RPMH_MSG_ONSTACK(rc, s, q, name) \
struct rpmh_request name = { \
@@ -36,6 +37,7 @@
.completion = q, \
.rc = rc, \
.free = NULL, \
+ .wait_count = NULL, \
}

/**
@@ -61,6 +63,7 @@ struct cache_req {
* @completion: triggered when request is done
* @err: err return from the controller
* @free: the request object to be freed at tx_done
+ * @wait_count: count of waiters for this completion
*/
struct rpmh_request {
struct tcs_request msg;
@@ -69,6 +72,7 @@ struct rpmh_request {
struct rpmh_client *rc;
int err;
struct rpmh_request *free;
+ atomic_t *wait_count;
};

/**
@@ -78,12 +82,14 @@ struct rpmh_request {
* @cache: the list of cached requests
* @lock: synchronize access to the controller data
* @dirty: was the cache updated since flush
+ * @batch_cache: Cache sleep and wake requests sent as batch
*/
struct rpmh_ctrlr {
struct rsc_drv *drv;
struct list_head cache;
spinlock_t lock;
bool dirty;
+ const struct rpmh_request *batch_cache[2 * RPMH_MAX_REQ_IN_BATCH];
};

/**
@@ -105,6 +111,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
msg);
struct completion *compl = rpm_msg->completion;
+ atomic_t *wc = rpm_msg->wait_count;

rpm_msg->err = r;

@@ -116,8 +123,13 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
kfree(rpm_msg->free);

/* Signal the blocking thread we are done */
- if (compl)
- complete(compl);
+ if (!compl)
+ return;
+
+ if (wc && !atomic_dec_and_test(wc))
+ return;
+
+ complete(compl);
}
EXPORT_SYMBOL(rpmh_tx_done);

@@ -339,6 +351,139 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
}
EXPORT_SYMBOL(rpmh_write);

+static int cache_batch(struct rpmh_client *rc,
+ struct rpmh_request **rpm_msg, int count)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ unsigned long flags;
+ int ret = 0;
+ int index = 0;
+ int i;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ while (rpm->batch_cache[index])
+ index++;
+ if (index + count >= 2 * RPMH_MAX_REQ_IN_BATCH) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ for (i = 0; i < count; i++)
+ rpm->batch_cache[index + i] = rpm_msg[i];
+fail:
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return ret;
+}
+
+static int flush_batch(struct rpmh_client *rc)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ const struct rpmh_request *rpm_msg;
+ unsigned long flags;
+ int ret = 0;
+ int i;
+
+ /* Send Sleep/Wake requests to the controller, expect no response */
+ spin_lock_irqsave(&rpm->lock, flags);
+ for (i = 0; rpm->batch_cache[i]; i++) {
+ rpm_msg = rpm->batch_cache[i];
+ ret = rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg->msg);
+ if (ret)
+ break;
+ }
+ spin_unlock_irqrestore(&rpm->lock, flags);
+
+ return ret;
+}
+
+static void invalidate_batch(struct rpmh_client *rc)
+{
+ struct rpmh_ctrlr *rpm = rc->ctrlr;
+ unsigned long flags;
+ int index = 0;
+ int i;
+
+ spin_lock_irqsave(&rpm->lock, flags);
+ while (rpm->batch_cache[index])
+ index++;
+ for (i = 0; i < index; i++) {
+ kfree(rpm->batch_cache[i]->free);
+ rpm->batch_cache[i] = NULL;
+ }
+ spin_unlock_irqrestore(&rpm->lock, flags);
+}
+
+/**
+ * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
+ * batch to finish.
+ *
+ * @rc: The RPMh handle got from rpmh_get_client
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The array of count of elements in each batch, 0 terminated.
+ *
+ * Write a request to the mailbox controller without caching. If the request
+ * state is ACTIVE, then the requests are treated as completion request
+ * and sent to the controller immediately. The function waits until all the
+ * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
+ * request is sent as fire-n-forget and no ack is expected.
+ *
+ * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
+ */
+int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 *n)
+{
+ struct rpmh_request *rpm_msg[RPMH_MAX_REQ_IN_BATCH] = { NULL };
+ DECLARE_COMPLETION_ONSTACK(compl);
+ atomic_t wait_count = ATOMIC_INIT(0);
+ int count = 0;
+ int ret, i;
+
+ if (IS_ERR_OR_NULL(rc) || !cmd || !n)
+ return -EINVAL;
+
+ while (n[count++] > 0)
+ ;
+ count--;
+ if (!count || count > RPMH_MAX_REQ_IN_BATCH)
+ return -EINVAL;
+
+ for (i = 0; i < count; i++) {
+ rpm_msg[i] = __get_rpmh_msg_async(rc, state, cmd, n[i]);
+ if (IS_ERR_OR_NULL(rpm_msg[i])) {
+ ret = PTR_ERR(rpm_msg[i]);
+ for (; i >= 0; i--)
+ kfree(rpm_msg[i]->free);
+ return ret;
+ }
+ cmd += n[i];
+ }
+
+ if (state != RPMH_ACTIVE_ONLY_STATE)
+ return cache_batch(rc, rpm_msg, count);
+
+ atomic_set(&wait_count, count);
+
+ for (i = 0; i < count; i++) {
+ rpm_msg[i]->completion = &compl;
+ rpm_msg[i]->wait_count = &wait_count;
+ ret = rpmh_rsc_send_data(rc->ctrlr->drv, &rpm_msg[i]->msg);
+ if (ret) {
+ int j;
+
+ pr_err("Error(%d) sending RPMH message addr=%#x\n",
+ ret, rpm_msg[i]->msg.cmds[0].addr);
+ for (j = i; j < count; j++)
+ rpmh_tx_done(&rpm_msg[j]->msg, ret);
+ break;
+ }
+ }
+
+ return wait_for_tx_done(rc, &compl, cmd[0].addr, cmd[0].data);
+}
+EXPORT_SYMBOL(rpmh_write_batch);
+
static int is_req_valid(struct cache_req *req)
{
return (req->sleep_val != UINT_MAX &&
@@ -386,6 +531,11 @@ int rpmh_flush(struct rpmh_client *rc)
return 0;
}

+ /* First flush the cached batch requests */
+ ret = flush_batch(rc);
+ if (ret)
+ return ret;
+
/*
* Nobody else should be calling this function other than system PM,
* hence we can run without locks.
@@ -427,6 +577,8 @@ int rpmh_invalidate(struct rpmh_client *rc)
if (IS_ERR_OR_NULL(rc))
return -EINVAL;

+ invalidate_batch(rc);
+
rpm->dirty = true;

do {
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index 9e6de09e43f0..a5d5c57a4329 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -18,6 +18,9 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n);

+int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 *n);
+
struct rpmh_client *rpmh_get_client(struct platform_device *pdev);

int rpmh_flush(struct rpmh_client *rc);
@@ -40,6 +43,11 @@ static inline int rpmh_write_async(struct rpmh_client *rc,
const struct tcs_cmd *cmd, u32 n)
{ return -ENODEV; }

+static inline int rpmh_write_batch(struct rpmh_client *rc,
+ enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 *n)
+{ return -ENODEV; }
+
static inline int rpmh_flush(struct rpmh_client *rc)
{ return -ENODEV; }

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:21:26

by Lina Iyer

Subject: [PATCH v5 10/10] drivers: qcom: rpmh-rsc: allow active requests from wake TCS

Some RSCs may only have sleep and wake TCSes, i.e., there is no dedicated
TCS for active mode requests, but drivers may still want to make active
requests from these RSCs. In such cases, re-purpose the wake TCS to send
active state requests.

The requirement for this is that the driver is aware that the wake TCS is
being re-purposed to send active requests, and hence the sleep and wake
TCSes must be invalidated before the active request is sent.
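
An illustrative (hypothetical) driver-side sequence for such an RSC, with rc
obtained from rpmh_get_client() and a placeholder command:

  /* Clear stale sleep/wake TCS contents before borrowing a wake TCS */
  rpmh_invalidate(rc);

  /* The RSC driver re-purposes a wake TCS to send this active request */
  ret = rpmh_write(rc, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);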

Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh-rsc.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 1dbccc4d0605..c1a460a8d955 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -214,6 +214,7 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
const struct tcs_request *msg)
{
int type;
+ struct tcs_group *tcs;

switch (msg->state) {
case RPMH_ACTIVE_ONLY_STATE:
@@ -229,7 +230,22 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
return ERR_PTR(-EINVAL);
}

- return get_tcs_of_type(drv, type);
+ /*
+ * If we are making an active request on a RSC that does not have a
+ * dedicated TCS for active state use, then re-purpose a wake TCS to
+ * send active votes.
+ * NOTE: The driver must be aware that this RSC does not have a
+ * dedicated AMC, and therefore would invalidate the sleep and wake
+ * TCSes before making an active state request.
+ */
+ tcs = get_tcs_of_type(drv, type);
+ if (msg->state == RPMH_ACTIVE_ONLY_STATE && IS_ERR(tcs)) {
+ tcs = get_tcs_of_type(drv, WAKE_TCS);
+ if (!IS_ERR(tcs))
+ rpmh_rsc_invalidate(drv);
+ }
+
+ return tcs;
}

static void send_tcs_response(struct tcs_response *resp)
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:21:49

by Lina Iyer

Subject: [PATCH v5 06/10] drivers: qcom: rpmh-rsc: allow invalidation of sleep/wake TCS

Allow sleep and wake commands to be cleared from the respective TCSes,
so that they can be re-populated.

Signed-off-by: Lina Iyer <[email protected]>
---

Changes in v4:
- refactored the rpmh_rsc_invalidate()
---
drivers/soc/qcom/rpmh-internal.h | 1 +
drivers/soc/qcom/rpmh-rsc.c | 48 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 638662721086..3fab86a01f14 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -92,6 +92,7 @@ struct rsc_drv {
int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
const struct tcs_request *msg);
+int rpmh_rsc_invalidate(struct rsc_drv *drv);

void rpmh_tx_done(const struct tcs_request *msg, int r);

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 58fc7254b6f3..1dbccc4d0605 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -162,6 +162,54 @@ static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
return tcs;
}

+static int __tcs_invalidate(struct rsc_drv *drv, int type)
+{
+ int m;
+ struct tcs_group *tcs;
+
+ tcs = get_tcs_of_type(drv, type);
+ if (IS_ERR(tcs))
+ return PTR_ERR(tcs);
+
+ spin_lock(&tcs->lock);
+ if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) {
+ spin_unlock(&tcs->lock);
+ return 0;
+ }
+
+ for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
+ if (!tcs_is_free(drv, m)) {
+ spin_unlock(&tcs->lock);
+ return -EAGAIN;
+ }
+ write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
+ bitmap_zero(tcs->slots, MAX_TCS_SLOTS);
+ }
+ spin_unlock(&tcs->lock);
+
+ return 0;
+}
+
+/**
+ * rpmh_rsc_invalidate - Invalidate sleep and wake TCSes
+ *
+ * @drv: the mailbox controller
+ */
+int rpmh_rsc_invalidate(struct rsc_drv *drv)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&drv->drv_lock, flags);
+ ret = __tcs_invalidate(drv, SLEEP_TCS);
+ if (!ret)
+ ret = __tcs_invalidate(drv, WAKE_TCS);
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(rpmh_rsc_invalidate);
+
static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
const struct tcs_request *msg)
{
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:22:01

by Lina Iyer

Subject: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

Add a controller driver for QCOM SoCs that have hardware-based shared
resource management. The hardware IP known as RSC (Resource State
Coordinator) houses multiple Direct Resource Voters (DRVs) for different
execution levels. A DRV is a unique voter on the state of a shared
resource. A Trigger Command Set (TCS) is a group of slots that can house
multiple resource state requests, which, when triggered, are issued
through an internal bus to the Resource Power Manager Hardened (RPMH)
blocks. These hardware blocks are capable of adjusting clocks, voltages,
etc. The resource state requests from a DRV are aggregated along with
state requests from other processors in the SoC and the aggregate value
is applied on the resource.

Some important aspects of the RPMH communication -
- Requests are <addr, value> with some header information
- Multiple requests (up to 16) may be sent through a TCS at a time
- Requests in a TCS are sent in sequence
- Requests may be fire-n-forget or completion (response expected)
- Multiple TCSes from the same DRV may be triggered simultaneously
- A request cannot be sent if another request for the same addr is in
progress from the same DRV
- When all the requests from a TCS are complete, an IRQ is raised
- The IRQ handler needs to clear the TCS before it is available for
reuse
- TCS configuration is specific to a DRV
- Platform drivers may use DRVs from different RSCs to make requests

Resource state requests made when CPUs are active are called 'active'
state requests. Requests made when all the CPUs are powered down (idle
state) are called 'sleep' state requests. They are matched by
corresponding 'wake' state requests, which put the resources back into
the previously requested active state before resuming any CPU. TCSes are
dedicated to each type of request. Control TCSes are used to provide
specific information to the controller.

Signed-off-by: Lina Iyer <[email protected]>
---

Changes in v4:
- lots of variable name changes as suggested by Stephen B
- use of const for data pointers
- fix comments and other code syntax
- use of bitmap for tcs_in_use instead of atomic
---
drivers/soc/qcom/Kconfig | 10 +
drivers/soc/qcom/Makefile | 1 +
drivers/soc/qcom/rpmh-internal.h | 89 +++++
drivers/soc/qcom/rpmh-rsc.c | 571 ++++++++++++++++++++++++++++++++
include/dt-bindings/soc/qcom,rpmh-rsc.h | 14 +
include/soc/qcom/tcs.h | 56 ++++
6 files changed, 741 insertions(+)
create mode 100644 drivers/soc/qcom/rpmh-internal.h
create mode 100644 drivers/soc/qcom/rpmh-rsc.c
create mode 100644 include/dt-bindings/soc/qcom,rpmh-rsc.h
create mode 100644 include/soc/qcom/tcs.h

diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index e050eb83341d..34f177bac8c8 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -55,6 +55,16 @@ config QCOM_RMTFS_MEM

Say y here if you intend to boot the modem remoteproc.

+config QCOM_RPMH
+ bool "Qualcomm RPM-Hardened (RPMH) Communication"
+ depends on ARCH_QCOM && ARM64 && OF || COMPILE_TEST
+ help
+ Support for communication with the hardened-RPM blocks in
+ Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
+ internal bus to transmit state requests for shared resources. A set
+ of hardware components aggregate requests for these resources and
+ help apply the aggregated state on the resource.
+
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index dcebf2814e6d..39d3a059ee50 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -6,6 +6,7 @@ obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o
qmi_helpers-y += qmi_encdec.o qmi_interface.o
obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
+obj-$(CONFIG_QCOM_RPMH) += rpmh-rsc.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
new file mode 100644
index 000000000000..aa73ec4b3e42
--- /dev/null
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+
+#ifndef __RPM_INTERNAL_H__
+#define __RPM_INTERNAL_H__
+
+#include <linux/bitmap.h>
+#include <soc/qcom/tcs.h>
+
+#define TCS_TYPE_NR 4
+#define MAX_CMDS_PER_TCS 16
+#define MAX_TCS_PER_TYPE 3
+#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
+
+struct rsc_drv;
+
+/**
+ * struct tcs_response: Response object for a request
+ *
+ * @drv: the controller
+ * @msg: the request for this response
+ * @m: the tcs identifier
+ * @err: error reported in the response
+ * @list: element in list of pending response objects
+ */
+struct tcs_response {
+ struct rsc_drv *drv;
+ const struct tcs_request *msg;
+ u32 m;
+ int err;
+ struct list_head list;
+};
+
+/**
+ * struct tcs_group: group of Trigger Command Sets for a request state
+ *
+ * @drv: the controller
+ * @type: type of the TCS in this group - active, sleep, wake
+ * @mask: mask of the TCSes relative to all the TCSes in the RSC
+ * @offset: start of the TCS group relative to the TCSes in the RSC
+ * @num_tcs: number of TCSes in this type
+ * @ncpt: number of commands in each TCS
+ * @lock: lock for synchronizing this TCS writes
+ * @responses: response objects for requests sent from each TCS
+ */
+struct tcs_group {
+ struct rsc_drv *drv;
+ int type;
+ u32 mask;
+ u32 offset;
+ int num_tcs;
+ int ncpt;
+ spinlock_t lock;
+ struct tcs_response *responses[MAX_TCS_PER_TYPE];
+};
+
+/**
+ * struct rsc_drv: the Resource State Coordinator controller
+ *
+ * @name: controller identifier
+ * @tcs_base: start address of the TCS registers in this controller
+ * @id: instance id in the controller (Direct Resource Voter)
+ * @num_tcs: number of TCSes in this DRV
+ * @tasklet: handle responses, off-load work from IRQ handler
+ * @response_pending:
+ * list of responses that needs to be sent to caller
+ * @tcs: TCS groups
+ * @tcs_in_use: s/w state of the TCS
+ * @drv_lock: synchronize state of the controller
+ */
+struct rsc_drv {
+ const char *name;
+ void __iomem *tcs_base;
+ int id;
+ int num_tcs;
+ struct tasklet_struct tasklet;
+ struct list_head response_pending;
+ struct tcs_group tcs[TCS_TYPE_NR];
+ DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
+ spinlock_t drv_lock;
+};
+
+
+int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
+
+#endif /* __RPM_INTERNAL_H__ */
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
new file mode 100644
index 000000000000..8bde1e9bd599
--- /dev/null
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -0,0 +1,571 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
+
+#include <linux/atomic.h>
+#include <linux/delay.h>
+#include <linux/export.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include <soc/qcom/tcs.h>
+#include <dt-bindings/soc/qcom,rpmh-rsc.h>
+
+#include "rpmh-internal.h"
+
+#define RSC_DRV_TCS_OFFSET 672
+#define RSC_DRV_CMD_OFFSET 20
+
+/* DRV Configuration Information Register */
+#define DRV_PRNT_CHLD_CONFIG 0x0C
+#define DRV_NUM_TCS_MASK 0x3F
+#define DRV_NUM_TCS_SHIFT 6
+#define DRV_NCPT_MASK 0x1F
+#define DRV_NCPT_SHIFT 27
+
+/* Register offsets */
+#define RSC_DRV_IRQ_ENABLE 0x00
+#define RSC_DRV_IRQ_STATUS 0x04
+#define RSC_DRV_IRQ_CLEAR 0x08
+#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10
+#define RSC_DRV_CONTROL 0x14
+#define RSC_DRV_STATUS 0x18
+#define RSC_DRV_CMD_ENABLE 0x1C
+#define RSC_DRV_CMD_MSGID 0x30
+#define RSC_DRV_CMD_ADDR 0x34
+#define RSC_DRV_CMD_DATA 0x38
+#define RSC_DRV_CMD_STATUS 0x3C
+#define RSC_DRV_CMD_RESP_DATA 0x40
+
+#define TCS_AMC_MODE_ENABLE BIT(16)
+#define TCS_AMC_MODE_TRIGGER BIT(24)
+
+/* TCS CMD register bit mask */
+#define CMD_MSGID_LEN 8
+#define CMD_MSGID_RESP_REQ BIT(8)
+#define CMD_MSGID_WRITE BIT(16)
+#define CMD_STATUS_ISSUED BIT(8)
+#define CMD_STATUS_COMPL BIT(16)
+
+static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
+{
+ struct tcs_group *tcs;
+ int i;
+
+ for (i = 0; i < drv->num_tcs; i++) {
+ tcs = &drv->tcs[i];
+ if (tcs->mask & BIT(m))
+ return tcs;
+ }
+
+ WARN(i == drv->num_tcs, "Incorrect TCS index %d", m);
+
+ return NULL;
+}
+
+static struct tcs_response *setup_response(struct rsc_drv *drv,
+ const struct tcs_request *msg, int m)
+{
+ struct tcs_response *resp;
+ struct tcs_group *tcs;
+
+ resp = kzalloc(sizeof(*resp), GFP_ATOMIC);
+ if (!resp)
+ return ERR_PTR(-ENOMEM);
+
+ resp->drv = drv;
+ resp->msg = msg;
+ resp->err = 0;
+
+ tcs = get_tcs_from_index(drv, m);
+ if (!tcs)
+ return ERR_PTR(-EINVAL);
+
+ assert_spin_locked(&tcs->lock);
+ tcs->responses[m - tcs->offset] = resp;
+
+ return resp;
+}
+
+static void free_response(struct tcs_response *resp)
+{
+ kfree(resp);
+}
+
+static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
+{
+ struct tcs_group *tcs = get_tcs_from_index(drv, m);
+
+ return tcs->responses[m - tcs->offset];
+}
+
+static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
+{
+ return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
+ RSC_DRV_CMD_OFFSET * n);
+}
+
+static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n, u32 data)
+{
+ writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
+ RSC_DRV_CMD_OFFSET * n);
+}
+
+static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
+ u32 data)
+{
+ write_tcs_reg(drv, reg, m, n, data);
+ for (;;) {
+ if (data == read_tcs_reg(drv, reg, m, n))
+ break;
+ udelay(1);
+ }
+}
+
+static bool tcs_is_free(struct rsc_drv *drv, int m)
+{
+ return !test_bit(m, drv->tcs_in_use) &&
+ read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
+}
+
+static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
+{
+ int i;
+ struct tcs_group *tcs;
+
+ for (i = 0; i < TCS_TYPE_NR; i++) {
+ if (type == drv->tcs[i].type)
+ break;
+ }
+
+ if (i == TCS_TYPE_NR)
+ return ERR_PTR(-EINVAL);
+
+ tcs = &drv->tcs[i];
+ if (!tcs->num_tcs)
+ return ERR_PTR(-EINVAL);
+
+ return tcs;
+}
+
+static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
+ const struct tcs_request *msg)
+{
+ int type;
+
+ switch (msg->state) {
+ case RPMH_ACTIVE_ONLY_STATE:
+ type = ACTIVE_TCS;
+ break;
+ default:
+ return ERR_PTR(-EINVAL);
+ }
+
+ return get_tcs_of_type(drv, type);
+}
+
+static void send_tcs_response(struct tcs_response *resp)
+{
+ struct rsc_drv *drv;
+ unsigned long flags;
+
+ if (!resp)
+ return;
+
+ drv = resp->drv;
+ spin_lock_irqsave(&drv->drv_lock, flags);
+ INIT_LIST_HEAD(&resp->list);
+ list_add_tail(&resp->list, &drv->response_pending);
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+
+ tasklet_schedule(&drv->tasklet);
+}
+
+/**
+ * tcs_irq_handler: TX Done interrupt handler
+ */
+static irqreturn_t tcs_irq_handler(int irq, void *p)
+{
+ struct rsc_drv *drv = p;
+ int m, i;
+ u32 irq_status, sts;
+ struct tcs_response *resp;
+ struct tcs_cmd *cmd;
+
+ irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
+
+ for (m = 0; m < drv->num_tcs; m++) {
+ if (!(irq_status & (u32)BIT(m)))
+ continue;
+
+ resp = get_response(drv, m);
+ if (WARN_ON(!resp))
+ goto skip_resp;
+
+ resp->err = 0;
+ for (i = 0; i < resp->msg->num_cmds; i++) {
+ cmd = &resp->msg->cmds[i];
+ sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
+ if (!(sts & CMD_STATUS_ISSUED) ||
+ ((resp->msg->wait_for_compl || cmd->wait) &&
+ !(sts & CMD_STATUS_COMPL))) {
+ resp->err = -EIO;
+ break;
+ }
+ }
+skip_resp:
+ /* Reclaim the TCS */
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
+ write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
+ clear_bit(m, drv->tcs_in_use);
+ send_tcs_response(resp);
+ }
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * tcs_notify_tx_done: TX Done for requests that got a response
+ *
+ * @data: the tasklet argument
+ *
+ * Tasklet function to notify MBOX that we are done with the request.
+ * Handles all pending responses whenever run.
+ */
+static void tcs_notify_tx_done(unsigned long data)
+{
+ struct rsc_drv *drv = (struct rsc_drv *)data;
+ struct tcs_response *resp;
+ unsigned long flags;
+
+ for (;;) {
+ spin_lock_irqsave(&drv->drv_lock, flags);
+ resp = list_first_entry_or_null(&drv->response_pending,
+ struct tcs_response, list);
+ if (!resp) {
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+ break;
+ }
+ list_del(&resp->list);
+ spin_unlock_irqrestore(&drv->drv_lock, flags);
+ free_response(resp);
+ }
+}
+
+static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
+ const struct tcs_request *msg)
+{
+ u32 msgid, cmd_msgid;
+ u32 cmd_enable = 0;
+ u32 cmd_complete;
+ struct tcs_cmd *cmd;
+ int i, j;
+
+ cmd_msgid = CMD_MSGID_LEN;
+ cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0;
+ cmd_msgid |= CMD_MSGID_WRITE;
+
+ cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
+
+ for (i = 0, j = n; i < msg->num_cmds; i++, j++) {
+ cmd = &msg->cmds[i];
+ cmd_enable |= BIT(j);
+ cmd_complete |= cmd->wait << j;
+ msgid = cmd_msgid;
+ msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0;
+ write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, j, msgid);
+ write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j, cmd->addr);
+ write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, j, cmd->data);
+ }
+
+ write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
+ cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
+}
+
+static void __tcs_trigger(struct rsc_drv *drv, int m)
+{
+ u32 enable;
+
+ /*
+ * HW req: Clear the DRV_CONTROL and enable TCS again
+ * While clearing ensure that the AMC mode trigger is cleared
+ * and then the mode enable is cleared.
+ */
+ enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
+ enable &= ~TCS_AMC_MODE_TRIGGER;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+ enable &= ~TCS_AMC_MODE_ENABLE;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+
+ /* Enable the AMC mode on the TCS and then trigger the TCS */
+ enable = TCS_AMC_MODE_ENABLE;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+ enable |= TCS_AMC_MODE_TRIGGER;
+ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
+}
+
+static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
+ const struct tcs_request *msg)
+{
+ unsigned long curr_enabled;
+ u32 addr;
+ int i, j, k;
+ int m = tcs->offset;
+
+ for (i = 0; i < tcs->num_tcs; i++, m++) {
+ if (tcs_is_free(drv, m))
+ continue;
+
+ curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
+
+ for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
+ addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j);
+ for (k = 0; k < msg->num_cmds; k++) {
+ if (addr == msg->cmds[k].addr)
+ return -EBUSY;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int find_free_tcs(struct tcs_group *tcs)
+{
+ int m;
+
+ for (m = 0; m < tcs->num_tcs; m++) {
+ if (tcs_is_free(tcs->drv, tcs->offset + m))
+ return m;
+ }
+
+ return -EBUSY;
+}
+
+static int tcs_mbox_write(struct rsc_drv *drv, const struct tcs_request *msg)
+{
+ struct tcs_group *tcs;
+ int m;
+ struct tcs_response *resp = NULL;
+ unsigned long flags;
+ int ret;
+
+ tcs = get_tcs_for_msg(drv, msg);
+ if (IS_ERR(tcs))
+ return PTR_ERR(tcs);
+
+ spin_lock_irqsave(&tcs->lock, flags);
+ m = find_free_tcs(tcs);
+ if (m < 0) {
+ ret = m;
+ goto done_write;
+ }
+
+ /*
+ * The h/w does not like if we send a request to the same address,
+ * when one is already in-flight or being processed.
+ */
+ ret = check_for_req_inflight(drv, tcs, msg);
+ if (ret)
+ goto done_write;
+
+ resp = setup_response(drv, msg, m);
+ if (IS_ERR(resp)) {
+ ret = PTR_ERR(resp);
+ goto done_write;
+ }
+ resp->m = m;
+
+ set_bit(m, drv->tcs_in_use);
+ __tcs_buffer_write(drv, m, 0, msg);
+ __tcs_trigger(drv, m);
+
+done_write:
+ spin_unlock_irqrestore(&tcs->lock, flags);
+ return ret;
+}
+
+/**
+ * rpmh_rsc_send_data: Validate the incoming message and write to the
+ * appropriate TCS block.
+ *
+ * @drv: the controller
+ * @msg: the data to be sent
+ *
+ * Return: 0 on success, -EINVAL on error.
+ * Note: This call blocks until a valid data is written to the TCS.
+ */
+int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
+{
+ int ret;
+
+ if (!msg || !msg->cmds || !msg->num_cmds ||
+ msg->num_cmds > MAX_RPMH_PAYLOAD)
+ return -EINVAL;
+
+ do {
+ ret = tcs_mbox_write(drv, msg);
+ if (ret == -EBUSY) {
+ pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
+ msg->cmds[0].addr);
+ udelay(10);
+ }
+ } while (ret == -EBUSY);
+
+ return ret;
+}
+EXPORT_SYMBOL(rpmh_rsc_send_data);
+
+static int rpmh_probe_tcs_config(struct platform_device *pdev,
+ struct rsc_drv *drv)
+{
+ struct tcs_type_config {
+ u32 type;
+ u32 n;
+ } tcs_cfg[TCS_TYPE_NR] = { { 0 } };
+ struct device_node *dn = pdev->dev.of_node;
+ u32 config, max_tcs, ncpt;
+ int i, ret, n, st = 0;
+ struct tcs_group *tcs;
+ struct resource *res;
+ void __iomem *base;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "drv");
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "tcs");
+ drv->tcs_base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(drv->tcs_base))
+ return PTR_ERR(drv->tcs_base);
+
+ config = readl_relaxed(base + DRV_PRNT_CHLD_CONFIG);
+
+ max_tcs = config;
+ max_tcs &= DRV_NUM_TCS_MASK << (DRV_NUM_TCS_SHIFT * drv->id);
+ max_tcs = max_tcs >> (DRV_NUM_TCS_SHIFT * drv->id);
+
+ ncpt = config & (DRV_NCPT_MASK << DRV_NCPT_SHIFT);
+ ncpt = ncpt >> DRV_NCPT_SHIFT;
+
+ n = of_property_count_u32_elems(dn, "qcom,tcs-config");
+ if (n != 2 * TCS_TYPE_NR)
+ return -EINVAL;
+
+ for (i = 0; i < TCS_TYPE_NR; i++) {
+ ret = of_property_read_u32_index(dn, "qcom,tcs-config",
+ i * 2, &tcs_cfg[i].type);
+ if (ret)
+ return ret;
+ if (tcs_cfg[i].type >= TCS_TYPE_NR)
+ return -EINVAL;
+
+ ret = of_property_read_u32_index(dn, "qcom,tcs-config",
+ i * 2 + 1, &tcs_cfg[i].n);
+ if (ret)
+ return ret;
+ if (tcs_cfg[i].n > MAX_TCS_PER_TYPE)
+ return -EINVAL;
+ }
+
+ for (i = 0; i < TCS_TYPE_NR; i++) {
+ tcs = &drv->tcs[tcs_cfg[i].type];
+ if (tcs->drv)
+ return -EINVAL;
+ tcs->drv = drv;
+ tcs->type = tcs_cfg[i].type;
+ tcs->num_tcs = tcs_cfg[i].n;
+ tcs->ncpt = ncpt;
+ spin_lock_init(&tcs->lock);
+
+ if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
+ continue;
+
+ if (st + tcs->num_tcs > max_tcs ||
+ st + tcs->num_tcs >= BITS_PER_BYTE * sizeof(tcs->mask))
+ return -EINVAL;
+
+ tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
+ tcs->offset = st;
+ st += tcs->num_tcs;
+ }
+
+ drv->num_tcs = st;
+
+ return 0;
+}
+
+static int rpmh_rsc_probe(struct platform_device *pdev)
+{
+ struct device_node *dn = pdev->dev.of_node;
+ struct rsc_drv *drv;
+ int ret, irq;
+
+ drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
+ if (!drv)
+ return -ENOMEM;
+
+ ret = of_property_read_u32(dn, "qcom,drv-id", &drv->id);
+ if (ret)
+ return ret;
+
+ drv->name = of_get_property(dn, "label", NULL);
+ if (!drv->name)
+ drv->name = dev_name(&pdev->dev);
+
+ ret = rpmh_probe_tcs_config(pdev, drv);
+ if (ret)
+ return ret;
+
+ INIT_LIST_HEAD(&drv->response_pending);
+ spin_lock_init(&drv->drv_lock);
+ tasklet_init(&drv->tasklet, tcs_notify_tx_done, (unsigned long)drv);
+ bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+
+ ret = devm_request_irq(&pdev->dev, irq, tcs_irq_handler,
+ IRQF_TRIGGER_HIGH | IRQF_NO_SUSPEND,
+ drv->name, drv);
+ if (ret)
+ return ret;
+
+ /* Enable the active TCS to send requests immediately */
+ write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0, drv->tcs[ACTIVE_TCS].mask);
+
+ return devm_of_platform_populate(&pdev->dev);
+}
+
+static const struct of_device_id rpmh_drv_match[] = {
+ { .compatible = "qcom,rpmh-rsc", },
+ { }
+};
+
+static struct platform_driver rpmh_driver = {
+ .probe = rpmh_rsc_probe,
+ .driver = {
+ .name = "rpmh",
+ .of_match_table = rpmh_drv_match,
+ },
+};
+
+static int __init rpmh_driver_init(void)
+{
+ return platform_driver_register(&rpmh_driver);
+}
+arch_initcall(rpmh_driver_init);
diff --git a/include/dt-bindings/soc/qcom,rpmh-rsc.h b/include/dt-bindings/soc/qcom,rpmh-rsc.h
new file mode 100644
index 000000000000..868f998ea998
--- /dev/null
+++ b/include/dt-bindings/soc/qcom,rpmh-rsc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __DT_QCOM_RPMH_RSC_H__
+#define __DT_QCOM_RPMH_RSC_H__
+
+#define SLEEP_TCS 0
+#define WAKE_TCS 1
+#define ACTIVE_TCS 2
+#define CONTROL_TCS 3
+
+#endif /* __DT_QCOM_RPMH_RSC_H__ */
diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
new file mode 100644
index 000000000000..4b78f881010a
--- /dev/null
+++ b/include/soc/qcom/tcs.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __SOC_QCOM_TCS_H__
+#define __SOC_QCOM_TCS_H__
+
+#define MAX_RPMH_PAYLOAD 16
+
+/**
+ * rpmh_state: state for the request
+ *
+ * RPMH_SLEEP_STATE: State of the resource when the processor subsystem
+ * is powered down. There is no client using the
+ * resource actively.
+ * RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
+ * requested before the processor was powered down.
+ * RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
+ * is aggregated immediately.
+ */
+enum rpmh_state {
+ RPMH_SLEEP_STATE,
+ RPMH_WAKE_ONLY_STATE,
+ RPMH_ACTIVE_ONLY_STATE,
+};
+
+/**
+ * struct tcs_cmd: an individual request to RPMH.
+ *
+ * @addr: the address of the resource slv_id:18:16 | offset:0:15
+ * @data: the resource state request
+ * @wait: wait for this request to be complete before sending the next
+ */
+struct tcs_cmd {
+ u32 addr;
+ u32 data;
+ bool wait;
+};
+
+/**
+ * struct tcs_request: A set of tcs_cmds sent together in a TCS
+ *
+ * @state: state for the request.
+ * @wait_for_compl: wait until we get a response from the h/w accelerator
+ * @num_cmds: the number of @cmds in this request
+ * @cmds: an array of tcs_cmds
+ */
+struct tcs_request {
+ enum rpmh_state state;
+ bool wait_for_compl;
+ u32 num_cmds;
+ struct tcs_cmd *cmds;
+};
+
+#endif /* __SOC_QCOM_TCS_H__ */
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:22:03

by Lina Iyer

Subject: [PATCH v5 08/10] drivers: qcom: rpmh: allow requests to be sent asynchronously

Platform drivers that want to send a request but do not want to block
until the RPMH request completes now have a new API -
rpmh_write_async().

The API allocates memory, sends the request and returns control back
to the platform driver. The tx_done callback from the controller is
handled in the context of the controller's thread and frees the
allocated memory. This API allows RPMH requests from atomic contexts as
well.
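
An illustrative non-blocking call, with a placeholder address/value and rc
obtained from rpmh_get_client(), which may also be issued from atomic context:

  struct tcs_cmd cmd = { .addr = 0x30000, .data = 0x1 };
  int ret;

  /* Returns once the request is queued; memory is freed in tx_done */
  ret = rpmh_write_async(rc, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);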

Signed-off-by: Lina Iyer <[email protected]>
---
drivers/soc/qcom/rpmh.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++++
include/soc/qcom/rpmh.h | 8 ++++++++
2 files changed, 61 insertions(+)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index b5468ef082c1..3a96e5f58302 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -35,6 +35,7 @@
.cmd = { { 0 } }, \
.completion = q, \
.rc = rc, \
+ .free = NULL, \
}

/**
@@ -59,6 +60,7 @@ struct cache_req {
* @cmd: the payload that will be part of the @msg
* @completion: triggered when request is done
* @err: err return from the controller
+ * @free: the request object to be freed at tx_done
*/
struct rpmh_request {
struct tcs_request msg;
@@ -66,6 +68,7 @@ struct rpmh_request {
struct completion *completion;
struct rpmh_client *rc;
int err;
+ struct rpmh_request *free;
};

/**
@@ -110,6 +113,8 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
"RPMH TX fail in msg addr=%#x, err=%d\n",
rpm_msg->msg.cmds[0].addr, r);

+ kfree(rpm_msg->free);
+
/* Signal the blocking thread we are done */
if (compl)
complete(compl);
@@ -255,6 +260,54 @@ static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
return ret;
}

+static struct rpmh_request *__get_rpmh_msg_async(struct rpmh_client *rc,
+ enum rpmh_state state,
+ const struct tcs_cmd *cmd,
+ u32 n)
+{
+ struct rpmh_request *req;
+
+ if (IS_ERR_OR_NULL(rc) || !cmd || !n || n > MAX_RPMH_PAYLOAD)
+ return ERR_PTR(-EINVAL);
+
+ req = kzalloc(sizeof(*req), GFP_ATOMIC);
+ if (!req)
+ return ERR_PTR(-ENOMEM);
+
+ memcpy(req->cmd, cmd, n * sizeof(*cmd));
+
+ req->msg.state = state;
+ req->msg.cmds = req->cmd;
+ req->msg.num_cmds = n;
+ req->free = req;
+
+ return req;
+}
+
+/**
+ * rpmh_write_async: Write a set of RPMH commands
+ *
+ * @rc: The RPMh handle got from rpmh_get_client
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The number of elements in payload
+ *
+ * Write a set of RPMH commands, the order of commands is maintained
+ * and will be sent as a single shot.
+ */
+int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 n)
+{
+ struct rpmh_request *rpm_msg;
+
+ rpm_msg = __get_rpmh_msg_async(rc, state, cmd, n);
+ if (IS_ERR(rpm_msg))
+ return PTR_ERR(rpm_msg);
+
+ return __rpmh_write(rc, state, rpm_msg);
+}
+EXPORT_SYMBOL(rpmh_write_async);
+
/**
* rpmh_write: Write a set of RPMH commands and block until response
*
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index 41a2518c46a5..9e6de09e43f0 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -15,6 +15,9 @@ struct rpmh_client;
int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n);

+int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 n);
+
struct rpmh_client *rpmh_get_client(struct platform_device *pdev);

int rpmh_flush(struct rpmh_client *rc);
@@ -32,6 +35,11 @@ static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
{ return ERR_PTR(-ENODEV); }

+static inline int rpmh_write_async(struct rpmh_client *rc,
+ enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 n)
+{ return -ENODEV; }
+
static inline int rpmh_flush(struct rpmh_client *rc)
{ return -ENODEV; }

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:22:26

by Lina Iyer

[permalink] [raw]
Subject: [PATCH v5 05/10] drivers: qcom: rpmh-rsc: write sleep/wake requests to TCS

Sleep and wake requests are sent when the application processor
subsystem of the SoC is entering deep sleep states such as suspend.
These requests help lower the system power requirements when the
resources are not in use.

Sleep and wake requests are written to the TCS slots but are not
triggered at the time of writing. The TCSes are triggered by the
firmware after the last of the CPUs has executed its WFI. Since these
requests may come in different batches, it is the job of this controller
driver to find and arrange the requests into the available TCSes.
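
For illustration, the driver tracks sleep/wake commands in a flattened slot
bitmap per TCS group and maps a slot index back to a TCS index (m) and a
command offset (n) within that TCS. A worked example, assuming 16 commands per
TCS (ncpt) and a group that starts at TCS offset 4 (example values only):

  /* slot = 35, ncpt = 16, tcs->offset = 4 */
  m = slot / ncpt + tcs->offset;  /* 35 / 16 + 4 = 6: third TCS of the group */
  n = slot % ncpt;                /* 35 % 16 = 3: fourth command in that TCS */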

Signed-off-by: Lina Iyer <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
drivers/soc/qcom/rpmh-internal.h | 9 ++-
drivers/soc/qcom/rpmh-rsc.c | 120 +++++++++++++++++++++++++++++++++++++++
2 files changed, 128 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 125f9faec536..638662721086 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -14,6 +14,7 @@
#define MAX_CMDS_PER_TCS 16
#define MAX_TCS_PER_TYPE 3
#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
+#define MAX_TCS_SLOTS (MAX_CMDS_PER_TCS * MAX_TCS_PER_TYPE)

struct rsc_drv;

@@ -45,6 +46,8 @@ struct tcs_response {
* @ncpt: number of commands in each TCS
* @lock: lock for synchronizing this TCS writes
* @responses: response objects for requests sent from each TCS
+ * @cmd_cache: flattened cache of cmds in sleep/wake TCS
+ * @slots: indicates which of @cmd_addr are occupied
*/
struct tcs_group {
struct rsc_drv *drv;
@@ -55,6 +58,8 @@ struct tcs_group {
int ncpt;
spinlock_t lock;
struct tcs_response *responses[MAX_TCS_PER_TYPE];
+ u32 *cmd_cache;
+ DECLARE_BITMAP(slots, MAX_TCS_SLOTS);
};

/**
@@ -69,7 +74,7 @@ struct tcs_group {
* list of responses that needs to be sent to caller
* @tcs: TCS groups
* @tcs_in_use: s/w state of the TCS
- * @drv_lock: synchronize state of the controller
+ * @drv_lock: synchronize state of the controller
*/
struct rsc_drv {
const char *name;
@@ -85,6 +90,8 @@ struct rsc_drv {


int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
+int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
+ const struct tcs_request *msg);

void rpmh_tx_done(const struct tcs_request *msg, int r);

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index c5cde917dba6..58fc7254b6f3 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -171,6 +171,12 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
case RPMH_ACTIVE_ONLY_STATE:
type = ACTIVE_TCS;
break;
+ case RPMH_WAKE_ONLY_STATE:
+ type = WAKE_TCS;
+ break;
+ case RPMH_SLEEP_STATE:
+ type = SLEEP_TCS;
+ break;
default:
return ERR_PTR(-EINVAL);
}
@@ -439,6 +445,107 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
}
EXPORT_SYMBOL(rpmh_rsc_send_data);

+static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
+ int len)
+{
+ int i, j;
+
+ /* Check for already cached commands */
+ for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {
+ for (j = 0; j < len; j++) {
+ if (tcs->cmd_cache[i] != cmd[0].addr) {
+ if (j == 0)
+ break;
+ WARN(tcs->cmd_cache[i + j] != cmd[j].addr,
+ "Message does not match previous sequence.\n");
+ return -EINVAL;
+ } else if (j == len - 1) {
+ return i;
+ }
+ }
+ }
+
+ return -ENODATA;
+}
+
+static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
+ int *m, int *n)
+{
+ int slot, offset;
+ int i = 0;
+
+ /* Find if we already have the msg in our TCS */
+ slot = find_match(tcs, msg->cmds, msg->num_cmds);
+ if (slot >= 0)
+ goto copy_data;
+
+ /* Do over, until we can fit the full payload in a TCS */
+ do {
+ slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
+ i, msg->num_cmds, 0);
+ if (slot == MAX_TCS_SLOTS)
+ return -ENOMEM;
+ i += tcs->ncpt;
+ } while (slot + msg->num_cmds - 1 >= i);
+
+copy_data:
+ bitmap_set(tcs->slots, slot, msg->num_cmds);
+ /* Copy the addresses of the resources over to the slots */
+ for (i = 0; i < msg->num_cmds; i++)
+ tcs->cmd_cache[slot + i] = msg->cmds[i].addr;
+
+ offset = slot / tcs->ncpt;
+ *m = offset + tcs->offset;
+ *n = slot % tcs->ncpt;
+
+ return 0;
+}
+
+static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
+{
+ struct tcs_group *tcs;
+ int m = 0, n = 0;
+ unsigned long flags;
+ int ret;
+
+ tcs = get_tcs_for_msg(drv, msg);
+ if (IS_ERR(tcs))
+ return PTR_ERR(tcs);
+
+ spin_lock_irqsave(&tcs->lock, flags);
+ /* find the m-th TCS and the n-th position in the TCS to write to */
+ ret = find_slots(tcs, msg, &m, &n);
+ if (!ret)
+ __tcs_buffer_write(drv, m, n, msg);
+ spin_unlock_irqrestore(&tcs->lock, flags);
+
+ return ret;
+}
+
+/**
+ * rpmh_rsc_write_ctrl_data: Write request to the controller
+ *
+ * @drv: the controller
+ * @msg: the data to be written to the controller
+ *
+ * There is no response returned for writing the request to the controller.
+ */
+int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
+{
+ if (!msg || !msg->cmds || !msg->num_cmds ||
+ msg->num_cmds > MAX_RPMH_PAYLOAD) {
+ pr_err("Payload error\n");
+ return -EINVAL;
+ }
+
+ /* Data sent to this API will not be sent immediately */
+ if (msg->state == RPMH_ACTIVE_ONLY_STATE)
+ return -EINVAL;
+
+ return tcs_ctrl_write(drv, msg);
+}
+EXPORT_SYMBOL(rpmh_rsc_write_ctrl_data);
+
static int rpmh_probe_tcs_config(struct platform_device *pdev,
struct rsc_drv *drv)
{
@@ -512,6 +619,19 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
tcs->offset = st;
st += tcs->num_tcs;
+
+ /*
+ * Allocate memory to cache sleep and wake requests to
+ * avoid reading TCS register memory.
+ */
+ if (tcs->type == ACTIVE_TCS)
+ continue;
+
+ tcs->cmd_cache = devm_kcalloc(&pdev->dev,
+ tcs->num_tcs * ncpt, sizeof(u32),
+ GFP_KERNEL);
+ if (!tcs->cmd_cache)
+ return -ENOMEM;
}

drv->num_tcs = st;
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:22:41

by Lina Iyer

[permalink] [raw]
Subject: [PATCH v5 03/10] drivers: qcom: rpmh-rsc: log RPMH requests in FTRACE

Log sent RPMH requests and interrupt responses in FTRACE.

Cc: Steven Rostedt <[email protected]>
Signed-off-by: Lina Iyer <[email protected]>
Reviewed-by: Steven Rostedt (VMware) <[email protected]>
---

Changes in v4:
- fix compilation issues, use __assign_str
- use %#x instead of 0x%08x
Changes in v3:
- Use __string() instead of char *
- fix TRACE_INCLUDE_PATH
---
drivers/soc/qcom/Makefile | 1 +
drivers/soc/qcom/rpmh-rsc.c | 6 +++
drivers/soc/qcom/trace-rpmh.h | 89 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 96 insertions(+)
create mode 100644 drivers/soc/qcom/trace-rpmh.h

diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 39d3a059ee50..cb6300f6a8e9 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
+CFLAGS_rpmh-rsc.o := -I$(src)
obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 8bde1e9bd599..f604101a4fc2 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -24,6 +24,9 @@

#include "rpmh-internal.h"

+#define CREATE_TRACE_POINTS
+#include "trace-rpmh.h"
+
#define RSC_DRV_TCS_OFFSET 672
#define RSC_DRV_CMD_OFFSET 20

@@ -228,6 +231,7 @@ static irqreturn_t tcs_irq_handler(int irq, void *p)
/* Reclaim the TCS */
write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
+ trace_rpmh_notify_irq(drv, resp);
clear_bit(m, drv->tcs_in_use);
send_tcs_response(resp);
}
@@ -259,6 +263,7 @@ static void tcs_notify_tx_done(unsigned long data)
}
list_del(&resp->list);
spin_unlock_irqrestore(&drv->drv_lock, flags);
+ trace_rpmh_notify_tx_done(drv, resp);
free_response(resp);
}
}
@@ -287,6 +292,7 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, j, msgid);
write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j, cmd->addr);
write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, j, cmd->data);
+ trace_rpmh_send_msg(drv, m, j, msgid, cmd);
}

write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
diff --git a/drivers/soc/qcom/trace-rpmh.h b/drivers/soc/qcom/trace-rpmh.h
new file mode 100644
index 000000000000..8d7326e684db
--- /dev/null
+++ b/drivers/soc/qcom/trace-rpmh.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#if !defined(_TRACE_RPMH_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_RPMH_H
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM rpmh
+
+#include <linux/tracepoint.h>
+#include "rpmh-internal.h"
+
+DECLARE_EVENT_CLASS(rpmh_notify,
+
+ TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
+
+ TP_ARGS(d, r),
+
+ TP_STRUCT__entry(
+ __string(name, d->name)
+ __field(int, m)
+ __field(u32, addr)
+ __field(int, errno)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, d->name);
+ __entry->m = r->m;
+ __entry->addr = r->msg->cmds[0].addr;
+ __entry->errno = r->err;
+ ),
+
+ TP_printk("%s: ack: tcs-m:%d addr: %#x errno: %d",
+ __get_str(name), __entry->m, __entry->addr, __entry->errno)
+);
+
+DEFINE_EVENT(rpmh_notify, rpmh_notify_irq,
+ TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
+ TP_ARGS(d, r)
+);
+
+DEFINE_EVENT(rpmh_notify, rpmh_notify_tx_done,
+ TP_PROTO(struct rsc_drv *d, struct tcs_response *r),
+ TP_ARGS(d, r)
+);
+
+
+TRACE_EVENT(rpmh_send_msg,
+
+ TP_PROTO(struct rsc_drv *d, int m, int n, u32 h, struct tcs_cmd *c),
+
+ TP_ARGS(d, m, n, h, c),
+
+ TP_STRUCT__entry(
+ __string(name, d->name)
+ __field(int, m)
+ __field(int, n)
+ __field(u32, hdr)
+ __field(u32, addr)
+ __field(u32, data)
+ __field(bool, wait)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, d->name);
+ __entry->m = m;
+ __entry->n = n;
+ __entry->hdr = h;
+ __entry->addr = c->addr;
+ __entry->data = c->data;
+ __entry->wait = c->wait;
+ ),
+
+ TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: %#x addr: %#x data: %#x complete: %d",
+ __get_str(name), __entry->m, __entry->n, __entry->hdr,
+ __entry->addr, __entry->data, __entry->wait)
+);
+
+#endif /* _TRACE_RPMH_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace-rpmh
+
+#include <trace/define_trace.h>
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:23:03

by Lina Iyer

[permalink] [raw]
Subject: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

Sending RPMH requests and waiting for a response from the controller
through a callback is common functionality across all platform drivers.
To simplify drivers, add library functions to create an RPMH client and
send resource state requests.

rpmh_write() is a synchronous blocking call that can be used to send
active state requests.
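
A minimal usage sketch from a client's perspective (the probe function
name and the addr/data values below are made-up placeholders, not part
of this patch):

	static int my_client_probe(struct platform_device *pdev)
	{
		struct rpmh_client *rc;
		struct tcs_cmd cmd = {
			.addr = 0x30010,	/* placeholder resource address */
			.data = 0x1,		/* placeholder resource state */
			.wait = true,
		};
		int ret;

		rc = rpmh_get_client(pdev);
		if (IS_ERR(rc))
			return PTR_ERR(rc);

		/* Blocks until the controller acks the active-only request */
		ret = rpmh_write(rc, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);

		rpmh_release(rc);
		return ret;
	}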

Signed-off-by: Lina Iyer <[email protected]>
---

Changes in v4:
- use const struct tcs_cmd in API
- remove wait count from this patch
- changed -EFAULT to -EINVAL
---
drivers/soc/qcom/Makefile | 4 +-
drivers/soc/qcom/rpmh-internal.h | 2 +
drivers/soc/qcom/rpmh-rsc.c | 7 ++
drivers/soc/qcom/rpmh.c | 253 +++++++++++++++++++++++++++++++++++++++
include/soc/qcom/rpmh.h | 34 ++++++
5 files changed, 299 insertions(+), 1 deletion(-)
create mode 100644 drivers/soc/qcom/rpmh.c
create mode 100644 include/soc/qcom/rpmh.h

diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index cb6300f6a8e9..bb395c3202ca 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -7,7 +7,9 @@ obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o
qmi_helpers-y += qmi_encdec.o qmi_interface.o
obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
-obj-$(CONFIG_QCOM_RPMH) += rpmh-rsc.o
+obj-$(CONFIG_QCOM_RPMH) += qcom_rpmh.o
+qcom_rpmh-y += rpmh-rsc.o
+qcom_rpmh-y += rpmh.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index aa73ec4b3e42..125f9faec536 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -86,4 +86,6 @@ struct rsc_drv {

int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);

+void rpmh_tx_done(const struct tcs_request *msg, int r);
+
#endif /* __RPM_INTERNAL_H__ */
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index f604101a4fc2..c5cde917dba6 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -252,6 +252,8 @@ static void tcs_notify_tx_done(unsigned long data)
struct rsc_drv *drv = (struct rsc_drv *)data;
struct tcs_response *resp;
unsigned long flags;
+ const struct tcs_request *msg;
+ int err;

for (;;) {
spin_lock_irqsave(&drv->drv_lock, flags);
@@ -264,7 +266,10 @@ static void tcs_notify_tx_done(unsigned long data)
list_del(&resp->list);
spin_unlock_irqrestore(&drv->drv_lock, flags);
trace_rpmh_notify_tx_done(drv, resp);
+ msg = resp->msg;
+ err = resp->err;
free_response(resp);
+ rpmh_tx_done(msg, err);
}
}

@@ -554,6 +559,8 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
/* Enable the active TCS to send requests immediately */
write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0, drv->tcs[ACTIVE_TCS].mask);

+ dev_set_drvdata(&pdev->dev, drv);
+
return devm_of_platform_populate(&pdev->dev);
}

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
new file mode 100644
index 000000000000..e3c7491e7baf
--- /dev/null
+++ b/drivers/soc/qcom/rpmh.c
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/atomic.h>
+#include <linux/interrupt.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/mailbox_client.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include <soc/qcom/rpmh.h>
+
+#include "rpmh-internal.h"
+
+#define RPMH_MAX_MBOXES 2
+#define RPMH_TIMEOUT_MS 10000
+
+#define DEFINE_RPMH_MSG_ONSTACK(rc, s, q, name) \
+ struct rpmh_request name = { \
+ .msg = { \
+ .state = s, \
+ .cmds = name.cmd, \
+ .num_cmds = 0, \
+ .wait_for_compl = true, \
+ }, \
+ .cmd = { { 0 } }, \
+ .completion = q, \
+ .rc = rc, \
+ }
+
+/**
+ * struct rpmh_request: the message to be sent to rpmh-rsc
+ *
+ * @msg: the request
+ * @cmd: the payload that will be part of the @msg
+ * @completion: triggered when request is done
+ * @err: err return from the controller
+ */
+struct rpmh_request {
+ struct tcs_request msg;
+ struct tcs_cmd cmd[MAX_RPMH_PAYLOAD];
+ struct completion *completion;
+ struct rpmh_client *rc;
+ int err;
+};
+
+/**
+ * struct rpmh_ctrlr: our representation of the controller
+ *
+ * @drv: the controller instance
+ */
+struct rpmh_ctrlr {
+ struct rsc_drv *drv;
+};
+
+/**
+ * struct rpmh_client: the client object
+ *
+ * @dev: the platform device that is the owner
+ * @ctrlr: the controller associated with this client.
+ */
+struct rpmh_client {
+ struct device *dev;
+ struct rpmh_ctrlr *ctrlr;
+};
+
+static struct rpmh_ctrlr rpmh_rsc[RPMH_MAX_MBOXES];
+static DEFINE_MUTEX(rpmh_ctrlr_mutex);
+
+void rpmh_tx_done(const struct tcs_request *msg, int r)
+{
+ struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
+ msg);
+ struct completion *compl = rpm_msg->completion;
+
+ rpm_msg->err = r;
+
+ if (r)
+ dev_err(rpm_msg->rc->dev,
+ "RPMH TX fail in msg addr=%#x, err=%d\n",
+ rpm_msg->msg.cmds[0].addr, r);
+
+ /* Signal the blocking thread we are done */
+ if (compl)
+ complete(compl);
+}
+EXPORT_SYMBOL(rpmh_tx_done);
+
+/**
+ * wait_for_tx_done: Wait until the response is received.
+ *
+ * @rc: The RPMH client
+ * @compl: The completion object
+ * @addr: An addr that we sent in that request
+ * @data: The data for the address in that request
+ */
+static int wait_for_tx_done(struct rpmh_client *rc,
+ struct completion *compl, u32 addr, u32 data)
+{
+ int ret;
+
+ might_sleep();
+
+ ret = wait_for_completion_timeout(compl,
+ msecs_to_jiffies(RPMH_TIMEOUT_MS));
+ if (ret)
+ dev_dbg(rc->dev,
+ "RPMH response received addr=%#x data=%#x\n",
+ addr, data);
+ else
+ dev_err(rc->dev,
+ "RPMH response timeout addr=%#x data=%#x\n",
+ addr, data);
+
+ return (ret > 0) ? 0 : -ETIMEDOUT;
+}
+
+/**
+ * __rpmh_write: send the RPMH request
+ *
+ * @rc: The RPMH client
+ * @state: Active/Sleep request type
+ * @rpm_msg: The data that needs to be sent (cmds).
+ */
+static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ struct rpmh_request *rpm_msg)
+{
+ int ret = -EINVAL;
+
+ rpm_msg->msg.state = state;
+
+ if (state == RPMH_ACTIVE_ONLY_STATE) {
+ WARN_ON(irqs_disabled());
+ ret = rpmh_rsc_send_data(rc->ctrlr->drv, &rpm_msg->msg);
+ if (!ret)
+ dev_dbg(rc->dev,
+ "RPMH request sent addr=%#x, data=%i#x\n",
+ rpm_msg->msg.cmds[0].addr,
+ rpm_msg->msg.cmds[0].data);
+ else
+ dev_warn(rc->dev,
+ "Error in RPMH request addr=%#x, data=%#x\n",
+ rpm_msg->msg.cmds[0].addr,
+ rpm_msg->msg.cmds[0].data);
+ }
+
+ return ret;
+}
+
+/**
+ * rpmh_write: Write a set of RPMH commands and block until response
+ *
+ * @rc: The RPMh handle got from rpmh_get_client
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The number of elements in @cmd
+ *
+ * May sleep. Do not call from atomic contexts.
+ */
+int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 n)
+{
+ DECLARE_COMPLETION_ONSTACK(compl);
+ DEFINE_RPMH_MSG_ONSTACK(rc, state, &compl, rpm_msg);
+ int ret;
+
+ if (IS_ERR_OR_NULL(rc) || !cmd || !n || n > MAX_RPMH_PAYLOAD)
+ return -EINVAL;
+
+ memcpy(rpm_msg.cmd, cmd, n * sizeof(*cmd));
+ rpm_msg.msg.num_cmds = n;
+
+ ret = __rpmh_write(rc, state, &rpm_msg);
+ if (ret)
+ return ret;
+
+ return wait_for_tx_done(rc, &compl, cmd[0].addr, cmd[0].data);
+}
+EXPORT_SYMBOL(rpmh_write);
+
+static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
+{
+ int i;
+ struct rsc_drv *drv = dev_get_drvdata(pdev->dev.parent);
+ struct rpmh_ctrlr *ctrlr = ERR_PTR(-EINVAL);
+
+ if (!drv)
+ return ctrlr;
+
+ mutex_lock(&rpmh_ctrlr_mutex);
+ for (i = 0; i < RPMH_MAX_MBOXES; i++) {
+ if (rpmh_rsc[i].drv == drv) {
+ ctrlr = &rpmh_rsc[i];
+ goto unlock;
+ }
+ }
+
+ for (i = 0; i < RPMH_MAX_MBOXES; i++) {
+ if (rpmh_rsc[i].drv == NULL) {
+ ctrlr = &rpmh_rsc[i];
+ ctrlr->drv = drv;
+ break;
+ }
+ }
+ WARN_ON(i == RPMH_MAX_MBOXES);
+unlock:
+ mutex_unlock(&rpmh_ctrlr_mutex);
+ return ctrlr;
+}
+
+/**
+ * rpmh_get_client: Get the RPMh handle
+ *
+ * @pdev: the platform device which needs to communicate with RPM
+ * accelerators
+ * May sleep.
+ */
+struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
+{
+ struct rpmh_client *rc;
+
+ rc = kzalloc(sizeof(*rc), GFP_KERNEL);
+ if (!rc)
+ return ERR_PTR(-ENOMEM);
+
+ rc->dev = &pdev->dev;
+ rc->ctrlr = get_rpmh_ctrlr(pdev);
+ if (IS_ERR(rc->ctrlr)) {
+ kfree(rc);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return rc;
+}
+EXPORT_SYMBOL(rpmh_get_client);
+
+/**
+ * rpmh_release: Release the RPMH client
+ *
+ * @rc: The RPMh handle to be freed.
+ */
+void rpmh_release(struct rpmh_client *rc)
+{
+ kfree(rc);
+}
+EXPORT_SYMBOL(rpmh_release);
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
new file mode 100644
index 000000000000..95334d4c1ede
--- /dev/null
+++ b/include/soc/qcom/rpmh.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __SOC_QCOM_RPMH_H__
+#define __SOC_QCOM_RPMH_H__
+
+#include <soc/qcom/tcs.h>
+#include <linux/platform_device.h>
+
+struct rpmh_client;
+
+#if IS_ENABLED(CONFIG_QCOM_RPMH)
+int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 n);
+
+struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
+
+void rpmh_release(struct rpmh_client *rc);
+
+#else
+
+static inline int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
+ const struct tcs_cmd *cmd, u32 n)
+{ return -ENODEV; }
+
+static inline struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
+{ return ERR_PTR(-ENODEV); }
+
+static inline void rpmh_release(struct rpmh_client *rc) { }
+#endif /* CONFIG_QCOM_RPMH */
+
+#endif /* __SOC_QCOM_RPMH_H__ */
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 16:23:10

by Lina Iyer

[permalink] [raw]
Subject: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

Add device binding documentation for Qualcomm Technology Inc's RPMH RSC
driver. The driver is used for communicating resource state requests for
shared resources.

Cc: [email protected]
Signed-off-by: Lina Iyer <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
---

Changes in v3:
- Move to soc/qcom
- Amend text per Stephen's suggestions

Changes in v2:
- Amend text to describe the registers in reg property
- Add reg-names for the registers
- Update examples to use GIC_SPI in interrupts instead of 0
- Rephrase incorrect description

Changes in v3:
- Fix unwanted capitalization
- Remove clients from the examples, this doc does not describe
them
- Rephrase introductory paragraph
- Remove hardware specifics from DT bindings
---
.../devicetree/bindings/soc/qcom/rpmh-rsc.txt | 127 +++++++++++++++++++++
1 file changed, 127 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt

diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
new file mode 100644
index 000000000000..dcf71a5b302f
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
@@ -0,0 +1,127 @@
+RPMH RSC:
+------------
+
+Resource Power Manager Hardened (RPMH) is the mechanism for communicating with
+the hardened resource accelerators on Qualcomm SoCs. Requests to the resources
+can be written to the Trigger Command Set (TCS) registers and using a (addr,
+val) pair and triggered. Messages in the TCS are then sent in sequence over an
+internal bus.
+
+The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity
+(Resource State Coordinator a.k.a RSC) that can handle a multiple sleep and
+active/wake resource requests. Multiple such DRVs can exist in a SoC and can
+be written to from Linux. The structure of each DRV follows the same template
+with a few variations that are captured by the properties here.
+
+A TCS may be triggered from Linux or triggered by the F/W after all the CPUs
+have powered off to facilitate idle power saving. TCS could be classified as -
+
+ SLEEP, /* Triggered by F/W */
+ WAKE, /* Triggered by F/W */
+ ACTIVE, /* Triggered by Linux */
+ CONTROL /* Triggered by F/W */
+
+The order in which they are described in the DT, should match the hardware
+configuration.
+
+Requests can be made for the state of a resource, when the subsystem is active
+or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state
+will be an aggregate of the sleep votes from each of those subsystems. Clients
+may request a sleep value for their shared resources in addition to the active
+mode requests.
+
+Properties:
+
+- compatible:
+ Usage: required
+ Value type: <string>
+ Definition: Should be "qcom,rpmh-rsc".
+
+- reg:
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: The first register specifies the base address of the DRV.
+ The second register specifies the start address of the
+ TCS.
+
+- reg-names:
+ Usage: required
+ Value type: <string>
+ Definition: Maps the register specified in the reg property. Must be
+ "drv" and "tcs".
+
+- interrupts:
+ Usage: required
+ Value type: <prop-encoded-interrupt>
+ Definition: The interrupt that trips when a message complete/response
+ is received for this DRV from the accelerators.
+
+- qcom,drv-id:
+ Usage: required
+ Value type: <u32>
+ Definition: the id of the DRV in the RSC block.
+
+- qcom,tcs-config:
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: the tuple defining the configuration of TCS.
+ Must have 2 cells which describe each TCS type.
+ <type number_of_tcs>.
+ The order of the TCS must match the hardware
+ configuration.
+ - Cell #1 (TCS Type): TCS types to be specified -
+ SLEEP_TCS
+ WAKE_TCS
+ ACTIVE_TCS
+ CONTROL_TCS
+ - Cell #2 (Number of TCS): <u32>
+
+- label:
+ Usage: optional
+ Value type: <string>
+ Definition: Name for the RSC. The name would be used in trace logs.
+
+Drivers that want to use the RSC to communicate with RPMH must specify their
+bindings as child of the RSC controllers they wish to communicate with.
+
+Example 1:
+
+For a TCS whose RSC base address is 0x179C0000 and is at a DRV id of 2, the
+register offsets for DRV2 start at 0D00, the register calculations are like
+this -
+First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
+Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
+
+ apps_rsc: rsc@179e000 {
+ label = "apps_rsc";
+ compatible = "qcom,rpmh-rsc";
+ reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
+ reg-names = "drv", "tcs";
+ interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,drv-id = <2>;
+ qcom,tcs-config = <SLEEP_TCS 3>,
+ <WAKE_TCS 3>,
+ <ACTIVE_TCS 2>,
+ <CONTROL_TCS 1>;
+ };
+
+Example 2:
+
+For a TCS whose RSC base address is 0xAF20000 and is at DRV id of 0, the
+register offsets for DRV0 start at 01C00, the register calculations are like
+this -
+First tuple: 0xAF20000
+Second tuple: 0xAF20000 + 0x1C00 = 0xAF21C00
+
+ disp_rsc: rsc@af20000 {
+ label = "disp_rsc";
+ compatible = "qcom,rpmh-rsc";
+ reg = <0xaf20000 0x10000>, <0xaf21c00 0x3000>;
+ reg-names = "drv", "tcs";
+ interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,drv-id = <0>;
+ qcom,tcs-config = <SLEEP_TCS 1>,
+ <WAKE_TCS 1>,
+ <ACTIVE_TCS 0>,
+ <CONTROL_TCS 0>;
+ };
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2018-04-05 18:34:15

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v5 03/10] drivers: qcom: rpmh-rsc: log RPMH requests in FTRACE

On Thu, 5 Apr 2018 10:18:27 -0600
Lina Iyer <[email protected]> wrote:

> Log sent RPMH requests and interrupt responses in FTRACE.
>
> Cc: Steven Rostedt <[email protected]>
> Signed-off-by: Lina Iyer <[email protected]>
> Reviewed-by: Steven Rostedt (VMware) <[email protected]>

Still looks good.

-- Steve


2018-04-07 01:18:33

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

Quoting Lina Iyer (2018-04-05 09:18:26)
> diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
> new file mode 100644
> index 000000000000..dcf71a5b302f
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
> @@ -0,0 +1,127 @@
> +RPMH RSC:
> +------------
> +
> +Resource Power Manager Hardened (RPMH) is the mechanism for communicating with
> +the hardened resource accelerators on Qualcomm SoCs. Requests to the resources
> +can be written to the Trigger Command Set (TCS) registers and using a (addr,
> +val) pair and triggered. Messages in the TCS are then sent in sequence over an
> +internal bus.
> +
> +The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity
> +(Resource State Coordinator a.k.a RSC) that can handle a multiple sleep and

s/ a / /

> +active/wake resource requests. Multiple such DRVs can exist in a SoC and can
> +be written to from Linux. The structure of each DRV follows the same template
> +with a few variations that are captured by the properties here.
> +
> +A TCS may be triggered from Linux or triggered by the F/W after all the CPUs
> +have powered off to facilitate idle power saving. TCS could be classified as -

s/ -/:/

> +
> + SLEEP, /* Triggered by F/W */
> + WAKE, /* Triggered by F/W */
> + ACTIVE, /* Triggered by Linux */
> + CONTROL /* Triggered by F/W */

Drop the commas?

> +
> +The order in which they are described in the DT, should match the hardware
> +configuration.
> +
> +Requests can be made for the state of a resource, when the subsystem is active
> +or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state
> +will be an aggregate of the sleep votes from each of those subsystems. Clients
> +may request a sleep value for their shared resources in addition to the active
> +mode requests.
> +
> +Properties:
> +
> +- compatible:
> + Usage: required
> + Value type: <string>
> + Definition: Should be "qcom,rpmh-rsc".
> +
> +- reg:
> + Usage: required
> + Value type: <prop-encoded-array>
> + Definition: The first register specifies the base address of the DRV.
> + The second register specifies the start address of the
> + TCS.
> +
> +- reg-names:
> + Usage: required
> + Value type: <string>
> + Definition: Maps the register specified in the reg property. Must be
> + "drv" and "tcs".
> +
> +- interrupts:
> + Usage: required
> + Value type: <prop-encoded-interrupt>
> + Definition: The interrupt that trips when a message complete/response
> + is received for this DRV from the accelerators.
> +
> +- qcom,drv-id:
> + Usage: required
> + Value type: <u32>
> + Definition: the id of the DRV in the RSC block.
> +
> +- qcom,tcs-config:
> + Usage: required
> + Value type: <prop-encoded-array>
> + Definition: the tuple defining the configuration of TCS.
> + Must have 2 cells which describe each TCS type.
> + <type number_of_tcs>.
> + The order of the TCS must match the hardware
> + configuration.
> + - Cell #1 (TCS Type): TCS types to be specified -
> + SLEEP_TCS
> + WAKE_TCS
> + ACTIVE_TCS
> + CONTROL_TCS
> + - Cell #2 (Number of TCS): <u32>
> +
> +- label:
> + Usage: optional
> + Value type: <string>
> + Definition: Name for the RSC. The name would be used in trace logs.
> +
> +Drivers that want to use the RSC to communicate with RPMH must specify their
> +bindings as child of the RSC controllers they wish to communicate with.

s/child/child nodes/


> +
> +Example 1:
> +
> +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
> +register offsets for DRV2 start at 0D00, the register calculations are like
> +this -
> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
> +
> + apps_rsc: rsc@179e000 {
> + label = "apps_rsc";
> + compatible = "qcom,rpmh-rsc";
> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;

The first reg property overlaps the second one. Does this second one
ever move around? I would hardcode it in the driver to be 0xd00 away
from the drv base instead of specifying it in DT if it's the same all
the time.

Also, the example shows 0x179c0000 which I guess is the actual beginning
of the RSC block. So the binding seems to be for one DRV inside of an
RSC. Can we get the full description of the RSC in the binding instead?
I imagine that means there's a DRV0,1,2 and those probably have an
interrupt per each DRV and then a different TCS config per each one too?
If the binding can describe all of the RSC then we can use different
DRVs by changing the qcom,drv-id property.

rsc@179c0000 {
compatible = "qcom,rpmh-rsc";
reg = <0x179c0000 0x10000>,
<0x179d0000 0x10000>,
<0x179e0000 0x10000>;
qcom,tcs-offset = <0xd00>;
qcom,drv-id = <0/1/2>;
interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
}

This is sort of what I imagine it would look like. I have no idea how
the tcs config would work unless each DRV has the same TCS config
though. Otherwise, if each node is for a drv, then I would expect the
node would be called 'drv' and we wouldn't need the drv-id property and
the compatible string would say drv instead of rsc?

BTW, what are the other DRVs used for in the apps RSC?

> + reg-names = "drv", "tcs";
> + interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
> + qcom,drv-id = <2>;
> + qcom,tcs-config = <SLEEP_TCS 3>,
> + <WAKE_TCS 3>,
> + <ACTIVE_TCS 2>,
> + <CONTROL_TCS 1>;
> + };
> +
> +Example 2:
> +
> +For a TCS whose RSC base address is 0xAF20000 and is at DRV id of 0, the
> +register offsets for DRV0 start at 01C00, the register calculations are like
> +this -
> +First tuple: 0xAF20000
> +Second tuple: 0xAF20000 + 0x1C00 = 0xAF21C00
> +
> + disp_rsc: rsc@af20000 {
> + label = "disp_rsc";
> + compatible = "qcom,rpmh-rsc";
> + reg = <0xaf20000 0x10000>, <0xaf21c00 0x3000>;

Ok. The TCS offset seems totally random now.

> + reg-names = "drv", "tcs";
> + interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>;
> + qcom,drv-id = <0>;
> + qcom,tcs-config = <SLEEP_TCS 1>,
> + <WAKE_TCS 1>,
> + <ACTIVE_TCS 0>,
> + <CONTROL_TCS 0>;
> + };

2018-04-07 01:25:12

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

Quoting Lina Iyer (2018-04-05 09:18:28)
> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
> new file mode 100644
> index 000000000000..95334d4c1ede
> --- /dev/null
> +++ b/include/soc/qcom/rpmh.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#ifndef __SOC_QCOM_RPMH_H__
> +#define __SOC_QCOM_RPMH_H__
> +
> +#include <soc/qcom/tcs.h>
> +#include <linux/platform_device.h>
> +
> +struct rpmh_client;
> +
> +#if IS_ENABLED(CONFIG_QCOM_RPMH)
> +int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> + const struct tcs_cmd *cmd, u32 n);
> +
> +struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
> +
> +void rpmh_release(struct rpmh_client *rc);

Please get rid of this 'client' layer and fold it into the rpmh driver.
Everything that uses the rpmh_client is a child device of the rpmh
device so they should be able to just pass in their device pointer as
their 'handle' and have the rpmh driver take that, get the parent device
pointer, and pull an rpmh_drv structure out of there. The 'common' code
can go into the base rpmh driver and get used from there and then we
don't have to hop between two files to see how rpmh is used by the
consumers. Code complexity goes down this way.
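
For reference, a rough sketch of that direction (the helper name is
invented here; it relies on the drv pointer stored with
dev_set_drvdata() in rpmh_rsc_probe() and on consumers being populated
as children of the RSC node):

	static struct rsc_drv *get_rsc_drv(const struct device *dev)
	{
		/* dev is an rpmh consumer; its parent is the RSC device */
		return dev_get_drvdata(dev->parent);
	}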

2018-04-09 15:40:52

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

On Fri, Apr 06 2018 at 19:21 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-05 09:18:28)
>> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
>> new file mode 100644
>> index 000000000000..95334d4c1ede
>> --- /dev/null
>> +++ b/include/soc/qcom/rpmh.h
>> @@ -0,0 +1,34 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#ifndef __SOC_QCOM_RPMH_H__
>> +#define __SOC_QCOM_RPMH_H__
>> +
>> +#include <soc/qcom/tcs.h>
>> +#include <linux/platform_device.h>
>> +
>> +struct rpmh_client;
>> +
>> +#if IS_ENABLED(CONFIG_QCOM_RPMH)
>> +int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
>> + const struct tcs_cmd *cmd, u32 n);
>> +
>> +struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
>> +
>> +void rpmh_release(struct rpmh_client *rc);
>
>Please get rid of this 'client' layer and fold it into the rpmh driver.
>Everything that uses the rpmh_client is a child device of the rpmh
>device so they should be able to just pass in their device pointer as
>their 'handle' and have the rpmh driver take that, get the parent device
>pointer, and pull an rpmh_drv structure out of there. The 'common' code
>can go into the base rpmh driver and get used from there and then we
>don't have to hop between two files to see how rpmh is used by the
>consumers. Code complexity goes down this way.

That would not be a good idea. This layer is not just providing an
API interface. There is resource buffering, handling of memory for
requests and downstream quirks and debug going on in this layer. It
would be unwise to clobber the hardware centric rpmh-rsc layer. If you
look at the series as a whole, you would understand why this is
necessary. I plan to build more on top of these patches in the future as
we add support for system low power modes. The complexity doesn't go
away, it just gets thrown into another file, which is already decently
sized.

I could try to use the device as a handle, and internally work on
getting the drv and other information from it, if that helps. But I do
not want to clobber these two files together. It doesn't help
maintainability.

Thanks,
Lina

2018-04-09 16:12:52

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-05 09:18:26)
>> diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>> new file mode 100644
>> index 000000000000..dcf71a5b302f
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>> @@ -0,0 +1,127 @@
>> +RPMH RSC:
>> +------------
>> +
>> +Resource Power Manager Hardened (RPMH) is the mechanism for communicating with
>> +the hardened resource accelerators on Qualcomm SoCs. Requests to the resources
>> +can be written to the Trigger Command Set (TCS) registers and using a (addr,
>> +val) pair and triggered. Messages in the TCS are then sent in sequence over an
>> +internal bus.
>> +
>> +The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity
>> +(Resource State Coordinator a.k.a RSC) that can handle a multiple sleep and
>
>s/ a / /
>
>> +active/wake resource requests. Multiple such DRVs can exist in a SoC and can
>> +be written to from Linux. The structure of each DRV follows the same template
>> +with a few variations that are captured by the properties here.
>> +
>> +A TCS may be triggered from Linux or triggered by the F/W after all the CPUs
>> +have powered off to facilitate idle power saving. TCS could be classified as -
>
>s/ -/:/
>
>> +
>> + SLEEP, /* Triggered by F/W */
>> + WAKE, /* Triggered by F/W */
>> + ACTIVE, /* Triggered by Linux */
>> + CONTROL /* Triggered by F/W */
>
>Drop the commas?
>
>> +
>> +The order in which they are described in the DT, should match the hardware
>> +configuration.
>> +
>> +Requests can be made for the state of a resource, when the subsystem is active
>> +or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state
>> +will be an aggregate of the sleep votes from each of those subsystems. Clients
>> +may request a sleep value for their shared resources in addition to the active
>> +mode requests.
>> +
>> +Properties:
>> +
>> +- compatible:
>> + Usage: required
>> + Value type: <string>
>> + Definition: Should be "qcom,rpmh-rsc".
>> +
>> +- reg:
>> + Usage: required
>> + Value type: <prop-encoded-array>
>> + Definition: The first register specifies the base address of the DRV.
>> + The second register specifies the start address of the
>> + TCS.
>> +
>> +- reg-names:
>> + Usage: required
>> + Value type: <string>
>> + Definition: Maps the register specified in the reg property. Must be
>> + "drv" and "tcs".
>> +
>> +- interrupts:
>> + Usage: required
>> + Value type: <prop-encoded-interrupt>
>> + Definition: The interrupt that trips when a message complete/response
>> + is received for this DRV from the accelerators.
>> +
>> +- qcom,drv-id:
>> + Usage: required
>> + Value type: <u32>
>> + Definition: the id of the DRV in the RSC block.
>> +
>> +- qcom,tcs-config:
>> + Usage: required
>> + Value type: <prop-encoded-array>
>> + Definition: the tuple defining the configuration of TCS.
>> + Must have 2 cells which describe each TCS type.
>> + <type number_of_tcs>.
>> + The order of the TCS must match the hardware
>> + configuration.
>> + - Cell #1 (TCS Type): TCS types to be specified -
>> + SLEEP_TCS
>> + WAKE_TCS
>> + ACTIVE_TCS
>> + CONTROL_TCS
>> + - Cell #2 (Number of TCS): <u32>
>> +
>> +- label:
>> + Usage: optional
>> + Value type: <string>
>> + Definition: Name for the RSC. The name would be used in trace logs.
>> +
>> +Drivers that want to use the RSC to communicate with RPMH must specify their
>> +bindings as child of the RSC controllers they wish to communicate with.
>
>s/child/child nodes/
>
>
Ok to these as well as the above.

>> +
>> +Example 1:
>> +
>> +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
>> +register offsets for DRV2 start at 0D00, the register calculations are like
>> +this -
>> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
>> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
>> +
>> + apps_rsc: rsc@179e000 {
>> + label = "apps_rsc";
>> + compatible = "qcom,rpmh-rsc";
>> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
>
>The first reg property overlaps the second one. Does this second one
>ever move around? I would hardcode it in the driver to be 0xd00 away
>from the drv base instead of specifying it in DT if it's the same all
>the time.
>
>Also, the example shows 0x179c0000 which I guess is the actual beginning
>of the RSC block. So the binding seems to be for one DRV inside of an
>RSC. Can we get the full description of the RSC in the binding instead?
>I imagine that means there's a DRV0,1,2 and those probably have an
>interrupt per each DRV and then a different TCS config per each one too?
>If the binding can describe all of the RSC then we can use different
>DRVs by changing the qcom,drv-id property.
>
> rsc@179c0000 {
> compatible = "qcom,rpmh-rsc";
> reg = <0x179c0000 0x10000>,
> <0x179d0000 0x10000>,
> <0x179e0000 0x10000>;
> qcom,tcs-offset = <0xd00>;
> qcom,drv-id = <0/1/2>;
> interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
> <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
> <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
> }
>
>This is sort of what I imagine it would look like. I have no idea how
>the tcs config would work unless each DRV has the same TCS config
>though. Otherwise, if each node is for a drv, then I would expect the
>node would be called 'drv' and we wouldn't need the drv-id property and
>the compatible string would say drv instead of rsc?
>
>BTW, what are the other DRVs used for in the apps RSC?
>
The DRV is the voter for an execution environment (Linux, Hypervisor,
ATF) in the RSC. The RSC has a lot of other registers that Linux is not
privy to. They are access restricted. The memory organization of the RSC
mandates that we know the DRV id to access registers specific to the
DRV. Unfortunately, not all RSC have identical DRV configuration and the
register space is also variable depending on the capability of the RSC.
There are functionalities supported by other RSCs in the SoC that are
not supported by the RSC associated with the application processor,
while not many RSCs support multiple DRVs. Therefore there is no benefit
in describing the whole RSC, as it is not usable from Linux (because of
access restrictions).

>> + reg-names = "drv", "tcs";
>> + interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
>> + qcom,drv-id = <2>;
>> + qcom,tcs-config = <SLEEP_TCS 3>,
>> + <WAKE_TCS 3>,
>> + <ACTIVE_TCS 2>,
>> + <CONTROL_TCS 1>;
>> + };
>> +
>> +Example 2:
>> +
>> +For a TCS whose RSC base address is 0xAF20000 and is at DRV id of 0, the
>> +register offsets for DRV0 start at 01C00, the register calculations are like
>> +this -
>> +First tuple: 0xAF20000
>> +Second tuple: 0xAF20000 + 0x1C00 = 0xAF21C00
>> +
>> + disp_rsc: rsc@af20000 {
>> + label = "disp_rsc";
>> + compatible = "qcom,rpmh-rsc";
>> + reg = <0xaf20000 0x10000>, <0xaf21c00 0x3000>;
>
>Ok. The TCS offset seems totally random now.
>
Yes it would appear so. Because the register space is optimized based on
the functionality supported by the RSC, the TCS for a DRV is at a
different offset in the RSC. Hence the explicit description of the
address in the binding.

Thanks,
Lina

>> + reg-names = "drv", "tcs";
>> + interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>;
>> + qcom,drv-id = <0>;
>> + qcom,tcs-config = <SLEEP_TCS 1>,
>> + <WAKE_TCS 1>,
>> + <ACTIVE_TCS 0>,
>> + <CONTROL_TCS 0>;
>> + };

2018-04-10 19:40:11

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

On Mon 09 Apr 09:08 PDT 2018, Lina Iyer wrote:

> On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
> > Quoting Lina Iyer (2018-04-05 09:18:26)
> > > diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
[..]
> > > +Example 1:
> > > +
> > > +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
> > > +register offsets for DRV2 start at 0D00, the register calculations are like
> > > +this -
> > > +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
> > > +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
> > > +
> > > + apps_rsc: rsc@179e000 {
> > > + label = "apps_rsc";
> > > + compatible = "qcom,rpmh-rsc";
> > > + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
> >
> > The first reg property overlaps the second one. Does this second one
> > ever move around? I would hardcode it in the driver to be 0xd00 away
> > from the drv base instead of specifying it in DT if it's the same all
> > the time.
[..]
> >
> The DRV is the voter for an execution environment (Linux, Hypervisor,
> ATF) in the RSC. The RSC has a lot of other registers that Linux is not
> privy to. They are access restricted. The memory organization of the RSC
> mandates that we know the DRV id to access registers specific to the
> DRV. Unfortunately, not all RSC have identical DRV configuration and the
> register space is also variable depending on the capability of the RSC.
> There are functionalities supported by other RSCs in the SoC that are
> not supported by the RSC associated with the application processor,
> while not many RSCs' support multiple DRVs. Therefore it doesn't benefit
> describing the whole RSC as it is not usable from Linux (because of
> access restrictions).
>

I generally prefer that we describe the hardware blocks as accurately as
possible, instead of applying current restrictions on Linux onto the
description. This ensures that we can reuse the binding and drivers in
configurations not considered today. However, afaict we still have the
problem that we need a way to express where in the RSC our TCS sits.

Regardless of what's right or not, the given example causes the driver
to fail probing, so something needs to be changed. (Making the drv size
0xd00 is functional but doesn't really relate to any bondary in the
register space).

Regards,
Bjorn

2018-04-10 23:45:59

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

On Thu 05 Apr 09:18 PDT 2018, Lina Iyer wrote:
[..]
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
[..]
> +/**
> + * struct tcs_response: Response object for a request
> + *
> + * @drv: the controller
> + * @msg: the request for this response
> + * @m: the tcs identifier
> + * @err: error reported in the response
> + * @list: element in list of pending response objects
> + */
> +struct tcs_response {
> + struct rsc_drv *drv;
> + const struct tcs_request *msg;
> + u32 m;

m is assigned in one place but never used.

> + int err;
> + struct list_head list;
> +};
[..]
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
[..]
> +static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
> +{
> + struct tcs_group *tcs;
> + int i;
> +
> + for (i = 0; i < drv->num_tcs; i++) {
> + tcs = &drv->tcs[i];
> + if (tcs->mask & BIT(m))
> + return tcs;
> + }
> +
> + WARN(i == drv->num_tcs, "Incorrect TCS index %d", m);
> +
> + return NULL;
> +}
> +
> +static struct tcs_response *setup_response(struct rsc_drv *drv,
> + const struct tcs_request *msg, int m)
> +{
> + struct tcs_response *resp;
> + struct tcs_group *tcs;
> +
> + resp = kzalloc(sizeof(*resp), GFP_ATOMIC);

I still don't like the idea that you allocate a response struct for each
request, then upon getting an ack post this on a list and schedule a
tasklet in order to optionally deliver the return value to the waiting
caller.

Why don't you just just add the "err" and a completion to the
tcs_request struct and if it's a sync operation you complete that in
your irq handler?

That would remove the response struct, the list of them, the tasklet and
the dynamic memory handling - at the "cost" of making the code possible
to follow.
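
A rough sketch of that idea (the helper name and the compl/err fields on
tcs_request are invented here; it assumes the in-flight request is
tracked per TCS):

	static void tcs_complete(struct rsc_drv *drv, int m,
				 struct tcs_request *msg, int err)
	{
		/* Reclaim the TCS and wake the synchronous caller directly */
		write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
		write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
		clear_bit(m, drv->tcs_in_use);

		msg->err = err;
		if (msg->compl)
			complete(msg->compl);
	}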

> + if (!resp)
> + return ERR_PTR(-ENOMEM);
> +
> + resp->drv = drv;
> + resp->msg = msg;
> + resp->err = 0;
> +
> + tcs = get_tcs_from_index(drv, m);
> + if (!tcs)
> + return ERR_PTR(-EINVAL);
> +
> + assert_spin_locked(&tcs->lock);

I tried to boot the kernel with the rpmh-clk and rpmh-regulator drivers
and I kept hitting this assert.

Turns out that find_free_tcs() finds an empty TCS with index 'm' within
the tcs, then passes it to setup_response() which tries to use the 'm'
to figure out which tcs contains the TCS we're operating on.

But as 'm' is in tcs-local space and get_tcs_from_index() tries to
lookup the TCS in the global drv space we get hold of the wrong TCS.

> + tcs->responses[m - tcs->offset] = resp;
> +
> + return resp;
> +}
> +
> +static void free_response(struct tcs_response *resp)
> +{
> + kfree(resp);
> +}
> +
> +static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
> +{
> + struct tcs_group *tcs = get_tcs_from_index(drv, m);
> +
> + return tcs->responses[m - tcs->offset];
> +}
> +
> +static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
> +{
> + return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> + RSC_DRV_CMD_OFFSET * n);
> +}
> +
> +static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n, u32 data)
> +{
> + writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> + RSC_DRV_CMD_OFFSET * n);

Do you really want this relaxed? Isn't the ordering of these
significant?

> +}
> +
> +static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
> + u32 data)
> +{
> + write_tcs_reg(drv, reg, m, n, data);
> + for (;;) {
> + if (data == read_tcs_reg(drv, reg, m, n))
> + break;
> + udelay(1);
> + }
> +}
> +
> +static bool tcs_is_free(struct rsc_drv *drv, int m)
> +{
> + return !test_bit(m, drv->tcs_in_use) &&
> + read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
> +}
> +
> +static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)

According to rpmh_rsc_probe() the tcs array is indexed by "type", so you
can replace the entire function with:

return &drv->tcs[type];

> +{
> + int i;
> + struct tcs_group *tcs;
> +
> + for (i = 0; i < TCS_TYPE_NR; i++) {
> + if (type == drv->tcs[i].type)
> + break;
> + }
> +
> + if (i == TCS_TYPE_NR)
> + return ERR_PTR(-EINVAL);
> +
> + tcs = &drv->tcs[i];
> + if (!tcs->num_tcs)
> + return ERR_PTR(-EINVAL);
> +
> + return tcs;
> +}
> +
> +static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
> + const struct tcs_request *msg)
> +{
> + int type;
> +
> + switch (msg->state) {
> + case RPMH_ACTIVE_ONLY_STATE:
> + type = ACTIVE_TCS;
> + break;
> + default:
> + return ERR_PTR(-EINVAL);
> + }
> +
> + return get_tcs_of_type(drv, type);
> +}
> +
> +static void send_tcs_response(struct tcs_response *resp)
> +{
> + struct rsc_drv *drv;
> + unsigned long flags;
> +
> + if (!resp)
> + return;
> +
> + drv = resp->drv;
> + spin_lock_irqsave(&drv->drv_lock, flags);
> + INIT_LIST_HEAD(&resp->list);
> + list_add_tail(&resp->list, &drv->response_pending);
> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> +
> + tasklet_schedule(&drv->tasklet);
> +}
> +
> +/**
> + * tcs_irq_handler: TX Done interrupt handler
> + */
> +static irqreturn_t tcs_irq_handler(int irq, void *p)
> +{
> + struct rsc_drv *drv = p;
> + int m, i;
> + u32 irq_status, sts;
> + struct tcs_response *resp;
> + struct tcs_cmd *cmd;
> +
> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
> +
> + for (m = 0; m < drv->num_tcs; m++) {
> + if (!(irq_status & (u32)BIT(m)))
> + continue;
> +
> + resp = get_response(drv, m);
> + if (WARN_ON(!resp))

This will only ever fail in the beginning of time; once you've
utilized every TCS at least once, resp will never be NULL, as you never
clear it.

> + goto skip_resp;
> +
> + resp->err = 0;
> + for (i = 0; i < resp->msg->num_cmds; i++) {
> + cmd = &resp->msg->cmds[i];
> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
> + if (!(sts & CMD_STATUS_ISSUED) ||
> + ((resp->msg->wait_for_compl || cmd->wait) &&
> + !(sts & CMD_STATUS_COMPL))) {
> + resp->err = -EIO;
> + break;
> + }
> + }
> +skip_resp:
> + /* Reclaim the TCS */
> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
> + clear_bit(m, drv->tcs_in_use);
> + send_tcs_response(resp);

As I suggested above, rather than putting resp on a list and scheduling a
tasklet to free it and possibly deliver the value of "err" to a client, just
keep track of the current msg for the TCS for sync operations, set "err"
and fire the completion (and then untie the request from the TCS).

> + }
> +
> + return IRQ_HANDLED;
> +}
> +
> +/**
> + * tcs_notify_tx_done: TX Done for requests that got a response
> + *
> + * @data: the tasklet argument
> + *
> + * Tasklet function to notify MBOX that we are done with the request.
> + * Handles all pending reponses whenever run.

This is accidental complexity from the downstream use of the mailbox
framework; we don't need it.

> + */
> +static void tcs_notify_tx_done(unsigned long data)
> +{
> + struct rsc_drv *drv = (struct rsc_drv *)data;
> + struct tcs_response *resp;
> + unsigned long flags;
> +
> + for (;;) {
> + spin_lock_irqsave(&drv->drv_lock, flags);
> + resp = list_first_entry_or_null(&drv->response_pending,
> + struct tcs_response, list);
> + if (!resp) {
> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> + break;
> + }
> + list_del(&resp->list);
> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> + free_response(resp);
> + }
> +}
> +
> +static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
> + const struct tcs_request *msg)
> +{
> + u32 msgid, cmd_msgid;
> + u32 cmd_enable = 0;
> + u32 cmd_complete;
> + struct tcs_cmd *cmd;
> + int i, j;
> +
> + cmd_msgid = CMD_MSGID_LEN;
> + cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0;
> + cmd_msgid |= CMD_MSGID_WRITE;
> +
> + cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
> +
> + for (i = 0, j = n; i < msg->num_cmds; i++, j++) {
> + cmd = &msg->cmds[i];
> + cmd_enable |= BIT(j);
> + cmd_complete |= cmd->wait << j;
> + msgid = cmd_msgid;
> + msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0;
> + write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, j, msgid);
> + write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j, cmd->addr);
> + write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, j, cmd->data);
> + }
> +
> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
> + cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
> +}
> +
> +static void __tcs_trigger(struct rsc_drv *drv, int m)
> +{
> + u32 enable;

"enable"?

> +
> + /*
> + * HW req: Clear the DRV_CONTROL and enable TCS again
> + * While clearing ensure that the AMC mode trigger is cleared
> + * and then the mode enable is cleared.
> + */
> + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
> + enable &= ~TCS_AMC_MODE_TRIGGER;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> + enable &= ~TCS_AMC_MODE_ENABLE;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> +
> + /* Enable the AMC mode on the TCS and then trigger the TCS */
> + enable = TCS_AMC_MODE_ENABLE;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> + enable |= TCS_AMC_MODE_TRIGGER;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> +}
> +
> +static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
> + const struct tcs_request *msg)
> +{
> + unsigned long curr_enabled;
> + u32 addr;
> + int i, j, k;
> + int m = tcs->offset;
> +
> + for (i = 0; i < tcs->num_tcs; i++, m++) {
> + if (tcs_is_free(drv, m))
> + continue;
> +
> + curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
> +
> + for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
> + addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j);
> + for (k = 0; k < msg->num_cmds; k++) {
> + if (addr == msg->cmds[k].addr)
> + return -EBUSY;
> + }
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int find_free_tcs(struct tcs_group *tcs)
> +{
> + int m;
> +
> + for (m = 0; m < tcs->num_tcs; m++) {
> + if (tcs_is_free(tcs->drv, tcs->offset + m))
> + return m;

The returned index is within the tcs but is passed to setup_response()
where it's used as the index of the TCS, so this needs to return
tcs->offset + m so that setup_response() will be able to find the tcs
again.

> + }
> +
> + return -EBUSY;
> +}
> +
> +static int tcs_mbox_write(struct rsc_drv *drv, const struct tcs_request *msg)
> +{
> + struct tcs_group *tcs;
> + int m;
> + struct tcs_response *resp = NULL;

No need to initialize resp.

> + unsigned long flags;
> + int ret;
> +
> + tcs = get_tcs_for_msg(drv, msg);
> + if (IS_ERR(tcs))
> + return PTR_ERR(tcs);
> +
> + spin_lock_irqsave(&tcs->lock, flags);
> + m = find_free_tcs(tcs);
> + if (m < 0) {
> + ret = m;
> + goto done_write;
> + }
> +
> + /*
> + * The h/w does not like if we send a request to the same address,
> + * when one is already in-flight or being processed.
> + */
> + ret = check_for_req_inflight(drv, tcs, msg);

This scans all TCS in the DRV for any operations on msg->cmds[*].addr,
but you're only holding a lock for tcs. Either cross-tcs operations
don't matter and check_for_req_inflight() can lose one of the loops,
or the locking used here is too optimistic.

> + if (ret)
> + goto done_write;
> +
> + resp = setup_response(drv, msg, m);

Alternatively we could just actually pass "tcs" to setup_response() so
that it doesn't have to search for it based on drv and m. But I think
it's cleaner if we just associate the msg with the TCS and complete that
directly in the irq handler - if it's a sync operation.

> + if (IS_ERR(resp)) {
> + ret = PTR_ERR(resp);
> + goto done_write;
> + }
> + resp->m = m;

You never read resp->m...

> +
> + set_bit(m, drv->tcs_in_use);
> + __tcs_buffer_write(drv, m, 0, msg);
> + __tcs_trigger(drv, m);
> +
> +done_write:
> + spin_unlock_irqrestore(&tcs->lock, flags);
> + return ret;
> +}
> +
> +/**
> + * rpmh_rsc_send_data: Validate the incoming message and write to the
> + * appropriate TCS block.
> + *
> + * @drv: the controller
> + * @msg: the data to be sent
> + *
> + * Return: 0 on success, -EINVAL on error.
> + * Note: This call blocks until a valid data is written to the TCS.
> + */
> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
> +{
> + int ret;
> +
> + if (!msg || !msg->cmds || !msg->num_cmds ||
> + msg->num_cmds > MAX_RPMH_PAYLOAD)
> + return -EINVAL;

You're the only caller of this function, which means that if this ever
evaluates to true you will return -EINVAL and your bug will be way
harder to find than if you just end up panicking because we dereferenced
any of these null pointers.

At least wrap the whole thing in a WARN_ON() to make it possible to
detect when this happens.
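
For instance, something along the lines of:

	if (WARN_ON(!msg || !msg->cmds || !msg->num_cmds ||
		    msg->num_cmds > MAX_RPMH_PAYLOAD))
		return -EINVAL;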

> +
> + do {
> + ret = tcs_mbox_write(drv, msg);
> + if (ret == -EBUSY) {
> + pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
> + msg->cmds[0].addr);
> + udelay(10);
> + }
> + } while (ret == -EBUSY);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(rpmh_rsc_send_data);

Regards,
Bjorn

2018-04-11 00:05:47

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

On Thu 05 Apr 09:18 PDT 2018, Lina Iyer wrote:

> Sending RPMH requests and waiting for response from the controller
> through a callback is common functionality across all platform drivers.
> To simplify drivers, add a library functions to create RPMH client and
> send resource state requests.
>
> rpmh_write() is a synchronous blocking call that can be used to send
> active state requests.
>
> Signed-off-by: Lina Iyer <[email protected]>
> ---
>
> Changes in v4:
> - use const struct tcs_cmd in API
> - remove wait count from this patch
> - changed -EFAULT to -EINVAL
> ---
> drivers/soc/qcom/Makefile | 4 +-
> drivers/soc/qcom/rpmh-internal.h | 2 +
> drivers/soc/qcom/rpmh-rsc.c | 7 ++
> drivers/soc/qcom/rpmh.c | 253 +++++++++++++++++++++++++++++++++++++++
> include/soc/qcom/rpmh.h | 34 ++++++
> 5 files changed, 299 insertions(+), 1 deletion(-)
> create mode 100644 drivers/soc/qcom/rpmh.c
> create mode 100644 include/soc/qcom/rpmh.h
>
> diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
> index cb6300f6a8e9..bb395c3202ca 100644
> --- a/drivers/soc/qcom/Makefile
> +++ b/drivers/soc/qcom/Makefile
> @@ -7,7 +7,9 @@ obj-$(CONFIG_QCOM_PM) += spm.o
> obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o
> qmi_helpers-y += qmi_encdec.o qmi_interface.o
> obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
> -obj-$(CONFIG_QCOM_RPMH) += rpmh-rsc.o
> +obj-$(CONFIG_QCOM_RPMH) += qcom_rpmh.o
> +qcom_rpmh-y += rpmh-rsc.o
> +qcom_rpmh-y += rpmh.o
> obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
> obj-$(CONFIG_QCOM_SMEM) += smem.o
> obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> index aa73ec4b3e42..125f9faec536 100644
> --- a/drivers/soc/qcom/rpmh-internal.h
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -86,4 +86,6 @@ struct rsc_drv {
>
> int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
>
> +void rpmh_tx_done(const struct tcs_request *msg, int r);
> +
> #endif /* __RPM_INTERNAL_H__ */
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index f604101a4fc2..c5cde917dba6 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -252,6 +252,8 @@ static void tcs_notify_tx_done(unsigned long data)
> struct rsc_drv *drv = (struct rsc_drv *)data;
> struct tcs_response *resp;
> unsigned long flags;
> + const struct tcs_request *msg;
> + int err;
>
> for (;;) {
> spin_lock_irqsave(&drv->drv_lock, flags);
> @@ -264,7 +266,10 @@ static void tcs_notify_tx_done(unsigned long data)
> list_del(&resp->list);
> spin_unlock_irqrestore(&drv->drv_lock, flags);
> trace_rpmh_notify_tx_done(drv, resp);
> + msg = resp->msg;
> + err = resp->err;
> free_response(resp);
> + rpmh_tx_done(msg, err);
> }
> }
>
> @@ -554,6 +559,8 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
> /* Enable the active TCS to send requests immediately */
> write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0, drv->tcs[ACTIVE_TCS].mask);
>
> + dev_set_drvdata(&pdev->dev, drv);
> +
> return devm_of_platform_populate(&pdev->dev);
> }
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> new file mode 100644
> index 000000000000..e3c7491e7baf
> --- /dev/null
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -0,0 +1,253 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#include <linux/atomic.h>
> +#include <linux/interrupt.h>
> +#include <linux/jiffies.h>
> +#include <linux/kernel.h>
> +#include <linux/mailbox_client.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +#include <linux/wait.h>
> +
> +#include <soc/qcom/rpmh.h>
> +
> +#include "rpmh-internal.h"
> +
> +#define RPMH_MAX_MBOXES 2
> +#define RPMH_TIMEOUT_MS 10000

Just define this in jiffies and you don't need to do msecs_to_jiffies()
every time you use it.
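
E.g. something like (sketch, the name is made up):

	#define RPMH_TIMEOUT		(10 * HZ)

and then at the call site:

	ret = wait_for_completion_timeout(compl, RPMH_TIMEOUT);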

> +
> +#define DEFINE_RPMH_MSG_ONSTACK(rc, s, q, name) \
> + struct rpmh_request name = { \
> + .msg = { \
> + .state = s, \
> + .cmds = name.cmd, \
> + .num_cmds = 0, \
> + .wait_for_compl = true, \
> + }, \
> + .cmd = { { 0 } }, \
> + .completion = q, \
> + .rc = rc, \
> + }
> +
> +/**
> + * struct rpmh_request: the message to be sent to rpmh-rsc
> + *
> + * @msg: the request
> + * @cmd: the payload that will be part of the @msg
> + * @completion: triggered when request is done
> + * @err: err return from the controller
> + */
> +struct rpmh_request {
> + struct tcs_request msg;
> + struct tcs_cmd cmd[MAX_RPMH_PAYLOAD];
> + struct completion *completion;
> + struct rpmh_client *rc;
> + int err;
> +};
> +
> +/**
> + * struct rpmh_ctrlr: our representation of the controller
> + *
> + * @drv: the controller instance
> + */
> +struct rpmh_ctrlr {
> + struct rsc_drv *drv;
> +};
> +
> +/**
> + * struct rpmh_client: the client object
> + *
> + * @dev: the platform device that is the owner
> + * @ctrlr: the controller associated with this client.
> + */
> +struct rpmh_client {
> + struct device *dev;
> + struct rpmh_ctrlr *ctrlr;
> +};
> +
> +static struct rpmh_ctrlr rpmh_rsc[RPMH_MAX_MBOXES];
> +static DEFINE_MUTEX(rpmh_ctrlr_mutex);
> +
> +void rpmh_tx_done(const struct tcs_request *msg, int r)

Why do you abstract this? Just put "err" and "completion" in the
rpmh_request and fire them off as soon as you know the result of the
request.
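
I.e. roughly (sketch, assuming the completion is embedded in struct
rpmh_request rather than being a pointer):

	/* wherever the controller learns the result of the request */
	rpm_msg->err = err;
	complete(&rpm_msg->compl);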

> +{
> + struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
> + msg);
> + struct completion *compl = rpm_msg->completion;
> +
> + rpm_msg->err = r;
> +
> + if (r)
> + dev_err(rpm_msg->rc->dev,
> + "RPMH TX fail in msg addr=%#x, err=%d\n",
> + rpm_msg->msg.cmds[0].addr, r);
> +
> + /* Signal the blocking thread we are done */
> + if (compl)
> + complete(compl);
> +}
> +EXPORT_SYMBOL(rpmh_tx_done);
> +
> +/**
> + * wait_for_tx_done: Wait until the response is received.
> + *
> + * @rc: The RPMH client
> + * @compl: The completion object
> + * @addr: An addr that we sent in that request
> + * @data: The data for the address in that request
> + */
> +static int wait_for_tx_done(struct rpmh_client *rc,
> + struct completion *compl, u32 addr, u32 data)
> +{
> + int ret;
> +
> + might_sleep();
> +
> + ret = wait_for_completion_timeout(compl,
> + msecs_to_jiffies(RPMH_TIMEOUT_MS));
> + if (ret)
> + dev_dbg(rc->dev,
> + "RPMH response received addr=%#x data=%#x\n",
> + addr, data);

For the single-cmd use case this printout will be correct, but then
there's no value in printing at this level rather than higher up. For
the batch case a later patch will print the parameters of cmd[0]
regardless of which cmd failed - which is misleading.

I suggest that you just drop these debug prints and have some higher
layer take care of it, if they want to.

> + else
> + dev_err(rc->dev,
> + "RPMH response timeout addr=%#x data=%#x\n",
> + addr, data);
> +
> + return (ret > 0) ? 0 : -ETIMEDOUT;
> +}
> +
> +/**
> + * __rpmh_write: send the RPMH request
> + *
> + * @rc: The RPMH client
> + * @state: Active/Sleep request type
> + * @rpm_msg: The data that needs to be sent (cmds).
> + */
> +static int __rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> + struct rpmh_request *rpm_msg)
> +{
> + int ret = -EINVAL;
> +
> + rpm_msg->msg.state = state;
> +
> + if (state == RPMH_ACTIVE_ONLY_STATE) {
> + WARN_ON(irqs_disabled());

Can you please add a comment here describing how the expectations
differ from using might_sleep().

> + ret = rpmh_rsc_send_data(rc->ctrlr->drv, &rpm_msg->msg);
> + if (!ret)
> + dev_dbg(rc->dev,
> + "RPMH request sent addr=%#x, data=%i#x\n",
> + rpm_msg->msg.cmds[0].addr,
> + rpm_msg->msg.cmds[0].data);
> + else
> + dev_warn(rc->dev,
> + "Error in RPMH request addr=%#x, data=%#x\n",
> + rpm_msg->msg.cmds[0].addr,
> + rpm_msg->msg.cmds[0].data);

As above, there's no added value in printing an error on this level.

> + }
> +
> + return ret;
> +}
> +
> +/**
> + * rpmh_write: Write a set of RPMH commands and block until response
> + *
> + * @rc: The RPMh handle got from rpmh_get_client
> + * @state: Active/sleep set
> + * @cmd: The payload data
> + * @n: The number of elements in @cmd
> + *
> + * May sleep. Do not call from atomic contexts.
> + */
> +int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> + const struct tcs_cmd *cmd, u32 n)
> +{
> + DECLARE_COMPLETION_ONSTACK(compl);
> + DEFINE_RPMH_MSG_ONSTACK(rc, state, &compl, rpm_msg);
> + int ret;
> +
> + if (IS_ERR_OR_NULL(rc) || !cmd || !n || n > MAX_RPMH_PAYLOAD)
> + return -EINVAL;
> +
> + memcpy(rpm_msg.cmd, cmd, n * sizeof(*cmd));
> + rpm_msg.msg.num_cmds = n;
> +
> + ret = __rpmh_write(rc, state, &rpm_msg);
> + if (ret)
> + return ret;
> +
> + return wait_for_tx_done(rc, &compl, cmd[0].addr, cmd[0].data);

As this is just a wait_for_completion_timeout(compl) it would be cleaner
to just call that here directly.
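
Roughly (sketch; RPMH_TIMEOUT being the jiffies constant suggested
earlier):

	if (!wait_for_completion_timeout(&compl, RPMH_TIMEOUT))
		return -ETIMEDOUT;

	return 0;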

> +}
> +EXPORT_SYMBOL(rpmh_write);
> +
> +static struct rpmh_ctrlr *get_rpmh_ctrlr(struct platform_device *pdev)
> +{
> + int i;
> + struct rsc_drv *drv = dev_get_drvdata(pdev->dev.parent);
> + struct rpmh_ctrlr *ctrlr = ERR_PTR(-EINVAL);
> +
> + if (!drv)
> + return ctrlr;

The only time this would fail is if pdev->dev.parent happens to be some
other device without drvdata. It's fine to assume that the rsc client is
actually a child of the rsc device.

> +
> + mutex_lock(&rpmh_ctrlr_mutex);
> + for (i = 0; i < RPMH_MAX_MBOXES; i++) {
> + if (rpmh_rsc[i].drv == drv) {

Why do you store these in a global variable? Just create one for each
rsc and store that in the drvdata. Turning this function into:

return dev_get_drvdata(pdev->dev.parent);
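
On the rpmh-rsc probe side that could look something like this (sketch;
assumes struct rpmh_ctrlr moves to rpmh-internal.h and is allocated per
RSC, instead of stashing drv in drvdata):

	ctrlr = devm_kzalloc(&pdev->dev, sizeof(*ctrlr), GFP_KERNEL);
	if (!ctrlr)
		return -ENOMEM;

	ctrlr->drv = drv;
	dev_set_drvdata(&pdev->dev, ctrlr);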

> + ctrlr = &rpmh_rsc[i];
> + goto unlock;
> + }
> + }
> +
> + for (i = 0; i < RPMH_MAX_MBOXES; i++) {
> + if (rpmh_rsc[i].drv == NULL) {
> + ctrlr = &rpmh_rsc[i];
> + ctrlr->drv = drv;
> + break;
> + }
> + }
> + WARN_ON(i == RPMH_MAX_MBOXES);
> +unlock:
> + mutex_unlock(&rpmh_ctrlr_mutex);
> + return ctrlr;
> +}
> +
> +/**
> + * rpmh_get_client: Get the RPMh handle
> + *
> + * @pdev: the platform device which needs to communicate with RPM
> + * accelerators
> + * May sleep.
> + */
> +struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
> +{
> + struct rpmh_client *rc;
> +
> + rc = kzalloc(sizeof(*rc), GFP_KERNEL);
> + if (!rc)
> + return ERR_PTR(-ENOMEM);

The only thing this is used for is to keep the client's "dev" around in
order to print debug messages on its behalf. The cost of it is that
clients need to do custom error handling in probe and actually implement
a remove function.

Can you please elaborate on what kind of magic stuff will go into this
struct that makes it worth lugging around?

> +
> + rc->dev = &pdev->dev;
> + rc->ctrlr = get_rpmh_ctrlr(pdev);
> + if (IS_ERR(rc->ctrlr)) {
> + kfree(rc);
> + return ERR_PTR(-EINVAL);
> + }
> +
> + return rc;
> +}
> +EXPORT_SYMBOL(rpmh_get_client);
> +
> +/**
> + * rpmh_release: Release the RPMH client
> + *
> + * @rc: The RPMh handle to be freed.
> + */
> +void rpmh_release(struct rpmh_client *rc)
> +{
> + kfree(rc);

If you really, really need it, can we at least make it use devres?

> +}
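
A devres-based version could look roughly like this (sketch only), and
then rpmh_release() can go away entirely:

	struct rpmh_client *rpmh_get_client(struct platform_device *pdev)
	{
		struct rpmh_client *rc;

		rc = devm_kzalloc(&pdev->dev, sizeof(*rc), GFP_KERNEL);
		if (!rc)
			return ERR_PTR(-ENOMEM);

		rc->dev = &pdev->dev;
		rc->ctrlr = get_rpmh_ctrlr(pdev);
		if (IS_ERR(rc->ctrlr))
			return ERR_CAST(rc->ctrlr);

		return rc;
	}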

Regards,
Bjorn

2018-04-11 02:27:40

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

Quoting Lina Iyer (2018-04-09 08:36:31)
> On Fri, Apr 06 2018 at 19:21 -0600, Stephen Boyd wrote:
> >Quoting Lina Iyer (2018-04-05 09:18:28)
> >> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
> >> new file mode 100644
> >> index 000000000000..95334d4c1ede
> >> --- /dev/null
> >> +++ b/include/soc/qcom/rpmh.h
> >> @@ -0,0 +1,34 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +/*
> >> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> >> + */
> >> +
> >> +#ifndef __SOC_QCOM_RPMH_H__
> >> +#define __SOC_QCOM_RPMH_H__
> >> +
> >> +#include <soc/qcom/tcs.h>
> >> +#include <linux/platform_device.h>
> >> +
> >> +struct rpmh_client;
> >> +
> >> +#if IS_ENABLED(CONFIG_QCOM_RPMH)
> >> +int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> >> + const struct tcs_cmd *cmd, u32 n);
> >> +
> >> +struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
> >> +
> >> +void rpmh_release(struct rpmh_client *rc);
> >
> >Please get rid of this 'client' layer and fold it into the rpmh driver.
> >Everything that uses the rpmh_client is a child device of the rpmh
> >device so they should be able to just pass in their device pointer as
> >their 'handle' and have the rpmh driver take that, get the parent device
> >pointer, and pull an rpmh_drv structure out of there. The 'common' code
> >can go into the base rpmh driver and get used from there and then we
> >don't have to hop between two files to see how rpmh is used by the
> >consumers. Code complexity goes down this way.
>
> That would not be a good idea. This layer is not just providing an
> API interface. There is resource buffering, handling of memory for
> requests and downstream quirks and debug going on in this layer. It
> would be unwise to clobber the hardware centric rpmh-rsc layer. If you
> look at the series as a whole, you would understand why this is
> necessary. I plan to build more on top of these patches in the future as
> we add support for system low power modes. The complexity doesn't go
> away, it's just thrown into another file, which is already decently
> sized.
>
> I could try to use the device as a handle, and internally work on
> getting the drv and other information from it, if that helps. But I do
> not want to clobber these two files together. It doesn't help
> maintainability.

Using the device as a handle is a good start. Let's see how it looks
once that part of the code gets replaced. I still fail to see how buffer
management and requests are any different from poking the hardware, but
OK. Maybe if this was a TCS "library" on top of the rpmh hardware
interface?

2018-04-11 03:38:33

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH v5 05/10] drivers: qcom: rpmh-rsc: write sleep/wake requests to TCS

On Thu 05 Apr 09:18 PDT 2018, Lina Iyer wrote:
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
[..]
> @@ -439,6 +445,107 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
> }
> EXPORT_SYMBOL(rpmh_rsc_send_data);
>
> +static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
> + int len)
> +{
> + int i, j;
> +
> + /* Check for already cached commands */
> + for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {

Wouldn't it be good if this cared about TCS boundaries?

> + for (j = 0; j < len; j++) {
> + if (tcs->cmd_cache[i] != cmd[0].addr) {
> + if (j == 0)
> + break;
> + WARN(tcs->cmd_cache[i + j] != cmd[j].addr,
> + "Message does not match previous sequence.\n");
> + return -EINVAL;
> + } else if (j == len - 1) {
> + return i;
> + }
> + }
> + }
> +
> + return -ENODATA;
> +}
> +
> +static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
> + int *m, int *n)
> +{
> + int slot, offset;
> + int i = 0;
> +
> + /* Find if we already have the msg in our TCS */

"Search for the sequence of addresses in our tcs group"

> + slot = find_match(tcs, msg->cmds, msg->num_cmds);
> + if (slot >= 0)
> + goto copy_data;
> +
> + /* Do over, until we can fit the full payload in a TCS */
> + do {
> + slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
> + i, msg->num_cmds, 0);
> + if (slot == MAX_TCS_SLOTS)
> + return -ENOMEM;
> + i += tcs->ncpt;
> + } while (slot + msg->num_cmds - 1 >= i);

Does this conditional check that the sequence of free slots that we
found doesn't extend past the boundary of a TCS?

I'm sorry, but this code is hard to understand. I would find this much
easier to read if there was one bitmap per TCS and you just looped over
them to find free regions.
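
Something like this, roughly (sketch; assumes tcs->slots becomes an
array of per-TCS bitmaps, each tcs->ncpt bits wide):

	for (i = 0; i < tcs->num_tcs; i++) {
		slot = bitmap_find_next_zero_area(tcs->slots[i], tcs->ncpt,
						  0, msg->num_cmds, 0);
		if (slot < tcs->ncpt)
			break;
	}
	if (i == tcs->num_tcs)
		return -ENOMEM;

	bitmap_set(tcs->slots[i], slot, msg->num_cmds);
	*m = tcs->offset + i;
	*n = slot;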

> +
> +copy_data:
> + bitmap_set(tcs->slots, slot, msg->num_cmds);
> + /* Copy the addresses of the resources over to the slots */
> + for (i = 0; i < msg->num_cmds; i++)
> + tcs->cmd_cache[slot + i] = msg->cmds[i].addr;
> +
> + offset = slot / tcs->ncpt;
> + *m = offset + tcs->offset;
> + *n = slot % tcs->ncpt;
> +
> + return 0;
> +}
> +
> +static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
> +{
> + struct tcs_group *tcs;
> + int m = 0, n = 0;
> + unsigned long flags;
> + int ret;
> +
> + tcs = get_tcs_for_msg(drv, msg);
> + if (IS_ERR(tcs))
> + return PTR_ERR(tcs);
> +
> + spin_lock_irqsave(&tcs->lock, flags);
> + /* find the m-th TCS and the n-th position in the TCS to write to */
> + ret = find_slots(tcs, msg, &m, &n);
> + if (!ret)
> + __tcs_buffer_write(drv, m, n, msg);
> + spin_unlock_irqrestore(&tcs->lock, flags);
> +
> + return ret;
> +}
> +
> +/**
> + * rpmh_rsc_write_ctrl_data: Write request to the controller
> + *
> + * @drv: the controller
> + * @msg: the data to be written to the controller
> + *
> + * There is no response returned for writing the request to the controller.
> + */
> +int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)

So this is exactly the same thing as rpmh_rsc_send_data() but for one of
the non-active TCSs?

Can't we have a single API for writing msg to the hardware and if it's
active we "send" it as well?

> +{
> + if (!msg || !msg->cmds || !msg->num_cmds ||
> + msg->num_cmds > MAX_RPMH_PAYLOAD) {
> + pr_err("Payload error\n");
> + return -EINVAL;
> + }
> +
> + /* Data sent to this API will not be sent immediately */
> + if (msg->state == RPMH_ACTIVE_ONLY_STATE)
> + return -EINVAL;

If you're concerned about this then the API isn't clear enough.

> +
> + return tcs_ctrl_write(drv, msg);
> +}
> +EXPORT_SYMBOL(rpmh_rsc_write_ctrl_data);
> +
> static int rpmh_probe_tcs_config(struct platform_device *pdev,
> struct rsc_drv *drv)
> {
> @@ -512,6 +619,19 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
> tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
> tcs->offset = st;
> st += tcs->num_tcs;
> +
> + /*
> + * Allocate memory to cache sleep and wake requests to
> + * avoid reading TCS register memory.
> + */
> + if (tcs->type == ACTIVE_TCS)
> + continue;

Rather than "the rest of this loop shouldn't be done for the active tcs
group" just make another loop... Or at least make the comment relate
directly to the code it's adjacent.

> +
> + tcs->cmd_cache = devm_kcalloc(&pdev->dev,
> + tcs->num_tcs * ncpt, sizeof(u32),
> + GFP_KERNEL);
> + if (!tcs->cmd_cache)
> + return -ENOMEM;

Regards,
Bjorn

2018-04-11 04:43:06

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

Quoting Lina Iyer (2018-04-05 09:18:25)
> Add controller driver for QCOM SoCs that have hardware based shared
> resource management. The hardware IP known as RSC (Resource State
> Coordinator) houses multiple Direct Resource Voter (DRV) for different
> execution levels. A DRV is a unique voter on the state of a shared
> resource. A Trigger Control Set (TCS) is a bunch of slots that can house
> multiple resource state requests, that when triggered will issue those
> requests through an internal bus to the Resource Power Manager Hardened
> (RPMH) blocks. These hardware blocks are capable of adjusting clocks,
> voltages, etc. The resource state request from a DRV are aggregated along
> with state requests from other processors in the SoC and the aggregate
> value is applied on the resource.
>
> Some important aspects of the RPMH communication -
> - Requests are <addr, value> with some header information
> - Multiple requests (upto 16) may be sent through a TCS, at a time
> - Requests in a TCS are sent in sequence
> - Requests may be fire-n-forget or completion (response expected)
> - Multiple TCS from the same DRV may be triggered simultaneously
> - Cannot send a request if another requesit for the same addr is in

s/requesit/request/

> progress from the same DRV
> - When all the requests from a TCS are complete, an IRQ is raised
> - The IRQ handler needs to clear the TCS before it is available for
> reuse
> - TCS configuration is specific to a DRV
> - Platform drivers may use DRV from different RSCs to make requests

This last point is sort of not true anymore? At least my understanding
is that platform drivers are children of the rsc and that they can only
use that rsc to do anything with rpmh.

>
> Resource state requests made when CPUs are active are called 'active'
> state requests. Requests made when all the CPUs are powered down (idle
> state) are called 'sleep' state requests. They are matched by a
> corresponding 'wake' state requests which puts the resources back in to
> previously requested active state before resuming any CPU. TCSes are
> dedicated for each type of requests. Control TCS are used to provide
> specific information to the controller.

Can you mention AMC here too? I see the acronym but no definition of
what it is besides "Active or AMC" which may indicate A == Active.

>
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> new file mode 100644
> index 000000000000..aa73ec4b3e42
> --- /dev/null
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -0,0 +1,89 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +
> +#ifndef __RPM_INTERNAL_H__
> +#define __RPM_INTERNAL_H__
> +
> +#include <linux/bitmap.h>
> +#include <soc/qcom/tcs.h>
> +
> +#define TCS_TYPE_NR 4
> +#define MAX_CMDS_PER_TCS 16
> +#define MAX_TCS_PER_TYPE 3
> +#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
> +
> +struct rsc_drv;
> +
> +/**
> + * struct tcs_response: Response object for a request
> + *
> + * @drv: the controller
> + * @msg: the request for this response
> + * @m: the tcs identifier
> + * @err: error reported in the response
> + * @list: element in list of pending response objects
> + */
> +struct tcs_response {
> + struct rsc_drv *drv;
> + const struct tcs_request *msg;
> + u32 m;
> + int err;
> + struct list_head list;
> +};
> +
> +/**
> + * struct tcs_group: group of Trigger Command Sets for a request state

Put (ACRONYM) for the acronyms that are spelled out the first time
please. Also, make sure we know what 'request state' is.

> + *
> + * @drv: the controller
> + * @type: type of the TCS in this group - active, sleep, wake

Now 'group' means 'request state'?

> + * @mask: mask of the TCSes relative to all the TCSes in the RSC
> + * @offset: start of the TCS group relative to the TCSes in the RSC
> + * @num_tcs: number of TCSes in this type
> + * @ncpt: number of commands in each TCS
> + * @lock: lock for synchronizing this TCS writes
> + * @responses: response objects for requests sent from each TCS
> + */
> +struct tcs_group {
> + struct rsc_drv *drv;
> + int type;

Is type supposed to be an enum?

> + u32 mask;
> + u32 offset;
> + int num_tcs;
> + int ncpt;
> + spinlock_t lock;
> + struct tcs_response *responses[MAX_TCS_PER_TYPE];
> +};
> +
> +/**
> + * struct rsc_drv: the Resource State Coordinator controller
> + *
> + * @name: controller identifier
> + * @tcs_base: start address of the TCS registers in this controller
> + * @id: instance id in the controller (Direct Resource Voter)
> + * @num_tcs: number of TCSes in this DRV

It changed from an RSC to a DRV here?

> + * @tasklet: handle responses, off-load work from IRQ handler
> + * @response_pending:
> + * list of responses that needs to be sent to caller
> + * @tcs: TCS groups
> + * @tcs_in_use: s/w state of the TCS
> + * @drv_lock: synchronize state of the controller
> + */
> +struct rsc_drv {

Is 'drv' in here talking about the DRV acronym?

> + const char *name;
> + void __iomem *tcs_base;
> + int id;
> + int num_tcs;
> + struct tasklet_struct tasklet;
> + struct list_head response_pending;
> + struct tcs_group tcs[TCS_TYPE_NR];
> + DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
> + spinlock_t drv_lock;

s/drv_lock/lock/

? Because otherwise it looks like drv->drv_lock.

> +};
> +
> +
> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);

Do we send data to anything else in rpmh? Maybe it could just be called
rpmh_send_data(), or rpmh_send().

> +
> +#endif /* __RPM_INTERNAL_H__ */
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> new file mode 100644
> index 000000000000..8bde1e9bd599
> --- /dev/null
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -0,0 +1,571 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
> +
> +#include <linux/atomic.h>

Is this used?

> +#include <linux/delay.h>
> +#include <linux/export.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/of.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +
> +#include <soc/qcom/tcs.h>
> +#include <dt-bindings/soc/qcom,rpmh-rsc.h>
> +
> +#include "rpmh-internal.h"
> +
> +#define RSC_DRV_TCS_OFFSET 672
> +#define RSC_DRV_CMD_OFFSET 20
> +
> +/* DRV Configuration Information Register */
> +#define DRV_PRNT_CHLD_CONFIG 0x0C
> +#define DRV_NUM_TCS_MASK 0x3F
> +#define DRV_NUM_TCS_SHIFT 6
> +#define DRV_NCPT_MASK 0x1F
> +#define DRV_NCPT_SHIFT 27
> +
> +/* Register offsets */
> +#define RSC_DRV_IRQ_ENABLE 0x00
> +#define RSC_DRV_IRQ_STATUS 0x04
> +#define RSC_DRV_IRQ_CLEAR 0x08
> +#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10
> +#define RSC_DRV_CONTROL 0x14
> +#define RSC_DRV_STATUS 0x18
> +#define RSC_DRV_CMD_ENABLE 0x1C
> +#define RSC_DRV_CMD_MSGID 0x30
> +#define RSC_DRV_CMD_ADDR 0x34
> +#define RSC_DRV_CMD_DATA 0x38
> +#define RSC_DRV_CMD_STATUS 0x3C
> +#define RSC_DRV_CMD_RESP_DATA 0x40
> +
> +#define TCS_AMC_MODE_ENABLE BIT(16)
> +#define TCS_AMC_MODE_TRIGGER BIT(24)
> +
> +/* TCS CMD register bit mask */
> +#define CMD_MSGID_LEN 8
> +#define CMD_MSGID_RESP_REQ BIT(8)
> +#define CMD_MSGID_WRITE BIT(16)
> +#define CMD_STATUS_ISSUED BIT(8)
> +#define CMD_STATUS_COMPL BIT(16)
> +
> +static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
> +{
> + struct tcs_group *tcs;
> + int i;
> +
> + for (i = 0; i < drv->num_tcs; i++) {
> + tcs = &drv->tcs[i];
> + if (tcs->mask & BIT(m))
> + return tcs;
> + }
> +
> + WARN(i == drv->num_tcs, "Incorrect TCS index %d", m);
> +
> + return NULL;
> +}
> +
> +static struct tcs_response *setup_response(struct rsc_drv *drv,
> + const struct tcs_request *msg, int m)
> +{
> + struct tcs_response *resp;
> + struct tcs_group *tcs;
> +
> + resp = kzalloc(sizeof(*resp), GFP_ATOMIC);
> + if (!resp)
> + return ERR_PTR(-ENOMEM);
> +
> + resp->drv = drv;
> + resp->msg = msg;
> + resp->err = 0;
> +
> + tcs = get_tcs_from_index(drv, m);
> + if (!tcs)
> + return ERR_PTR(-EINVAL);
> +
> + assert_spin_locked(&tcs->lock);
> + tcs->responses[m - tcs->offset] = resp;
> +
> + return resp;
> +}
> +
> +static void free_response(struct tcs_response *resp)

const?

> +{
> + kfree(resp);
> +}
> +
> +static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
> +{
> + struct tcs_group *tcs = get_tcs_from_index(drv, m);
> +
> + return tcs->responses[m - tcs->offset];
> +}
> +
> +static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
> +{
> + return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> + RSC_DRV_CMD_OFFSET * n);
> +}
> +
> +static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n, u32 data)

Is m the type of TCS (sleep, active, wake) and n is just an offset?
Maybe you can replace m with 'tcs_type' and n with 'index' or 'i' or
'offset'. And then don't use this function to write the random TCS
registers that don't have to do with the TCS command slots? I see
various places where there are things like:

> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);

And 'n' is 0, meaning you rely on that 0 killing that last part of the
equation (RSC_DRV_CMD_OFFSET * n). But if we had a write_tcs_reg(drv,
reg, m, data) and a write_tcs_cmd(drv, reg, m, n, data) then it would be
clearer.

Even better, add a void *base to a 'struct tcs' and then pass that
struct to the tcs_read/write APIs and then have that pull out a
tcs->base + reg or tcs->base + reg + RSC_DRV_CMD_OFFSET * index.
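
For the first variant, something like this (sketch derived from the
quoted helper):

	static void write_tcs_cmd(struct rsc_drv *drv, int reg, int m, int n,
				  u32 data)
	{
		writel_relaxed(data, drv->tcs_base + reg +
			       RSC_DRV_TCS_OFFSET * m + RSC_DRV_CMD_OFFSET * n);
	}

	static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, u32 data)
	{
		writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m);
	}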

> +{
> + writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> + RSC_DRV_CMD_OFFSET * n);
> +}
> +
> +static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
> + u32 data)
> +{
> + write_tcs_reg(drv, reg, m, n, data);
> + for (;;) {
> + if (data == read_tcs_reg(drv, reg, m, n))
> + break;
> + udelay(1);
> + }
> +}
> +
> +static bool tcs_is_free(struct rsc_drv *drv, int m)
> +{
> + return !test_bit(m, drv->tcs_in_use) &&
> + read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
> +}
> +
> +static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
> +{
> + int i;
> + struct tcs_group *tcs;
> +
> + for (i = 0; i < TCS_TYPE_NR; i++) {
> + if (type == drv->tcs[i].type)
> + break;
> + }
> +
> + if (i == TCS_TYPE_NR)
> + return ERR_PTR(-EINVAL);
> +
> + tcs = &drv->tcs[i];
> + if (!tcs->num_tcs)
> + return ERR_PTR(-EINVAL);
> +
> + return tcs;
> +}
> +
> +static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
> + const struct tcs_request *msg)
> +{
> + int type;
> +
> + switch (msg->state) {
> + case RPMH_ACTIVE_ONLY_STATE:
> + type = ACTIVE_TCS;
> + break;
> + default:
> + return ERR_PTR(-EINVAL);
> + }
> +
> + return get_tcs_of_type(drv, type);
> +}
> +
> +static void send_tcs_response(struct tcs_response *resp)
> +{
> + struct rsc_drv *drv;
> + unsigned long flags;
> +
> + if (!resp)
> + return;

Sad.

> +
> + drv = resp->drv;
> + spin_lock_irqsave(&drv->drv_lock, flags);
> + INIT_LIST_HEAD(&resp->list);
> + list_add_tail(&resp->list, &drv->response_pending);
> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> +
> + tasklet_schedule(&drv->tasklet);
> +}
> +
> +/**
> + * tcs_irq_handler: TX Done interrupt handler

So call it tcs_tx_done?

> + */
> +static irqreturn_t tcs_irq_handler(int irq, void *p)
> +{
> + struct rsc_drv *drv = p;
> + int m, i;
> + u32 irq_status, sts;
> + struct tcs_response *resp;
> + struct tcs_cmd *cmd;
> +
> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
> +
> + for (m = 0; m < drv->num_tcs; m++) {
> + if (!(irq_status & (u32)BIT(m)))
> + continue;
> +
> + resp = get_response(drv, m);
> + if (WARN_ON(!resp))
> + goto skip_resp;
> +
> + resp->err = 0;
> + for (i = 0; i < resp->msg->num_cmds; i++) {
> + cmd = &resp->msg->cmds[i];
> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
> + if (!(sts & CMD_STATUS_ISSUED) ||
> + ((resp->msg->wait_for_compl || cmd->wait) &&
> + !(sts & CMD_STATUS_COMPL))) {
> + resp->err = -EIO;
> + break;
> + }
> + }
> +skip_resp:
> + /* Reclaim the TCS */
> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
> + clear_bit(m, drv->tcs_in_use);

Should we reclaim the TCS if the above for loop fails too? It may make
more sense to look up the response, reclaim the TCS, 'continue' if the
response is NULL, and otherwise look through resp->msg->cmds for what
was done and then call send_tcs_response(). At the least, don't call
send_tcs_response() if resp == NULL.

> + send_tcs_response(resp);
> + }
> +
> + return IRQ_HANDLED;
> +}
> +
> +/**
> + * tcs_notify_tx_done: TX Done for requests that got a response
> + *
> + * @data: the tasklet argument
> + *
> + * Tasklet function to notify MBOX that we are done with the request.
> + * Handles all pending reponses whenever run.
> + */
> +static void tcs_notify_tx_done(unsigned long data)
> +{
> + struct rsc_drv *drv = (struct rsc_drv *)data;
> + struct tcs_response *resp;
> + unsigned long flags;
> +
> + for (;;) {
> + spin_lock_irqsave(&drv->drv_lock, flags);
> + resp = list_first_entry_or_null(&drv->response_pending,
> + struct tcs_response, list);
> + if (!resp) {
> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> + break;
> + }
> + list_del(&resp->list);

Someone should make a list_dequeue() API. Then this would read as:

	spin_lock_irqsave()
	resp = list_dequeue();
	spin_unlock_irqrestore()
	if (!resp)
		break;
	free_response(resp)

> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> + free_response(resp);
> + }
> +}
> +
> +static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
> + const struct tcs_request *msg)
> +{
> + u32 msgid, cmd_msgid;
> + u32 cmd_enable = 0;
> + u32 cmd_complete;
> + struct tcs_cmd *cmd;
> + int i, j;
> +
> + cmd_msgid = CMD_MSGID_LEN;
> + cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0;
> + cmd_msgid |= CMD_MSGID_WRITE;
> +
> + cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
> +
> + for (i = 0, j = n; i < msg->num_cmds; i++, j++) {
> + cmd = &msg->cmds[i];
> + cmd_enable |= BIT(j);
> + cmd_complete |= cmd->wait << j;
> + msgid = cmd_msgid;
> + msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0;
> + write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, j, msgid);
> + write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j, cmd->addr);
> + write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, j, cmd->data);
> + }
> +
> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
> + cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
> +}
> +
> +static void __tcs_trigger(struct rsc_drv *drv, int m)
> +{
> + u32 enable;
> +
> + /*
> + * HW req: Clear the DRV_CONTROL and enable TCS again
> + * While clearing ensure that the AMC mode trigger is cleared
> + * and then the mode enable is cleared.
> + */
> + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
> + enable &= ~TCS_AMC_MODE_TRIGGER;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> + enable &= ~TCS_AMC_MODE_ENABLE;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> +
> + /* Enable the AMC mode on the TCS and then trigger the TCS */
> + enable = TCS_AMC_MODE_ENABLE;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> + enable |= TCS_AMC_MODE_TRIGGER;
> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
> +}
> +
> +static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
> + const struct tcs_request *msg)
> +{
> + unsigned long curr_enabled;
> + u32 addr;
> + int i, j, k;
> + int m = tcs->offset;
> +
> + for (i = 0; i < tcs->num_tcs; i++, m++) {
> + if (tcs_is_free(drv, m))
> + continue;
> +
> + curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
> +
> + for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
> + addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j);
> + for (k = 0; k < msg->num_cmds; k++) {
> + if (addr == msg->cmds[k].addr)
> + return -EBUSY;
> + }
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int find_free_tcs(struct tcs_group *tcs)
> +{
> + int m;
> +
> + for (m = 0; m < tcs->num_tcs; m++) {
> + if (tcs_is_free(tcs->drv, tcs->offset + m))
> + return m;
> + }
> +
> + return -EBUSY;
> +}
> +
> +static int tcs_mbox_write(struct rsc_drv *drv, const struct tcs_request *msg)
> +{
> + struct tcs_group *tcs;
> + int m;
> + struct tcs_response *resp = NULL;
> + unsigned long flags;
> + int ret;
> +
> + tcs = get_tcs_for_msg(drv, msg);
> + if (IS_ERR(tcs))
> + return PTR_ERR(tcs);
> +
> + spin_lock_irqsave(&tcs->lock, flags);
> + m = find_free_tcs(tcs);
> + if (m < 0) {
> + ret = m;
> + goto done_write;
> + }
> +
> + /*
> + * The h/w does not like if we send a request to the same address,
> + * when one is already in-flight or being processed.
> + */
> + ret = check_for_req_inflight(drv, tcs, msg);
> + if (ret)
> + goto done_write;
> +
> + resp = setup_response(drv, msg, m);
> + if (IS_ERR(resp)) {
> + ret = PTR_ERR(resp);
> + goto done_write;
> + }
> + resp->m = m;
> +
> + set_bit(m, drv->tcs_in_use);
> + __tcs_buffer_write(drv, m, 0, msg);
> + __tcs_trigger(drv, m);
> +
> +done_write:
> + spin_unlock_irqrestore(&tcs->lock, flags);
> + return ret;
> +}
> +
> +/**
> + * rpmh_rsc_send_data: Validate the incoming message and write to the
> + * appropriate TCS block.
> + *
> + * @drv: the controller
> + * @msg: the data to be sent
> + *
> + * Return: 0 on success, -EINVAL on error.
> + * Note: This call blocks until a valid data is written to the TCS.
> + */
> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
> +{
> + int ret;
> +
> + if (!msg || !msg->cmds || !msg->num_cmds ||
> + msg->num_cmds > MAX_RPMH_PAYLOAD)
> + return -EINVAL;
> +
> + do {
> + ret = tcs_mbox_write(drv, msg);
> + if (ret == -EBUSY) {
> + pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
> + msg->cmds[0].addr);
> + udelay(10);
> + }
> + } while (ret == -EBUSY);

This loop never breaks if the TCS stays busy. And that printk is
informational; shouldn't it be an error? Is there some number of tries
we can make before just giving up?
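
E.g. something like this, just as a sketch (the retry count is
arbitrary):

	int retries = 10;

	do {
		ret = tcs_mbox_write(drv, msg);
		if (ret != -EBUSY)
			break;

		pr_err_ratelimited("TCS busy, retrying RPMH send: addr=%#x\n",
				   msg->cmds[0].addr);
		udelay(10);
	} while (--retries);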

> +
> + return ret;
> +}
> +EXPORT_SYMBOL(rpmh_rsc_send_data);
> +
> +static int rpmh_probe_tcs_config(struct platform_device *pdev,
> + struct rsc_drv *drv)
> +{
> + struct tcs_type_config {
> + u32 type;
> + u32 n;
> + } tcs_cfg[TCS_TYPE_NR] = { { 0 } };
> + struct device_node *dn = pdev->dev.of_node;
> + u32 config, max_tcs, ncpt;
> + int i, ret, n, st = 0;
> + struct tcs_group *tcs;
> + struct resource *res;
> + void __iomem *base;
> +
> + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "drv");
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base))
> + return PTR_ERR(base);
> +
> + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "tcs");
> + drv->tcs_base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(drv->tcs_base))
> + return PTR_ERR(drv->tcs_base);
> +
> + config = readl_relaxed(base + DRV_PRNT_CHLD_CONFIG);
> +
> + max_tcs = config;
> + max_tcs &= DRV_NUM_TCS_MASK << (DRV_NUM_TCS_SHIFT * drv->id);
> + max_tcs = max_tcs >> (DRV_NUM_TCS_SHIFT * drv->id);
> +
> + ncpt = config & (DRV_NCPT_MASK << DRV_NCPT_SHIFT);
> + ncpt = ncpt >> DRV_NCPT_SHIFT;
> +
> + n = of_property_count_u32_elems(dn, "qcom,tcs-config");
> + if (n != 2 * TCS_TYPE_NR)
> + return -EINVAL;
> +
> + for (i = 0; i < TCS_TYPE_NR; i++) {
> + ret = of_property_read_u32_index(dn, "qcom,tcs-config",
> + i * 2, &tcs_cfg[i].type);
> + if (ret)
> + return ret;
> + if (tcs_cfg[i].type >= TCS_TYPE_NR)
> + return -EINVAL;
> +
> + ret = of_property_read_u32_index(dn, "qcom,tcs-config",
> + i * 2 + 1, &tcs_cfg[i].n);
> + if (ret)
> + return ret;
> + if (tcs_cfg[i].n > MAX_TCS_PER_TYPE)
> + return -EINVAL;
> + }
> +
> + for (i = 0; i < TCS_TYPE_NR; i++) {
> + tcs = &drv->tcs[tcs_cfg[i].type];
> + if (tcs->drv)
> + return -EINVAL;
> + tcs->drv = drv;
> + tcs->type = tcs_cfg[i].type;
> + tcs->num_tcs = tcs_cfg[i].n;
> + tcs->ncpt = ncpt;
> + spin_lock_init(&tcs->lock);
> +
> + if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
> + continue;
> +
> + if (st + tcs->num_tcs > max_tcs ||
> + st + tcs->num_tcs >= BITS_PER_BYTE * sizeof(tcs->mask))
> + return -EINVAL;
> +
> + tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
> + tcs->offset = st;
> + st += tcs->num_tcs;
> + }
> +
> + drv->num_tcs = st;
> +
> + return 0;
> +}
> +
> +static int rpmh_rsc_probe(struct platform_device *pdev)
> +{
> + struct device_node *dn = pdev->dev.of_node;
> + struct rsc_drv *drv;
> + int ret, irq;
> +
> + drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
> + if (!drv)
> + return -ENOMEM;
> +
> + ret = of_property_read_u32(dn, "qcom,drv-id", &drv->id);
> + if (ret)
> + return ret;
> +
> + drv->name = of_get_property(dn, "label", NULL);
> + if (!drv->name)
> + drv->name = dev_name(&pdev->dev);
> +
> + ret = rpmh_probe_tcs_config(pdev, drv);
> + if (ret)
> + return ret;
> +
> + INIT_LIST_HEAD(&drv->response_pending);
> + spin_lock_init(&drv->drv_lock);
> + tasklet_init(&drv->tasklet, tcs_notify_tx_done, (unsigned long)drv);
> + bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
> +
> + irq = platform_get_irq(pdev, 0);
> + if (irq < 0)
> + return irq;
> +
> + ret = devm_request_irq(&pdev->dev, irq, tcs_irq_handler,
> + IRQF_TRIGGER_HIGH | IRQF_NO_SUSPEND,
> + drv->name, drv);
> + if (ret)
> + return ret;
> +
> + /* Enable the active TCS to send requests immediately */
> + write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0, drv->tcs[ACTIVE_TCS].mask);
> +
> + return devm_of_platform_populate(&pdev->dev);
> +}
> +
> diff --git a/include/dt-bindings/soc/qcom,rpmh-rsc.h b/include/dt-bindings/soc/qcom,rpmh-rsc.h
> new file mode 100644
> index 000000000000..868f998ea998
> --- /dev/null
> +++ b/include/dt-bindings/soc/qcom,rpmh-rsc.h
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#ifndef __DT_QCOM_RPMH_RSC_H__
> +#define __DT_QCOM_RPMH_RSC_H__
> +
> +#define SLEEP_TCS 0
> +#define WAKE_TCS 1
> +#define ACTIVE_TCS 2
> +#define CONTROL_TCS 3

Is anything besides the RSC node going to use these defines? Typically
we have defines for things that are used by many nodes in many places
and also in C code by drivers, so this looks odd if it's mostly used for
packing many values into a single property on the DT side.

> +
> +#endif /* __DT_QCOM_RPMH_RSC_H__ */
> diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
> new file mode 100644
> index 000000000000..4b78f881010a
> --- /dev/null
> +++ b/include/soc/qcom/tcs.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#ifndef __SOC_QCOM_TCS_H__
> +#define __SOC_QCOM_TCS_H__
> +
> +#define MAX_RPMH_PAYLOAD 16
> +
> +/**
> + * rpmh_state: state for the request
> + *
> + * RPMH_SLEEP_STATE: State of the resource when the processor subsystem
> + * is powered down. There is no client using the
> + * resource actively.
> + * RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
> + * requested before the processor was powered down.
> + * RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
> + * is aggregated immediately.
> + */
> +enum rpmh_state {
> + RPMH_SLEEP_STATE,
> + RPMH_WAKE_ONLY_STATE,
> + RPMH_ACTIVE_ONLY_STATE,
> +};
> +
> +/**
> + * struct tcs_cmd: an individual request to RPMH.
> + *
> + * @addr: the address of the resource slv_id:18:16 | offset:0:15
> + * @data: the resource state request
> + * @wait: wait for this request to be complete before sending the next
> + */
> +struct tcs_cmd {
> + u32 addr;
> + u32 data;
> + bool wait;
> +};
> +
> +/**
> + * struct tcs_request: A set of tcs_cmds sent together in a TCS
> + *
> + * @state: state for the request.

Drop full stop please

> + * @wait_for_compl: wait until we get a response from the h/w accelerator
> + * @num_cmds: the number of @cmds in this request
> + * @cmds: an array of tcs_cmds
> + */
> +struct tcs_request {
> + enum rpmh_state state;
> + bool wait_for_compl;
> + u32 num_cmds;
> + struct tcs_cmd *cmds;
> +};


2018-04-11 15:32:29

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

Quoting Lina Iyer (2018-04-09 09:08:00)
> On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
> >Quoting Lina Iyer (2018-04-05 09:18:26)
> >> diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
> >> new file mode 100644
> >> index 000000000000..dcf71a5b302f
> >> --- /dev/null
> >> +++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
> >> @@ -0,0 +1,127 @@
> >> +
> >> +Example 1:
> >> +
> >> +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
> >> +register offsets for DRV2 start at 0D00, the register calculations are like
> >> +this -
> >> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
> >> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
> >> +
> >> + apps_rsc: rsc@179e000 {
> >> + label = "apps_rsc";
> >> + compatible = "qcom,rpmh-rsc";
> >> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
> >
> >The first reg property overlaps the second one. Does this second one
> >ever move around? I would hardcode it in the driver to be 0xd00 away
> >from the drv base instead of specifying it in DT if it's the same all
> >the time.
> >
> >Also, the example shows 0x179c0000 which I guess is the actual beginning
> >of the RSC block. So the binding seems to be for one DRV inside of an
> >RSC. Can we get the full description of the RSC in the binding instead?
> >I imagine that means there's a DRV0,1,2 and those probably have an
> >interrupt per each DRV and then a different TCS config per each one too?
> >If the binding can describe all of the RSC then we can use different
> >DRVs by changing the qcom,drv-id property.
> >
> > rsc@179c0000 {
> > compatible = "qcom,rpmh-rsc";
> > reg = <0x179c0000 0x10000>,
> > <0x179d0000 0x10000>,
> > <0x179e0000 0x10000>;
> > qcom,tcs-offset = <0xd00>;
> > qcom,drv-id = <0/1/2>;
> > interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
> > <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
> > <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
> > }
> >
> >This is sort of what I imagine it would look like. I have no idea how
> >the tcs config would work unless each DRV has the same TCS config
> >though. Otherwise, if each node is for a drv, then I would expect the
> >node would be called 'drv' and we wouldn't need the drv-id property and
> >the compatible string would say drv instead of rsc?
> >
> >BTW, what are the other DRVs used for in the apps RSC?
> >
> The DRV is the voter for an execution environment (Linux, Hypervisor,
> ATF) in the RSC. The RSC has a lot of other registers that Linux is not
> privy to. They are access restricted.

Alright. Well sometimes access restrictions aren't there, so this isn't
a good assumption to make.

> The memory organization of the RSC
> mandates that we know the DRV id to access registers specific to the
> DRV.

I think qcom,drv-id covers that, no?

> Unfortunately, not all RSC have identical DRV configuration and the
> register space is also variable depending on the capability of the RSC.
> There are functionalities supported by other RSCs in the SoC that are
> not supported by the RSC associated with the application processor,
> while not many RSCs' support multiple DRVs. Therefore it doesn't benefit
> describing the whole RSC as it is not usable from Linux (because of
> access restrictions).

If we're not describing the whole RSC in the RSC binding then we're not
going to get very far. From what I can tell, this binding describes one
DRV inside of an RSC instead of the whole RSC. Yes we'll probably never
use the ATF part of the RSC in Linux, but we may use the hypervisor part
if we use KVM/Xen so the binding should be describing as much as it can
about this device in case some software needs to use it.

Put another way, even if the "apps" RSC is complicated, we should be
describing it to the best of our abilities in the binding so that when
it is used by non-linux OSes things still work by simply tweaking the
drv-id that we use to pick the right things out of the node.

Or we're describing the RSC but it's really a container node that
doesn't do much besides hold DRVs? So this is described at the wrong
level?

2018-04-11 16:38:01

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

On Tue, Apr 10 2018 at 20:23 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-09 08:36:31)
>> On Fri, Apr 06 2018 at 19:21 -0600, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2018-04-05 09:18:28)
>> >> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
>> >> new file mode 100644
>> >> index 000000000000..95334d4c1ede
>> >> --- /dev/null
>> >> +++ b/include/soc/qcom/rpmh.h
>> >> @@ -0,0 +1,34 @@
>> >> +/* SPDX-License-Identifier: GPL-2.0 */
>> >> +/*
>> >> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> >> + */
>> >> +
>> >> +#ifndef __SOC_QCOM_RPMH_H__
>> >> +#define __SOC_QCOM_RPMH_H__
>> >> +
>> >> +#include <soc/qcom/tcs.h>
>> >> +#include <linux/platform_device.h>
>> >> +
>> >> +struct rpmh_client;
>> >> +
>> >> +#if IS_ENABLED(CONFIG_QCOM_RPMH)
>> >> +int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
>> >> + const struct tcs_cmd *cmd, u32 n);
>> >> +
>> >> +struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
>> >> +
>> >> +void rpmh_release(struct rpmh_client *rc);
>> >
>> >Please get rid of this 'client' layer and fold it into the rpmh driver.
>> >Everything that uses the rpmh_client is a child device of the rpmh
>> >device so they should be able to just pass in their device pointer as
>> >their 'handle' and have the rpmh driver take that, get the parent device
>> >pointer, and pull an rpmh_drv structure out of there. The 'common' code
>> >can go into the base rpmh driver and get used from there and then we
>> >don't have to hop between two files to see how rpmh is used by the
>> >consumers. Code complexity goes down this way.
>>
>> That would not be a good idea. This layer is not just providing an
>> API interface. There is resource buffering, handling of memory for
>> requests and downstream quirks and debug going on in this layer. It
>> would be unwise to clobber the hardware centric rpmh-rsc layer. If you
>> look at the series as a whole, you would understand why this is
>> necessary. I plan to build more on top of these patches in the future as
>> we add support for system low power modes. The complexity doesn't go
>> away, it's just thrown into another file, which is already decently
>> sized.
>>
>> I could try to use the device as a handle, and internally work on
>> getting the drv and other information from it, if that helps. But I do
>> not want to clobber these two files together. It doesn't help
>> maintainability.
>
>Using the device as a handle is a good start. Let's see how it looks
>once that part of the code gets replaced. I still fail to see how buffer
>management and requests are any different from poking the hardware, but
>OK. Maybe if this was a TCS "library" on top of the rpmh hardware
>interface?
This is essentially a TCS library.

-- Lina


2018-04-11 16:39:54

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 05/10] drivers: qcom: rpmh-rsc: write sleep/wake requests to TCS

On Tue, Apr 10 2018 at 18:31 -0600, Bjorn Andersson wrote:
>On Thu 05 Apr 09:18 PDT 2018, Lina Iyer wrote:
>> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>[..]
>> @@ -439,6 +445,107 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>> }
>> EXPORT_SYMBOL(rpmh_rsc_send_data);
>>
>> +static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
>> + int len)
>> +{
>> + int i, j;
>> +
>> + /* Check for already cached commands */
>> + for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {
>
>Wouldn't it be good if this cared about TCS boundaries?
>
A sequence would never cross a TCS boundary. So it doesn't need to be
checked.

>> + for (j = 0; j < len; j++) {
>> + if (tcs->cmd_cache[i] != cmd[0].addr) {
>> + if (j == 0)
>> + break;
>> + WARN(tcs->cmd_cache[i + j] != cmd[j].addr,
>> + "Message does not match previous sequence.\n");
>> + return -EINVAL;
>> + } else if (j == len - 1) {
>> + return i;
>> + }
>> + }
>> + }
>> +
>> + return -ENODATA;
>> +}
>> +
>> +static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
>> + int *m, int *n)
>> +{
>> + int slot, offset;
>> + int i = 0;
>> +
>> + /* Find if we already have the msg in our TCS */
>
>"Search for the sequence of addresses in our tcs group"
>
OK
>> + slot = find_match(tcs, msg->cmds, msg->num_cmds);
>> + if (slot >= 0)
>> + goto copy_data;
>> +
>> + /* Do over, until we can fit the full payload in a TCS */
>> + do {
>> + slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
>> + i, msg->num_cmds, 0);
>> + if (slot == MAX_TCS_SLOTS)
>> + return -ENOMEM;
>> + i += tcs->ncpt;
>> + } while (slot + msg->num_cmds - 1 >= i);
>
>Does this conditional check that the sequence of free slots that we
>found doesn't extend past the boundary of a TCS?
>
Yes, it does.

>I'm sorry, but this code is hard to understand. I would find this much
>easier to read if there was one bitmap per TCS and you just looped over
>them to find free regions.
>
Hmm, it's too many bitmaps otherwise.

>> +
>> +copy_data:
>> + bitmap_set(tcs->slots, slot, msg->num_cmds);
>> + /* Copy the addresses of the resources over to the slots */
>> + for (i = 0; i < msg->num_cmds; i++)
>> + tcs->cmd_cache[slot + i] = msg->cmds[i].addr;
>> +
>> + offset = slot / tcs->ncpt;
>> + *m = offset + tcs->offset;
>> + *n = slot % tcs->ncpt;
>> +
>> + return 0;
>> +}
>> +
>> +static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
>> +{
>> + struct tcs_group *tcs;
>> + int m = 0, n = 0;
>> + unsigned long flags;
>> + int ret;
>> +
>> + tcs = get_tcs_for_msg(drv, msg);
>> + if (IS_ERR(tcs))
>> + return PTR_ERR(tcs);
>> +
>> + spin_lock_irqsave(&tcs->lock, flags);
>> + /* find the m-th TCS and the n-th position in the TCS to write to */
>> + ret = find_slots(tcs, msg, &m, &n);
>> + if (!ret)
>> + __tcs_buffer_write(drv, m, n, msg);
>> + spin_unlock_irqrestore(&tcs->lock, flags);
>> +
>> + return ret;
>> +}
>> +
>> +/**
>> + * rpmh_rsc_write_ctrl_data: Write request to the controller
>> + *
>> + * @drv: the controller
>> + * @msg: the data to be written to the controller
>> + *
>> + * There is no response returned for writing the request to the controller.
>> + */
>> +int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
>
>So this is exactly the same thing as rpmh_rsc_send_data() but for one of
>the non-active TCSs?
>
Yes.
>Can't we have a single API for writing msg to the hardware and if it's
>active we "send" it as well?
>
Hmm.. It can be done.

>> +{
>> + if (!msg || !msg->cmds || !msg->num_cmds ||
>> + msg->num_cmds > MAX_RPMH_PAYLOAD) {
>> + pr_err("Payload error\n");
>> + return -EINVAL;
>> + }
>> +
>> + /* Data sent to this API will not be sent immediately */
>> + if (msg->state == RPMH_ACTIVE_ONLY_STATE)
>> + return -EINVAL;
>
>If you're concerned about this then the API isn't clear enough.
>
>> +
>> + return tcs_ctrl_write(drv, msg);
>> +}
>> +EXPORT_SYMBOL(rpmh_rsc_write_ctrl_data);
>> +
>> static int rpmh_probe_tcs_config(struct platform_device *pdev,
>> struct rsc_drv *drv)
>> {
>> @@ -512,6 +619,19 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
>> tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
>> tcs->offset = st;
>> st += tcs->num_tcs;
>> +
>> + /*
>> + * Allocate memory to cache sleep and wake requests to
>> + * avoid reading TCS register memory.
>> + */
>> + if (tcs->type == ACTIVE_TCS)
>> + continue;
>
>Rather than "the rest of this loop shouldn't be done for the active tcs
>group" just make another loop... Or at least make the comment relate
>directly to the code it's adjacent.
>
Will move the comment out.
>> +
>> + tcs->cmd_cache = devm_kcalloc(&pdev->dev,
>> + tcs->num_tcs * ncpt, sizeof(u32),
>> + GFP_KERNEL);
>> + if (!tcs->cmd_cache)
>> + return -ENOMEM;
>

Thanks for the review Bjorn.

-- Lina


2018-04-11 21:29:36

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

On Wed, Apr 11 2018 at 09:29 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-09 09:08:00)
>> On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2018-04-05 09:18:26)
>> >> diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>> >> new file mode 100644
>> >> index 000000000000..dcf71a5b302f
>> >> --- /dev/null
>> >> +++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>> >> @@ -0,0 +1,127 @@
>> >> +
>> >> +Example 1:
>> >> +
>> >> +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
>> >> +register offsets for DRV2 start at 0D00, the register calculations are like
>> >> +this -
>> >> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
>> >> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
>> >> +
>> >> + apps_rsc: rsc@179e000 {
>> >> + label = "apps_rsc";
>> >> + compatible = "qcom,rpmh-rsc";
>> >> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
>> >
>> >The first reg property overlaps the second one. Does this second one
>> >ever move around? I would hardcode it in the driver to be 0xd00 away
>> >from the drv base instead of specifying it in DT if it's the same all
>> >the time.
>> >
>> >Also, the example shows 0x179c0000 which I guess is the actual beginning
>> >of the RSC block. So the binding seems to be for one DRV inside of an
>> >RSC. Can we get the full description of the RSC in the binding instead?
>> >I imagine that means there's a DRV0,1,2 and those probably have an
>> >interrupt per each DRV and then a different TCS config per each one too?
>> >If the binding can describe all of the RSC then we can use different
>> >DRVs by changing the qcom,drv-id property.
>> >
>> > rsc@179c0000 {
>> > compatible = "qcom,rpmh-rsc";
>> > reg = <0x179c0000 0x10000>,
>> > <0x179d0000 0x10000>,
>> > <0x179e0000 0x10000>;
>> > qcom,tcs-offset = <0xd00>;
>> > qcom,drv-id = <0/1/2>;
>> > interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
>> > <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
>> > <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
>> > }
>> >
>> >This is sort of what I imagine it would look like. I have no idea how
>> >the tcs config would work unless each DRV has the same TCS config
>> >though. Otherwise, if each node is for a drv, then I would expect the
>> >node would be called 'drv' and we wouldn't need the drv-id property and
>> >the compatible string would say drv instead of rsc?
>> >
>> >BTW, what are the other DRVs used for in the apps RSC?
>> >
>> The DRV is the voter for an execution environment (Linux, Hypervisor,
>> ATF) in the RSC. The RSC has a lot of other registers that Linux is not
>> privy to. They are access restricted.
>
>Alright. Well sometimes access restrictions aren't there, so this isn't
>a good assumption to make.
>
>> The memory organization of the RSC
>> mandates that we know the DRV id to access registers specific to the
>> DRV.
>
>I think qcom,drv-id covers that, no?
>
>> Unfortunately, not all RSC have identical DRV configuration and the
>> register space is also variable depending on the capability of the RSC.
>> There are functionalities supported by other RSCs in the SoC that are
>> not supported by the RSC associated with the application processor,
>> while not many RSCs' support multiple DRVs. Therefore it doesn't benefit
>> describing the whole RSC as it is not usable from Linux (because of
>> access restrictions).
>
>If we're not describing the whole RSC in the RSC binding then we're not
>going to get very far. From what I can tell, this binding describes one
>DRV inside of an RSC instead of the whole RSC. Yes we'll probably never
>use the ATF part of the RSC in Linux, but we may use the hypervisor part
>if we use KVM/Xen so the binding should be describing as much as it can
>about this device in case some software needs to use it.
>
The RSC is pretty much this: a set of RSC-specific registers at the
address pointed to by the "rsc" reg and the TCS registers pointed to
by the "tcs" reg. You do not want to lump multiple DRVs into the same
device node; it would be a lot more confusing for the drivers to determine
which DRV to vote through.
>Put another way, even if the "apps" RSC is complicated, we should be
>describing it to the best of our abilities in the binding so that when
>it is used by non-linux OSes things still work by simply tweaking the
>drv-id that we use to pick the right things out of the node.
>
>Or we're describing the RSC but it's really a container node that
>doesn't do much besides hold DRVs? So this is described at the wrong
>level?
What we are describing is a DRV, but a standalone DRV is useless
without the necessary RSC registers. So it's a unique RSC+DRV combination
that is represented here.

Hope that helps.

-- Lina


2018-04-11 21:31:44

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

On Tue, Apr 10 2018 at 13:36 -0600, Bjorn Andersson wrote:
>On Mon 09 Apr 09:08 PDT 2018, Lina Iyer wrote:
>
>> On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
>> > Quoting Lina Iyer (2018-04-05 09:18:26)
>> > > diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>[..]
>> > > +Example 1:
>> > > +
>> > > +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
>> > > +register offsets for DRV2 start at 0D00, the register calculations are like
>> > > +this -
>> > > +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
>> > > +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
>> > > +
>> > > + apps_rsc: rsc@179e000 {
>> > > + label = "apps_rsc";
>> > > + compatible = "qcom,rpmh-rsc";
>> > > + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
>> >
>> > The first reg property overlaps the second one. Does this second one
>> > ever move around? I would hardcode it in the driver to be 0xd00 away
>> > from the drv base instead of specifying it in DT if it's the same all
>> > the time.
>[..]
>> >
>> The DRV is the voter for an execution environment (Linux, Hypervisor,
>> ATF) in the RSC. The RSC has a lot of other registers that Linux is not
>> privy to. They are access restricted. The memory organization of the RSC
>> mandates that we know the DRV id to access registers specific to the
>> DRV. Unfortunately, not all RSC have identical DRV configuration and the
>> register space is also variable depending on the capability of the RSC.
>> There are functionalities supported by other RSCs in the SoC that are
>> not supported by the RSC associated with the application processor,
>> while not many RSCs' support multiple DRVs. Therefore it doesn't benefit
>> describing the whole RSC as it is not usable from Linux (because of
>> access restrictions).
>>
>
>I generally prefer that we describe the hardware blocks as accurate as
>possible, instead of applying current restrictions on Linux onto the
>description. This ensures that we can reuse the binding and drivers in
>configurations not considered today. However, afaict we still have the
>problem that we need a way to express where in the RSC our TCS sits.
>
>Regardless of what's right or not, the given example causes the driver
>to fail probing, so something needs to be changed.
I have been using this in DT and I haven't seen failures. Could you send
me the logs?

Thanks,
Lina

>(Making the drv size
>0xd00 is functional but doesn't really relate to any bondary in the
>register space).
>
>Regards,
>Bjorn

2018-04-13 15:39:35

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

On Tue, Apr 10 2018 at 22:39 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-05 09:18:25)
>> Add controller driver for QCOM SoCs that have hardware based shared
>> resource management. The hardware IP known as RSC (Resource State
>> Coordinator) houses multiple Direct Resource Voter (DRV) for different
>> execution levels. A DRV is a unique voter on the state of a shared
>> resource. A Trigger Control Set (TCS) is a bunch of slots that can house
>> multiple resource state requests, that when triggered will issue those
>> requests through an internal bus to the Resource Power Manager Hardened
>> (RPMH) blocks. These hardware blocks are capable of adjusting clocks,
>> voltages, etc. The resource state request from a DRV are aggregated along
>> with state requests from other processors in the SoC and the aggregate
>> value is applied on the resource.
>>
>> Some important aspects of the RPMH communication -
>> - Requests are <addr, value> with some header information
>> - Multiple requests (upto 16) may be sent through a TCS, at a time
>> - Requests in a TCS are sent in sequence
>> - Requests may be fire-n-forget or completion (response expected)
>> - Multiple TCS from the same DRV may be triggered simultaneously
>> - Cannot send a request if another requesit for the same addr is in
>
>s/requesit/request/
>
Ok.
>> progress from the same DRV
>> - When all the requests from a TCS are complete, an IRQ is raised
>> - The IRQ handler needs to clear the TCS before it is available for
>> reuse
>> - TCS configuration is specific to a DRV
>> - Platform drivers may use DRV from different RSCs to make requests
>
>This last point is sort of not true anymore? At least my understanding
>is that platform drivers are children of the rsc and that they can only
>use that rsc to do anything with rpmh.
>
Platform drivers may talk to multiple RSC+DRV instances and make
requests from those DRVs.

>>
>> Resource state requests made when CPUs are active are called 'active'
>> state requests. Requests made when all the CPUs are powered down (idle
>> state) are called 'sleep' state requests. They are matched by a
>> corresponding 'wake' state requests which puts the resources back in to
>> previously requested active state before resuming any CPU. TCSes are
>> dedicated for each type of requests. Control TCS are used to provide
>> specific information to the controller.
>
>Can you mention AMC here too? I see the acronym but no definition of
>what it is besides "Active or AMC" which may indicate A == Active.
>
Ok.

>>
>> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>> new file mode 100644
>> index 000000000000..aa73ec4b3e42
>> --- /dev/null
>> +++ b/drivers/soc/qcom/rpmh-internal.h
>> @@ -0,0 +1,89 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +
>> +#ifndef __RPM_INTERNAL_H__
>> +#define __RPM_INTERNAL_H__
>> +
>> +#include <linux/bitmap.h>
>> +#include <soc/qcom/tcs.h>
>> +
>> +#define TCS_TYPE_NR 4
>> +#define MAX_CMDS_PER_TCS 16
>> +#define MAX_TCS_PER_TYPE 3
>> +#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
>> +
>> +struct rsc_drv;
>> +
>> +/**
>> + * struct tcs_response: Response object for a request
>> + *
>> + * @drv: the controller
>> + * @msg: the request for this response
>> + * @m: the tcs identifier
>> + * @err: error reported in the response
>> + * @list: element in list of pending response objects
>> + */
>> +struct tcs_response {
>> + struct rsc_drv *drv;
>> + const struct tcs_request *msg;
>> + u32 m;
>> + int err;
>> + struct list_head list;
>> +};
>> +
>> +/**
>> + * struct tcs_group: group of Trigger Command Sets for a request state
>
>Put (ACRONYM) for the acronyms that are spelled out the first time
>please. Also, make sure we know what 'request state' is.
>
It's already in the commit text, but sure.

>> + *
>> + * @drv: the controller
>> + * @type: type of the TCS in this group - active, sleep, wake
>
>Now 'group' means 'request state'?
>
Group of TCSes. TCSes are grouped based on their use - sending requests
for active, sleep and wake.

>> + * @mask: mask of the TCSes relative to all the TCSes in the RSC
>> + * @offset: start of the TCS group relative to the TCSes in the RSC
>> + * @num_tcs: number of TCSes in this type
>> + * @ncpt: number of commands in each TCS
>> + * @lock: lock for synchronizing this TCS writes
>> + * @responses: response objects for requests sent from each TCS
>> + */
>> +struct tcs_group {
>> + struct rsc_drv *drv;
>> + int type;
>
>Is type supposed to be an enum?
>
Uses the #defines from include/dt-bindings/soc/qcom,rpmh-rsc.h.

>> + u32 mask;
>> + u32 offset;
>> + int num_tcs;
>> + int ncpt;
>> + spinlock_t lock;
>> + struct tcs_response *responses[MAX_TCS_PER_TYPE];
>> +};
>> +
>> +/**
>> + * struct rsc_drv: the Resource State Coordinator controller
>> + *
>> + * @name: controller identifier
>> + * @tcs_base: start address of the TCS registers in this controller
>> + * @id: instance id in the controller (Direct Resource Voter)
>> + * @num_tcs: number of TCSes in this DRV
>
>It changed from an RSC to a DRV here?
>
RSC has DRVs. A DRV has TCS(es).

>> + * @tasklet: handle responses, off-load work from IRQ handler
>> + * @response_pending:
>> + * list of responses that needs to be sent to caller
>> + * @tcs: TCS groups
>> + * @tcs_in_use: s/w state of the TCS
>> + * @drv_lock: synchronize state of the controller
>> + */
>> +struct rsc_drv {
>
>Is 'drv' in here talking about the DRV acronym?
>
Yes.

>> + const char *name;
>> + void __iomem *tcs_base;
>> + int id;
>> + int num_tcs;
>> + struct tasklet_struct tasklet;
>> + struct list_head response_pending;
>> + struct tcs_group tcs[TCS_TYPE_NR];
>> + DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
>> + spinlock_t drv_lock;
>
>s/drv_lock/lock/
>
>? Because otherwise it looks like drv->drv_lock.
>
Ok.

>> +};
>> +
>> +
>> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
>
>Do we send data to anything else in rpmh? Maybe it could just be called
>rpmh_send_data(), or rpmh_send().
>
No, but this file is rpmh-rsc.c. Hence the rpmh_rsc_* functions.

>> +
>> +#endif /* __RPM_INTERNAL_H__ */
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> new file mode 100644
>> index 000000000000..8bde1e9bd599
>> --- /dev/null
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -0,0 +1,571 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
>> +
>> +#include <linux/atomic.h>
>
>Is this used?
>
Will remove.

>> +#include <linux/delay.h>
>> +#include <linux/export.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/io.h>
>> +#include <linux/kernel.h>
>> +#include <linux/list.h>
>> +#include <linux/of.h>
>> +#include <linux/of_irq.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/slab.h>
>> +#include <linux/spinlock.h>
>> +
>> +#include <soc/qcom/tcs.h>
>> +#include <dt-bindings/soc/qcom,rpmh-rsc.h>
>> +
>> +#include "rpmh-internal.h"
>> +
>> +#define RSC_DRV_TCS_OFFSET 672
>> +#define RSC_DRV_CMD_OFFSET 20
>> +
>> +/* DRV Configuration Information Register */
>> +#define DRV_PRNT_CHLD_CONFIG 0x0C
>> +#define DRV_NUM_TCS_MASK 0x3F
>> +#define DRV_NUM_TCS_SHIFT 6
>> +#define DRV_NCPT_MASK 0x1F
>> +#define DRV_NCPT_SHIFT 27
>> +
>> +/* Register offsets */
>> +#define RSC_DRV_IRQ_ENABLE 0x00
>> +#define RSC_DRV_IRQ_STATUS 0x04
>> +#define RSC_DRV_IRQ_CLEAR 0x08
>> +#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10
>> +#define RSC_DRV_CONTROL 0x14
>> +#define RSC_DRV_STATUS 0x18
>> +#define RSC_DRV_CMD_ENABLE 0x1C
>> +#define RSC_DRV_CMD_MSGID 0x30
>> +#define RSC_DRV_CMD_ADDR 0x34
>> +#define RSC_DRV_CMD_DATA 0x38
>> +#define RSC_DRV_CMD_STATUS 0x3C
>> +#define RSC_DRV_CMD_RESP_DATA 0x40
>> +
>> +#define TCS_AMC_MODE_ENABLE BIT(16)
>> +#define TCS_AMC_MODE_TRIGGER BIT(24)
>> +
>> +/* TCS CMD register bit mask */
>> +#define CMD_MSGID_LEN 8
>> +#define CMD_MSGID_RESP_REQ BIT(8)
>> +#define CMD_MSGID_WRITE BIT(16)
>> +#define CMD_STATUS_ISSUED BIT(8)
>> +#define CMD_STATUS_COMPL BIT(16)
>> +
>> +static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
>> +{
>> + struct tcs_group *tcs;
>> + int i;
>> +
>> + for (i = 0; i < drv->num_tcs; i++) {
>> + tcs = &drv->tcs[i];
>> + if (tcs->mask & BIT(m))
>> + return tcs;
>> + }
>> +
>> + WARN(i == drv->num_tcs, "Incorrect TCS index %d", m);
>> +
>> + return NULL;
>> +}
>> +
>> +static struct tcs_response *setup_response(struct rsc_drv *drv,
>> + const struct tcs_request *msg, int m)
>> +{
>> + struct tcs_response *resp;
>> + struct tcs_group *tcs;
>> +
>> + resp = kzalloc(sizeof(*resp), GFP_ATOMIC);
>> + if (!resp)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + resp->drv = drv;
>> + resp->msg = msg;
>> + resp->err = 0;
>> +
>> + tcs = get_tcs_from_index(drv, m);
>> + if (!tcs)
>> + return ERR_PTR(-EINVAL);
>> +
>> + assert_spin_locked(&tcs->lock);
>> + tcs->responses[m - tcs->offset] = resp;
>> +
>> + return resp;
>> +}
>> +
>> +static void free_response(struct tcs_response *resp)
>
>const?
>
Ok

>> +{
>> + kfree(resp);
>> +}
>> +
>> +static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
>> +{
>> + struct tcs_group *tcs = get_tcs_from_index(drv, m);
>> +
>> + return tcs->responses[m - tcs->offset];
>> +}
>> +
>> +static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
>> +{
>> + return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
>> + RSC_DRV_CMD_OFFSET * n);
>> +}
>> +
>> +static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n, u32 data)
>
>Is m the type of TCS (sleep, active, wake) and n is just an offset?
>Maybe you can replace m with 'tcs_type' and n with 'index' or 'i' or
>'offset'. And then don't use this function to write the random TCS
>registers that don't have to do with the TCS command slots? I see
>various places where there are things like:
>
If you look at the spec and the registers, this representation matches
the usage there.
d = DRV
m = TCS number in the DRV
n = Command slot in the TCS

>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
>> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
>
>And 'n' is 0, meaning you rely on that 0 killing that last part of the
>equation (RSC_DRV_CMD_OFFSET * n). But if we had a write_tcs_reg(drv,
>reg, m, data) and a write_tcs_cmd(drv, reg, m, n, data) then it would be
>clearer.
>
Hmm. ok.
>Even better, add a void *base to a 'struct tcs' and then pass that
>struct to the tcs_read/write APIs and then have that pull out a
>tcs->base + reg or tcs->base + reg + RSC_DRV_CMD_OFFSET * index.
>
Based on comments from Bjorn on patch v1, I switched over to using
rsc_drv* instead of void *base.

>> +{
>> + writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
>> + RSC_DRV_CMD_OFFSET * n);
>> +}
>> +
>> +static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
>> + u32 data)
>> +{
>> + write_tcs_reg(drv, reg, m, n, data);
>> + for (;;) {
>> + if (data == read_tcs_reg(drv, reg, m, n))
>> + break;
>> + udelay(1);
>> + }
>> +}
>> +
>> +static bool tcs_is_free(struct rsc_drv *drv, int m)
>> +{
>> + return !test_bit(m, drv->tcs_in_use) &&
>> + read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
>> +}
>> +
>> +static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
>> +{
>> + int i;
>> + struct tcs_group *tcs;
>> +
>> + for (i = 0; i < TCS_TYPE_NR; i++) {
>> + if (type == drv->tcs[i].type)
>> + break;
>> + }
>> +
>> + if (i == TCS_TYPE_NR)
>> + return ERR_PTR(-EINVAL);
>> +
>> + tcs = &drv->tcs[i];
>> + if (!tcs->num_tcs)
>> + return ERR_PTR(-EINVAL);
>> +
>> + return tcs;
>> +}
>> +
>> +static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>> + const struct tcs_request *msg)
>> +{
>> + int type;
>> +
>> + switch (msg->state) {
>> + case RPMH_ACTIVE_ONLY_STATE:
>> + type = ACTIVE_TCS;
>> + break;
>> + default:
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + return get_tcs_of_type(drv, type);
>> +}
>> +
>> +static void send_tcs_response(struct tcs_response *resp)
>> +{
>> + struct rsc_drv *drv;
>> + unsigned long flags;
>> +
>> + if (!resp)
>> + return;
>
>Sad.
>
>> +
>> + drv = resp->drv;
>> + spin_lock_irqsave(&drv->drv_lock, flags);
>> + INIT_LIST_HEAD(&resp->list);
>> + list_add_tail(&resp->list, &drv->response_pending);
>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> +
>> + tasklet_schedule(&drv->tasklet);
>> +}
>> +
>> +/**
>> + * tcs_irq_handler: TX Done interrupt handler
>
>So call it tcs_tx_done?
>
But, but, it's an irq handler.

>> + */
>> +static irqreturn_t tcs_irq_handler(int irq, void *p)
>> +{
>> + struct rsc_drv *drv = p;
>> + int m, i;
>> + u32 irq_status, sts;
>> + struct tcs_response *resp;
>> + struct tcs_cmd *cmd;
>> +
>> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
>> +
>> + for (m = 0; m < drv->num_tcs; m++) {
>> + if (!(irq_status & (u32)BIT(m)))
>> + continue;
>> +
>> + resp = get_response(drv, m);
>> + if (WARN_ON(!resp))
>> + goto skip_resp;
>> +
>> + resp->err = 0;
>> + for (i = 0; i < resp->msg->num_cmds; i++) {
>> + cmd = &resp->msg->cmds[i];
>> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
>> + if (!(sts & CMD_STATUS_ISSUED) ||
>> + ((resp->msg->wait_for_compl || cmd->wait) &&
>> + !(sts & CMD_STATUS_COMPL))) {
>> + resp->err = -EIO;
>> + break;
>> + }
>> + }
>> +skip_resp:
>> + /* Reclaim the TCS */
>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
>> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
>> + clear_bit(m, drv->tcs_in_use);
>
>Should we reclaim the TCS if the above for loop fails too? It may make
>more sense to look up the response, reclaim, check if it's NULL and
>execute a 'continue' and otherwise look through resp->msg->cmds for
>something that was done and then send_tcs_response(). At the least,
The TCS will be reclaimed, even if the for loop fails. We can't
read the CMD_STATUS reliably after reclaiming the TCS.
>don't call send_tcs_response() if resp == NULL.
>
I could do that.
>> + send_tcs_response(resp);
>> + }
>> +
>> + return IRQ_HANDLED;
>> +}
>> +
>> +/**
>> + * tcs_notify_tx_done: TX Done for requests that got a response
>> + *
>> + * @data: the tasklet argument
>> + *
>> + * Tasklet function to notify MBOX that we are done with the request.
>> + * Handles all pending reponses whenever run.
>> + */
>> +static void tcs_notify_tx_done(unsigned long data)
>> +{
>> + struct rsc_drv *drv = (struct rsc_drv *)data;
>> + struct tcs_response *resp;
>> + unsigned long flags;
>> +
>> + for (;;) {
>> + spin_lock_irqsave(&drv->drv_lock, flags);
>> + resp = list_first_entry_or_null(&drv->response_pending,
>> + struct tcs_response, list);
>> + if (!resp) {
>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> + break;
>> + }
>> + list_del(&resp->list);
>
>Someone should make a list_deqeue() API. Then this would read as:
>
> spin_lock_irqsave()
> resp = list_deqeue();
> spin_unlock_irqrestore()
> if (!resp)
> break;
> free_response(resp)
>
Hmm.. cleaner.
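In the meantime, keeping the lock handling in one small helper reads about
the same (untested sketch):

	static struct tcs_response *get_pending_response(struct rsc_drv *drv)
	{
		struct tcs_response *resp;
		unsigned long flags;

		spin_lock_irqsave(&drv->drv_lock, flags);
		resp = list_first_entry_or_null(&drv->response_pending,
						struct tcs_response, list);
		if (resp)
			list_del(&resp->list);
		spin_unlock_irqrestore(&drv->drv_lock, flags);

		return resp;
	}

Then tcs_notify_tx_done() reduces to freeing whatever that helper returns
until it returns NULL.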

>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> + free_response(resp);
>> + }
>> +}
>> +
>> +static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
>> + const struct tcs_request *msg)
>> +{
>> + u32 msgid, cmd_msgid;
>> + u32 cmd_enable = 0;
>> + u32 cmd_complete;
>> + struct tcs_cmd *cmd;
>> + int i, j;
>> +
>> + cmd_msgid = CMD_MSGID_LEN;
>> + cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0;
>> + cmd_msgid |= CMD_MSGID_WRITE;
>> +
>> + cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
>> +
>> + for (i = 0, j = n; i < msg->num_cmds; i++, j++) {
>> + cmd = &msg->cmds[i];
>> + cmd_enable |= BIT(j);
>> + cmd_complete |= cmd->wait << j;
>> + msgid = cmd_msgid;
>> + msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0;
>> + write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, j, msgid);
>> + write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j, cmd->addr);
>> + write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, j, cmd->data);
>> + }
>> +
>> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
>> + cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
>> +}
>> +
>> +static void __tcs_trigger(struct rsc_drv *drv, int m)
>> +{
>> + u32 enable;
>> +
>> + /*
>> + * HW req: Clear the DRV_CONTROL and enable TCS again
>> + * While clearing ensure that the AMC mode trigger is cleared
>> + * and then the mode enable is cleared.
>> + */
>> + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
>> + enable &= ~TCS_AMC_MODE_TRIGGER;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> + enable &= ~TCS_AMC_MODE_ENABLE;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> +
>> + /* Enable the AMC mode on the TCS and then trigger the TCS */
>> + enable = TCS_AMC_MODE_ENABLE;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> + enable |= TCS_AMC_MODE_TRIGGER;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> +}
>> +
>> +static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
>> + const struct tcs_request *msg)
>> +{
>> + unsigned long curr_enabled;
>> + u32 addr;
>> + int i, j, k;
>> + int m = tcs->offset;
>> +
>> + for (i = 0; i < tcs->num_tcs; i++, m++) {
>> + if (tcs_is_free(drv, m))
>> + continue;
>> +
>> + curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
>> +
>> + for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
>> + addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j);
>> + for (k = 0; k < msg->num_cmds; k++) {
>> + if (addr == msg->cmds[k].addr)
>> + return -EBUSY;
>> + }
>> + }
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int find_free_tcs(struct tcs_group *tcs)
>> +{
>> + int m;
>> +
>> + for (m = 0; m < tcs->num_tcs; m++) {
>> + if (tcs_is_free(tcs->drv, tcs->offset + m))
>> + return m;
>> + }
>> +
>> + return -EBUSY;
>> +}
>> +
>> +static int tcs_mbox_write(struct rsc_drv *drv, const struct tcs_request *msg)
>> +{
>> + struct tcs_group *tcs;
>> + int m;
>> + struct tcs_response *resp = NULL;
>> + unsigned long flags;
>> + int ret;
>> +
>> + tcs = get_tcs_for_msg(drv, msg);
>> + if (IS_ERR(tcs))
>> + return PTR_ERR(tcs);
>> +
>> + spin_lock_irqsave(&tcs->lock, flags);
>> + m = find_free_tcs(tcs);
>> + if (m < 0) {
>> + ret = m;
>> + goto done_write;
>> + }
>> +
>> + /*
>> + * The h/w does not like if we send a request to the same address,
>> + * when one is already in-flight or being processed.
>> + */
>> + ret = check_for_req_inflight(drv, tcs, msg);
>> + if (ret)
>> + goto done_write;
>> +
>> + resp = setup_response(drv, msg, m);
>> + if (IS_ERR(resp)) {
>> + ret = PTR_ERR(resp);
>> + goto done_write;
>> + }
>> + resp->m = m;
>> +
>> + set_bit(m, drv->tcs_in_use);
>> + __tcs_buffer_write(drv, m, 0, msg);
>> + __tcs_trigger(drv, m);
>> +
>> +done_write:
>> + spin_unlock_irqrestore(&tcs->lock, flags);
>> + return ret;
>> +}
>> +
>> +/**
>> + * rpmh_rsc_send_data: Validate the incoming message and write to the
>> + * appropriate TCS block.
>> + *
>> + * @drv: the controller
>> + * @msg: the data to be sent
>> + *
>> + * Return: 0 on success, -EINVAL on error.
>> + * Note: This call blocks until a valid data is written to the TCS.
>> + */
>> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>> +{
>> + int ret;
>> +
>> + if (!msg || !msg->cmds || !msg->num_cmds ||
>> + msg->num_cmds > MAX_RPMH_PAYLOAD)
>> + return -EINVAL;
>> +
>> + do {
>> + ret = tcs_mbox_write(drv, msg);
>> + if (ret == -EBUSY) {
>> + pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
>> + msg->cmds[0].addr);
>> + udelay(10);
>> + }
>> + } while (ret == -EBUSY);
>
>This loop never breaks if we can't avoid the BUSY loop. And that printk
>is informational, shouldn't it be an error? Is there some number of
>tries we can make and then just give up?
>
I could do that. Generally, there are some transient conditions that
cause these loops to spin for a while before we get a free TCS to
write to. Failing after just a handful of tries may be calling it quits
early. If we increase the delay to compensate for it, then we end up
slowing down requests that could have otherwise been faster.

>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL(rpmh_rsc_send_data);
>> +
>> +static int rpmh_probe_tcs_config(struct platform_device *pdev,
>> + struct rsc_drv *drv)
>> +{
>> + struct tcs_type_config {
>> + u32 type;
>> + u32 n;
>> + } tcs_cfg[TCS_TYPE_NR] = { { 0 } };
>> + struct device_node *dn = pdev->dev.of_node;
>> + u32 config, max_tcs, ncpt;
>> + int i, ret, n, st = 0;
>> + struct tcs_group *tcs;
>> + struct resource *res;
>> + void __iomem *base;
>> +
>> + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "drv");
>> + base = devm_ioremap_resource(&pdev->dev, res);
>> + if (IS_ERR(base))
>> + return PTR_ERR(base);
>> +
>> + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "tcs");
>> + drv->tcs_base = devm_ioremap_resource(&pdev->dev, res);
>> + if (IS_ERR(drv->tcs_base))
>> + return PTR_ERR(drv->tcs_base);
>> +
>> + config = readl_relaxed(base + DRV_PRNT_CHLD_CONFIG);
>> +
>> + max_tcs = config;
>> + max_tcs &= DRV_NUM_TCS_MASK << (DRV_NUM_TCS_SHIFT * drv->id);
>> + max_tcs = max_tcs >> (DRV_NUM_TCS_SHIFT * drv->id);
>> +
>> + ncpt = config & (DRV_NCPT_MASK << DRV_NCPT_SHIFT);
>> + ncpt = ncpt >> DRV_NCPT_SHIFT;
>> +
>> + n = of_property_count_u32_elems(dn, "qcom,tcs-config");
>> + if (n != 2 * TCS_TYPE_NR)
>> + return -EINVAL;
>> +
>> + for (i = 0; i < TCS_TYPE_NR; i++) {
>> + ret = of_property_read_u32_index(dn, "qcom,tcs-config",
>> + i * 2, &tcs_cfg[i].type);
>> + if (ret)
>> + return ret;
>> + if (tcs_cfg[i].type >= TCS_TYPE_NR)
>> + return -EINVAL;
>> +
>> + ret = of_property_read_u32_index(dn, "qcom,tcs-config",
>> + i * 2 + 1, &tcs_cfg[i].n);
>> + if (ret)
>> + return ret;
>> + if (tcs_cfg[i].n > MAX_TCS_PER_TYPE)
>> + return -EINVAL;
>> + }
>> +
>> + for (i = 0; i < TCS_TYPE_NR; i++) {
>> + tcs = &drv->tcs[tcs_cfg[i].type];
>> + if (tcs->drv)
>> + return -EINVAL;
>> + tcs->drv = drv;
>> + tcs->type = tcs_cfg[i].type;
>> + tcs->num_tcs = tcs_cfg[i].n;
>> + tcs->ncpt = ncpt;
>> + spin_lock_init(&tcs->lock);
>> +
>> + if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
>> + continue;
>> +
>> + if (st + tcs->num_tcs > max_tcs ||
>> + st + tcs->num_tcs >= BITS_PER_BYTE * sizeof(tcs->mask))
>> + return -EINVAL;
>> +
>> + tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
>> + tcs->offset = st;
>> + st += tcs->num_tcs;
>> + }
>> +
>> + drv->num_tcs = st;
>> +
>> + return 0;
>> +}
>> +
>> +static int rpmh_rsc_probe(struct platform_device *pdev)
>> +{
>> + struct device_node *dn = pdev->dev.of_node;
>> + struct rsc_drv *drv;
>> + int ret, irq;
>> +
>> + drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
>> + if (!drv)
>> + return -ENOMEM;
>> +
>> + ret = of_property_read_u32(dn, "qcom,drv-id", &drv->id);
>> + if (ret)
>> + return ret;
>> +
>> + drv->name = of_get_property(dn, "label", NULL);
>> + if (!drv->name)
>> + drv->name = dev_name(&pdev->dev);
>> +
>> + ret = rpmh_probe_tcs_config(pdev, drv);
>> + if (ret)
>> + return ret;
>> +
>> + INIT_LIST_HEAD(&drv->response_pending);
>> + spin_lock_init(&drv->drv_lock);
>> + tasklet_init(&drv->tasklet, tcs_notify_tx_done, (unsigned long)drv);
>> + bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
>> +
>> + irq = platform_get_irq(pdev, 0);
>> + if (irq < 0)
>> + return irq;
>> +
>> + ret = devm_request_irq(&pdev->dev, irq, tcs_irq_handler,
>> + IRQF_TRIGGER_HIGH | IRQF_NO_SUSPEND,
>> + drv->name, drv);
>> + if (ret)
>> + return ret;
>> +
>> + /* Enable the active TCS to send requests immediately */
>> + write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0, drv->tcs[ACTIVE_TCS].mask);
>> +
>> + return devm_of_platform_populate(&pdev->dev);
>> +}
>> +
>> diff --git a/include/dt-bindings/soc/qcom,rpmh-rsc.h b/include/dt-bindings/soc/qcom,rpmh-rsc.h
>> new file mode 100644
>> index 000000000000..868f998ea998
>> --- /dev/null
>> +++ b/include/dt-bindings/soc/qcom,rpmh-rsc.h
>> @@ -0,0 +1,14 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#ifndef __DT_QCOM_RPMH_RSC_H__
>> +#define __DT_QCOM_RPMH_RSC_H__
>> +
>> +#define SLEEP_TCS 0
>> +#define WAKE_TCS 1
>> +#define ACTIVE_TCS 2
>> +#define CONTROL_TCS 3
>
>Is anything besides the RSC node going to use these defines? Typically
>we have defines for things that are used by many nodes in many places
>and also in C code by drivers so this looks odd if it's mostly used for
>packing many properties into a single property on the DT side.
>
This definition is shared between the DT and the driver. Do you have a
recommendation on sharing enums between DT and the driver?
>> +
>> +#endif /* __DT_QCOM_RPMH_RSC_H__ */
>> diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
>> new file mode 100644
>> index 000000000000..4b78f881010a
>> --- /dev/null
>> +++ b/include/soc/qcom/tcs.h
>> @@ -0,0 +1,56 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#ifndef __SOC_QCOM_TCS_H__
>> +#define __SOC_QCOM_TCS_H__
>> +
>> +#define MAX_RPMH_PAYLOAD 16
>> +
>> +/**
>> + * rpmh_state: state for the request
>> + *
>> + * RPMH_SLEEP_STATE: State of the resource when the processor subsystem
>> + * is powered down. There is no client using the
>> + * resource actively.
>> + * RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
>> + * requested before the processor was powered down.
>> + * RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
>> + * is aggregated immediately.
>> + */
>> +enum rpmh_state {
>> + RPMH_SLEEP_STATE,
>> + RPMH_WAKE_ONLY_STATE,
>> + RPMH_ACTIVE_ONLY_STATE,
>> +};
>> +
>> +/**
>> + * struct tcs_cmd: an individual request to RPMH.
>> + *
>> + * @addr: the address of the resource slv_id:18:16 | offset:0:15
>> + * @data: the resource state request
>> + * @wait: wait for this request to be complete before sending the next
>> + */
>> +struct tcs_cmd {
>> + u32 addr;
>> + u32 data;
>> + bool wait;
>> +};
>> +
>> +/**
>> + * struct tcs_request: A set of tcs_cmds sent together in a TCS
>> + *
>> + * @state: state for the request.
>
>Drop full stop please
>
OK.

Thanks for the review Stephen.

-- Lina

2018-04-13 16:18:43

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

On Tue, Apr 10 2018 at 17:40 -0600, Bjorn Andersson wrote:
>On Thu 05 Apr 09:18 PDT 2018, Lina Iyer wrote:
>[..]
>> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>[..]
>> +/**
>> + * struct tcs_response: Response object for a request
>> + *
>> + * @drv: the controller
>> + * @msg: the request for this response
>> + * @m: the tcs identifier
>> + * @err: error reported in the response
>> + * @list: element in list of pending response objects
>> + */
>> +struct tcs_response {
>> + struct rsc_drv *drv;
>> + const struct tcs_request *msg;
>> + u32 m;
>
>m is assigned in one place but never used.
>
Right. Remnant from the downstream driver that uses buffers of
responses.

>> + int err;
>> + struct list_head list;
>> +};
>[..]
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>[..]
>> +static struct tcs_group *get_tcs_from_index(struct rsc_drv *drv, int m)
>> +{
>> + struct tcs_group *tcs;
>> + int i;
>> +
>> + for (i = 0; i < drv->num_tcs; i++) {
>> + tcs = &drv->tcs[i];
>> + if (tcs->mask & BIT(m))
>> + return tcs;
>> + }
>> +
>> + WARN(i == drv->num_tcs, "Incorrect TCS index %d", m);
>> +
>> + return NULL;
>> +}
>> +
>> +static struct tcs_response *setup_response(struct rsc_drv *drv,
>> + const struct tcs_request *msg, int m)
>> +{
>> + struct tcs_response *resp;
>> + struct tcs_group *tcs;
>> +
>> + resp = kzalloc(sizeof(*resp), GFP_ATOMIC);
>
>I still don't like the idea that you allocate a response struct for each
>request, then upon getting an ack post this on a list and schedule a
>tasklet in order to optionally deliver the return value to the waiting
>caller.
>
>Why don't you just just add the "err" and a completion to the
>tcs_request struct and if it's a sync operation you complete that in
>your irq handler?
>
>That would remove the response struct, the list of them, the tasklet and
>the dynamic memory handling - at the "cost" of making the code possible
>to follow.
>
Hmm.. Ok. Will try to simplify.
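Perhaps something along these lines (just a rough fragment; the per-TCS
request bookkeeping named 'req' below is made up for illustration):

	/* sketch: additions to struct tcs_request */
	struct completion *compl;	/* set by the caller for sync requests */
	int err;

	/*
	 * sketch: in tcs_irq_handler(), once CMD_STATUS has been checked
	 * for TCS m and the TCS reclaimed; 'req' would be the request
	 * remembered for that TCS when it was triggered.
	 */
	req->err = err;
	if (req->compl)
		complete(req->compl);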

>> + if (!resp)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + resp->drv = drv;
>> + resp->msg = msg;
>> + resp->err = 0;
>> +
>> + tcs = get_tcs_from_index(drv, m);
>> + if (!tcs)
>> + return ERR_PTR(-EINVAL);
>> +
>> + assert_spin_locked(&tcs->lock);
>
>I tried to boot the kernel with the rpmh-clk and rpmh-regulator drivers
>and I kept hitting this assert.
>
>Turns out that find_free_tcs() finds an empty TCS with index 'm' within
>the tcs, then passes it to setup_response() which tries to use the 'm'
>to figure out which tcs contains the TCS we're operating on.
>
>But as 'm' is in tcs-local space and get_tcs_from_index() tries to
>lookup the TCS in the global drv space we get hold of the wrong TCS.
>
You are right. I will fix it. Thanks for pointing it out. I wonder what
is in your DT that triggered this; clearly my setup doesn't cover that
case.

>> + tcs->responses[m - tcs->offset] = resp;
>> +
>> + return resp;
>> +}
>> +
>> +static void free_response(struct tcs_response *resp)
>> +{
>> + kfree(resp);
>> +}
>> +
>> +static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
>> +{
>> + struct tcs_group *tcs = get_tcs_from_index(drv, m);
>> +
>> + return tcs->responses[m - tcs->offset];
>> +}
>> +
>> +static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
>> +{
>> + return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
>> + RSC_DRV_CMD_OFFSET * n);
>> +}
>> +
>> +static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n, u32 data)
>> +{
>> + writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
>> + RSC_DRV_CMD_OFFSET * n);
>
>Do you really want this relaxed? Isn't the ordering of these
>significant?
>
The ordering isn't. I can make it not relaxed. The only ordering requirement
is that we trigger after writing everything.

>> +}
>> +
>> +static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int m, int n,
>> + u32 data)
>> +{
>> + write_tcs_reg(drv, reg, m, n, data);
>> + for (;;) {
>> + if (data == read_tcs_reg(drv, reg, m, n))
>> + break;
>> + udelay(1);
>> + }
>> +}
>> +
>> +static bool tcs_is_free(struct rsc_drv *drv, int m)
>> +{
>> + return !test_bit(m, drv->tcs_in_use) &&
>> + read_tcs_reg(drv, RSC_DRV_STATUS, m, 0);
>> +}
>> +
>> +static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
>
>According to rpmh_rsc_probe() the tcs array is indexed by "type", so you
>can replace the entire function with:
>
> return &drv->tcs[type];
>
Hmm. Ok.
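Something like this then, keeping the existing num_tcs check (sketch):

	static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
	{
		if (type < 0 || type >= TCS_TYPE_NR)
			return ERR_PTR(-EINVAL);

		if (!drv->tcs[type].num_tcs)
			return ERR_PTR(-EINVAL);

		return &drv->tcs[type];
	}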

>> +{
>> + int i;
>> + struct tcs_group *tcs;
>> +
>> + for (i = 0; i < TCS_TYPE_NR; i++) {
>> + if (type == drv->tcs[i].type)
>> + break;
>> + }
>> +
>> + if (i == TCS_TYPE_NR)
>> + return ERR_PTR(-EINVAL);
>> +
>> + tcs = &drv->tcs[i];
>> + if (!tcs->num_tcs)
>> + return ERR_PTR(-EINVAL);
>> +
>> + return tcs;
>> +}
>> +
>> +static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>> + const struct tcs_request *msg)
>> +{
>> + int type;
>> +
>> + switch (msg->state) {
>> + case RPMH_ACTIVE_ONLY_STATE:
>> + type = ACTIVE_TCS;
>> + break;
>> + default:
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + return get_tcs_of_type(drv, type);
>> +}
>> +
>> +static void send_tcs_response(struct tcs_response *resp)
>> +{
>> + struct rsc_drv *drv;
>> + unsigned long flags;
>> +
>> + if (!resp)
>> + return;
>> +
>> + drv = resp->drv;
>> + spin_lock_irqsave(&drv->drv_lock, flags);
>> + INIT_LIST_HEAD(&resp->list);
>> + list_add_tail(&resp->list, &drv->response_pending);
>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> +
>> + tasklet_schedule(&drv->tasklet);
>> +}
>> +
>> +/**
>> + * tcs_irq_handler: TX Done interrupt handler
>> + */
>> +static irqreturn_t tcs_irq_handler(int irq, void *p)
>> +{
>> + struct rsc_drv *drv = p;
>> + int m, i;
>> + u32 irq_status, sts;
>> + struct tcs_response *resp;
>> + struct tcs_cmd *cmd;
>> +
>> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
>> +
>> + for (m = 0; m < drv->num_tcs; m++) {
>> + if (!(irq_status & (u32)BIT(m)))
>> + continue;
>> +
>> + resp = get_response(drv, m);
>> + if (WARN_ON(!resp))
>
>This will only ever fail in the beginning of time, as soon as you've
>utilized every TCS at least once resp will never be NULL, as you never
>clear it.
>
>> + goto skip_resp;
>> +
>> + resp->err = 0;
>> + for (i = 0; i < resp->msg->num_cmds; i++) {
>> + cmd = &resp->msg->cmds[i];
>> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
>> + if (!(sts & CMD_STATUS_ISSUED) ||
>> + ((resp->msg->wait_for_compl || cmd->wait) &&
>> + !(sts & CMD_STATUS_COMPL))) {
>> + resp->err = -EIO;
>> + break;
>> + }
>> + }
>> +skip_resp:
>> + /* Reclaim the TCS */
>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
>> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
>> + clear_bit(m, drv->tcs_in_use);
>> + send_tcs_response(resp);
>
>As I suggested above, rather than putting resp on a list and schedule a
>tasklet to free and possibly deliver the value or "err" to a client just
>keep track of the current msg for the TCS for sync operations, set "err"
>and fire the completion (and then untie the request from the TCS).
>
Ok.

>> + }
>> +
>> + return IRQ_HANDLED;
>> +}
>> +
>> +/**
>> + * tcs_notify_tx_done: TX Done for requests that got a response
>> + *
>> + * @data: the tasklet argument
>> + *
>> + * Tasklet function to notify MBOX that we are done with the request.
>> + * Handles all pending reponses whenever run.
>
>This is accidental complexity from the downstream use of the mailbox
>framework, we don't need it.
>
Yes. will remove this.

>> + */
>> +static void tcs_notify_tx_done(unsigned long data)
>> +{
>> + struct rsc_drv *drv = (struct rsc_drv *)data;
>> + struct tcs_response *resp;
>> + unsigned long flags;
>> +
>> + for (;;) {
>> + spin_lock_irqsave(&drv->drv_lock, flags);
>> + resp = list_first_entry_or_null(&drv->response_pending,
>> + struct tcs_response, list);
>> + if (!resp) {
>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> + break;
>> + }
>> + list_del(&resp->list);
>> + spin_unlock_irqrestore(&drv->drv_lock, flags);
>> + free_response(resp);
>> + }
>> +}
>> +
>> +static void __tcs_buffer_write(struct rsc_drv *drv, int m, int n,
>> + const struct tcs_request *msg)
>> +{
>> + u32 msgid, cmd_msgid;
>> + u32 cmd_enable = 0;
>> + u32 cmd_complete;
>> + struct tcs_cmd *cmd;
>> + int i, j;
>> +
>> + cmd_msgid = CMD_MSGID_LEN;
>> + cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0;
>> + cmd_msgid |= CMD_MSGID_WRITE;
>> +
>> + cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
>> +
>> + for (i = 0, j = n; i < msg->num_cmds; i++, j++) {
>> + cmd = &msg->cmds[i];
>> + cmd_enable |= BIT(j);
>> + cmd_complete |= cmd->wait << j;
>> + msgid = cmd_msgid;
>> + msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0;
>> + write_tcs_reg(drv, RSC_DRV_CMD_MSGID, m, j, msgid);
>> + write_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j, cmd->addr);
>> + write_tcs_reg(drv, RSC_DRV_CMD_DATA, m, j, cmd->data);
>> + }
>> +
>> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
>> + cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
>> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
>> +}
>> +
>> +static void __tcs_trigger(struct rsc_drv *drv, int m)
>> +{
>> + u32 enable;
>
>"enable"?
>
Sorry?
>> +
>> + /*
>> + * HW req: Clear the DRV_CONTROL and enable TCS again
>> + * While clearing ensure that the AMC mode trigger is cleared
>> + * and then the mode enable is cleared.
>> + */
>> + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, m, 0);
>> + enable &= ~TCS_AMC_MODE_TRIGGER;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> + enable &= ~TCS_AMC_MODE_ENABLE;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> +
>> + /* Enable the AMC mode on the TCS and then trigger the TCS */
>> + enable = TCS_AMC_MODE_ENABLE;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> + enable |= TCS_AMC_MODE_TRIGGER;
>> + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, m, 0, enable);
>> +}
>> +
>> +static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
>> + const struct tcs_request *msg)
>> +{
>> + unsigned long curr_enabled;
>> + u32 addr;
>> + int i, j, k;
>> + int m = tcs->offset;
>> +
>> + for (i = 0; i < tcs->num_tcs; i++, m++) {
>> + if (tcs_is_free(drv, m))
>> + continue;
>> +
>> + curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0);
>> +
>> + for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
>> + addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, m, j);
>> + for (k = 0; k < msg->num_cmds; k++) {
>> + if (addr == msg->cmds[k].addr)
>> + return -EBUSY;
>> + }
>> + }
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int find_free_tcs(struct tcs_group *tcs)
>> +{
>> + int m;
>> +
>> + for (m = 0; m < tcs->num_tcs; m++) {
>> + if (tcs_is_free(tcs->drv, tcs->offset + m))
>> + return m;
>
>The returned index is within the tcs but is passed to setup_response()
>where it's used as the index of the TCS, so this needs to return
>tcs->offset + m so that setup_response() will be able to find the tcs
>again.
>
Correct. Will fix.
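i.e. returning the DRV-wide index from here (rough sketch):

	static int find_free_tcs(struct tcs_group *tcs)
	{
		int m;

		for (m = 0; m < tcs->num_tcs; m++) {
			if (tcs_is_free(tcs->drv, tcs->offset + m))
				return tcs->offset + m;
		}

		return -EBUSY;
	}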

>> + }
>> +
>> + return -EBUSY;
>> +}
>> +
>> +static int tcs_mbox_write(struct rsc_drv *drv, const struct tcs_request *msg)
>> +{
>> + struct tcs_group *tcs;
>> + int m;
>> + struct tcs_response *resp = NULL;
>
>No need to initialize resp.
>
Ok.

>> + unsigned long flags;
>> + int ret;
>> +
>> + tcs = get_tcs_for_msg(drv, msg);
>> + if (IS_ERR(tcs))
>> + return PTR_ERR(tcs);
>> +
>> + spin_lock_irqsave(&tcs->lock, flags);
>> + m = find_free_tcs(tcs);
>> + if (m < 0) {
>> + ret = m;
>> + goto done_write;
>> + }
>> +
>> + /*
>> + * The h/w does not like if we send a request to the same address,
>> + * when one is already in-flight or being processed.
>> + */
>> + ret = check_for_req_inflight(drv, tcs, msg);
>
>This scans all TCS in the DRV for any operations on msg->cmds[*].addr,
>but you're only holding a lock for tcs. Either cross-tcs operations
>doesn't matter and check_for_req_inflight() can loose one of the loops
>or the locking used here is too optimistic.
>
We only need to look at the AMC TCSes and see if any of them are in use.
We don't care about the request being present in a sleep/wake TCS.

>> + if (ret)
>> + goto done_write;
>> +
>> + resp = setup_response(drv, msg, m);
>
>Alternatively we could just actually pass "tcs" to setup_response() so
>that it doesn't have to search for it based on drv and m. But I think
>it's cleaner if we just associate the msg with the TCS and complete that
>directly in the irq handler - if it's a sync operation.
>
Got it.

>> + if (IS_ERR(resp)) {
>> + ret = PTR_ERR(resp);
>> + goto done_write;
>> + }
>> + resp->m = m;
>
>You never read resp->m...
>
Remnant. Will remove the structure itself.

>> +
>> + set_bit(m, drv->tcs_in_use);
>> + __tcs_buffer_write(drv, m, 0, msg);
>> + __tcs_trigger(drv, m);
>> +
>> +done_write:
>> + spin_unlock_irqrestore(&tcs->lock, flags);
>> + return ret;
>> +}
>> +
>> +/**
>> + * rpmh_rsc_send_data: Validate the incoming message and write to the
>> + * appropriate TCS block.
>> + *
>> + * @drv: the controller
>> + * @msg: the data to be sent
>> + *
>> + * Return: 0 on success, -EINVAL on error.
>> + * Note: This call blocks until a valid data is written to the TCS.
>> + */
>> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>> +{
>> + int ret;
>> +
>> + if (!msg || !msg->cmds || !msg->num_cmds ||
>> + msg->num_cmds > MAX_RPMH_PAYLOAD)
>> + return -EINVAL;
>
>You're the only caller of this function, which means that if this ever
>evaluates to true you will return -EINVAL and your bug will be way
>harder to find than if you just end up panicing because we dereferenced
>any of these null pointers.
>
>At least wrap the whole thing in a WARN_ON() to make it possible to
>detect when this happen.
>
Ok. Will do.

>> +
>> + do {
>> + ret = tcs_mbox_write(drv, msg);
>> + if (ret == -EBUSY) {
>> + pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
>> + msg->cmds[0].addr);
>> + udelay(10);
>> + }
>> + } while (ret == -EBUSY);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL(rpmh_rsc_send_data);
>

Thanks for the review.

-- Lina


2018-04-13 17:19:45

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 04/10] drivers: qcom: rpmh: add RPMH helper functions

Quoting Bjorn Andersson (2018-04-10 17:01:20)
> On Thu 05 Apr 09:18 PDT 2018, Lina Iyer wrote:
>
> >
> > diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> > new file mode 100644
> > index 000000000000..e3c7491e7baf
> > --- /dev/null
> > +++ b/drivers/soc/qcom/rpmh.c
> > @@ -0,0 +1,253 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> > + */
> > +
> > +#include <linux/atomic.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/jiffies.h>
> > +#include <linux/kernel.h>
> > +#include <linux/mailbox_client.h>
> > +#include <linux/module.h>
> > +#include <linux/of.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/slab.h>
> > +#include <linux/types.h>
> > +#include <linux/wait.h>
> > +
> > +#include <soc/qcom/rpmh.h>
> > +
> > +#include "rpmh-internal.h"
> > +
> > +#define RPMH_MAX_MBOXES 2
> > +#define RPMH_TIMEOUT_MS 10000
>
> Just define this in jiffies and you don't need to do msecs_to_jiffies()
> every time you use it.
>

Put the msecs_to_jiffies() here if you want. The compiler will constant
fold it to the final value, and then we get the same time in seconds no
matter what the jiffies frequency is configured for.
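
i.e. something like this, renaming the constant since it would no longer
be in milliseconds:

	#define RPMH_TIMEOUT	msecs_to_jiffies(10000)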

2018-04-13 17:45:07

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

Quoting Lina Iyer (2018-04-13 08:37:25)
> On Tue, Apr 10 2018 at 22:39 -0600, Stephen Boyd wrote:
> >Quoting Lina Iyer (2018-04-05 09:18:25)
> >>
> >> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> >> new file mode 100644
> >> index 000000000000..aa73ec4b3e42
> >> --- /dev/null
> >> +++ b/drivers/soc/qcom/rpmh-internal.h
> >> @@ -0,0 +1,89 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +/*
> >> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> >> + */
> >> +
> >> +
> >> +#ifndef __RPM_INTERNAL_H__
> >> +#define __RPM_INTERNAL_H__
> >> +
> >> +#include <linux/bitmap.h>
> >> +#include <soc/qcom/tcs.h>
> >> +
> >> +#define TCS_TYPE_NR 4
> >> +#define MAX_CMDS_PER_TCS 16
> >> +#define MAX_TCS_PER_TYPE 3
> >> +#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
> >> +
> >> +struct rsc_drv;
> >> +
> >> +/**
> >> + * struct tcs_response: Response object for a request
> >> + *
> >> + * @drv: the controller
> >> + * @msg: the request for this response
> >> + * @m: the tcs identifier
> >> + * @err: error reported in the response
> >> + * @list: element in list of pending response objects
> >> + */
> >> +struct tcs_response {
> >> + struct rsc_drv *drv;
> >> + const struct tcs_request *msg;
> >> + u32 m;
> >> + int err;
> >> + struct list_head list;
> >> +};
> >> +
> >> +/**
> >> + * struct tcs_group: group of Trigger Command Sets for a request state
> >
> >Put (ACRONYM) for the acronyms that are spelled out the first time
> >please. Also, make sure we know what 'request state' is.
> >
> Its already in the commit text, but sure.

Thanks!

>
> >> + *
> >> + * @drv: the controller
> >> + * @type: type of the TCS in this group - active, sleep, wake
> >
> >Now 'group' means 'request state'?
> >
> Group of TCSes. TCSes are grouped based on their use - sending requests
> for active, sleep and wake.

Ok so maybe "type of the TCSes in this group, either active, sleep,
wake, etc."

>
> >> + * @mask: mask of the TCSes relative to all the TCSes in the RSC
> >> + * @offset: start of the TCS group relative to the TCSes in the RSC
> >> + * @num_tcs: number of TCSes in this type
> >> + * @ncpt: number of commands in each TCS
> >> + * @lock: lock for synchronizing this TCS writes
> >> + * @responses: response objects for requests sent from each TCS
> >> + */
> >> +struct tcs_group {
> >> + struct rsc_drv *drv;
> >> + int type;
> >
> >Is type supposed to be an enum?
> >
> Uses the #defines from include/dt-bindings/qcom,rpmh-rsc.txt.
>
> >> + u32 mask;
> >> + u32 offset;
> >> + int num_tcs;
> >> + int ncpt;
> >> + spinlock_t lock;
> >> + struct tcs_response *responses[MAX_TCS_PER_TYPE];
> >> +};
> >> +
> >> +/**
> >> + * struct rsc_drv: the Resource State Coordinator controller
> >> + *
> >> + * @name: controller identifier
> >> + * @tcs_base: start address of the TCS registers in this controller
> >> + * @id: instance id in the controller (Direct Resource Voter)
> >> + * @num_tcs: number of TCSes in this DRV
> >
> >It changed from an RSC to a DRV here?
> >
> RSC has DRVs. A DRV has TCS(es).

It seems like RSC and DRV are pretty much interchangeable then?

>
> >> +
> >> +#endif /* __RPM_INTERNAL_H__ */
> >> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> >> new file mode 100644
> >> index 000000000000..8bde1e9bd599
> >> --- /dev/null
> >> +++ b/drivers/soc/qcom/rpmh-rsc.c
> >> @@ -0,0 +1,571 @@
>
> >> +{
> >> + kfree(resp);
> >> +}
> >> +
> >> +static struct tcs_response *get_response(struct rsc_drv *drv, u32 m)
> >> +{
> >> + struct tcs_group *tcs = get_tcs_from_index(drv, m);
> >> +
> >> + return tcs->responses[m - tcs->offset];
> >> +}
> >> +
> >> +static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int m, int n)
> >> +{
> >> + return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * m +
> >> + RSC_DRV_CMD_OFFSET * n);
> >> +}
> >> +
> >> +static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, int n, u32 data)
> >
> >Is m the type of TCS (sleep, active, wake) and n is just an offset?
> >Maybe you can replace m with 'tcs_type' and n with 'index' or 'i' or
> >'offset'. And then don't use this function to write the random TCS
> >registers that don't have to do with the TCS command slots? I see
> >various places where there are things like:
> >
> If you look at the spec and the registers, this representation matches
> the usage there.
> d = DRV
> m = TCS number in the DRV
> n = Command slot in the TCS

Ok. I don't have access to the spec and the registers so I can't really
map it to anything.

>
> >> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
> >> + write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0, cmd_complete);
> >> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, cmd_enable);
> >
> >And 'n' is 0, meaning you rely on that 0 killing that last part of the
> >equation (RSC_DRV_CMD_OFFSET * n). But if we had a write_tcs_reg(drv,
> >reg, m, data) and a write_tcs_cmd(drv, reg, m, n, data) then it would be
> >clearer.
> >
> Hmm. ok.
> >Even better, add a void *base to a 'struct tcs' and then pass that
> >struct to the tcs_read/write APIs and then have that pull out a
> >tcs->base + reg or tcs->base + reg + RSC_DRV_CMD_OFFSET * index.
> >
> Based on comments from Bjorn on patch v1, I switched over to using
> rsc_drv* instead of void *base.

Can we get the write_tcs_cmd() and write_tcs_reg() functions? I don't
like seeing all the random zeroes passed around when they aren't needed.
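Something like this is what I mean (untested sketch, reusing the existing
offset math):

	static void write_tcs_cmd(struct rsc_drv *drv, int reg, int m, int n,
				  u32 data)
	{
		writel_relaxed(data, drv->tcs_base + reg +
			       RSC_DRV_TCS_OFFSET * m + RSC_DRV_CMD_OFFSET * n);
	}

	static void write_tcs_reg(struct rsc_drv *drv, int reg, int m, u32 data)
	{
		write_tcs_cmd(drv, reg, m, 0, data);
	}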

>
> >
> >> +
> >> + drv = resp->drv;
> >> + spin_lock_irqsave(&drv->drv_lock, flags);
> >> + INIT_LIST_HEAD(&resp->list);
> >> + list_add_tail(&resp->list, &drv->response_pending);
> >> + spin_unlock_irqrestore(&drv->drv_lock, flags);
> >> +
> >> + tasklet_schedule(&drv->tasklet);
> >> +}
> >> +
> >> +/**
> >> + * tcs_irq_handler: TX Done interrupt handler
> >
> >So call it tcs_tx_done?
> >
> But, but, it's an irq handler.

Heh ok, tcs_tx_done_handler()? It's obviously an irq handler, it has
irqreturn_t in the signature.

>
> >> + */
> >> +static irqreturn_t tcs_irq_handler(int irq, void *p)
> >> +{
> >> + struct rsc_drv *drv = p;
> >> + int m, i;
> >> + u32 irq_status, sts;
> >> + struct tcs_response *resp;
> >> + struct tcs_cmd *cmd;
> >> +
> >> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
> >> +
> >> + for (m = 0; m < drv->num_tcs; m++) {
> >> + if (!(irq_status & (u32)BIT(m)))

This u32 cast looks out of place. And can't we do for_each_set_bit() in
this loop instead of looping through num_tcs?

> >> + continue;
> >> +
> >> + resp = get_response(drv, m);
> >> + if (WARN_ON(!resp))
> >> + goto skip_resp;
> >> +
> >> + resp->err = 0;
> >> + for (i = 0; i < resp->msg->num_cmds; i++) {
> >> + cmd = &resp->msg->cmds[i];
> >> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
> >> + if (!(sts & CMD_STATUS_ISSUED) ||
> >> + ((resp->msg->wait_for_compl || cmd->wait) &&
> >> + !(sts & CMD_STATUS_COMPL))) {
> >> + resp->err = -EIO;
> >> + break;
> >> + }
> >> + }
> >> +skip_resp:
> >> + /* Reclaim the TCS */
> >> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
> >> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
> >> + clear_bit(m, drv->tcs_in_use);
> >
> >Should we reclaim the TCS if the above for loop fails too? It may make
> >more sense to look up the response, reclaim, check if it's NULL and
> >execute a 'continue' and otherwise look through resp->msg->cmds for
> >something that was done and then send_tcs_response(). At the least,
> The TCS will will be reclaimed, even if the for loop fails. We can't
> read the CMD_STATUS reliably after reclaiming the TCS.
> >don't call send_tcs_response() if resp == NULL.
> >
> I could do that.

Ah right, the break is for the inner for-loop. Can we push the for-loop
and reclaim into the get_response() function so that the goto inside the
loop is avoided?

resp = get_response(drv, m);
if (WARN_ON(!resp))
continue;
send_tcs_response(resp);


> >> +/**
> >> + * rpmh_rsc_send_data: Validate the incoming message and write to the
> >> + * appropriate TCS block.
> >> + *
> >> + * @drv: the controller
> >> + * @msg: the data to be sent
> >> + *
> >> + * Return: 0 on success, -EINVAL on error.
> >> + * Note: This call blocks until a valid data is written to the TCS.
> >> + */
> >> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
> >> +{
> >> + int ret;
> >> +
> >> + if (!msg || !msg->cmds || !msg->num_cmds ||
> >> + msg->num_cmds > MAX_RPMH_PAYLOAD)
> >> + return -EINVAL;
> >> +
> >> + do {
> >> + ret = tcs_mbox_write(drv, msg);
> >> + if (ret == -EBUSY) {
> >> + pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
> >> + msg->cmds[0].addr);
> >> + udelay(10);
> >> + }
> >> + } while (ret == -EBUSY);
> >
> >This loop never breaks if we can't avoid the BUSY loop. And that printk
> >is informational, shouldn't it be an error? Is there some number of
> >tries we can make and then just give up?
> >
> I could do that. Generally, there are some transient conditions the
> causes these loops to spin for a while, before we get a free TCS to
> write to. Failing after just a handful tries may be calling it quits
> early. If we increase the delay to compensate for it, then we end
> slowing up requests that could have otherwise been faster.

So a 10 second timeout with a 10uS delay between attempts? I'm not
asking to increase the delay between attempts, instead I'm asking for
this loop to not go forever in case something goes wrong. Getting stuck
here would not be much fun.
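
For illustration, a bounded version could look roughly like this (the retry
count is a made-up number, just to show the shape I'm after):

	/* Sketch only: give up after a bounded number of retries instead
	 * of spinning forever. RPMH_SEND_RETRIES is illustrative.
	 */
	#define RPMH_SEND_RETRIES	1000

	int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
	{
		int ret, retries = RPMH_SEND_RETRIES;

		if (!msg || !msg->cmds || !msg->num_cmds ||
		    msg->num_cmds > MAX_RPMH_PAYLOAD)
			return -EINVAL;

		do {
			ret = tcs_mbox_write(drv, msg);
			if (ret != -EBUSY)
				break;
			if (!--retries) {
				pr_err("TCS busy too long, giving up: addr=%#x\n",
				       msg->cmds[0].addr);
				break;
			}
			udelay(10);
		} while (1);

		return ret;
	}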

>
> >> +
> >> + return ret;
> >> +}
> >> +EXPORT_SYMBOL(rpmh_rsc_send_data);
> >> +
> >> diff --git a/include/dt-bindings/soc/qcom,rpmh-rsc.h b/include/dt-bindings/soc/qcom,rpmh-rsc.h
> >> new file mode 100644
> >> index 000000000000..868f998ea998
> >> --- /dev/null
> >> +++ b/include/dt-bindings/soc/qcom,rpmh-rsc.h
> >> @@ -0,0 +1,14 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +/*
> >> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
> >> + */
> >> +
> >> +#ifndef __DT_QCOM_RPMH_RSC_H__
> >> +#define __DT_QCOM_RPMH_RSC_H__
> >> +
> >> +#define SLEEP_TCS 0
> >> +#define WAKE_TCS 1
> >> +#define ACTIVE_TCS 2
> >> +#define CONTROL_TCS 3
> >
> >Is anything besides the RSC node going to use these defines? Typically
> >we have defines for things that are used by many nodes in many places
> >and also in C code by drivers so this looks odd if it's mostly used for
> >packing many properties into a single property on the DT side.
> >
> This definition is shared between the DT and the driver. Do you have
> recommendation on sharing enums between DT and driver?

I'm not aware of anything. I suppose the enum in the kernel header file
could be assigned to the value of the DT binding defines?

#include <dt-bindings/soc/qcom,rpmh-rsc.h>

enum rpmh_state {
RPMH_SLEEP_STATE = SLEEP_TCS,
RPMH_WAKE_ONLY_STATE = WAKE_TCS,
...
};

#undef SLEEP_TCS
#undef WAKE_TCS
#undef ...

This sort of defeats the point of the defines, but I suppose it works.

2018-04-13 22:42:00

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

Quoting Lina Iyer (2018-04-11 14:24:31)
> On Wed, Apr 11 2018 at 09:29 -0600, Stephen Boyd wrote:
> >Quoting Lina Iyer (2018-04-09 09:08:00)
> >> On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
> >> >Quoting Lina Iyer (2018-04-05 09:18:26)
> >> >> diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
> >> >> new file mode 100644
> >> >> index 000000000000..dcf71a5b302f
> >> >> --- /dev/null
> >> >> +++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
> >> >> @@ -0,0 +1,127 @@
> >> >> +
> >> >> +Example 1:
> >> >> +
> >> >> +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
> >> >> +register offsets for DRV2 start at 0D00, the register calculations are like
> >> >> +this -
> >> >> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
> >> >> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
> >> >> +
> >> >> + apps_rsc: rsc@179e000 {
> >> >> + label = "apps_rsc";
> >> >> + compatible = "qcom,rpmh-rsc";
> >> >> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
> >> >
> >> >The first reg property overlaps the second one. Does this second one
> >> >ever move around? I would hardcode it in the driver to be 0xd00 away
> >> >from the drv base instead of specifying it in DT if it's the same all
> >> >the time.
> >> >
> >> >Also, the example shows 0x179c0000 which I guess is the actual beginning
> >> >of the RSC block. So the binding seems to be for one DRV inside of an
> >> >RSC. Can we get the full description of the RSC in the binding instead?
> >> >I imagine that means there's a DRV0,1,2 and those probably have an
> >> >interrupt per each DRV and then a different TCS config per each one too?
> >> >If the binding can describe all of the RSC then we can use different
> >> >DRVs by changing the qcom,drv-id property.
> >> >
> >> > rsc@179c0000 {
> >> > compatible = "qcom,rpmh-rsc";
> >> > reg = <0x179c0000 0x10000>,
> >> > <0x179d0000 0x10000>,
> >> > <0x179e0000 0x10000>;
> >> > qcom,tcs-offset = <0xd00>;
> >> > qcom,drv-id = <0/1/2>;
> >> > interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
> >> > <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
> >> > <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
> >> > }
> >> >
> >> >This is sort of what I imagine it would look like. I have no idea how
> >> >the tcs config would work unless each DRV has the same TCS config
> >> >though. Otherwise, if each node is for a drv, then I would expect the
> >> >node would be called 'drv' and we wouldn't need the drv-id property and
> >> >the compatible string would say drv instead of rsc?
> >> >
> >> >BTW, what are the other DRVs used for in the apps RSC?
> >> >
> >> The DRV is the voter for an execution environment (Linux, Hypervisor,
> >> ATF) in the RSC. The RSC has a lot of other registers that Linux is not
> >> privy to. They are access restricted.
> >
> >Alright. Well sometimes access restrictions aren't there, so this isn't
> >a good assumption to make.
> >
> >> The memory organization of the RSC
> >> mandates that we know the DRV id to access registers specific to the
> >> DRV.
> >
> >I think qcom,drv-id covers that, no?
> >
> >> Unfortunately, not all RSC have identical DRV configuration and the
> >> register space is also variable depending on the capability of the RSC.
> >> There are functionalities supported by other RSCs in the SoC that are
> >> not supported by the RSC associated with the application processor,
> >> while not many RSCs' support multiple DRVs. Therefore it doesn't benefit
> >> describing the whole RSC as it is not usable from Linux (because of
> >> access restrictions).
> >
> >If we're not describing the whole RSC in the RSC binding then we're not
> >going to get very far. From what I can tell, this binding describes one
> >DRV inside of an RSC instead of the whole RSC. Yes we'll probably never
> >use the ATF part of the RSC in Linux, but we may use the hypervisor part
> >if we use KVM/Xen so the binding should be describing as much as it can
> >about this device in case some software needs to use it.
> >
> The RSC is pretty much this. A set of registers that are RSC specific at
> the address pointed to by the "rsc" reg and the TCS regsiters pointed to
> by the "tcs" reg. You do not want to clobber multiple DRVs into the same
> device node. It will be a lot confusing for the drivers to determine
> which DRV to vote.

Well it seems like an RSC contains many DRVs and those DRVs contain many
TCSes. This is what I get after talking with Bjorn on IRC.

+--------------------------------------------------+ (0x00000)
| |
| DRV #0 |
| |
|---------- --------------| (tcs-offset (0xd00))
| DRV0_TCS0 |
| common space |
| cmd sequencer | 0xd00 + 0x14
| |
| DRV0_TCS1 |
| common space | 0xd00 + 0x2a0
| cmd sequencer | 0xd00 + 0x2a0 + 0x14
| |
| DRV0_TCS2 |
| |
| |
+--------------------------------------------------+ (0x10000)
| |
| DRV #1 |
| |
|---------- --------------| (tcs-offset)
| DRV1_TCS0 |
| DRV1_TCS1 |
| DRV1_TCS2 |
+--------------------------------------------------+ (0x20000)
| |
| DRV #2 |
| |
|---------- --------------|
| DRV2_TCS0 |
| DRV2_TCS1 |
| DRV2_TCS2 |
| DRV2_TCS3 |
| DRV2_TCS4 |
| DRV2_TCS5 |
+--------------------------------------------------+

I think I understand it now. There aren't any "RSC common" registers
that are common to the entire RSC. Instead, everything goes into a DRV,
or into a common TCS space, or into a TCS "queue".
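
Reading the offsets straight off that picture, the addressing works out to
something like this (just a sketch; the 0x2a0 per-TCS stride and the 0x14
sequencer offset are taken from the diagram, not from the binding):

	/* Illustrative address math based on the layout above */
	#define DRV_STRIDE	0x10000	/* DRVn starts at RSC base + n * 0x10000 */
	#define TCS_STRIDE	0x2a0	/* TCSn starts at tcs-offset + n * 0x2a0  */
	#define CMD_SEQ_OFF	0x14	/* cmd sequencer inside each TCS          */

	static u32 tcs_base(u32 rsc_base, u32 drv_id, u32 tcs_offset, u32 tcs_id)
	{
		return rsc_base + drv_id * DRV_STRIDE +
		       tcs_offset + tcs_id * TCS_STRIDE;
	}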

> >Put another way, even if the "apps" RSC is complicated, we should be
> >describing it to the best of our abilities in the binding so that when
> >it is used by non-linux OSes things still work by simply tweaking the
> >drv-id that we use to pick the right things out of the node.
> >
> >Or we're describing the RSC but it's really a container node that
> >doesn't do much besides hold DRVs? So this is described at the wrong
> >level?
> What we are describing is a DRV, but a standalone DRV alone is useless
> without the necessary RSC registers. So its a unique RSC+DRV combination
> that is represented here.
>

If my understanding is correct up there then the binding could either
describe a single RSC DRV, or it could describe all the RSC DRV
instances and interrupts going into the RSC "block" and then we can use
drv-id to pick the offset we jump to.

I imagine we don't have any practical use-case for the entire RSC space
because there aren't any common RSC registers to deal with. So we've
boiled this all down to describing one DRV and then I wonder why we care
about having drv-id at all? It looks to be used to check for a max
number of TCS, but that's already described by DT so it doesn't seem
very useful to double-check what the hardware can tell us.

Long story short, we can remove drv-id and just describe drvs by
themselves?

2018-04-16 16:10:35

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

On Fri, Apr 13 2018 at 16:40 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-11 14:24:31)
>> On Wed, Apr 11 2018 at 09:29 -0600, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2018-04-09 09:08:00)
>> >> On Fri, Apr 06 2018 at 19:14 -0600, Stephen Boyd wrote:
>> >> >Quoting Lina Iyer (2018-04-05 09:18:26)
>> >> >> diff --git a/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>> >> >> new file mode 100644
>> >> >> index 000000000000..dcf71a5b302f
>> >> >> --- /dev/null
>> >> >> +++ b/Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
>> >> >> @@ -0,0 +1,127 @@
>> >> >> +
>> >> >> +Example 1:
>> >> >> +
>> >> >> +For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the
>> >> >> +register offsets for DRV2 start at 0D00, the register calculations are like
>> >> >> +this -
>> >> >> +First tuple: 0x179C0000 + 0x10000 * 2 = 0x179E0000
>> >> >> +Second tuple: 0x179E0000 + 0xD00 = 0x179E0D00
>> >> >> +
>> >> >> + apps_rsc: rsc@179e000 {
>> >> >> + label = "apps_rsc";
>> >> >> + compatible = "qcom,rpmh-rsc";
>> >> >> + reg = <0x179e0000 0x10000>, <0x179e0d00 0x3000>;
>> >> >
>> >> >The first reg property overlaps the second one. Does this second one
>> >> >ever move around? I would hardcode it in the driver to be 0xd00 away
>> >> >from the drv base instead of specifying it in DT if it's the same all
>> >> >the time.
>> >> >
>> >> >Also, the example shows 0x179c0000 which I guess is the actual beginning
>> >> >of the RSC block. So the binding seems to be for one DRV inside of an
>> >> >RSC. Can we get the full description of the RSC in the binding instead?
>> >> >I imagine that means there's a DRV0,1,2 and those probably have an
>> >> >interrupt per each DRV and then a different TCS config per each one too?
>> >> >If the binding can describe all of the RSC then we can use different
>> >> >DRVs by changing the qcom,drv-id property.
>> >> >
>> >> > rsc@179c0000 {
>> >> > compatible = "qcom,rpmh-rsc";
>> >> > reg = <0x179c0000 0x10000>,
>> >> > <0x179d0000 0x10000>,
>> >> > <0x179e0000 0x10000>;
>> >> > qcom,tcs-offset = <0xd00>;
>> >> > qcom,drv-id = <0/1/2>;
>> >> > interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
>> >> > <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
>> >> > <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
>> >> > }
>> >> >
>> >> >This is sort of what I imagine it would look like. I have no idea how
>> >> >the tcs config would work unless each DRV has the same TCS config
>> >> >though. Otherwise, if each node is for a drv, then I would expect the
>> >> >node would be called 'drv' and we wouldn't need the drv-id property and
>> >> >the compatible string would say drv instead of rsc?
>> >> >
>> >> >BTW, what are the other DRVs used for in the apps RSC?
>> >> >
>> >> The DRV is the voter for an execution environment (Linux, Hypervisor,
>> >> ATF) in the RSC. The RSC has a lot of other registers that Linux is not
>> >> privy to. They are access restricted.
>> >
>> >Alright. Well sometimes access restrictions aren't there, so this isn't
>> >a good assumption to make.
>> >
>> >> The memory organization of the RSC
>> >> mandates that we know the DRV id to access registers specific to the
>> >> DRV.
>> >
>> >I think qcom,drv-id covers that, no?
>> >
>> >> Unfortunately, not all RSC have identical DRV configuration and the
>> >> register space is also variable depending on the capability of the RSC.
>> >> There are functionalities supported by other RSCs in the SoC that are
>> >> not supported by the RSC associated with the application processor,
>> >> while not many RSCs' support multiple DRVs. Therefore it doesn't benefit
>> >> describing the whole RSC as it is not usable from Linux (because of
>> >> access restrictions).
>> >
>> >If we're not describing the whole RSC in the RSC binding then we're not
>> >going to get very far. From what I can tell, this binding describes one
>> >DRV inside of an RSC instead of the whole RSC. Yes we'll probably never
>> >use the ATF part of the RSC in Linux, but we may use the hypervisor part
>> >if we use KVM/Xen so the binding should be describing as much as it can
>> >about this device in case some software needs to use it.
>> >
>> The RSC is pretty much this. A set of registers that are RSC specific at
>> the address pointed to by the "rsc" reg and the TCS regsiters pointed to
>> by the "tcs" reg. You do not want to clobber multiple DRVs into the same
>> device node. It will be a lot confusing for the drivers to determine
>> which DRV to vote.
>
>Well it seems like an RSC contains many DRVs and those DRVs contain many
>TCSes. This is what I get after talking with Bjorn on IRC.
>
> +--------------------------------------------------+ (0x00000)
> | |
> | DRV #0 |
> | |
> |---------- --------------| (tcs-offset (0xd00))
> | DRV0_TCS0 |
> | common space |
> | cmd sequencer | 0xd00 + 0x14
> | |
> | DRV0_TCS1 |
> | common space | 0xd00 + 0x2a0
> | cmd sequencer | 0xd00 + 0x2a0 + 0x14
> | |
> | DRV0_TCS2 |
> | |
> | |
> +--------------------------------------------------+ (0x10000)
> | |
> | DRV #1 |
> | |
> |---------- --------------| (tcs-offset)
> | DRV1_TCS0 |
> | DRV1_TCS1 |
> | DRV1_TCS2 |
> +--------------------------------------------------+ (0x20000)
> | |
> | DRV #2 |
> | |
> |---------- --------------|
> | DRV2_TCS0 |
> | DRV2_TCS1 |
> | DRV2_TCS2 |
> | DRV2_TCS3 |
> | DRV2_TCS4 |
> | DRV2_TCS5 |
> +--------------------------------------------------+
>
>I think I understand it now. There aren't any "RSC common" registers
>that are common to the entire RSC. Instead, everything goes into a DRV,
>or into a common TCS space, or into a TCS "queue".
>
>> >Put another way, even if the "apps" RSC is complicated, we should be
>> >describing it to the best of our abilities in the binding so that when
>> >it is used by non-linux OSes things still work by simply tweaking the
>> >drv-id that we use to pick the right things out of the node.
>> >
>> >Or we're describing the RSC but it's really a container node that
>> >doesn't do much besides hold DRVs? So this is described at the wrong
>> >level?
>> What we are describing is a DRV, but a standalone DRV alone is useless
>> without the necessary RSC registers. So its a unique RSC+DRV combination
>> that is represented here.
>>
>
>If my understanding is correct up there then the binding could either
>describe a single RSC DRV, or it could describe all the RSC DRV
>instances and interrupts going into the RSC "block" and then we can use
>drv-id to pick the offset we jump to.
>
Your understanding is correct.

>I imagine we don't have any practical use-case for the entire RSC space
>because there aren't any common RSC registers to deal with.
Not true.

>So we've
>boiled this all down to describing one DRV and then I wonder why we care
>about having drv-id at all? It looks to be used to check for a max
>number of TCS, but that's already described by DT so it doesn't seem
>very useful to double check what the hardware can tells us.
>
There is also the number of commands per TCS (NCPT), which may vary
between different RSCs. The RSC of the application processor has 16
commands in each TCS, but that is variable. I am not saying it cannot be
described in DT, but currently it is something I read from the common RSC
registers.
Also, I will be using common/DRV0 registers to write the wakeup time value
when the processor subsystem goes into power down. This is not a DRV2
register, but a DRV0 register that we will have special access to. I intend
to publish those patches when we have support for sleep/suspend with this
new architecture. So the address of the start of the RSC (=DRV0) is
necessary.

>Long story short, we can remove drv-id and just describe drvs by
>themselves?
Yes, we may. As long as I have a way to describe the register address
of the start of the DRV (0x20000 for DRV#2) and the tcs-offset (0xd00),
we can work with the RSC-DRV in the driver.
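
Something like this sketch is all I need at probe time, assuming the node
carries the DRV start address in 'reg' plus a qcom,tcs-offset property (the
property and function names here are illustrative, not final):

	/* Sketch: map the DRV and locate its TCS block from qcom,tcs-offset */
	static void __iomem *rpmh_map_tcs(struct platform_device *pdev)
	{
		struct resource *res;
		void __iomem *drv_base;
		u32 tcs_offset;

		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
		drv_base = devm_ioremap_resource(&pdev->dev, res);
		if (IS_ERR(drv_base))
			return drv_base;

		if (of_property_read_u32(pdev->dev.of_node, "qcom,tcs-offset",
					 &tcs_offset))
			return ERR_PTR(-EINVAL);

		/* e.g. <RSC base + 0x20000> in reg, plus 0xd00, for DRV#2 */
		return drv_base + tcs_offset;
	}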

Thanks,
Lina


2018-04-16 19:53:05

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 01/10] drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs

On Fri, Apr 13 2018 at 11:43 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-13 08:37:25)
>> On Tue, Apr 10 2018 at 22:39 -0600, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2018-04-05 09:18:25)
>> >>
>> >> + */
>> >> +static irqreturn_t tcs_irq_handler(int irq, void *p)
>> >> +{
>> >> + struct rsc_drv *drv = p;
>> >> + int m, i;
>> >> + u32 irq_status, sts;
>> >> + struct tcs_response *resp;
>> >> + struct tcs_cmd *cmd;
>> >> +
>> >> + irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
>> >> +
>> >> + for (m = 0; m < drv->num_tcs; m++) {
>> >> + if (!(irq_status & (u32)BIT(m)))
>
>This u32 cast looks out of place. And can't we do for_each_set_bit() in
>this loop instead of looping through num_tcs?
>
Ok.

>> >> + continue;
>> >> +
>> >> + resp = get_response(drv, m);
>> >> + if (WARN_ON(!resp))
>> >> + goto skip_resp;
>> >> +
>> >> + resp->err = 0;
>> >> + for (i = 0; i < resp->msg->num_cmds; i++) {
>> >> + cmd = &resp->msg->cmds[i];
>> >> + sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, m, i);
>> >> + if (!(sts & CMD_STATUS_ISSUED) ||
>> >> + ((resp->msg->wait_for_compl || cmd->wait) &&
>> >> + !(sts & CMD_STATUS_COMPL))) {
>> >> + resp->err = -EIO;
>> >> + break;
>> >> + }
>> >> + }
>> >> +skip_resp:
>> >> + /* Reclaim the TCS */
>> >> + write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, m, 0, 0);
>> >> + write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, 0, BIT(m));
>> >> + clear_bit(m, drv->tcs_in_use);
>> >
>> >Should we reclaim the TCS if the above for loop fails too? It may make
>> >more sense to look up the response, reclaim, check if it's NULL and
>> >execute a 'continue' and otherwise look through resp->msg->cmds for
>> >something that was done and then send_tcs_response(). At the least,
>> The TCS will will be reclaimed, even if the for loop fails. We can't
>> read the CMD_STATUS reliably after reclaiming the TCS.
>> >don't call send_tcs_response() if resp == NULL.
>> >
>> I could do that.
>
>Ah right, the break is for the inner for-loop. Can we push the for-loop
>and reclaim into the get_response() function so that the goto inside the
>loop is avoided?
>
> resp = get_response(drv, m);
> if (WARN_ON(!resp))
> continue;
> send_tcs_response(resp);
>
>
I needed the resp object to get resp->msg, and send_tcs_response()
could happen only after we have reclaimed the TCS.
I have removed the response object based on comments from Bjorn, but the
approach would still need to be the same.

>> >> +/**
>> >> + * rpmh_rsc_send_data: Validate the incoming message and write to the
>> >> + * appropriate TCS block.
>> >> + *
>> >> + * @drv: the controller
>> >> + * @msg: the data to be sent
>> >> + *
>> >> + * Return: 0 on success, -EINVAL on error.
>> >> + * Note: This call blocks until a valid data is written to the TCS.
>> >> + */
>> >> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>> >> +{
>> >> + int ret;
>> >> +
>> >> + if (!msg || !msg->cmds || !msg->num_cmds ||
>> >> + msg->num_cmds > MAX_RPMH_PAYLOAD)
>> >> + return -EINVAL;
>> >> +
>> >> + do {
>> >> + ret = tcs_mbox_write(drv, msg);
>> >> + if (ret == -EBUSY) {
>> >> + pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
>> >> + msg->cmds[0].addr);
>> >> + udelay(10);
>> >> + }
>> >> + } while (ret == -EBUSY);
>> >
>> >This loop never breaks if we can't avoid the BUSY loop. And that printk
>> >is informational, shouldn't it be an error? Is there some number of
>> >tries we can make and then just give up?
>> >
>> I could do that. Generally, there are some transient conditions the
>> causes these loops to spin for a while, before we get a free TCS to
>> write to. Failing after just a handful tries may be calling it quits
>> early. If we increase the delay to compensate for it, then we end
>> slowing up requests that could have otherwise been faster.
>
>So a 10 second timeout with a 10uS delay between attempts? I'm not
>asking to increase the delay between attempts, instead I'm asking for
>this loop to not go forever in case something goes wrong. Getting stuck
>here would not be much fun.
>
There is no recoverable situation if we are unable to send RPMH
requests. I can time out here, but then most drivers have no way to
recover from that. The only reason we would time out is when the TCSes
are otherwise engaged and their requests have hung. You cannot just
clear the TCSes to keep the system going.

>>
>> >> +
>> >> + return ret;
>> >> +}
>> >> +EXPORT_SYMBOL(rpmh_rsc_send_data);
>> >> +
>> >> diff --git a/include/dt-bindings/soc/qcom,rpmh-rsc.h b/include/dt-bindings/soc/qcom,rpmh-rsc.h
>> >> new file mode 100644
>> >> index 000000000000..868f998ea998
>> >> --- /dev/null
>> >> +++ b/include/dt-bindings/soc/qcom,rpmh-rsc.h
>> >> @@ -0,0 +1,14 @@
>> >> +/* SPDX-License-Identifier: GPL-2.0 */
>> >> +/*
>> >> + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
>> >> + */
>> >> +
>> >> +#ifndef __DT_QCOM_RPMH_RSC_H__
>> >> +#define __DT_QCOM_RPMH_RSC_H__
>> >> +
>> >> +#define SLEEP_TCS 0
>> >> +#define WAKE_TCS 1
>> >> +#define ACTIVE_TCS 2
>> >> +#define CONTROL_TCS 3
>> >
>> >Is anything besides the RSC node going to use these defines? Typically
>> >we have defines for things that are used by many nodes in many places
>> >and also in C code by drivers so this looks odd if it's mostly used for
>> >packing many properties into a single property on the DT side.
>> >
>> This definition is shared between the DT and the driver. Do you have
>> recommendation on sharing enums between DT and driver?
>
>I'm not aware of anything. I suppose the enum in the kernel header file
>could be assigned to the value of the DT binding defines?
>
> #include <dt-bindings/soc/qcom,rpmh-rsc.h>
>
> enum rpmh_state {
> RPMH_SLEEP_STATE = SLEEP_TCS,
> RPMH_WAKE_ONLY_STATE = WAKE_TCS,
> ...
> };
>
I wasn't talking about these enums. The driver uses SLEEP_TCS,
WAKE_TCS, etc. to probe the TCS types.
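
For example, probing could read a <type count> style property and match the
entries against these defines, roughly along these lines (a sketch; the
property name and struct are only illustrative):

	#define TCS_TYPE_NR	4	/* SLEEP, WAKE, ACTIVE, CONTROL */

	struct tcs_group {
		int type;	/* SLEEP_TCS, WAKE_TCS, ACTIVE_TCS or CONTROL_TCS */
		int num_tcs;
	};

	static int parse_tcs_config(struct device_node *np, struct tcs_group *tcs)
	{
		u32 val[2 * TCS_TYPE_NR];
		int i, ret;

		ret = of_property_read_u32_array(np, "qcom,tcs-config", val,
						 ARRAY_SIZE(val));
		if (ret)
			return ret;

		for (i = 0; i < TCS_TYPE_NR; i++) {
			tcs[i].type = val[2 * i];
			tcs[i].num_tcs = val[2 * i + 1];
		}

		return 0;
	}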

-- Lina

> #undef SLEEP_TCS
> #undef WAKE_TCS
> #undef ...
>
>This sort of defeats the point of the defines, but I suppose it works.

2018-04-17 06:03:08

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

Quoting Lina Iyer (2018-04-16 09:08:18)
> On Fri, Apr 13 2018 at 16:40 -0600, Stephen Boyd wrote:
> >Well it seems like an RSC contains many DRVs and those DRVs contain many
> >TCSes. This is what I get after talking with Bjorn on IRC.
> >
> > +--------------------------------------------------+ (0x00000)
> > | |
> > | DRV #0 |
> > | |
> > |---------- --------------| (tcs-offset (0xd00))
> > | DRV0_TCS0 |
> > | common space |
> > | cmd sequencer | 0xd00 + 0x14
> > | |
> > | DRV0_TCS1 |
> > | common space | 0xd00 + 0x2a0
> > | cmd sequencer | 0xd00 + 0x2a0 + 0x14
> > | |
> > | DRV0_TCS2 |
> > | |
> > | |
> > +--------------------------------------------------+ (0x10000)
> > | |
> > | DRV #1 |
> > | |
> > |---------- --------------| (tcs-offset)
> > | DRV1_TCS0 |
> > | DRV1_TCS1 |
> > | DRV1_TCS2 |
> > +--------------------------------------------------+ (0x20000)
> > | |
> > | DRV #2 |
> > | |
> > |---------- --------------|
> > | DRV2_TCS0 |
> > | DRV2_TCS1 |
> > | DRV2_TCS2 |
> > | DRV2_TCS3 |
> > | DRV2_TCS4 |
> > | DRV2_TCS5 |
> > +--------------------------------------------------+
> >
> >I think I understand it now. There aren't any "RSC common" registers
> >that are common to the entire RSC. Instead, everything goes into a DRV,
> >or into a common TCS space, or into a TCS "queue".
> >
> >> >Put another way, even if the "apps" RSC is complicated, we should be
> >> >describing it to the best of our abilities in the binding so that when
> >> >it is used by non-linux OSes things still work by simply tweaking the
> >> >drv-id that we use to pick the right things out of the node.
> >> >
> >> >Or we're describing the RSC but it's really a container node that
> >> >doesn't do much besides hold DRVs? So this is described at the wrong
> >> >level?
> >> What we are describing is a DRV, but a standalone DRV alone is useless
> >> without the necessary RSC registers. So its a unique RSC+DRV combination
> >> that is represented here.
> >>
> >
> >If my understanding is correct up there then the binding could either
> >describe a single RSC DRV, or it could describe all the RSC DRV
> >instances and interrupts going into the RSC "block" and then we can use
> >drv-id to pick the offset we jump to.
> >
> Your understanding is correct.
>
> >I imagine we don't have any practical use-case for the entire RSC space
> >because there aren't any common RSC registers to deal with.
> Not true.

So then my understanding is not correct! :/

>
> >So we've
> >boiled this all down to describing one DRV and then I wonder why we care
> >about having drv-id at all? It looks to be used to check for a max
> >number of TCS, but that's already described by DT so it doesn't seem
> >very useful to double check what the hardware can tells us.
> >
> There is also a number of commands per TCS (NCPT), that may way vary
> between different RSCs. The RSC of the application processor has 16
> commands in each TCS, but that is variable. I am not saying it cannot be
> described in DT, but it is something I read from the common RSC
> registers, currently.
> Also, I will using common/DRV0 registers to write wakeup time value,
> when the processor subsystem goes into power down. This is not DRV2
> register, but is a DRV0 register that we will have special access to.
> The patches for those I intend to publish, when we have support for
> sleep/suspend with this new architecture. So the address of the start of
> the RSC (=DRV0) is necessary.
>
> >Long story short, we can remove drv-id and just describe drvs by
> >themselves?
> Yes, we may. As long as I have a way to describe the register addresss
> of the start of the DRV (0x20000 for DRV#2) and the tcs-offset (0xd00),
> we can work with the RSC-DRV in the driver.
>

From this new information it seems like we need to know about all the
DRVs in the RSC then. So let's go back to my previous binding proposal
describing all of them and having the qcom,drv-id property describe
which one to use most of the time and hardcode the assumption to use
DRV0 in the driver when we need to do things for sleep/suspend? Then
we're back to describing the whole RSC and configuring the picker to
pick the right DRV depending on firmware configuration.

2018-04-18 19:33:55

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 02/10] dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs

On Mon, Apr 16 2018 at 00:01 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-16 09:08:18)
>> On Fri, Apr 13 2018 at 16:40 -0600, Stephen Boyd wrote:
>> >Well it seems like an RSC contains many DRVs and those DRVs contain many
>> >TCSes. This is what I get after talking with Bjorn on IRC.
>> >
>> > +--------------------------------------------------+ (0x00000)
>> > | |
>> > | DRV #0 |
>> > | |
>> > |---------- --------------| (tcs-offset (0xd00))
>> > | DRV0_TCS0 |
>> > | common space |
>> > | cmd sequencer | 0xd00 + 0x14
>> > | |
>> > | DRV0_TCS1 |
>> > | common space | 0xd00 + 0x2a0
>> > | cmd sequencer | 0xd00 + 0x2a0 + 0x14
>> > | |
>> > | DRV0_TCS2 |
>> > | |
>> > | |
>> > +--------------------------------------------------+ (0x10000)
>> > | |
>> > | DRV #1 |
>> > | |
>> > |---------- --------------| (tcs-offset)
>> > | DRV1_TCS0 |
>> > | DRV1_TCS1 |
>> > | DRV1_TCS2 |
>> > +--------------------------------------------------+ (0x20000)
>> > | |
>> > | DRV #2 |
>> > | |
>> > |---------- --------------|
>> > | DRV2_TCS0 |
>> > | DRV2_TCS1 |
>> > | DRV2_TCS2 |
>> > | DRV2_TCS3 |
>> > | DRV2_TCS4 |
>> > | DRV2_TCS5 |
>> > +--------------------------------------------------+
>> >
>> >I think I understand it now. There aren't any "RSC common" registers
>> >that are common to the entire RSC. Instead, everything goes into a DRV,
>> >or into a common TCS space, or into a TCS "queue".
>> >
>> >> >Put another way, even if the "apps" RSC is complicated, we should be
>> >> >describing it to the best of our abilities in the binding so that when
>> >> >it is used by non-linux OSes things still work by simply tweaking the
>> >> >drv-id that we use to pick the right things out of the node.
>> >> >
>> >> >Or we're describing the RSC but it's really a container node that
>> >> >doesn't do much besides hold DRVs? So this is described at the wrong
>> >> >level?
>> >> What we are describing is a DRV, but a standalone DRV alone is useless
>> >> without the necessary RSC registers. So its a unique RSC+DRV combination
>> >> that is represented here.
>> >>
>> >
>> >If my understanding is correct up there then the binding could either
>> >describe a single RSC DRV, or it could describe all the RSC DRV
>> >instances and interrupts going into the RSC "block" and then we can use
>> >drv-id to pick the offset we jump to.
>> >
>> Your understanding is correct.
>>
>> >I imagine we don't have any practical use-case for the entire RSC space
>> >because there aren't any common RSC registers to deal with.
>> Not true.
>
>So then my understanding is not correct! :/
>
>>
>> >So we've
>> >boiled this all down to describing one DRV and then I wonder why we care
>> >about having drv-id at all? It looks to be used to check for a max
>> >number of TCS, but that's already described by DT so it doesn't seem
>> >very useful to double check what the hardware can tells us.
>> >
>> There is also a number of commands per TCS (NCPT), that may way vary
>> between different RSCs. The RSC of the application processor has 16
>> commands in each TCS, but that is variable. I am not saying it cannot be
>> described in DT, but it is something I read from the common RSC
>> registers, currently.
>> Also, I will using common/DRV0 registers to write wakeup time value,
>> when the processor subsystem goes into power down. This is not DRV2
>> register, but is a DRV0 register that we will have special access to.
>> The patches for those I intend to publish, when we have support for
>> sleep/suspend with this new architecture. So the address of the start of
>> the RSC (=DRV0) is necessary.
>>
>> >Long story short, we can remove drv-id and just describe drvs by
>> >themselves?
>> Yes, we may. As long as I have a way to describe the register addresss
>> of the start of the DRV (0x20000 for DRV#2) and the tcs-offset (0xd00),
>> we can work with the RSC-DRV in the driver.
>>
>
>From this new information it seems like we need to know about all the
>DRVs in the RSC then. So let's go back to my previous binding proposal
>describing all of them and having the qcom,drv-id property describe
>which one to use most of the time and hardcode the assumption to use
>DRV0 in the driver when we need to do things for sleep/suspend? Then
>we're back to describing the whole RSC and configuring the picker to
>pick the right DRV depending on firmware configuration.
>
Hmm, I am okay with that, even though it might be a bit confusing to
define all that and not use them.

-- Lina


2018-04-19 06:13:36

by Stephen Boyd

[permalink] [raw]
Subject: Re: [PATCH v5 07/10] drivers: qcom: rpmh: cache sleep/wake state requests

Quoting Lina Iyer (2018-04-05 09:18:31)
> Active state requests are sent immediately to the mailbox controller,

Drive by note, mailbox went away from this code so grep 'mailbox' or
'mbox' on these patches should come up with zero hits.

> while sleep and wake state requests are cached in this driver to avoid
> taxing the mailbox controller repeatedly. The cached values will be sent
> to the controller when the rpmh_flush() is called.
>

2018-04-19 15:08:19

by Lina Iyer

[permalink] [raw]
Subject: Re: [PATCH v5 07/10] drivers: qcom: rpmh: cache sleep/wake state requests

On Wed, Apr 18 2018 at 00:12 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-04-05 09:18:31)
>> Active state requests are sent immediately to the mailbox controller,
>
>Drive by note, mailbox went away from this code so grep 'mailbox' or
>'mbox' on these patches should come up with zero hits.
>
Will do. Thanks.

-- Lina

>> while sleep and wake state requests are cached in this driver to avoid
>> taxing the mailbox controller repeatedly. The cached values will be sent
>> to the controller when the rpmh_flush() is called.
>>