2021-06-01 10:25:53

by Cristian Marussi

Subject: [PATCH v2 0/5] Review/Extend SCMI Transport Core layer

Hi all,

this short series is meant to review and extend a couple of SCMI transport
core layer mechanisms in order to ease the interaction with more complex
SCMI transport drivers like the upcoming virtio-scmi and optee-scmi.

The commits in this series do not, and *should not*, have any impact on or
require any change in the currently existing SCMI transports.

The three main areas of change are:

- introduce monotonically increasing message tokens: change the policy
used by the OSPM agent to select sequence numbers for outgoing messages
in order to minimize the re-use of recently utilized tokens, so as to
mitigate the possible misbehaviours on reception of late-arriving 'ghost'
replies from the platform, which are currently hard to identify and discard.
More details and ASCII art of the involved logic follow in the related
commit :D

- introduce optional support for delegated xfers: expose a few helpers
from the SCMI core transport layer that can be used by the transports
to actively query the core for existing in-flight transactions already
associated with a received message header, or to obtain a brand new xfer
to accommodate a just-received notification.
The general idea is to optionally let the transports process more of
their transport-specific payload early on, filling the xfer already
naturally associated with the received message header, so that the
transport-specific implementations can get rid of their message envelope
structures early (before even calling into the core scmi_rx_callback())
and so avoid upfront the need to keep track of the state of such
transport-specific message envelopes until the core has processed them
to completion or timeout (all this without modifying the signature of
scmi_rx_callback()). Any feedback, even on a more sensible naming for
this optional feature, would be highly appreciated :D

- add optional support for transport-specific init/exit functions: when a
transport provides .init/.exit functions in its scmi_desc descriptor,
they will be called early on during SCMI core stack init/deinit, well
ahead of SCMI core probing.
This allows transports like virtio/optee to perform some specific setup
early on, such as registering as drivers with some other kernel subsystem
like virtio or optee, before the main SCMI stack is probed.
This early init cannot be done through the usual kernel module_* methods
because an SCMI transport implementation is not a full-fledged driver but
is compiled as part of the SCMI core itself, so using module_init() in a
transport would then make the SCMI core fail to build as a single module
including the desired transports.
A rough sketch of how a transport could hook into the last two items
follows below.
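
As a purely illustrative sketch (assuming a made-up example transport; the
example_* names are invented, while the scmi_desc fields are the existing
ones plus the ones introduced by this series), such hooks and the
delegation flag could be wired up roughly as:

	/* Called well before the SCMI core platform driver is probed */
	static int example_transport_init(void)
	{
		/* e.g. register with some other subsystem (virtio, optee, ...) */
		return 0;
	}

	static void example_transport_exit(void)
	{
		/* undo whatever example_transport_init() set up */
	}

	const struct scmi_desc scmi_example_desc = {
		.init = example_transport_init,
		.exit = example_transport_exit,
		.ops = &example_transport_ops,
		.max_rx_timeout_ms = 30,
		.max_msg = 20,
		.max_msg_size = 128,
		/* opt-in to the delegated xfers helpers */
		.support_xfers_delegation = true,
	};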

This, as said, is intended to be preparatory work for the rework and
simplification of the WIP virtio-scmi series. (init/exit support is indeed
needed also by the upcoming optee transport)

The series is based on sudeep/for-next [1] on top of commit:

commit a3b884cef873 ("firmware: arm_scmi: Add clock management to the SCMI
power domain")

Any feedback welcome.

Thanks,

Cristian

---
v1 --> v2:
- fixed a lot of typos in "firmware: arm_scmi: Introduce monotonically
increasing tokens"
- added init/exit optional support patch

[1]:https://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git/log/?h=for-next/scmi


Cristian Marussi (5):
firmware: arm_scmi: reset_rx_to_maxsz during async commands
firmware: arm_scmi: Add support for type handling in common functions
firmware: arm_scmi: Add transport optional init/exit support
firmware: arm_scmi: Introduce monotonically increasing tokens
firmware: arm_scmi: Introduce delegated xfers support

drivers/firmware/arm_scmi/common.h | 51 +++-
drivers/firmware/arm_scmi/driver.c | 449 ++++++++++++++++++++++++++---
2 files changed, 460 insertions(+), 40 deletions(-)

--
2.17.1


2021-06-01 10:26:02

by Cristian Marussi

Subject: [PATCH v2 1/5] firmware: arm_scmi: reset_rx_to_maxsz during async commands

During an async command execution the rx buffer length is initially set to
max_msg_sz when the synchronous part of the command is sent; once the
synchronous part completes, the transport layer waits for the delayed
response, which will be processed using the same xfer descriptor initially
allocated. However, the synchronous response received meanwhile will have
shrunk the rx buffer length to the effective response payload length.

Raise the rx buffer length again to max_msg_sz while waiting for the
delayed response, and add an informational error message on the delayed
response timeout path.
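
For illustration only, the sequence at issue is roughly as follows (a
sketch using the helper names of the existing core code; the transport's
fetch_response() is what shrinks the length):

	xfer->rx.len = max_msg_size;	/* set at xfer_get_init() time */
	do_xfer(ph, xfer);		/* sync part completes: fetch_response()
					 * shrinks xfer->rx.len to the sync
					 * response payload length */
	reset_rx_to_maxsz(ph, xfer);	/* this patch: restore rx.len before
					 * waiting for the delayed response */
	wait_for_completion_timeout(xfer->async_done, timeout);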

Fixes: 58ecdf03dbb9 ("firmware: arm_scmi: Add support for asynchronous commands and delayed response")
Signed-off-by: Cristian Marussi <[email protected]>
---
drivers/firmware/arm_scmi/driver.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 66eb3f0e5daf..75141b90ae53 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -513,8 +513,16 @@ static int do_xfer_with_response(const struct scmi_protocol_handle *ph,
xfer->async_done = &async_response;

ret = do_xfer(ph, xfer);
- if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout))
- ret = -ETIMEDOUT;
+ if (!ret) {
+ /* rx.len would have been shrunk in the sync do_xfer above */
+ reset_rx_to_maxsz(ph, xfer);
+ if (!wait_for_completion_timeout(xfer->async_done, timeout)) {
+ dev_err(ph->dev,
+ "timed out in delayed resp(caller: %pS)\n",
+ (void *)_RET_IP_);
+ ret = -ETIMEDOUT;
+ }
+ }

xfer->async_done = NULL;
return ret;
--
2.17.1

2021-06-01 10:26:06

by Cristian Marussi

Subject: [PATCH v2 2/5] firmware: arm_scmi: Add support for type handling in common functions

Add SCMI type handling to pack/unpack_scmi_header common helper functions.

Signed-off-by: Cristian Marussi <[email protected]>
---
Needed later in the series to support delegated xfers
---
drivers/firmware/arm_scmi/common.h | 6 +++++-
drivers/firmware/arm_scmi/driver.c | 1 +
2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 228bf4a71d23..4bd43863c306 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -70,6 +70,7 @@ struct scmi_msg_resp_prot_version {
*
* @id: The identifier of the message being sent
* @protocol_id: The identifier of the protocol used to send @id message
+ * @type: The SCMI type for this message
* @seq: The token to identify the message. When a message returns, the
* platform returns the whole message header unmodified including the
* token
@@ -80,6 +81,7 @@ struct scmi_msg_resp_prot_version {
struct scmi_msg_hdr {
u8 id;
u8 protocol_id;
+ u8 type;
u16 seq;
u32 status;
bool poll_completion;
@@ -89,13 +91,14 @@ struct scmi_msg_hdr {
* pack_scmi_header() - packs and returns 32-bit header
*
* @hdr: pointer to header containing all the information on message id,
- * protocol id and sequence id.
+ * protocol id, sequence id and type.
*
* Return: 32-bit packed message header to be sent to the platform.
*/
static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr)
{
return FIELD_PREP(MSG_ID_MASK, hdr->id) |
+ FIELD_PREP(MSG_TYPE_MASK, hdr->type) |
FIELD_PREP(MSG_TOKEN_ID_MASK, hdr->seq) |
FIELD_PREP(MSG_PROTOCOL_ID_MASK, hdr->protocol_id);
}
@@ -110,6 +113,7 @@ static inline void unpack_scmi_header(u32 msg_hdr, struct scmi_msg_hdr *hdr)
{
hdr->id = MSG_XTRACT_ID(msg_hdr);
hdr->protocol_id = MSG_XTRACT_PROT_ID(msg_hdr);
+ hdr->type = MSG_XTRACT_TYPE(msg_hdr);
}

/**
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 75141b90ae53..b4c69141eca1 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -568,6 +568,7 @@ static int xfer_get_init(const struct scmi_protocol_handle *ph,

xfer->tx.len = tx_size;
xfer->rx.len = rx_size ? : info->desc->max_msg_size;
+ xfer->hdr.type = MSG_TYPE_COMMAND;
xfer->hdr.id = msg_id;
xfer->hdr.protocol_id = pi->proto->id;
xfer->hdr.poll_completion = false;
--
2.17.1

2021-06-01 10:26:33

by Cristian Marussi

Subject: [PATCH v2 3/5] firmware: arm_scmi: Add transport optional init/exit support

Some SCMI transports could need to perform some transport-specific setup
before they can be used by the SCMI core transport layer: typically this
early setup consists of registering with some other kernel subsystem.

Add the optional capability for a transport to provide a couple of .init
and .exit functions that are guaranteed to be called early during the SCMI
core initialization phase, well before the SCMI core probing step.

[ Peter: Adapted RFC patch by Cristian for submission to upstream. ]
Signed-off-by: Peter Hilber <[email protected]>
Signed-off-by: Cristian Marussi <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 8 ++++
drivers/firmware/arm_scmi/driver.c | 59 ++++++++++++++++++++++++++++++
2 files changed, 67 insertions(+)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 4bd43863c306..530784bc17fa 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -321,6 +321,12 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
/**
* struct scmi_desc - Description of SoC integration
*
+ * @init: An optional function that a transport can provide to initialize some
+ * transport-specific setup during SCMI core initialization, so ahead of
+ * SCMI core probing.
+ * @exit: An optional function that a transport can provide to de-initialize
+ * some transport-specific setup during SCMI core de-initialization, so
+ * after SCMI core removal.
* @ops: Pointer to the transport specific ops structure
* @max_rx_timeout_ms: Timeout for communication with SoC (in Milliseconds)
* @max_msg: Maximum number of messages that can be pending
@@ -328,6 +334,8 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
* @max_msg_size: Maximum size of data per message that can be handled.
*/
struct scmi_desc {
+ int (*init)(void);
+ void (*exit)(void);
const struct scmi_transport_ops *ops;
int max_rx_timeout_ms;
int max_msg;
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index b4c69141eca1..b1f1f5ac58f5 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -1595,10 +1595,67 @@ static struct platform_driver scmi_driver = {
.remove = scmi_remove,
};

+/**
+ * __scmi_transports_setup - Common helper to call transport-specific
+ * .init/.exit code if provided.
+ *
+ * @init: A flag to distinguish between init and exit.
+ *
+ * Note that, if provided, we invoke .init/.exit functions for all the
+ * transports currently compiled in.
+ *
+ * Return: 0 on Success.
+ */
+static inline int __scmi_transports_setup(bool init)
+{
+ int ret = 0;
+ const struct of_device_id *trans;
+
+ for (trans = scmi_of_match; trans->data; trans++) {
+ const struct scmi_desc *tdesc = trans->data;
+
+ if ((init && !tdesc->init) || (!init && !tdesc->exit))
+ continue;
+
+ pr_debug("SCMI %sInitializing %s transport\n",
+ init ? "" : "De-", trans->compatible);
+
+ if (init)
+ ret = tdesc->init();
+ else
+ tdesc->exit();
+
+ if (ret) {
+ pr_err("SCMI transport %s FAILED initialization!\n",
+ trans->compatible);
+ break;
+ }
+ }
+
+ return ret;
+}
+
+static int __init scmi_transports_init(void)
+{
+ return __scmi_transports_setup(true);
+}
+
+static void __exit scmi_transports_exit(void)
+{
+ __scmi_transports_setup(false);
+}
+
static int __init scmi_driver_init(void)
{
+ int ret;
+
scmi_bus_init();

+ /* Initialize any compiled-in transport which provided an init/exit */
+ ret = scmi_transports_init();
+ if (ret)
+ return ret;
+
scmi_base_register();

scmi_clock_register();
@@ -1625,6 +1682,8 @@ static void __exit scmi_driver_exit(void)
scmi_voltage_unregister();
scmi_system_unregister();

+ scmi_transports_exit();
+
scmi_bus_exit();

platform_driver_unregister(&scmi_driver);
--
2.17.1

2021-06-01 10:27:00

by Cristian Marussi

Subject: [PATCH v2 4/5] firmware: arm_scmi: Introduce monotonically increasing tokens

Tokens are sequence numbers embedded in each SCMI message header: they
are used to correlate commands with responses (and delayed responses), but
their usage and policy of selection is entirely up to the caller (usually
the OSPM agent), while they are completely opaque to the callee (the SCMI
server platform), which merely copies them back from the command into the
response message header.
This also means that the platform does not, cannot and should not enforce
any kind of policy on received messages depending on the contained
sequence number: the platform can perfectly well handle concurrent
requests carrying the same identifying token, should that ever happen.

Moreover the platform is not required to produce in-order responses to
agent requests; the only constraint in this regard is that, in the case of
an asynchronous message, the delayed response must be sent after the
immediate response for the synchronous part of the command transaction.

Currently the SCMI stack of the OSPM agent selects a token for egressing
commands by picking the lowest possible number which is not already in use
by an existing in-flight transaction; in other words, we immediately reuse
any token after its transaction has completed or timed out. This policy
indeed simplifies the management and lookup of tokens and associated
xfers.

Under the above assumptions and constraints, since there is really no state
shared between the agent and the platform to let the platform know when a
token and its associated message have timed out, the current policy of
early token reuse can easily lead to a situation in which a spurious or
late-received response (or delayed response), related to an old, stale and
timed-out transaction, is wrongly associated with a newer valid in-flight
xfer that just happens to have reused the same token.

This misbehaviour on such ghost responses is more easily exposed on those
transports that naturally have a higher level of parallelism in processing
multiple concurrent in-flight messages.

This commit introduces a new policy of selection of tokens for the OSPM
agent: each new transfer now gets the next available and monotonically
increasing token, until tokens are exhausted and the counter rolls over.

Such a new policy mitigates the above issues with ghost responses since
tokens are now reused as late as possible (ideally only once the counter
rolls over) and so it is much easier to identify such ghost responses to
stale timed-out transactions: this also helps in simplifying the specific
transports' implementation, since stale transport messages can be easily
identified and discarded early on in the rx path without the need to cross
check their actual state with the core transport layer.
This mitigation is even more effective when, as is usually the case, the
maximum number of pending messages is capped by the platform to a much
lower number than the whole possible range of token values (2^10): with,
say, 20 pending messages allowed against 1024 possible tokens, a given
token value is picked again only after roughly a thousand other
transactions have gone out.

This internal policy change in the core SCMI transport layer is fully
transparent to the specific transports, so it does not and should not have
any impact on the transports' implementation.

The empirically observed cost of this new procedure of token selection
amounts, in the best case, to ~10us out of an observed full transaction
cost of 3ms for the completion of a synchronous sensor reading command on
a platform supporting command completion interrupts.

Signed-off-by: Cristian Marussi <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 23 +++
drivers/firmware/arm_scmi/driver.c | 245 +++++++++++++++++++++++++----
2 files changed, 234 insertions(+), 34 deletions(-)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 530784bc17fa..f49465086579 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -14,7 +14,10 @@
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>
+#include <linux/hashtable.h>
+#include <linux/list.h>
#include <linux/module.h>
+#include <linux/refcount.h>
#include <linux/scmi_protocol.h>
#include <linux/types.h>

@@ -127,6 +130,21 @@ struct scmi_msg {
size_t len;
};

+/**
+ * An helper macro to lookup an xfer from the @pending_xfers hashtable
+ * using the message sequence number token as a key.
+ */
+#define XFER_FIND(__ht, __k) \
+({ \
+ typeof(__k) k_ = __k; \
+ struct scmi_xfer *xfer_ = NULL; \
+ \
+ hash_for_each_possible((__ht), xfer_, node, k_) \
+ if (xfer_->hdr.seq == k_) \
+ break; \
+ xfer_; \
+})
+
/**
* struct scmi_xfer - Structure representing a message flow
*
@@ -138,6 +156,9 @@ struct scmi_msg {
* buffer for the rx path as we use for the tx path.
* @done: command message transmit completion event
* @async_done: pointer to delayed response message received event completion
+ * @users: A refcount to track the active users for this xfer
+ * @node: An hlist_node reference used to store this xfer, alternatively, on
+ * the free list @free_xfers or in the @pending_xfers hashtable
*/
struct scmi_xfer {
int transfer_id;
@@ -146,6 +167,8 @@ struct scmi_xfer {
struct scmi_msg rx;
struct completion done;
struct completion *async_done;
+ refcount_t users;
+ struct hlist_node node;
};

struct scmi_xfer_ops;
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index b1f1f5ac58f5..9a00028c69aa 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -21,6 +21,7 @@
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/ktime.h>
+#include <linux/hashtable.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of_address.h>
@@ -65,19 +66,29 @@ struct scmi_requested_dev {
struct list_head node;
};

+#define SCMI_PENDING_XFERS_HT_ORDER_SZ 9
+
/**
* struct scmi_xfers_info - Structure to manage transfer information
*
- * @xfer_block: Preallocated Message array
* @xfer_alloc_table: Bitmap table for allocated messages.
* Index of this bitmap table is also used for message
* sequence identifier.
* @xfer_lock: Protection for message allocation
+ * @last_token: A counter to use as base to generate for monotonically
+ * increasing tokens.
+ * @free_xfers: A free list for available to use xfers. It is initialized with
+ * a number of xfers equal to the maximum allowed in-flight
+ * messages.
+ * @pending_xfers: An hashtable, indexed by msg_hdr.seq, used to keep all the
+ * currently in-flight messages.
*/
struct scmi_xfers_info {
- struct scmi_xfer *xfer_block;
unsigned long *xfer_alloc_table;
spinlock_t xfer_lock;
+ atomic_t last_token;
+ struct hlist_head free_xfers;
+ DECLARE_HASHTABLE(pending_xfers, SCMI_PENDING_XFERS_HT_ORDER_SZ);
};

/**
@@ -203,6 +214,117 @@ void *scmi_notification_instance_data_get(const struct scmi_handle *handle)
return info->notify_priv;
}

+/**
+ * scmi_xfer_token_set - Reserve and set new token for the xfer at hand
+ *
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ * @xfer: The xfer to act upon
+ *
+ * Pick the next unused monotonically increasing token and set it into
+ * xfer->hdr.seq: picking a monotonically increasing value avoids immediate
+ * reuse of freshly completed or timed-out xfers, thus mitigating the risk
+ * of incorrect association of a late and expired xfer with a live in-flight
+ * transaction, both happening to re-use the same token identifier.
+ *
+ * Since platform is NOT required to answer our request in-order we should
+ * account for a few rare but possible scenarios:
+ *
+ * - exactly 'next_token' may be NOT available so pick xfer_id >= next_token
+ * using find_next_zero_bit() starting from candidate next_token bit
+ *
+ * - all tokens ahead upto (MSG_TOKEN_ID_MASK - 1) are used in-flight but we
+ * are plenty of free tokens at start, so try a second pass using
+ * find_next_zero_bit() and starting from 0.
+ *
+ * X = used in-flight
+ *
+ * Normal
+ * ------
+ *
+ * |- xfer_id picked
+ * -----------+----------------------------------------------------------
+ * | | |X|X|X| | | | | | ... ... ... ... ... ... ... ... ... ... ...|X|X|
+ * ----------------------------------------------------------------------
+ * ^
+ * |- next_token
+ *
+ * Out-of-order pending at start
+ * -----------------------------
+ *
+ * |- xfer_id picked, last_token fixed
+ * -----+----------------------------------------------------------------
+ * |X|X| | | | |X|X| ... ... ... ... ... ... ... ... ... ... ... ...|X| |
+ * ----------------------------------------------------------------------
+ * ^
+ * |- next_token
+ *
+ *
+ * Out-of-order pending at end
+ * ---------------------------
+ *
+ * |- xfer_id picked, last_token fixed
+ * -----+----------------------------------------------------------------
+ * |X|X| | | | |X|X| ... ... ... ... ... ... ... ... ... ... |X|X|X||X|X|
+ * ----------------------------------------------------------------------
+ * ^
+ * |- next_token
+ *
+ * Context: Assumes to be called with @xfer_lock already acquired.
+ *
+ * Return: 0 on Success or error
+ */
+static int scmi_xfer_token_set(struct scmi_xfers_info *minfo,
+ struct scmi_xfer *xfer)
+{
+ unsigned long xfer_id, next_token;
+
+ /* Pick a candidate monotonic token in range [0, MSG_TOKEN_MAX - 1] */
+ next_token = (atomic_inc_return(&minfo->last_token) &
+ (MSG_TOKEN_MAX - 1));
+
+ /* Pick the next available xfer_id >= next_token */
+ xfer_id = find_next_zero_bit(minfo->xfer_alloc_table,
+ MSG_TOKEN_MAX, next_token);
+ if (xfer_id == MSG_TOKEN_MAX) {
+ /*
+ * After heavily out-of-order responses, there are no free
+ * tokens ahead, but only at start of xfer_alloc_table so
+ * try again from the beginning.
+ */
+ xfer_id = find_next_zero_bit(minfo->xfer_alloc_table,
+ MSG_TOKEN_MAX, 0);
+ /*
+ * Something is wrong if we got here since there can be a
+ * maximum number of (MSG_TOKEN_MAX - 1) in-flight messages
+ * but we have not found any free token [0, MSG_TOKEN_MAX - 1].
+ */
+ if (WARN_ON_ONCE(xfer_id == MSG_TOKEN_MAX))
+ return -ENOMEM;
+ }
+
+ /* Update +/- last_token accordingly if we skipped some hole */
+ if (xfer_id != next_token)
+ atomic_add((int)(xfer_id - next_token), &minfo->last_token);
+
+ /* Set in-flight */
+ set_bit(xfer_id, minfo->xfer_alloc_table);
+ xfer->hdr.seq = (u16)xfer_id;
+
+ return 0;
+}
+
+/**
+ * scmi_xfer_token_clear - Release the token
+ *
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ * @xfer: The xfer to act upon
+ */
+static inline void scmi_xfer_token_clear(struct scmi_xfers_info *minfo,
+ struct scmi_xfer *xfer)
+{
+ clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
+}
+
/**
* scmi_xfer_get() - Allocate one message
*
@@ -212,37 +334,50 @@ void *scmi_notification_instance_data_get(const struct scmi_handle *handle)
* Helper function which is used by various message functions that are
* exposed to clients of this driver for allocating a message traffic event.
*
- * This function can sleep depending on pending requests already in the system
- * for the SCMI entity. Further, this also holds a spinlock to maintain
- * integrity of internal data structures.
+ * Picks an xfer from the free list @free_xfers (if any available), sets a
+ * monotonically increasing token and stores the inflight xfer into the
+ * @pending_xfers hashtable for later retrieval.
+ *
+ * The successfully initialized xfer is refcounted.
+ *
+ * Context: Holds @xfer_lock while manipulating @xfer_alloc_table and
+ * @free_xfers.
*
* Return: 0 if all went fine, else corresponding error.
*/
static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
struct scmi_xfers_info *minfo)
{
- u16 xfer_id;
+ int ret;
+ unsigned long flags;
struct scmi_xfer *xfer;
- unsigned long flags, bit_pos;
- struct scmi_info *info = handle_to_scmi_info(handle);

- /* Keep the locked section as small as possible */
spin_lock_irqsave(&minfo->xfer_lock, flags);
- bit_pos = find_first_zero_bit(minfo->xfer_alloc_table,
- info->desc->max_msg);
- if (bit_pos == info->desc->max_msg) {
+ if (hlist_empty(&minfo->free_xfers)) {
spin_unlock_irqrestore(&minfo->xfer_lock, flags);
return ERR_PTR(-ENOMEM);
}
- set_bit(bit_pos, minfo->xfer_alloc_table);
- spin_unlock_irqrestore(&minfo->xfer_lock, flags);

- xfer_id = bit_pos;
+ /* grab an xfer from the free_list */
+ xfer = hlist_entry(minfo->free_xfers.first, struct scmi_xfer, node);
+ hlist_del_init(&xfer->node);

- xfer = &minfo->xfer_block[xfer_id];
- xfer->hdr.seq = xfer_id;
- reinit_completion(&xfer->done);
- xfer->transfer_id = atomic_inc_return(&transfer_last_id);
+ /* Pick and set monotonic token */
+ ret = scmi_xfer_token_set(minfo, xfer);
+ if (!ret) {
+ hash_add(minfo->pending_xfers, &xfer->node, xfer->hdr.seq);
+ } else {
+ dev_err(handle->dev, "Failed to get monotonic token %d\n", ret);
+ hlist_add_head(&xfer->node, &minfo->free_xfers);
+ xfer = ERR_PTR(ret);
+ }
+ spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+
+ if (!IS_ERR(xfer)) {
+ refcount_set(&xfer->users, 1);
+ reinit_completion(&xfer->done);
+ xfer->transfer_id = atomic_inc_return(&transfer_last_id);
+ }

return xfer;
}
@@ -253,6 +388,9 @@ static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
* @minfo: Pointer to Tx/Rx Message management info based on channel type
* @xfer: message that was reserved by scmi_xfer_get
*
+ * After refcount check, possibly release an xfer, clearing the token slot,
+ * removing xfer from @pending_xfers and putting it back into free_xfers.
+ *
* This holds a spinlock to maintain integrity of internal data structures.
*/
static void
@@ -260,16 +398,41 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
{
unsigned long flags;

- /*
- * Keep the locked section as small as possible
- * NOTE: we might escape with smp_mb and no lock here..
- * but just be conservative and symmetric.
- */
spin_lock_irqsave(&minfo->xfer_lock, flags);
- clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
+ if (refcount_dec_and_test(&xfer->users)) {
+ scmi_xfer_token_clear(minfo, xfer);
+ hash_del(&xfer->node);
+ hlist_add_head(&xfer->node, &minfo->free_xfers);
+ }
spin_unlock_irqrestore(&minfo->xfer_lock, flags);
}

+/**
+ * scmi_xfer_lookup_unlocked - Helper to lookup an xfer_id
+ *
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ * @xfer_id: Token ID to lookup in @pending_xfers
+ *
+ * Refcounting is untouched.
+ *
+ * Context: Assumes to be called with @xfer_lock already acquired.
+ *
+ * Return: A valid xfer on Success or error otherwise
+ */
+static struct scmi_xfer *
+scmi_xfer_lookup_unlocked(struct scmi_xfers_info *minfo, u16 xfer_id)
+{
+ struct scmi_xfer *xfer = NULL;
+
+ if (xfer_id >= MSG_TOKEN_MAX)
+ return ERR_PTR(-EINVAL);
+
+ if (test_bit(xfer_id, minfo->xfer_alloc_table))
+ xfer = XFER_FIND(minfo->pending_xfers, xfer_id);
+
+ return xfer ?: ERR_PTR(-EINVAL);
+}
+
static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
{
struct scmi_xfer *xfer;
@@ -306,19 +469,22 @@ static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
static void scmi_handle_response(struct scmi_chan_info *cinfo,
u16 xfer_id, u8 msg_type)
{
+ unsigned long flags;
struct scmi_xfer *xfer;
struct device *dev = cinfo->dev;
struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
struct scmi_xfers_info *minfo = &info->tx_minfo;

/* Are we even expecting this? */
- if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
+ spin_lock_irqsave(&minfo->xfer_lock, flags);
+ xfer = scmi_xfer_lookup_unlocked(minfo, xfer_id);
+ spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+ if (IS_ERR(xfer)) {
dev_err(dev, "message for %d is not expected!\n", xfer_id);
info->desc->ops->clear_channel(cinfo);
return;
}

- xfer = &minfo->xfer_block[xfer_id];
/*
* Even if a response was indeed expected on this slot at this point,
* a buggy platform could wrongly reply feeding us an unexpected
@@ -1036,18 +1202,25 @@ static int __scmi_xfer_info_init(struct scmi_info *sinfo,
return -EINVAL;
}

- info->xfer_block = devm_kcalloc(dev, desc->max_msg,
- sizeof(*info->xfer_block), GFP_KERNEL);
- if (!info->xfer_block)
- return -ENOMEM;
+ hash_init(info->pending_xfers);

- info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(desc->max_msg),
+ /* Allocate a bitmask sized to hold MSG_TOKEN_MAX tokens */
+ info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(MSG_TOKEN_MAX),
sizeof(long), GFP_KERNEL);
if (!info->xfer_alloc_table)
return -ENOMEM;

- /* Pre-initialize the buffer pointer to pre-allocated buffers */
- for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) {
+ /*
+ * Preallocate a number of xfers equal to max inflight messages,
+ * pre-initialize the buffer pointer to pre-allocated buffers and
+ * attach all of them to the free list
+ */
+ INIT_HLIST_HEAD(&info->free_xfers);
+ for (i = 0; i < desc->max_msg; i++) {
+ xfer = devm_kzalloc(dev, sizeof(*xfer), GFP_KERNEL);
+ if (!xfer)
+ return -ENOMEM;
+
xfer->rx.buf = devm_kcalloc(dev, sizeof(u8), desc->max_msg_size,
GFP_KERNEL);
if (!xfer->rx.buf)
@@ -1055,8 +1228,12 @@ static int __scmi_xfer_info_init(struct scmi_info *sinfo,

xfer->tx.buf = xfer->rx.buf;
init_completion(&xfer->done);
+
+ /* Add initialized xfer to the free list */
+ hlist_add_head(&xfer->node, &info->free_xfers);
}

+ atomic_set(&info->last_token, -1);
spin_lock_init(&info->xfer_lock);

return 0;
--
2.17.1

2021-06-01 10:27:09

by Cristian Marussi

Subject: [PATCH v2 5/5] firmware: arm_scmi: Introduce delegated xfers support

Introduce optional support for delegated xfers allocation.

An SCMI transport can optionally declare that it supports delegated xfers
and then use a few helper functions exposed by the core SCMI transport
layer to query the core for an existing in-flight transfer matching a
provided message header, or alternatively and transparently obtain a brand
new xfer to handle a freshly received notification message.
In both cases the provided message header is uniquely mapped to a specific
xfer, with the header itself acting as the key.

In this way such a transport can properly store its own transport-specific
payload into the xfer uniquely associated with the message header before
even calling into the core scmi_rx_callback() in the usual way, so that
the transport-specific message envelope structures can be freed early
and there is no more need to keep track of their status until the core
fully processes the xfer to completion or times out.

scmi_rx_callback() does not need to be modified to carry additional
transport-specific ancillary data related to such message envelopes, since
a unique natural association is established between the xfer and the
related message header.

Existing transports that do not need anything of the above will continue
to work as before without any change.
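
Purely as an illustration of the intended usage, a minimal sketch of the
rx path of a delegating transport could look roughly as below (the
example_* names and the envelope structure are made up; only
scmi_transfer_acquire()/scmi_transfer_release() and scmi_rx_callback()
are the real interfaces, and bounds checking is omitted for brevity):

	static void example_transport_rx(struct scmi_chan_info *cinfo,
					 struct example_envelope *env)
	{
		u32 msg_hdr = le32_to_cpu(env->msg_header);
		struct scmi_xfer *xfer;

		/* Lookup (or allocate, for notifications) the matching xfer */
		if (scmi_transfer_acquire(cinfo, &msg_hdr, &xfer))
			return;

		/* Stash the transport specific payload into the xfer ... */
		memcpy(xfer->rx.buf, env->payload, env->payload_len);
		xfer->rx_raw_len = env->payload_len;

		/* ... so that the envelope can be freed right away */
		example_envelope_free(env);

		/* Hand over to the core in the usual way */
		scmi_rx_callback(cinfo, msg_hdr);

		scmi_transfer_release(cinfo, xfer);
	}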

Signed-off-by: Cristian Marussi <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 14 +++
drivers/firmware/arm_scmi/driver.c | 132 ++++++++++++++++++++++++++++-
2 files changed, 143 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index f49465086579..fa5a209bd792 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -80,6 +80,7 @@ struct scmi_msg_resp_prot_version {
* @status: Status of the transfer once it's complete
* @poll_completion: Indicate if the transfer needs to be polled for
* completion or interrupt mode is used
+ * @saved_hdr: A copy of the original msg_hdr
*/
struct scmi_msg_hdr {
u8 id;
@@ -88,6 +89,7 @@ struct scmi_msg_hdr {
u16 seq;
u32 status;
bool poll_completion;
+ u32 saved_hdr;
};

/**
@@ -154,6 +156,9 @@ struct scmi_msg {
* @rx: Receive message, the buffer should be pre-allocated to store
* message. If request-ACK protocol is used, we can reuse the same
* buffer for the rx path as we use for the tx path.
+ * @rx_raw_len: A field which can be optionally used by a specific transport
+ * to save transport specific message length
+ * It is not used by the SCMI transport core
* @done: command message transmit completion event
* @async_done: pointer to delayed response message received event completion
* @users: A refcount to track the active users for this xfer
@@ -165,6 +170,7 @@ struct scmi_xfer {
struct scmi_msg_hdr hdr;
struct scmi_msg tx;
struct scmi_msg rx;
+ size_t rx_raw_len;
struct completion done;
struct completion *async_done;
refcount_t users;
@@ -355,6 +361,9 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
* @max_msg: Maximum number of messages that can be pending
* simultaneously in the system
* @max_msg_size: Maximum size of data per message that can be handled.
+ * @support_xfers_delegation: A flag to indicate if the described transport
+ * will handle delegated xfers, so the core can
+ * derive proper related assumptions.
*/
struct scmi_desc {
int (*init)(void);
@@ -363,6 +372,7 @@ struct scmi_desc {
int max_rx_timeout_ms;
int max_msg;
int max_msg_size;
+ bool support_xfers_delegation;
};

extern const struct scmi_desc scmi_mailbox_desc;
@@ -391,4 +401,8 @@ void scmi_notification_instance_data_set(const struct scmi_handle *handle,
void *priv);
void *scmi_notification_instance_data_get(const struct scmi_handle *handle);

+int scmi_transfer_acquire(struct scmi_chan_info *cinfo, u32 *msg_hdr,
+ struct scmi_xfer **xfer);
+void scmi_transfer_release(struct scmi_chan_info *cinfo,
+ struct scmi_xfer *xfer);
#endif /* _SCMI_COMMON_H */
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 9a00028c69aa..4478fe897f31 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -433,6 +433,124 @@ scmi_xfer_lookup_unlocked(struct scmi_xfers_info *minfo, u16 xfer_id)
return xfer ?: ERR_PTR(-EINVAL);
}

+/**
+ * scmi_xfer_acquire - Helper to lookup and acquire an xfer
+ *
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ * @xfer_id: Token ID to lookup in @pending_xfers
+ *
+ * When a valid xfer is found for the provided @xfer_id, reference counting is
+ * properly updated.
+ *
+ * Return: A valid @xfer on Success or error otherwise.
+ */
+static struct scmi_xfer *
+scmi_xfer_acquire(struct scmi_xfers_info *minfo, u16 xfer_id)
+{
+ unsigned long flags;
+ struct scmi_xfer *xfer;
+
+ spin_lock_irqsave(&minfo->xfer_lock, flags);
+ xfer = scmi_xfer_lookup_unlocked(minfo, xfer_id);
+ if (!IS_ERR(xfer))
+ refcount_inc(&xfer->users);
+ spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+
+ return xfer;
+}
+
+/**
+ * scmi_transfer_acquire - Lookup for an existing xfer or freshly allocate a
+ * new one depending on the type of the message
+ *
+ * @cinfo: A reference to the channel descriptor.
+ * @msg_hdr: A pointer to the message header to lookup.
+ * @xfer: A reference to the pre-existent or freshly allocated xfer
+ * associated with the provided *msg_hdr.
+ *
+ * This function can be used by transports supporting delegated xfers to obtain
+ * a valid @xfer associated with the provided @msg_hdr param.
+ *
+ * The nature of the resulting @xfer depends on the type of message specified in
+ * @msg_hdr:
+ * - for responses and delayed responses a pre-existent/pre-allocated in-flight
+ * xfer descriptor will be returned (properly refcounted)
+ * - for notifications a brand new xfer will be allocated; in this case the
+ * provided message header sequence number will also be mangled to match
+ * the token in the freshly allocated xfer: this is needed to establish a
+ * link between the picked xfer and the msg_hdr that will be subsequently
+ * passed back via usual scmi_rx_callback().
+ *
+ * Return: 0 if a valid xfer is returned in @xfer, error otherwise.
+ */
+int scmi_transfer_acquire(struct scmi_chan_info *cinfo, u32 *msg_hdr,
+ struct scmi_xfer **xfer)
+{
+ u8 msg_type;
+ struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+
+ if (!xfer || !msg_hdr || !info->desc->support_xfers_delegation)
+ return -EINVAL;
+
+ msg_type = MSG_XTRACT_TYPE(*msg_hdr);
+ switch (msg_type) {
+ case MSG_TYPE_COMMAND:
+ case MSG_TYPE_DELAYED_RESP:
+ /* Grab an existing xfer for xfer_id */
+ *xfer = scmi_xfer_acquire(&info->tx_minfo,
+ MSG_XTRACT_TOKEN(*msg_hdr));
+ break;
+ case MSG_TYPE_NOTIFICATION:
+ /* Get a brand new RX xfer */
+ *xfer = scmi_xfer_get(cinfo->handle, &info->rx_minfo);
+ if (!IS_ERR(*xfer)) {
+ /* Save original msg_hdr and fix sequence number */
+ (*xfer)->hdr.saved_hdr = *msg_hdr;
+ *msg_hdr &= ~MSG_TOKEN_ID_MASK;
+ *msg_hdr |= FIELD_PREP(MSG_TOKEN_ID_MASK,
+ (*xfer)->hdr.seq);
+ }
+ break;
+ default:
+ *xfer = ERR_PTR(-EINVAL);
+ break;
+ }
+
+ if (IS_ERR(*xfer)) {
+ dev_err(cinfo->dev,
+ "Failed to acquire a valid xfer for hdr:0x%X\n",
+ *msg_hdr);
+ return PTR_ERR(*xfer);
+ }
+
+ /* Fix xfer->hdr.type with actual msg_hdr carried type */
+ unpack_scmi_header(*msg_hdr, &((*xfer)->hdr));
+
+ return 0;
+}
+
+/**
+ * scmi_transfer_release - Release an previously acquired xfer
+ *
+ * @cinfo: A reference to the channel descriptor.
+ * @xfer: A reference to the xfer to release.
+ */
+void scmi_transfer_release(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
+{
+ struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+ struct scmi_xfers_info *minfo;
+
+ if (!xfer || !info->desc->support_xfers_delegation)
+ return;
+
+ if (xfer->hdr.type == MSG_TYPE_NOTIFICATION)
+ minfo = &info->rx_minfo;
+ else
+ minfo = &info->tx_minfo;
+
+ __scmi_xfer_put(minfo, xfer);
+}
+
static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
{
struct scmi_xfer *xfer;
@@ -442,7 +560,11 @@ static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
ktime_t ts;

ts = ktime_get_boottime();
- xfer = scmi_xfer_get(cinfo->handle, minfo);
+
+ if (!info->desc->support_xfers_delegation)
+ xfer = scmi_xfer_get(cinfo->handle, minfo);
+ else
+ xfer = scmi_xfer_acquire(minfo, MSG_XTRACT_TOKEN(msg_hdr));
if (IS_ERR(xfer)) {
dev_err(dev, "failed to get free message slot (%ld)\n",
PTR_ERR(xfer));
@@ -450,8 +572,11 @@ static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
return;
}

- unpack_scmi_header(msg_hdr, &xfer->hdr);
scmi_dump_header_dbg(dev, &xfer->hdr);
+
+ if (!info->desc->support_xfers_delegation)
+ unpack_scmi_header(msg_hdr, &xfer->hdr);
+
info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
xfer);
scmi_notify(cinfo->handle, xfer->hdr.protocol_id,
@@ -497,7 +622,8 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo,
xfer_id);
info->desc->ops->clear_channel(cinfo);
/* It was unexpected, so nobody will clear the xfer if not us */
- __scmi_xfer_put(minfo, xfer);
+ if (!info->desc->support_xfers_delegation) //XXX ??? Really
+ __scmi_xfer_put(minfo, xfer);
return;
}

--
2.17.1

2021-06-02 16:11:42

by Sudeep Holla

Subject: Re: [PATCH v2 1/5] firmware: arm_scmi: reset_rx_to_maxsz during async commands

On Tue, Jun 01, 2021 at 11:24:17AM +0100, Cristian Marussi wrote:
> During an async commands execution the rx buffer len is at first set to
> max_msg_sz when the synchronous part of the command is first sent; once
> the synchronous part completes the transport layer waits for the delayed
> response which will be processed using the same xfer descriptor initially
> allocated, but synchronous response received at the end of the xfer will
> have shrunk the rx buffer len to the effective payload response length.
>
> Raise the rx buffer length again to max_msg_sz while waiting for the
> delayed response, while adding an informational error message.
>
> Fixes: 58ecdf03dbb9 ("firmware: arm_scmi: Add support for asynchronous commands and delayed response")
> Signed-off-by: Cristian Marussi <[email protected]>
> ---
> drivers/firmware/arm_scmi/driver.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
> index 66eb3f0e5daf..75141b90ae53 100644
> --- a/drivers/firmware/arm_scmi/driver.c
> +++ b/drivers/firmware/arm_scmi/driver.c
> @@ -513,8 +513,16 @@ static int do_xfer_with_response(const struct scmi_protocol_handle *ph,
> xfer->async_done = &async_response;
>
> ret = do_xfer(ph, xfer);
> - if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout))
> - ret = -ETIMEDOUT;
> + if (!ret) {
> + /* rx.len would have been shrunk in the sync do_xfer above */
> + reset_rx_to_maxsz(ph, xfer);

Won't this race with the delayed response notification? I think so, let me
know if not and how. Can't we have this before we fetch the response into
the xfer?

--
Regards,
Sudeep

2021-06-04 08:56:46

by Sudeep Holla

Subject: Re: (subset) [PATCH v2 0/5] Review/Extend SCMI Transport Core layer

On Tue, 1 Jun 2021 11:24:16 +0100, Cristian Marussi wrote:
> this short series is meant to review and extend a couple of SCMI transport
> core layer mechanisms in order to ease the interaction with more complex
> SCMI transport driver like the upcoming virtio-scmi and optee-scmi.
>
> The commits in this series have not and *should not* have any impact or
> require any change in the currently existing SCMI transports.
>
> [...]


Applied to sudeep.holla/linux (for-next/scmi), thanks!

[1/5] firmware: arm_scmi: reset_rx_to_maxsz during async commands
https://git.kernel.org/sudeep.holla/c/0cb7af474e

--
Regards,
Sudeep

2021-06-07 17:41:40

by Cristian Marussi

Subject: Re: [PATCH v2 4/5] firmware: arm_scmi: Introduce monotonically increasing tokens

Please note that this patch assigns monotonically increasing sequence
numbers at xfer_get() time; this means that if an xfer is reused in a loop
of do_xfer() invocations (as happens in some protocols) the token will
remain unchanged across the different transactions, contradicting the
whole point of having uniquely increasing tokens.

The alternative approach would have been to pick a new token sequence
number, using the same algorithm, upon each do_xfer(): this would have
complicated the whole story a bit more (I have patches for this) so I
stuck to selection at xfer_get() time, also because usually, when this
'looping' behaviour is implemented and a timeout occurs along the way,
the whole command sequence is dropped, the xfer is immediately put and an
error is reported, so it cannot really happen anyway that a stale
timed-out response is associated with the next iteration of the loop
using the same seqnum.
(but note that this is just common practice for now...nothing forbids a
future protocol implementation from using such a looping behaviour
without bailing out on error; a sketch of the looping pattern at issue
follows below)
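
A minimal sketch of such a looping pattern (made-up command and payload
names; the xfer_ops calls are the real ones) would be something like:

	static int example_describe_all(const struct scmi_protocol_handle *ph)
	{
		int ret, index = 0;
		struct scmi_xfer *t;
		__le32 *desc_index;

		/* The token is picked here, at xfer_get_init() time ... */
		ret = ph->xops->xfer_get_init(ph, EXAMPLE_DESCRIBE_GET,
					      sizeof(__le32), 0, &t);
		if (ret)
			return ret;

		desc_index = t->tx.buf;
		do {
			*desc_index = cpu_to_le32(index);
			/* ... so every iteration goes out with the same seq */
			ret = ph->xops->do_xfer(ph, t);
			if (ret)
				break;	/* common practice: bail out on error */

			index++;
			ph->xops->reset_rx_to_maxsz(ph, t);
		} while (index < EXAMPLE_NUM_ITEMS);

		ph->xops->xfer_put(ph, t);
		return ret;
	}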

Thanks,
Cristian

On Tue, Jun 01, 2021 at 11:24:20AM +0100, Cristian Marussi wrote:
> [...]

2021-06-11 17:10:14

by Cristian Marussi

Subject: Re: [PATCH v2 0/5] Review/Extend SCMI Transport Core layer

Hi,

On Tue, Jun 01, 2021 at 11:24:16AM +0100, Cristian Marussi wrote:
> Hi all,
>
> this short series is meant to review and extend a couple of SCMI transport
> core layer mechanisms in order to ease the interaction with more complex
> SCMI transport driver like the upcoming virtio-scmi and optee-scmi.
>

I'm going to retire this series as standalone and post these changes
only together with the SCMI VirtIO series that needs them, at:

https://lore.kernel.org/linux-arm-kernel/[email protected]/T/#t

Thanks,
Cristian