2022-02-14 19:53:51

by Cristian Marussi

Subject: [PATCH v4 0/8] Add SCMI Virtio & Clock atomic support

Hi,

This small series is the tail-subset of the previous V8 series about atomic
support in SCMI [1], whose eight-patch head-subset has now been queued at
[2]; as such, it is based on [2] on top of tag scmi-updates-5.17:

commit 94d0cd1da14a ("firmware: arm_scmi: Add new parameter to
mark_txdone")

Patch [1/8] substitutes the virtio-scmi ready flag and lock with a
reference counter to track vio channel lifetime, removing the need for a
wide spinlocked section (which would cause issues with the introduction of
virtio polling support).

Patch [2/8] adds a few helpers to handle the TX free_list, together with a
dedicated spinlock, to reduce the reliance on the main channel lock.

Patch [3/8] adds polling mode to SCMI VirtIO transport in order to support
atomic operations on such transport.

Patches [4,5/8] introduce a new optional SCMI binding, atomic-threshold-us,
to configure a platform-specific time threshold used in the following
patches to select, at a finer grain, which SCMI resources should be
eligible for atomic operations when requested.

Patch [6/8] exposes new SCMI Clock protocol operations to allow an SCMI
user to request atomic mode on clock enable commands.

Patch [7/8] adds support to SCMI Clock protocol for a new clock attributes
field which advertises typical enable latency for a specific resource.

Finally, patch [8/8] adds support for atomic operations to the SCMI clock
driver; the underlying logic here is that we register atomic-capable clock
resources with the Clock framework if and only if the backing SCMI
transport is capable of atomic operations AND the specific clock resource
has been advertised by the SCMI platform as having:

clock_enable_latency <= atomic-threshold-us

The idea is to avoid costly atomic busy-waiting on resources that have
been advertised as 'slow' to operate upon (e.g. a PLL vs. a gating clock).
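
For illustration only, here is a minimal sketch of that registration-time
decision as it could appear in the SCMI clock driver; the two clk_ops
variants and the enable_latency field name are assumptions mirroring what
patches [6-8/8] introduce, not the literal upstream code:

static const struct clk_ops *
scmi_clk_select_ops(const struct scmi_handle *handle,
		    const struct scmi_clock_info *ci)
{
	unsigned int atomic_threshold_us = 0;

	/* The transport must be able to complete transactions atomically... */
	if (!handle->is_transport_atomic(handle, &atomic_threshold_us))
		return &scmi_sleeping_clk_ops;

	/* ...and the clock must be quick enough to be worth busy-waiting on */
	if (ci->enable_latency <= atomic_threshold_us)
		return &scmi_atomic_clk_ops;

	return &scmi_sleeping_clk_ops;
}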

To ease testing, the whole series can be found at [3].

Any feedback/testing welcome as usual.

Thanks,
Cristian

[1]: https://lore.kernel.org/linux-arm-kernel/[email protected]/
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git/tag/?h=scmi-updates-5.17
[3]: https://gitlab.arm.com/linux-arm/linux-cm/-/commits/scmi_atomic_clk_virtio_V4/

---
V3 --> V4
- renamed optional DT property to atomic-threshold-us

V2 --> V3
- split out virtio_ring RFC patch into a distinct series
- calling virtio_break_device when cleaning up channel
- removed RFC tags from CLK related patches

V1 --> V2
- added vio channel refcount support patch
- reviewed free_list support and usage
- added virtio_ring RFC patch
- shrunk spinlocked section within virtio_poll_done to exclude
virtqueue_poll call
- removed poll_lock
- use vio channel refcount acquire/release logic when polling
- using new free_list accessors
- added new dedicated pending_lock to access pending_cmds_list
- fixed a few comments



Cristian Marussi (8):
firmware: arm_scmi: Add a virtio channel refcount
firmware: arm_scmi: Review virtio free_list handling
firmware: arm_scmi: Add atomic mode support to virtio transport
dt-bindings: firmware: arm,scmi: Add atomic-threshold-us optional
property
firmware: arm_scmi: Support optional system wide atomic-threshold-us
firmware: arm_scmi: Add atomic support to clock protocol
firmware: arm_scmi: Add support for clock_enable_latency
clk: scmi: Support atomic clock enable/disable API

.../bindings/firmware/arm,scmi.yaml | 11 +
drivers/clk/clk-scmi.c | 71 ++-
drivers/firmware/arm_scmi/Kconfig | 15 +
drivers/firmware/arm_scmi/clock.c | 34 +-
drivers/firmware/arm_scmi/driver.c | 36 +-
drivers/firmware/arm_scmi/virtio.c | 495 +++++++++++++++---
include/linux/scmi_protocol.h | 9 +-
7 files changed, 566 insertions(+), 105 deletions(-)

--
2.17.1


2022-02-14 20:06:15

by Cristian Marussi

Subject: [PATCH v4 2/8] firmware: arm_scmi: Review virtio free_list handling

Add a new spinlock dedicated to protecting access to the TX free_list, and
a couple of helpers to get messages from and put them back onto the
free_list.
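
For orientation, the reworked TX send path then reduces to the following
ordering (a condensed sketch of the virtio_send_message() changes below,
using the two new helpers; message and sg-list preparation is elided and
assumed done by the hypothetical caller):

/*
 * Condensed sketch: the free_list is touched only under the new
 * free_lock (inside the helpers), while vioch->lock now covers just
 * the virtqueue operations.
 */
static int send_path_sketch(struct scmi_vio_channel *vioch,
			    struct scatterlist **sgs)
{
	unsigned long flags;
	struct scmi_vio_msg *msg;
	int rc;

	msg = scmi_virtio_get_free_msg(vioch);
	if (!msg)
		return -EBUSY;

	spin_lock_irqsave(&vioch->lock, flags);
	rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
	if (!rc)
		virtqueue_kick(vioch->vqueue);
	spin_unlock_irqrestore(&vioch->lock, flags);

	if (rc)
		scmi_virtio_put_free_msg(vioch, msg);

	return rc;
}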

Cc: "Michael S. Tsirkin" <[email protected]>
Cc: Igor Skalkin <[email protected]>
Cc: Peter Hilber <[email protected]>
Cc: [email protected]
Signed-off-by: Cristian Marussi <[email protected]>
---
drivers/firmware/arm_scmi/virtio.c | 88 +++++++++++++++++++-----------
1 file changed, 57 insertions(+), 31 deletions(-)

diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
index 112d6bd4be2e..a147fc24c4c0 100644
--- a/drivers/firmware/arm_scmi/virtio.c
+++ b/drivers/firmware/arm_scmi/virtio.c
@@ -40,20 +40,23 @@
*
* @vqueue: Associated virtqueue
* @cinfo: SCMI Tx or Rx channel
+ * @free_lock: Protects access to the @free_list.
* @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
* @is_rx: Whether channel is an Rx channel
* @max_msg: Maximum number of pending messages for this channel.
- * @lock: Protects access to all members except users.
+ * @lock: Protects access to all members except users and free_list.
* @shutdown_done: A reference to a completion used when freeing this channel.
* @users: A reference count to currently active users of this channel.
*/
struct scmi_vio_channel {
struct virtqueue *vqueue;
struct scmi_chan_info *cinfo;
+ /* lock to protect access to the free list. */
+ spinlock_t free_lock;
struct list_head free_list;
bool is_rx;
unsigned int max_msg;
- /* lock to protect access to all members except users. */
+ /* lock to protect access to all members except users and free_list */
spinlock_t lock;
struct completion *shutdown_done;
refcount_t users;
@@ -134,18 +137,49 @@ static void scmi_vio_channel_cleanup_sync(struct scmi_vio_channel *vioch)
wait_for_completion(vioch->shutdown_done);
}

+/* Assumed to be called with the vio channel already acquired */
+static struct scmi_vio_msg *
+scmi_virtio_get_free_msg(struct scmi_vio_channel *vioch)
+{
+ unsigned long flags;
+ struct scmi_vio_msg *msg;
+
+ spin_lock_irqsave(&vioch->free_lock, flags);
+ if (list_empty(&vioch->free_list)) {
+ spin_unlock_irqrestore(&vioch->free_lock, flags);
+ return NULL;
+ }
+
+ msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
+ list_del_init(&msg->list);
+ spin_unlock_irqrestore(&vioch->free_lock, flags);
+
+ return msg;
+}
+
+/* Assumed to be called with the vio channel already acquired */
+static void scmi_virtio_put_free_msg(struct scmi_vio_channel *vioch,
+ struct scmi_vio_msg *msg)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&vioch->free_lock, flags);
+ list_add_tail(&msg->list, &vioch->free_list);
+ spin_unlock_irqrestore(&vioch->free_lock, flags);
+}
+
static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
{
return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
}

static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
- struct scmi_vio_msg *msg,
- struct device *dev)
+ struct scmi_vio_msg *msg)
{
struct scatterlist sg_in;
int rc;
unsigned long flags;
+ struct device *dev = &vioch->vqueue->vdev->dev;

sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);

@@ -162,17 +196,17 @@ static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
return rc;
}

+/*
+ * Assumed to be called with the channel already acquired, or not ready at all;
+ * vioch->lock MUST NOT have been already acquired.
+ */
static void scmi_finalize_message(struct scmi_vio_channel *vioch,
struct scmi_vio_msg *msg)
{
- if (vioch->is_rx) {
- scmi_vio_feed_vq_rx(vioch, msg, vioch->cinfo->dev);
- } else {
- /* Here IRQs are assumed to be already disabled by the caller */
- spin_lock(&vioch->lock);
- list_add(&msg->list, &vioch->free_list);
- spin_unlock(&vioch->lock);
- }
+ if (vioch->is_rx)
+ scmi_vio_feed_vq_rx(vioch, msg);
+ else
+ scmi_virtio_put_free_msg(vioch, msg);
}

static void scmi_vio_complete_cb(struct virtqueue *vqueue)
@@ -287,7 +321,6 @@ static bool virtio_chan_available(struct device *dev, int idx)
static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
bool tx)
{
- unsigned long flags;
struct scmi_vio_channel *vioch;
int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
int i;
@@ -317,13 +350,7 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
if (!msg->input)
return -ENOMEM;

- if (tx) {
- spin_lock_irqsave(&vioch->lock, flags);
- list_add_tail(&msg->list, &vioch->free_list);
- spin_unlock_irqrestore(&vioch->lock, flags);
- } else {
- scmi_vio_feed_vq_rx(vioch, msg, cinfo->dev);
- }
+ scmi_finalize_message(vioch, msg);
}

scmi_vio_channel_ready(vioch, cinfo);
@@ -357,33 +384,31 @@ static int virtio_send_message(struct scmi_chan_info *cinfo,
if (!scmi_vio_channel_acquire(vioch))
return -EINVAL;

- spin_lock_irqsave(&vioch->lock, flags);
-
- if (list_empty(&vioch->free_list)) {
- spin_unlock_irqrestore(&vioch->lock, flags);
+ msg = scmi_virtio_get_free_msg(vioch);
+ if (!msg) {
scmi_vio_channel_release(vioch);
return -EBUSY;
}

- msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
- list_del(&msg->list);
-
msg_tx_prepare(msg->request, xfer);

sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
sg_init_one(&sg_in, msg->input, msg_response_size(xfer));

+ spin_lock_irqsave(&vioch->lock, flags);
+
rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
- if (rc) {
- list_add(&msg->list, &vioch->free_list);
+ if (rc)
dev_err(vioch->cinfo->dev,
"failed to add to TX virtqueue (%d)\n", rc);
- } else {
+ else
virtqueue_kick(vioch->vqueue);
- }

spin_unlock_irqrestore(&vioch->lock, flags);

+ if (rc)
+ scmi_virtio_put_free_msg(vioch, msg);
+
scmi_vio_channel_release(vioch);

return rc;
@@ -460,6 +485,7 @@ static int scmi_vio_probe(struct virtio_device *vdev)
unsigned int sz;

spin_lock_init(&channels[i].lock);
+ spin_lock_init(&channels[i].free_lock);
INIT_LIST_HEAD(&channels[i].free_list);
channels[i].vqueue = vqs[i];

--
2.17.1

2022-02-14 21:03:37

by Cristian Marussi

Subject: [PATCH v4 5/8] firmware: arm_scmi: Support optional system wide atomic-threshold-us

An SCMI agent can be configured system-wide with a well-defined atomic
threshold: only SCMI synchronous commands whose latency has been advertised
by the SCMI platform to be lower than or equal to this configured threshold
will be considered for atomic operations, when requested and if supported
by the underlying transport at all.
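
As a usage note, the atomic_threshold out-parameter of the extended
operation is optional (NULL is accepted) and is only written back when the
transport is indeed configured as atomic; a minimal, hypothetical caller:

static bool example_is_atomic(const struct scmi_handle *handle)
{
	unsigned int threshold_us = 0;
	bool atomic;

	/* threshold_us keeps its initializer unless the transport is atomic */
	atomic = handle->is_transport_atomic(handle, &threshold_us);
	if (atomic)
		pr_info("atomic SCMI transport, threshold %uus\n", threshold_us);

	return atomic;
}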

Signed-off-by: Cristian Marussi <[email protected]>
---
v3 --> v4
- renamed DT property to atomic-threshold-us
---
drivers/firmware/arm_scmi/driver.c | 27 ++++++++++++++++++++++++---
include/linux/scmi_protocol.h | 5 ++++-
2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index dc972a54e93e..0724012a580e 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -131,6 +131,12 @@ struct scmi_protocol_instance {
* MAX_PROTOCOLS_IMP elements allocated by the base protocol
* @active_protocols: IDR storing device_nodes for protocols actually defined
* in the DT and confirmed as implemented by fw.
+ * @atomic_threshold: Optional system wide DT-configured threshold, expressed
+ * in microseconds, for atomic operations.
+ * Only SCMI synchronous commands reported by the platform
+ * to have an execution latency less than or equal to the threshold
+ * should be considered for atomic mode operation: such
+ * decision is finally left up to the SCMI drivers.
* @notify_priv: Pointer to private data structure specific to notifications.
* @node: List head
* @users: Number of users of this instance
@@ -149,6 +155,7 @@ struct scmi_info {
struct mutex protocols_mtx;
u8 *protocols_imp;
struct idr active_protocols;
+ unsigned int atomic_threshold;
void *notify_priv;
struct list_head node;
int users;
@@ -1409,15 +1416,22 @@ static void scmi_devm_protocol_put(struct scmi_device *sdev, u8 protocol_id)
* SCMI instance is configured as atomic.
*
* @handle: A reference to the SCMI platform instance.
+ * @atomic_threshold: An optional return value for the currently configured
+ * system wide threshold for atomic operations.
*
* Return: True if transport is configured as atomic
*/
-static bool scmi_is_transport_atomic(const struct scmi_handle *handle)
+static bool scmi_is_transport_atomic(const struct scmi_handle *handle,
+ unsigned int *atomic_threshold)
{
+ bool ret;
struct scmi_info *info = handle_to_scmi_info(handle);

- return info->desc->atomic_enabled &&
- is_transport_polling_capable(info);
+ ret = info->desc->atomic_enabled && is_transport_polling_capable(info);
+ if (ret && atomic_threshold)
+ *atomic_threshold = info->atomic_threshold;
+
+ return ret;
}

static inline
@@ -1957,6 +1971,13 @@ static int scmi_probe(struct platform_device *pdev)
handle->version = &info->version;
handle->devm_protocol_get = scmi_devm_protocol_get;
handle->devm_protocol_put = scmi_devm_protocol_put;
+
+ /* System wide atomic threshold for atomic ops, if any */
+ if (!of_property_read_u32(np, "atomic-threshold-us",
+ &info->atomic_threshold))
+ dev_info(dev,
+ "SCMI System wide atomic threshold set to %d us\n",
+ info->atomic_threshold);
handle->is_transport_atomic = scmi_is_transport_atomic;

if (desc->ops->link_supplier) {
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 9f895cb81818..fdf6bd83cc59 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -619,6 +619,8 @@ struct scmi_notify_ops {
* be interested to know if they can assume SCMI
* command transactions associated to this handle will
* never sleep and act accordingly.
+ * An optional atomic threshold value could be returned
+ * where configured.
* @notify_ops: pointer to set of notifications related operations
*/
struct scmi_handle {
@@ -629,7 +631,8 @@ struct scmi_handle {
(*devm_protocol_get)(struct scmi_device *sdev, u8 proto,
struct scmi_protocol_handle **ph);
void (*devm_protocol_put)(struct scmi_device *sdev, u8 proto);
- bool (*is_transport_atomic)(const struct scmi_handle *handle);
+ bool (*is_transport_atomic)(const struct scmi_handle *handle,
+ unsigned int *atomic_threshold);

const struct scmi_notify_ops *notify_ops;
};
--
2.17.1

2022-02-14 21:13:34

by Cristian Marussi

Subject: [PATCH v4 1/8] firmware: arm_scmi: Add a virtio channel refcount

Currently SCMI VirtIO channels are marked with a ready flag and related
lock to track channel lifetime and support proper synchronization at
shutdown when virtqueues have to be stopped.

This leads to some extended spinlocked sections with IRQs disabled on the
RX path in order to keep hold of the ready flag, and it will not scale
well once SCMI VirtIO polling mode is introduced.

Add a dedicated SCMI VirtIO channel refcount to track active users on both
the TX and the RX paths and to properly enforce synchronization and cleanup
at shutdown, inhibiting further usage of the channel once freed.
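
In short, the scheme reduces to the following usage pattern on the hot
paths (a distilled sketch of this patch's code, not additional API):

/*
 * scmi_vio_channel_ready() publishes the channel with users == 1; every
 * TX/RX path brackets its work with acquire/release; cleanup drops the
 * initial reference, breaks the device and waits for the last user.
 */
static void tx_or_rx_path_sketch(struct scmi_vio_channel *vioch)
{
	if (!scmi_vio_channel_acquire(vioch))
		return;		/* channel is gone or shutting down */

	/* ... safely use vioch->vqueue and vioch->cinfo here ... */

	scmi_vio_channel_release(vioch);
}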

Cc: "Michael S. Tsirkin" <[email protected]>
Cc: Igor Skalkin <[email protected]>
Cc: Peter Hilber <[email protected]>
Cc: [email protected]
Signed-off-by: Cristian Marussi <[email protected]>
---
v2 --> v3
- Break virtio device at shutdown while cleaning up SCMI channel
---
drivers/firmware/arm_scmi/virtio.c | 140 +++++++++++++++++++----------
1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
index fd0f6f91fc0b..112d6bd4be2e 100644
--- a/drivers/firmware/arm_scmi/virtio.c
+++ b/drivers/firmware/arm_scmi/virtio.c
@@ -17,7 +17,9 @@
* virtqueue. Access to each virtqueue is protected by spinlocks.
*/

+#include <linux/completion.h>
#include <linux/errno.h>
+#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>
@@ -27,6 +29,7 @@

#include "common.h"

+#define VIRTIO_MAX_RX_TIMEOUT_MS 60000
#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
#define VIRTIO_SCMI_MAX_PDU_SIZE \
(VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
@@ -39,23 +42,21 @@
* @cinfo: SCMI Tx or Rx channel
* @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
* @is_rx: Whether channel is an Rx channel
- * @ready: Whether transport user is ready to hear about channel
* @max_msg: Maximum number of pending messages for this channel.
- * @lock: Protects access to all members except ready.
- * @ready_lock: Protects access to ready. If required, it must be taken before
- * lock.
+ * @lock: Protects access to all members except users.
+ * @shutdown_done: A reference to a completion used when freeing this channel.
+ * @users: A reference count to currently active users of this channel.
*/
struct scmi_vio_channel {
struct virtqueue *vqueue;
struct scmi_chan_info *cinfo;
struct list_head free_list;
bool is_rx;
- bool ready;
unsigned int max_msg;
- /* lock to protect access to all members except ready. */
+ /* lock to protect access to all members except users. */
spinlock_t lock;
- /* lock to rotects access to ready flag. */
- spinlock_t ready_lock;
+ struct completion *shutdown_done;
+ refcount_t users;
};

/**
@@ -76,6 +77,63 @@ struct scmi_vio_msg {
/* Only one SCMI VirtIO device can possibly exist */
static struct virtio_device *scmi_vdev;

+static void scmi_vio_channel_ready(struct scmi_vio_channel *vioch,
+ struct scmi_chan_info *cinfo)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&vioch->lock, flags);
+ cinfo->transport_info = vioch;
+ /* Indirectly setting channel not available any more */
+ vioch->cinfo = cinfo;
+ spin_unlock_irqrestore(&vioch->lock, flags);
+
+ refcount_set(&vioch->users, 1);
+}
+
+static inline bool scmi_vio_channel_acquire(struct scmi_vio_channel *vioch)
+{
+ return refcount_inc_not_zero(&vioch->users);
+}
+
+static inline void scmi_vio_channel_release(struct scmi_vio_channel *vioch)
+{
+ if (refcount_dec_and_test(&vioch->users)) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&vioch->lock, flags);
+ if (vioch->shutdown_done) {
+ vioch->cinfo = NULL;
+ complete(vioch->shutdown_done);
+ }
+ spin_unlock_irqrestore(&vioch->lock, flags);
+ }
+}
+
+static void scmi_vio_channel_cleanup_sync(struct scmi_vio_channel *vioch)
+{
+ unsigned long flags;
+ DECLARE_COMPLETION_ONSTACK(vioch_shutdown_done);
+
+ /*
+ * Prepare to wait for the last release if not already released
+ * or in progress.
+ */
+ spin_lock_irqsave(&vioch->lock, flags);
+ if (!vioch->cinfo || vioch->shutdown_done) {
+ spin_unlock_irqrestore(&vioch->lock, flags);
+ return;
+ }
+ vioch->shutdown_done = &vioch_shutdown_done;
+ virtio_break_device(vioch->vqueue->vdev);
+ spin_unlock_irqrestore(&vioch->lock, flags);
+
+ scmi_vio_channel_release(vioch);
+
+ /* Let any possibly concurrent RX path release the channel */
+ wait_for_completion(vioch->shutdown_done);
+}
+
static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
{
return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
@@ -119,7 +177,7 @@ static void scmi_finalize_message(struct scmi_vio_channel *vioch,

static void scmi_vio_complete_cb(struct virtqueue *vqueue)
{
- unsigned long ready_flags;
+ unsigned long flags;
unsigned int length;
struct scmi_vio_channel *vioch;
struct scmi_vio_msg *msg;
@@ -130,27 +188,27 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue)
vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];

for (;;) {
- spin_lock_irqsave(&vioch->ready_lock, ready_flags);
-
- if (!vioch->ready) {
+ if (!scmi_vio_channel_acquire(vioch)) {
if (!cb_enabled)
(void)virtqueue_enable_cb(vqueue);
- goto unlock_ready_out;
+ return;
}

- /* IRQs already disabled here no need to irqsave */
- spin_lock(&vioch->lock);
+ spin_lock_irqsave(&vioch->lock, flags);
if (cb_enabled) {
virtqueue_disable_cb(vqueue);
cb_enabled = false;
}
msg = virtqueue_get_buf(vqueue, &length);
if (!msg) {
- if (virtqueue_enable_cb(vqueue))
- goto unlock_out;
+ if (virtqueue_enable_cb(vqueue)) {
+ spin_unlock_irqrestore(&vioch->lock, flags);
+ scmi_vio_channel_release(vioch);
+ return;
+ }
cb_enabled = true;
}
- spin_unlock(&vioch->lock);
+ spin_unlock_irqrestore(&vioch->lock, flags);

if (msg) {
msg->rx_len = length;
@@ -161,19 +219,14 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue)
}

/*
- * Release ready_lock and re-enable IRQs between loop iterations
- * to allow virtio_chan_free() to possibly kick in and set the
- * flag vioch->ready to false even in between processing of
- * messages, so as to force outstanding messages to be ignored
- * when system is shutting down.
+ * Release vio channel between loop iterations to allow
+ * virtio_chan_free() to eventually fully release it when
+ * shutting down; in such a case, any outstanding message will
+ * be ignored since this loop will bail out at the next
+ * iteration.
*/
- spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
+ scmi_vio_channel_release(vioch);
}
-
-unlock_out:
- spin_unlock(&vioch->lock);
-unlock_ready_out:
- spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
}

static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
@@ -273,35 +326,20 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
}
}

- spin_lock_irqsave(&vioch->lock, flags);
- cinfo->transport_info = vioch;
- /* Indirectly setting channel not available any more */
- vioch->cinfo = cinfo;
- spin_unlock_irqrestore(&vioch->lock, flags);
-
- spin_lock_irqsave(&vioch->ready_lock, flags);
- vioch->ready = true;
- spin_unlock_irqrestore(&vioch->ready_lock, flags);
+ scmi_vio_channel_ready(vioch, cinfo);

return 0;
}

static int virtio_chan_free(int id, void *p, void *data)
{
- unsigned long flags;
struct scmi_chan_info *cinfo = p;
struct scmi_vio_channel *vioch = cinfo->transport_info;

- spin_lock_irqsave(&vioch->ready_lock, flags);
- vioch->ready = false;
- spin_unlock_irqrestore(&vioch->ready_lock, flags);
+ scmi_vio_channel_cleanup_sync(vioch);

scmi_free_channel(cinfo, data, id);

- spin_lock_irqsave(&vioch->lock, flags);
- vioch->cinfo = NULL;
- spin_unlock_irqrestore(&vioch->lock, flags);
-
return 0;
}

@@ -316,10 +354,14 @@ static int virtio_send_message(struct scmi_chan_info *cinfo,
int rc;
struct scmi_vio_msg *msg;

+ if (!scmi_vio_channel_acquire(vioch))
+ return -EINVAL;
+
spin_lock_irqsave(&vioch->lock, flags);

if (list_empty(&vioch->free_list)) {
spin_unlock_irqrestore(&vioch->lock, flags);
+ scmi_vio_channel_release(vioch);
return -EBUSY;
}

@@ -342,6 +384,8 @@ static int virtio_send_message(struct scmi_chan_info *cinfo,

spin_unlock_irqrestore(&vioch->lock, flags);

+ scmi_vio_channel_release(vioch);
+
return rc;
}

@@ -416,7 +460,6 @@ static int scmi_vio_probe(struct virtio_device *vdev)
unsigned int sz;

spin_lock_init(&channels[i].lock);
- spin_lock_init(&channels[i].ready_lock);
INIT_LIST_HEAD(&channels[i].free_list);
channels[i].vqueue = vqs[i];

@@ -503,7 +546,8 @@ const struct scmi_desc scmi_virtio_desc = {
.transport_init = virtio_scmi_init,
.transport_exit = virtio_scmi_exit,
.ops = &scmi_virtio_ops,
- .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
+ /* for non-realtime virtio devices */
+ .max_rx_timeout_ms = VIRTIO_MAX_RX_TIMEOUT_MS,
.max_msg = 0, /* overridden by virtio_get_max_msg() */
.max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
};
--
2.17.1

2022-02-16 10:16:02

by Peter Hilber

Subject: Re: [PATCH v4 1/8] firmware: arm_scmi: Add a virtio channel refcount

On 13.02.22 20:58, Cristian Marussi wrote:
> [snip]
>
> @@ -130,27 +188,27 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue)
> vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
>
> for (;;) {
> - spin_lock_irqsave(&vioch->ready_lock, ready_flags);
> -
> - if (!vioch->ready) {
> + if (!scmi_vio_channel_acquire(vioch)) {
> if (!cb_enabled)
> (void)virtqueue_enable_cb(vqueue);

This seems unneeded ATM (in particular since the virtqueue is now broken when
freeing the channel).

> [snip]


2022-02-16 17:34:38

by Cristian Marussi

Subject: Re: [PATCH v4 1/8] firmware: arm_scmi: Add a virtio channel refcount

On Wed, Feb 16, 2022 at 11:15:40AM +0100, Peter Hilber wrote:
> On 13.02.22 20:58, Cristian Marussi wrote:
> > [snip]

Hi,

> > [snip]
> > @@ -130,27 +188,27 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue)
> > vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
> >
> > for (;;) {
> > - spin_lock_irqsave(&vioch->ready_lock, ready_flags);
> > -
> > - if (!vioch->ready) {
> > + if (!scmi_vio_channel_acquire(vioch)) {
> > if (!cb_enabled)
> > (void)virtqueue_enable_cb(vqueue);
>
> This seems unneeded ATM (in particular since the virtqueue is now broken when
> freeing the channel).
>

Yes, I was indeed unsure about this one. I'll drop it.

Thanks,
Cristian