2021-05-11 00:30:02

by Peter Hilber

Subject: [RFC PATCH v3 00/12] firmware: arm_scmi: Add virtio transport

This series implements an SCMI virtio driver according to the virtio
SCMI device spec [1], after simple preparatory changes to the
existing arm-scmi driver.

This RFC patch series is intended to give others an understanding of the
current development progress.

The virtio transport differs in some respects from the existing
shared-memory based SCMI transports, and therefore some preparatory
steps are necessary.

The series is based on v5.13-rc1.

Changes in RFC v3:

- Fix scmi_xfer buffer management. Use dedicated buffers for actual
Tx/Rx. Introduce a message handle, and a drop_message() op for message
passing-based transports.

- Add service data unit abstraction for transports which use message
passing.

- The virtio transport no longer calls into the core once the channel is
no longer ready.

- Handle races between core and transport.

- Use generic transport init/deinit (Cristian Marussi).

- Don't use vqueue->priv field (Viresh Kumar).

- Numerous small improvements.

The following problems remain:

- Polling is not implemented.

- When handling races between core and transport, a timeout corner case
can keep an scmi_xfer permanently occupied.

- We must be sure that the virtio transport option (such as virtio over
MMIO) is available when the virtio SCMI device is probed.

All other known problems should have been addressed.

Test:

The series was smoke tested with a v5.4 based kernel, with the Base
protocol and Sensor management protocol. The virtio SCMI device used was
a proprietary implementation by OpenSynergy.

The following have not been tested yet:

- delayed responses

- driver remove

- timeouts

Changes in RFC v2:

- Remove the DT virtio_transport phandle, since the SCMI virtio device
may not be known in advance. Instead, use the first suitable probed
device. Change due to Rob Herring's comment.

Any comments are very welcome.

[1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex


Cristian Marussi (1):
firmware: arm_scmi: Add transport init/deinit

Igor Skalkin (4):
firmware: arm_scmi, smccc, mailbox: Make shmem based transports
optional
firmware: arm_scmi: Add op to override max message #
dt-bindings: arm: Add virtio transport for SCMI
firmware: arm_scmi: Add virtio transport

Peter Hilber (7):
firmware: arm_scmi: Add optional link_supplier() transport op
firmware: arm_scmi: Add per-device transport private info
firmware: arm_scmi: Add is_scmi_protocol_device()
firmware: arm_scmi: Add msg_handle to some transport ops
firmware: arm_scmi: Add optional drop_message() transport op
firmware: arm_scmi: Add message passing abstractions for transports
firmware: arm_scmi: Handle races between core and transport

.../devicetree/bindings/arm/arm,scmi.txt | 35 +-
MAINTAINERS | 1 +
drivers/firmware/Kconfig | 26 +-
drivers/firmware/arm_scmi/Makefile | 4 +-
drivers/firmware/arm_scmi/bus.c | 5 +
drivers/firmware/arm_scmi/common.h | 81 ++-
drivers/firmware/arm_scmi/driver.c | 310 +++++++++--
drivers/firmware/arm_scmi/mailbox.c | 7 +-
drivers/firmware/arm_scmi/msg.c | 113 ++++
drivers/firmware/arm_scmi/smc.c | 5 +-
drivers/firmware/arm_scmi/virtio.c | 524 ++++++++++++++++++
drivers/firmware/smccc/Kconfig | 1 +
drivers/mailbox/Kconfig | 1 +
include/uapi/linux/virtio_ids.h | 1 +
include/uapi/linux/virtio_scmi.h | 25 +
15 files changed, 1065 insertions(+), 74 deletions(-)
create mode 100644 drivers/firmware/arm_scmi/msg.c
create mode 100644 drivers/firmware/arm_scmi/virtio.c
create mode 100644 include/uapi/linux/virtio_scmi.h


base-commit: 6efb943b8616ec53a5e444193dccf1af9ad627b5
--
2.25.1



2021-05-11 00:30:21

by Peter Hilber

Subject: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

From: Igor Skalkin <[email protected]>

This transport enables accessing an SCMI platform as a virtio device.

Implement an SCMI virtio driver according to the virtio SCMI device spec
[1]. Virtio device id 32 has been reserved for the SCMI device [2].

The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
at most one Rx channel (virtio eventq, P2A channel).

The following feature bit defined in [1] is not implemented:
VIRTIO_SCMI_F_SHARED_MEMORY.

After the preparatory patches, the virtio transport implementation can be
summarized as follows:

Only support a single arm-scmi device (which is consistent with the SCMI
spec). scmi-virtio init is called from arm-scmi module init. During the
arm-scmi probing, link to the first probed scmi-virtio device. Defer
arm-scmi probing if no scmi-virtio device is bound yet.
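The deferral decision can be sketched in plain C as a toy model (this is not the patch's code; the real logic lives in virtio_link_supplier() further down, and the driver-model plumbing is elided here):

```c
#include <assert.h>
#include <stddef.h>

#define EPROBE_DEFER 517  /* assumption: the kernel's EPROBE_DEFER errno value */

/* Stand-in for "the first scmi-virtio device that has been bound";
 * set by the (mock) virtio driver's probe. */
static void *bound_scmi_virtio_dev;

/*
 * Sketch of the link_supplier step: arm-scmi probing can only proceed
 * once some scmi-virtio device is bound; otherwise ask the driver core
 * to retry probing later by returning -EPROBE_DEFER.
 */
static int link_supplier(void **supplier)
{
	if (!bound_scmi_virtio_dev)
		return -EPROBE_DEFER;

	*supplier = bound_scmi_virtio_dev;
	return 0;
}
```

In the real patch the lookup is done with driver_find_device() and the consumer/supplier relationship is recorded via device_link_add(), so that the virtio device outlives the arm-scmi platform device.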

For simplicity, restrict the number of messages which can be pending
simultaneously according to the virtqueue capacity. (The virtqueue sizes
are negotiated with the virtio device.)
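The capacity restriction boils down to a small computation, sketched here standalone (it mirrors the virtio_get_max_msg() logic in the patch below; the MSG_TOKEN_MAX value is an assumption based on the 10-bit SCMI sequence-number space):

```c
#include <assert.h>

#define DESCRIPTORS_PER_TX_MSG 2   /* one out (request) + one in (response) */
#define MSG_TOKEN_MAX (1 << 10)    /* assumption: 10-bit SCMI token space */

static unsigned int max_pending_msgs(unsigned int vring_size, int tx)
{
	unsigned int max = vring_size;

	/* Each Tx message occupies two virtqueue descriptors. */
	if (tx)
		max /= DESCRIPTORS_PER_TX_MSG;

	/* The SCMI core cannot track more xfers than it has tokens. */
	if (max > MSG_TOKEN_MAX)
		max = MSG_TOKEN_MAX;

	return max;
}
```

So a 256-entry cmdq allows 128 pending commands, while an eventq of the same size can hold 256 buffers.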

As soon as Rx channel message buffers are allocated or have been read
out by the arm-scmi driver, feed them to the virtio device.
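The resulting Rx buffer life cycle (allocate, feed to the device, hand to the core on completion, re-feed after read-out) can be modeled as a toy invariant check; the counters below stand in for the eventq virtqueue and are not part of the patch:

```c
#include <assert.h>

#define NUM_RX_BUFS 4	/* illustrative; really derived from the eventq size */

/* Every Rx buffer is either queued to the (mock) device or being read
 * out by the core, so the device never runs out of notification buffers. */
static int queued;	/* buffers currently owned by the device */
static int in_core;	/* buffers the core is reading out */

static void feed_rx(void)      { queued++; }              /* add_inbuf + kick */
static void dev_complete(void) { queued--; in_core++; }   /* virtqueue_get_buf */
static void core_done(void)    { in_core--; feed_rx(); }  /* re-feed after read-out */

static int total_bufs(void)    { return queued + in_core; }
```

The invariant is that total_bufs() stays constant: a buffer returned by the device is fed back as soon as the arm-scmi driver has fetched the message from it.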

Since some virtio devices may not have the short response time exhibited
by SCMI platforms using other transports, set a generous response
timeout.

Limitations:

- Polling is not supported.

- The timeout for delayed responses has not been adjusted.

[1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
[2] https://www.oasis-open.org/committees/ballot.php?id=3496

Signed-off-by: Igor Skalkin <[email protected]>
[ Peter: Adapted patch for submission to upstream. ]
Co-developed-by: Peter Hilber <[email protected]>
Signed-off-by: Peter Hilber <[email protected]>
---
MAINTAINERS | 1 +
drivers/firmware/Kconfig | 12 +
drivers/firmware/arm_scmi/Makefile | 1 +
drivers/firmware/arm_scmi/common.h | 3 +
drivers/firmware/arm_scmi/driver.c | 3 +
drivers/firmware/arm_scmi/virtio.c | 523 +++++++++++++++++++++++++++++
include/uapi/linux/virtio_ids.h | 1 +
include/uapi/linux/virtio_scmi.h | 25 ++
8 files changed, 569 insertions(+)
create mode 100644 drivers/firmware/arm_scmi/virtio.c
create mode 100644 include/uapi/linux/virtio_scmi.h

diff --git a/MAINTAINERS b/MAINTAINERS
index bd7aff0c120f..449c336872f3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17705,6 +17705,7 @@ F: drivers/regulator/scmi-regulator.c
F: drivers/reset/reset-scmi.c
F: include/linux/sc[mp]i_protocol.h
F: include/trace/events/scmi.h
+F: include/uapi/linux/virtio_scmi.h

SYSTEM RESET/SHUTDOWN DRIVERS
M: Sebastian Reichel <[email protected]>
diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
index e8377b12e4d0..7e9eafdd9b63 100644
--- a/drivers/firmware/Kconfig
+++ b/drivers/firmware/Kconfig
@@ -39,6 +39,18 @@ config ARM_SCMI_HAVE_MSG
This declares whether a message passing based transport for SCMI is
available.

+# This config option includes the dependencies of ARM_SCMI_PROTOCOL so that
+# this config doesn't show up when SCMI wouldn't be available.
+config VIRTIO_SCMI
+ bool "Virtio transport for SCMI"
+ select ARM_SCMI_HAVE_MSG
+ depends on VIRTIO && (ARM || ARM64 || COMPILE_TEST)
+ help
+ This enables the virtio based transport for SCMI.
+
+ If you want to use the ARM SCMI protocol between the virtio guest and
+ a host providing a virtio SCMI device, answer Y.
+
config ARM_SCMI_POWER_DOMAIN
tristate "SCMI power domain driver"
depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
index f6b4acb8abdb..db1787606fb2 100644
--- a/drivers/firmware/arm_scmi/Makefile
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -5,6 +5,7 @@ scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o
+scmi-transport-$(CONFIG_VIRTIO_SCMI) += virtio.o
scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
$(scmi-transport-y)
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 4cb6571c7aaf..bada06cfd33d 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -349,6 +349,9 @@ extern const struct scmi_desc scmi_mailbox_desc;
#ifdef CONFIG_HAVE_ARM_SMCCC
extern const struct scmi_desc scmi_smc_desc;
#endif
+#ifdef CONFIG_VIRTIO_SCMI
+extern const struct scmi_desc scmi_virtio_desc;
+#endif

int scmi_set_transport_info(struct device *dev, void *transport_info);
void *scmi_get_transport_info(struct device *dev);
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index e04e7c8e6928..a31187385470 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -1637,6 +1637,9 @@ static const struct of_device_id scmi_of_match[] = {
#endif
#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
+#endif
+#ifdef CONFIG_VIRTIO_SCMI
+ { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
#endif
{ /* Sentinel */ },
};
diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
new file mode 100644
index 000000000000..20972adf6dc7
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio.c
@@ -0,0 +1,523 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Virtio Transport driver for Arm System Control and Management Interface
+ * (SCMI).
+ *
+ * Copyright (C) 2020 OpenSynergy.
+ */
+
+/**
+ * DOC: Theory of Operation
+ *
+ * The scmi-virtio transport implements a driver for the virtio SCMI device.
+ *
+ * There is one Tx channel (virtio cmdq, A2P channel) and at most one Rx
+ * channel (virtio eventq, P2A channel). Each channel is implemented through a
+ * virtqueue. Access to each virtqueue is protected by spinlocks.
+ */
+
+#include <linux/errno.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <uapi/linux/virtio_ids.h>
+#include <uapi/linux/virtio_scmi.h>
+
+#include "common.h"
+
+#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
+#define VIRTIO_SCMI_MAX_PDU_SIZE \
+ (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
+#define DESCRIPTORS_PER_TX_MSG 2
+
+/**
+ * struct scmi_vio_channel - Transport channel information
+ *
+ * @lock: Protects access to all members except ready.
+ * @ready_lock: Protects access to ready. If required, it must be taken before
+ * lock.
+ * @vqueue: Associated virtqueue
+ * @cinfo: SCMI Tx or Rx channel
+ * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
+ * @is_rx: Whether channel is an Rx channel
+ * @ready: Whether transport user is ready to hear about channel
+ */
+struct scmi_vio_channel {
+ spinlock_t lock;
+ spinlock_t ready_lock;
+ struct virtqueue *vqueue;
+ struct scmi_chan_info *cinfo;
+ struct list_head free_list;
+ u8 is_rx;
+ u8 ready;
+};
+
+/**
+ * struct scmi_vio_msg - Transport PDU information
+ *
+ * @request: SDU used for commands
+ * @input: SDU used for (delayed) responses and notifications
+ * @list: List which scmi_vio_msg may be part of
+ * @rx_len: Input SDU size in bytes, once input has been received
+ */
+struct scmi_vio_msg {
+ struct scmi_msg_payld *request;
+ struct scmi_msg_payld *input;
+ struct list_head list;
+ unsigned int rx_len;
+};
+
+static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
+{
+ return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
+}
+
+static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
+ struct scmi_vio_msg *msg)
+{
+ struct scatterlist sg_in;
+ int rc;
+ unsigned long flags;
+
+ sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);
+
+ spin_lock_irqsave(&vioch->lock, flags);
+
+ rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC);
+ if (rc)
+ dev_err_once(vioch->cinfo->dev,
+ "%s() failed to add to virtqueue (%d)\n", __func__,
+ rc);
+ else
+ virtqueue_kick(vioch->vqueue);
+
+ spin_unlock_irqrestore(&vioch->lock, flags);
+
+ return rc;
+}
+
+static void scmi_vio_complete_cb(struct virtqueue *vqueue)
+{
+ unsigned long ready_flags;
+ unsigned long flags;
+ unsigned int length;
+ struct scmi_vio_channel *vioch;
+ struct scmi_vio_msg *msg;
+ bool cb_enabled = true;
+
+ if (WARN_ON_ONCE(!vqueue->vdev->priv))
+ return;
+ vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
+
+ for (;;) {
+ spin_lock_irqsave(&vioch->ready_lock, ready_flags);
+
+ if (!vioch->ready) {
+ if (!cb_enabled)
+ (void)virtqueue_enable_cb(vqueue);
+ goto unlock_ready_out;
+ }
+
+ spin_lock_irqsave(&vioch->lock, flags);
+ if (cb_enabled) {
+ virtqueue_disable_cb(vqueue);
+ cb_enabled = false;
+ }
+ msg = virtqueue_get_buf(vqueue, &length);
+ if (!msg) {
+ if (virtqueue_enable_cb(vqueue))
+ goto unlock_out;
+ else
+ cb_enabled = true;
+ }
+ spin_unlock_irqrestore(&vioch->lock, flags);
+
+ if (msg) {
+ msg->rx_len = length;
+
+ /*
+ * Hold the ready_lock during the callback to avoid
+ * races when the arm-scmi driver is unbinding while
+ * the virtio device is not quiesced yet.
+ */
+ scmi_rx_callback(vioch->cinfo,
+ msg_read_header(msg->input), msg);
+ }
+ spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
+ }
+
+unlock_out:
+ spin_unlock_irqrestore(&vioch->lock, flags);
+unlock_ready_out:
+ spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
+}
+
+static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
+
+static vq_callback_t *scmi_vio_complete_callbacks[] = {
+ scmi_vio_complete_cb,
+ scmi_vio_complete_cb
+};
+
+static unsigned int virtio_get_max_msg(bool tx,
+ struct scmi_chan_info *base_cinfo)
+{
+ struct scmi_vio_channel *vioch = base_cinfo->transport_info;
+ unsigned int ret;
+
+ ret = virtqueue_get_vring_size(vioch->vqueue);
+
+ /* Tx messages need multiple descriptors. */
+ if (tx)
+ ret /= DESCRIPTORS_PER_TX_MSG;
+
+ if (ret > MSG_TOKEN_MAX) {
+ dev_info_once(
+ base_cinfo->dev,
+ "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
+ MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
+ ret = MSG_TOKEN_MAX;
+ }
+
+ return ret;
+}
+
+static int scmi_vio_match_any_dev(struct device *dev, const void *data)
+{
+ return 1;
+}
+
+static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
+
+static int virtio_link_supplier(struct device *dev)
+{
+ struct device *vdev = driver_find_device(
+ &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
+
+ if (!vdev) {
+ dev_notice_once(
+ dev,
+ "Deferring probe after not finding a bound scmi-virtio device\n");
+ return -EPROBE_DEFER;
+ }
+
+ /* Add device link for remove order and sysfs link. */
+ if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
+ put_device(vdev);
+ dev_err(dev, "Adding link to supplier virtio device failed\n");
+ return -ECANCELED;
+ }
+
+ put_device(vdev);
+ return scmi_set_transport_info(dev, dev_to_virtio(vdev));
+}
+
+static bool virtio_chan_available(struct device *dev, int idx)
+{
+ struct virtio_device *vdev;
+
+ /* scmi-virtio doesn't support per-protocol channels */
+ if (is_scmi_protocol_device(dev))
+ return false;
+
+ vdev = scmi_get_transport_info(dev);
+ if (!vdev)
+ return false;
+
+ switch (idx) {
+ case VIRTIO_SCMI_VQ_TX:
+ return true;
+ case VIRTIO_SCMI_VQ_RX:
+ return scmi_vio_have_vq_rx(vdev);
+ default:
+ return false;
+ }
+}
+
+static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
+ bool tx)
+{
+ unsigned long flags;
+ struct virtio_device *vdev;
+ struct scmi_vio_channel *vioch;
+ int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
+ int max_msg;
+ int i;
+
+ if (!virtio_chan_available(dev, index))
+ return -ENODEV;
+
+ vdev = scmi_get_transport_info(dev);
+ vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
+
+ spin_lock_irqsave(&vioch->lock, flags);
+ cinfo->transport_info = vioch;
+ vioch->cinfo = cinfo;
+ spin_unlock_irqrestore(&vioch->lock, flags);
+
+ max_msg = virtio_get_max_msg(tx, cinfo);
+
+ for (i = 0; i < max_msg; i++) {
+ struct scmi_vio_msg *msg;
+
+ msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+
+ if (tx) {
+ msg->request = devm_kzalloc(cinfo->dev,
+ VIRTIO_SCMI_MAX_PDU_SIZE,
+ GFP_KERNEL);
+ if (!msg->request)
+ return -ENOMEM;
+ }
+
+ msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
+ GFP_KERNEL);
+ if (!msg->input)
+ return -ENOMEM;
+
+ if (tx) {
+ spin_lock_irqsave(&vioch->lock, flags);
+ list_add_tail(&msg->list, &vioch->free_list);
+ spin_unlock_irqrestore(&vioch->lock, flags);
+ } else {
+ scmi_vio_feed_vq_rx(vioch, msg);
+ }
+ }
+
+ spin_lock_irqsave(&vioch->ready_lock, flags);
+ vioch->ready = true;
+ spin_unlock_irqrestore(&vioch->ready_lock, flags);
+
+ return 0;
+}
+
+static int virtio_chan_free(int id, void *p, void *data)
+{
+ unsigned long flags;
+ struct scmi_chan_info *cinfo = p;
+ struct scmi_vio_channel *vioch = cinfo->transport_info;
+
+ spin_lock_irqsave(&vioch->ready_lock, flags);
+ vioch->ready = false;
+ spin_unlock_irqrestore(&vioch->ready_lock, flags);
+
+ scmi_free_channel(cinfo, data, id);
+ return 0;
+}
+
+static int virtio_send_message(struct scmi_chan_info *cinfo,
+ struct scmi_xfer *xfer)
+{
+ struct scmi_vio_channel *vioch = cinfo->transport_info;
+ struct scatterlist sg_out;
+ struct scatterlist sg_in;
+ struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
+ unsigned long flags;
+ int rc;
+ struct scmi_vio_msg *msg;
+
+ /*
+ * TODO: For now, we don't support polling. But it should not be
+ * difficult to add support.
+ */
+ if (xfer->hdr.poll_completion)
+ return -EINVAL;
+
+ spin_lock_irqsave(&vioch->lock, flags);
+
+ if (list_empty(&vioch->free_list)) {
+ spin_unlock_irqrestore(&vioch->lock, flags);
+ return -EBUSY;
+ }
+
+ msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
+ list_del(&msg->list);
+
+ msg_tx_prepare(msg->request, xfer);
+
+ sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
+ sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
+
+ rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
+ if (rc) {
+ list_add(&msg->list, &vioch->free_list);
+ dev_err_once(vioch->cinfo->dev,
+ "%s() failed to add to virtqueue (%d)\n", __func__,
+ rc);
+ } else {
+ virtqueue_kick(vioch->vqueue);
+ }
+
+ spin_unlock_irqrestore(&vioch->lock, flags);
+
+ return rc;
+}
+
+static void virtio_fetch_response(struct scmi_chan_info *cinfo,
+ struct scmi_xfer *xfer, void *msg_handle)
+{
+ struct scmi_vio_msg *msg = msg_handle;
+ struct scmi_vio_channel *vioch = cinfo->transport_info;
+
+ if (!msg) {
+ dev_dbg_once(&vioch->vqueue->vdev->dev,
+ "Ignoring %s() call with NULL msg_handle\n",
+ __func__);
+ return;
+ }
+
+ msg_fetch_response(msg->input, msg->rx_len, xfer);
+}
+
+static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
+ size_t max_len, struct scmi_xfer *xfer,
+ void *msg_handle)
+{
+ struct scmi_vio_msg *msg = msg_handle;
+ struct scmi_vio_channel *vioch = cinfo->transport_info;
+
+ if (!msg) {
+ dev_dbg_once(&vioch->vqueue->vdev->dev,
+ "Ignoring %s() call with NULL msg_handle\n",
+ __func__);
+ return;
+ }
+
+ msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
+}
+
+static void dummy_clear_channel(struct scmi_chan_info *cinfo)
+{
+}
+
+static bool dummy_poll_done(struct scmi_chan_info *cinfo,
+ struct scmi_xfer *xfer)
+{
+ return false;
+}
+
+static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
+{
+ unsigned long flags;
+ struct scmi_vio_channel *vioch = cinfo->transport_info;
+ struct scmi_vio_msg *msg = msg_handle;
+
+ if (!msg) {
+ dev_dbg_once(&vioch->vqueue->vdev->dev,
+ "Ignoring %s() call with NULL msg_handle\n",
+ __func__);
+ return;
+ }
+
+ if (vioch->is_rx) {
+ scmi_vio_feed_vq_rx(vioch, msg);
+ } else {
+ spin_lock_irqsave(&vioch->lock, flags);
+ list_add(&msg->list, &vioch->free_list);
+ spin_unlock_irqrestore(&vioch->lock, flags);
+ }
+}
+
+static const struct scmi_transport_ops scmi_virtio_ops = {
+ .link_supplier = virtio_link_supplier,
+ .chan_available = virtio_chan_available,
+ .chan_setup = virtio_chan_setup,
+ .chan_free = virtio_chan_free,
+ .get_max_msg = virtio_get_max_msg,
+ .send_message = virtio_send_message,
+ .fetch_response = virtio_fetch_response,
+ .fetch_notification = virtio_fetch_notification,
+ .clear_channel = dummy_clear_channel,
+ .poll_done = dummy_poll_done,
+ .drop_message = virtio_drop_message,
+};
+
+static int scmi_vio_probe(struct virtio_device *vdev)
+{
+ struct device *dev = &vdev->dev;
+ struct scmi_vio_channel *channels;
+ bool have_vq_rx;
+ int vq_cnt;
+ int i;
+ int ret;
+ struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
+
+ have_vq_rx = scmi_vio_have_vq_rx(vdev);
+ vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
+
+ channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
+ if (!channels)
+ return -ENOMEM;
+
+ if (have_vq_rx)
+ channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
+
+ ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
+ scmi_vio_vqueue_names, NULL);
+ if (ret) {
+ dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
+ return ret;
+ }
+ dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
+
+ for (i = 0; i < vq_cnt; i++) {
+ spin_lock_init(&channels[i].lock);
+ spin_lock_init(&channels[i].ready_lock);
+ INIT_LIST_HEAD(&channels[i].free_list);
+ channels[i].vqueue = vqs[i];
+ }
+
+ vdev->priv = channels;
+
+ return 0;
+}
+
+static void scmi_vio_remove(struct virtio_device *vdev)
+{
+ vdev->config->reset(vdev);
+ vdev->config->del_vqs(vdev);
+}
+
+static unsigned int features[] = {
+ VIRTIO_SCMI_F_P2A_CHANNELS,
+};
+
+static const struct virtio_device_id id_table[] = {
+ { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
+ { 0 }
+};
+
+static struct virtio_driver virtio_scmi_driver = {
+ .driver.name = "scmi-virtio",
+ .driver.owner = THIS_MODULE,
+ .feature_table = features,
+ .feature_table_size = ARRAY_SIZE(features),
+ .id_table = id_table,
+ .probe = scmi_vio_probe,
+ .remove = scmi_vio_remove,
+};
+
+static int __init virtio_scmi_init(void)
+{
+ return register_virtio_driver(&virtio_scmi_driver);
+}
+
+static void __exit virtio_scmi_exit(void)
+{
+ unregister_virtio_driver(&virtio_scmi_driver);
+}
+
+const struct scmi_desc scmi_virtio_desc = {
+ .init = virtio_scmi_init,
+ .exit = virtio_scmi_exit,
+ .ops = &scmi_virtio_ops,
+ .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
+ .max_msg = 0, /* overridden by virtio_get_max_msg() */
+ .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
+};
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index f0c35ce8628c..c146fe30e589 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -56,5 +56,6 @@
#define VIRTIO_ID_PMEM 27 /* virtio pmem */
#define VIRTIO_ID_BT 28 /* virtio bluetooth */
#define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
+#define VIRTIO_ID_SCMI 32 /* virtio SCMI */

#endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
new file mode 100644
index 000000000000..732b01504c35
--- /dev/null
+++ b/include/uapi/linux/virtio_scmi.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/*
+ * Copyright (C) 2020 OpenSynergy GmbH
+ */
+
+#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
+#define _UAPI_LINUX_VIRTIO_SCMI_H
+
+#include <linux/virtio_types.h>
+
+/* Feature bits */
+
+/* Device implements some SCMI notifications, or delayed responses. */
+#define VIRTIO_SCMI_F_P2A_CHANNELS 0
+
+/* Device implements any SCMI statistics shared memory region */
+#define VIRTIO_SCMI_F_SHARED_MEMORY 1
+
+/* Virtqueues */
+
+#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
+#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
+#define VIRTIO_SCMI_VQ_MAX_CNT 2
+
+#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
--
2.25.1


2021-05-11 00:30:23

by Peter Hilber

Subject: [RFC PATCH v3 12/12] firmware: arm_scmi: Handle races between core and transport

Introduce additional state and locks to handle race conditions in the
core-transport interaction. These race conditions become more relevant
with the new virtio transport, which may time out more often, especially
in the case of misconfigurations.

Each race involves the transport calling scmi_rx_callback() concurrently
with a second function. The following race conditions are addressed:

- the (not delayed) response and the delayed response arriving
concurrently, or in inverted order
- belated response to timed out message corrupts new message
- mixing up of responses if the same command is repeated after timeout

The implementation is not yet complete: when the wait_after_timeout
transport option is set and a (not delayed) response returns an error
after the timeout, the core would wait endlessly for the corresponding
delayed response (marked with TODO in the code).
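The per-xfer flag handling introduced by this patch can be sketched as a toy model in plain C (field names follow struct scmi_xfer_state from the diff below; locking, completions and the unexpected-delayed-response cleanup are elided, so this is an illustration, not the patch's code):

```c
#include <assert.h>
#include <stdbool.h>

struct xfer_state {
	bool wait_response;      /* expecting the (not delayed) response */
	bool wait_delayed;       /* expecting the delayed response */
	bool wait_after_timeout; /* core timed out; transport still owns buffers */
	bool put_wait;           /* release deferred until transport responds */
};

enum msg_type { MSG_COMMAND, MSG_DELAYED_RESP };

/* Returns true if the xfer slot may be released after this message. */
static bool handle_response(struct xfer_state *s, enum msg_type type)
{
	if (type == MSG_COMMAND) {
		if (!s->wait_response)
			return false;	/* unexpected response: bail out */
		s->wait_response = false;
	} else {
		if (!s->wait_delayed)
			return false;	/* unexpected delayed response */
		s->wait_delayed = false;
	}

	/* Once nothing more is expected, a deferred put can complete, so a
	 * belated response can no longer corrupt a reused scmi_xfer. */
	if (s->wait_after_timeout && !s->wait_response && !s->wait_delayed) {
		s->wait_after_timeout = false;
		if (s->put_wait)
			return true;	/* free the slot only now */
	}
	return false;
}
```

The key point the model captures: after a timeout, the xfer stays occupied until both expected messages have been presented by the transport, and only then is the deferred put carried out.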

Signed-off-by: Peter Hilber <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 34 ++++++-
drivers/firmware/arm_scmi/driver.c | 153 ++++++++++++++++++++++-------
drivers/firmware/arm_scmi/virtio.c | 1 +
3 files changed, 150 insertions(+), 38 deletions(-)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index bada06cfd33d..bdb9796ea1a3 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -123,6 +123,29 @@ struct scmi_msg {
size_t len;
};

+/**
+ * struct scmi_xfer_state - Message flow state
+ *
+ * @done: command message transmit completion event
+ * @async_done: pointer to delayed response message received event completion
+ * @wait_response: Waiting for the (not delayed) response.
+ * @wait_delayed: Waiting for the delayed response.
+ * @wait_after_timeout: Message timed out for core, but transport will
+ * eventually submit the response. Cannot reuse the
+ * scmi_xfer as long as this is set.
+ * @put_wait: wait_after_timeout was set when we were trying to put the
+ * message, so we wait for the transport to present the response
+ * before actually releasing the message.
+ */
+struct scmi_xfer_state {
+ struct completion done;
+ struct completion *async_done;
+ u8 wait_response;
+ u8 wait_delayed;
+ u8 wait_after_timeout;
+ u8 put_wait;
+};
+
/**
* struct scmi_xfer - Structure representing a message flow
*
@@ -132,16 +155,16 @@ struct scmi_msg {
* @rx: Receive message, the buffer should be pre-allocated to store
* message. If request-ACK protocol is used, we can reuse the same
* buffer for the rx path as we use for the tx path.
- * @done: command message transmit completion event
- * @async_done: pointer to delayed response message received event completion
+ * @state: Message flow state
+ * @lock: Protects access to message while transport may access it
*/
struct scmi_xfer {
int transfer_id;
struct scmi_msg_hdr hdr;
struct scmi_msg tx;
struct scmi_msg rx;
- struct completion done;
- struct completion *async_done;
+ struct scmi_xfer_state state;
+ spinlock_t lock;
};

struct scmi_xfer_ops;
@@ -333,6 +356,8 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
* be pending simultaneously in the system. May be overridden by the
* get_max_msg op.
* @max_msg_size: Maximum size of data per message that can be handled.
+ * @wait_after_timeout: On response timeout, don't reuse scmi_xfer until
+ * transport eventually returns response.
*/
struct scmi_desc {
int (*init)(void);
@@ -341,6 +366,7 @@ struct scmi_desc {
int max_rx_timeout_ms;
int max_msg;
int max_msg_size;
+ u8 wait_after_timeout;
};

#ifdef CONFIG_MAILBOX
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index a31187385470..51f6799cc8ab 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -72,13 +72,13 @@ struct scmi_requested_dev {
* @xfer_alloc_table: Bitmap table for allocated messages.
* Index of this bitmap table is also used for message
* sequence identifier.
- * @xfer_lock: Protection for message allocation
+ * @alloc_lock: Protection for message allocation
* @max_msg: Maximum number of messages that can be pending
*/
struct scmi_xfers_info {
struct scmi_xfer *xfer_block;
unsigned long *xfer_alloc_table;
- spinlock_t xfer_lock;
+ spinlock_t alloc_lock;
int max_msg;
};

@@ -230,20 +230,20 @@ static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
unsigned long flags, bit_pos;

/* Keep the locked section as small as possible */
- spin_lock_irqsave(&minfo->xfer_lock, flags);
+ spin_lock_irqsave(&minfo->alloc_lock, flags);
bit_pos = find_first_zero_bit(minfo->xfer_alloc_table, minfo->max_msg);
if (bit_pos == minfo->max_msg) {
- spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+ spin_unlock_irqrestore(&minfo->alloc_lock, flags);
return ERR_PTR(-ENOMEM);
}
set_bit(bit_pos, minfo->xfer_alloc_table);
- spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+ spin_unlock_irqrestore(&minfo->alloc_lock, flags);

xfer_id = bit_pos;

xfer = &minfo->xfer_block[xfer_id];
xfer->hdr.seq = xfer_id;
- reinit_completion(&xfer->done);
+ reinit_completion(&xfer->state.done);
xfer->transfer_id = atomic_inc_return(&transfer_last_id);

return xfer;
@@ -255,21 +255,27 @@ static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
* @minfo: Pointer to Tx/Rx Message management info based on channel type
* @xfer: message that was reserved by scmi_xfer_get
*
- * This holds a spinlock to maintain integrity of internal data structures.
+ * For the Tx channel, call this with the xfer lock held.
+ * This uses the xfer table spinlock to maintain integrity of internal data
+ * structures.
*/
static void
__scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
{
unsigned long flags;

- /*
- * Keep the locked section as small as possible
- * NOTE: we might escape with smp_mb and no lock here..
- * but just be conservative and symmetric.
- */
- spin_lock_irqsave(&minfo->xfer_lock, flags);
- clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
- spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+ if (xfer->state.wait_after_timeout) {
+ xfer->state.put_wait = true;
+ } else {
+ /*
+ * Keep the locked section as small as possible
+ * NOTE: we might escape with smp_mb and no lock here..
+ * but just be conservative and symmetric.
+ */
+ spin_lock_irqsave(&minfo->alloc_lock, flags);
+ clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
+ spin_unlock_irqrestore(&minfo->alloc_lock, flags);
+ }
}

static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr,
@@ -313,6 +319,7 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo, u16 xfer_id,
struct device *dev = cinfo->dev;
struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
struct scmi_xfers_info *minfo = &info->tx_minfo;
+ unsigned long flags;

/* Are we even expecting this? */
if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
@@ -322,25 +329,65 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo, u16 xfer_id,
}

xfer = &minfo->xfer_block[xfer_id];
+
+ spin_lock_irqsave(&xfer->lock, flags);
+
/*
* Even if a response was indeed expected on this slot at this point,
* a buggy platform could wrongly reply feeding us an unexpected
- * delayed response we're not prepared to handle: bail-out safely
+ * response we're not prepared to handle: bail-out safely
* blaming firmware.
*/
- if (unlikely(msg_type == MSG_TYPE_DELAYED_RESP && !xfer->async_done)) {
- dev_err(dev,
- "Delayed Response for %d not expected! Buggy F/W ?\n",
- xfer_id);
- info->desc->ops->clear_channel(cinfo);
- /* It was unexpected, so nobody will clear the xfer if not us */
- __scmi_xfer_put(minfo, xfer);
- return;
+ if (msg_type == MSG_TYPE_COMMAND) {
+ if (xfer->state.wait_response) {
+ xfer->state.wait_response = false;
+ /*
+ * TODO: In case of an error response, we should ensure
+ * that we cancel waiting for the delayed response.
+ */
+ } else {
+ spin_unlock_irqrestore(&xfer->lock, flags);
+ dev_err(dev,
+ "Response for %d not expected! Buggy F/W?\n",
+ xfer_id);
+ return;
+ }
+ } else /* msg_type == MSG_TYPE_DELAYED_RESP */ {
+ if (xfer->state.wait_delayed) {
+ xfer->state.wait_delayed = false;
+ } else {
+ dev_err(dev,
+ "Delayed Response for %d not expected! Buggy F/W?\n",
+ xfer_id);
+ info->desc->ops->clear_channel(cinfo);
+
+ /* It was unexpected, so nobody will clear the xfer if not us */
+ __scmi_xfer_put(minfo, xfer);
+ spin_unlock_irqrestore(&xfer->lock, flags);
+ return;
+ }
}

scmi_dump_header_dbg(dev, &xfer->hdr);

- info->desc->ops->fetch_response(cinfo, xfer, msg_handle);
+ if (xfer->state.wait_after_timeout && !xfer->state.wait_response &&
+ !xfer->state.wait_delayed) {
+ xfer->state.wait_after_timeout = false;
+ if (xfer->state.put_wait) {
+ __scmi_xfer_put(minfo, xfer);
+ spin_unlock_irqrestore(&xfer->lock, flags);
+ return;
+ }
+ }
+
+ if (xfer->state.async_done && msg_type == MSG_TYPE_COMMAND &&
+ !xfer->state.wait_delayed) {
+ dev_dbg(dev,
+ "Ignoring not delayed response for %d, which was received after delayed response\n",
+ xfer_id);
+ } else {
+ info->desc->ops->fetch_response(cinfo, xfer, msg_handle);
+ }

trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
@@ -348,10 +395,12 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo, u16 xfer_id,

if (msg_type == MSG_TYPE_DELAYED_RESP) {
info->desc->ops->clear_channel(cinfo);
- complete(xfer->async_done);
+ complete(xfer->state.async_done);
} else {
- complete(&xfer->done);
+ complete(&xfer->state.done);
}
+
+ spin_unlock_irqrestore(&xfer->lock, flags);
}

/**
@@ -434,8 +483,11 @@ static void xfer_put(const struct scmi_protocol_handle *ph,
{
const struct scmi_protocol_instance *pi = ph_to_pi(ph);
struct scmi_info *info = handle_to_scmi_info(pi->handle);
+ unsigned long flags;

+ spin_lock_irqsave(&xfer->lock, flags);
__scmi_xfer_put(&info->tx_minfo, xfer);
+ spin_unlock_irqrestore(&xfer->lock, flags);
}

#define SCMI_MAX_POLL_TO_NS (100 * NSEC_PER_USEC)
@@ -468,6 +520,7 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
struct scmi_info *info = handle_to_scmi_info(pi->handle);
struct device *dev = info->dev;
struct scmi_chan_info *cinfo;
+ unsigned long flags;

/*
* Re-instate protocol id here from protocol handle so that cannot be
@@ -480,6 +533,16 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
if (unlikely(!cinfo))
return -EINVAL;

+ spin_lock_irqsave(&xfer->lock, flags);
+ if (xfer->state.wait_after_timeout) {
+ spin_unlock_irqrestore(&xfer->lock, flags);
+ return -EBUSY;
+ }
+
+ xfer->state.wait_response = true;
+ xfer->state.wait_delayed = !!xfer->state.async_done;
+ spin_unlock_irqrestore(&xfer->lock, flags);
+
trace_scmi_xfer_begin(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
xfer->hdr.poll_completion);
@@ -502,7 +565,14 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
} else {
/* And we wait for the response. */
timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
- if (!wait_for_completion_timeout(&xfer->done, timeout)) {
+ if (!wait_for_completion_timeout(&xfer->state.done, timeout)) {
+ if (info->desc->wait_after_timeout) {
+ spin_lock_irqsave(&xfer->lock, flags);
+ if (xfer->state.wait_response ||
+ xfer->state.wait_delayed)
+ xfer->state.wait_after_timeout = true;
+ spin_unlock_irqrestore(&xfer->lock, flags);
+ }
dev_err(dev, "timed out in resp(caller: %pS)\n",
(void *)_RET_IP_);
ret = -ETIMEDOUT;
@@ -547,17 +617,27 @@ static int do_xfer_with_response(const struct scmi_protocol_handle *ph,
{
int ret, timeout = msecs_to_jiffies(SCMI_MAX_RESPONSE_TIMEOUT);
const struct scmi_protocol_instance *pi = ph_to_pi(ph);
+ const struct scmi_info *info = handle_to_scmi_info(pi->handle);
DECLARE_COMPLETION_ONSTACK(async_response);
+ unsigned long flags;

xfer->hdr.protocol_id = pi->proto->id;

- xfer->async_done = &async_response;
+ xfer->state.async_done = &async_response;

ret = do_xfer(ph, xfer);
- if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout))
+ if (!ret &&
+ !wait_for_completion_timeout(xfer->state.async_done, timeout)) {
+ if (info->desc->wait_after_timeout) {
+ spin_lock_irqsave(&xfer->lock, flags);
+ if (xfer->state.wait_delayed)
+ xfer->state.wait_after_timeout = true;
+ spin_unlock_irqrestore(&xfer->lock, flags);
+ }
ret = -ETIMEDOUT;
+ }

- xfer->async_done = NULL;
+ xfer->state.async_done = NULL;
return ret;
}

@@ -604,6 +684,10 @@ static int xfer_get_init(const struct scmi_protocol_handle *ph,
xfer->hdr.id = msg_id;
xfer->hdr.protocol_id = pi->proto->id;
xfer->hdr.poll_completion = false;
+ xfer->state.wait_response = false;
+ xfer->state.wait_delayed = false;
+ xfer->state.wait_after_timeout = false;
+ xfer->state.put_wait = false;

*p = xfer;

@@ -1085,7 +1169,7 @@ static int __scmi_xfer_info_init(struct scmi_info *sinfo,
if (!info->xfer_alloc_table)
return -ENOMEM;

- /* Pre-initialize the buffer pointer to pre-allocated buffers */
+ /* Pre-initialize the buffer pointer to pre-allocated buffers etc. */
for (i = 0, xfer = info->xfer_block; i < info->max_msg; i++, xfer++) {
xfer->rx.buf = devm_kcalloc(dev, sizeof(u8), desc->max_msg_size,
GFP_KERNEL);
@@ -1093,10 +1177,11 @@ static int __scmi_xfer_info_init(struct scmi_info *sinfo,
return -ENOMEM;

xfer->tx.buf = xfer->rx.buf;
- init_completion(&xfer->done);
+ init_completion(&xfer->state.done);
+ spin_lock_init(&xfer->lock);
}

- spin_lock_init(&info->xfer_lock);
+ spin_lock_init(&info->alloc_lock);

return 0;
}
diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
index 20972adf6dc7..3db3c629778c 100644
--- a/drivers/firmware/arm_scmi/virtio.c
+++ b/drivers/firmware/arm_scmi/virtio.c
@@ -520,4 +520,5 @@ const struct scmi_desc scmi_virtio_desc = {
.max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
.max_msg = 0, /* overridden by virtio_get_max_msg() */
.max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
+ .wait_after_timeout = true,
};
--
2.25.1


2021-05-11 00:30:29

by Peter Hilber

Subject: [RFC PATCH v3 01/12] firmware: arm_scmi, smccc, mailbox: Make shmem based transports optional

From: Igor Skalkin <[email protected]>

Once the virtio transport is added later in this patch series, SCMI will
also work without shared-memory based transports. Also, the mailbox
transport may not be needed if the smc transport is used.

- Compile shmem.c only if a shmem based transport is available.

- Remove hard dependency of SCMI on mailbox.

Signed-off-by: Igor Skalkin <[email protected]>
[ Peter: Adapted patch for submission to upstream. ]
Co-developed-by: Peter Hilber <[email protected]>
Signed-off-by: Peter Hilber <[email protected]>
---
drivers/firmware/Kconfig | 8 +++++++-
drivers/firmware/arm_scmi/Makefile | 2 +-
drivers/firmware/arm_scmi/common.h | 2 ++
drivers/firmware/arm_scmi/driver.c | 2 ++
drivers/firmware/smccc/Kconfig | 1 +
drivers/mailbox/Kconfig | 1 +
6 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
index db0ea2d2d75a..80ff49dadf35 100644
--- a/drivers/firmware/Kconfig
+++ b/drivers/firmware/Kconfig
@@ -9,7 +9,7 @@ menu "Firmware Drivers"
config ARM_SCMI_PROTOCOL
tristate "ARM System Control and Management Interface (SCMI) Message Protocol"
depends on ARM || ARM64 || COMPILE_TEST
- depends on MAILBOX
+ depends on ARM_SCMI_HAVE_SHMEM
help
ARM System Control and Management Interface (SCMI) protocol is a
set of operating system-independent software interfaces that are
@@ -27,6 +27,12 @@ config ARM_SCMI_PROTOCOL
This protocol library provides interface for all the client drivers
making use of the features offered by the SCMI.

+config ARM_SCMI_HAVE_SHMEM
+ bool
+ help
+ This declares whether a shared memory based transport for SCMI is
+ available.
+
config ARM_SCMI_POWER_DOMAIN
tristate "SCMI power domain driver"
depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
index 6a2ef63306d6..5a2d4c32e0ae 100644
--- a/drivers/firmware/arm_scmi/Makefile
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
scmi-bus-y = bus.o
scmi-driver-y = driver.o notify.o
-scmi-transport-y = shmem.o
+scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 228bf4a71d23..94dcfb8c0176 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -330,7 +330,9 @@ struct scmi_desc {
int max_msg_size;
};

+#ifdef CONFIG_MAILBOX
extern const struct scmi_desc scmi_mailbox_desc;
+#endif
#ifdef CONFIG_HAVE_ARM_SMCCC
extern const struct scmi_desc scmi_smc_desc;
#endif
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 66eb3f0e5daf..41d80bbaa9a2 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -1567,7 +1567,9 @@ ATTRIBUTE_GROUPS(versions);

/* Each compatible listed below must have descriptor associated with it */
static const struct of_device_id scmi_of_match[] = {
+#ifdef CONFIG_MAILBOX
{ .compatible = "arm,scmi", .data = &scmi_mailbox_desc },
+#endif
#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
#endif
diff --git a/drivers/firmware/smccc/Kconfig b/drivers/firmware/smccc/Kconfig
index 15e7466179a6..69c4d6cabf62 100644
--- a/drivers/firmware/smccc/Kconfig
+++ b/drivers/firmware/smccc/Kconfig
@@ -9,6 +9,7 @@ config HAVE_ARM_SMCCC_DISCOVERY
bool
depends on ARM_PSCI_FW
default y
+ select ARM_SCMI_HAVE_SHMEM
help
SMCCC v1.0 lacked discoverability and hence PSCI v1.0 was updated
to add SMCCC discovery mechanism though the PSCI firmware
diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
index 68de2c6af727..fc02c38c0739 100644
--- a/drivers/mailbox/Kconfig
+++ b/drivers/mailbox/Kconfig
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
menuconfig MAILBOX
bool "Mailbox Hardware Support"
+ select ARM_SCMI_HAVE_SHMEM
help
Mailbox is a framework to control hardware communication between
on-chip processors through queued messages and interrupt driven
--
2.25.1


2021-05-11 00:30:29

by Peter Hilber

Subject: [RFC PATCH v3 08/12] firmware: arm_scmi: Add optional drop_message() transport op

The virtio transport will need to know when the core has finished using
a message. Add an optional transport op through which scmi_rx_callback()
signals this. Do not address the polling case for now.

Signed-off-by: Peter Hilber <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 2 ++
drivers/firmware/arm_scmi/driver.c | 4 ++++
2 files changed, 6 insertions(+)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 5ab2ea0f7db2..51ee08bdcb0c 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -298,6 +298,7 @@ struct scmi_chan_info {
* @fetch_notification: Callback to fetch notification
* @clear_channel: Callback to clear a channel
* @poll_done: Callback to poll transfer status
+ * @drop_message: Optional callback when finished using msg_handle
*/
struct scmi_transport_ops {
int (*link_supplier)(struct device *dev);
@@ -315,6 +316,7 @@ struct scmi_transport_ops {
struct scmi_xfer *xfer, void *msg_handle);
void (*clear_channel)(struct scmi_chan_info *cinfo);
bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
+ void (*drop_message)(struct scmi_chan_info *cinfo, void *msg_handle);
};

int scmi_protocol_device_request(const struct scmi_device_id *id_table);
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index cc27978b4bea..e04e7c8e6928 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -371,6 +371,7 @@ void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr,
{
u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
+ struct scmi_info *info = handle_to_scmi_info(cinfo->handle);

switch (msg_type) {
case MSG_TYPE_NOTIFICATION:
@@ -384,6 +385,9 @@ void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr,
WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type);
break;
}
+
+ if (info->desc->ops->drop_message)
+ info->desc->ops->drop_message(cinfo, msg_handle);
}

/**
--
2.25.1


2021-05-11 00:30:40

by Peter Hilber

Subject: [RFC PATCH v3 06/12] firmware: arm_scmi: Add is_scmi_protocol_device()

The scmi-virtio transport driver will need to distinguish SCMI protocol
devices from the SCMI instance device in the chan_setup() and
chan_free() ops. Add an internal helper to tell the two apart.
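The helper itself is just a bus comparison: a protocol device sits on
the SCMI bus, while the instance device does not. A self-contained model
of that check, with simplified stand-ins for the kernel structures
(channel_owner() is a hypothetical illustration of how a chan_setup() op
might use it, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins: a device records the bus it is registered on. */
struct bus_type { const char *name; };
struct device { struct bus_type *bus; };

static struct bus_type scmi_bus_type = { .name = "scmi_protocol" };

/* The helper: protocol devices live on the SCMI bus, the instance
 * device (a platform device) does not. */
static bool is_scmi_protocol_device(struct device *dev)
{
	return dev->bus == &scmi_bus_type;
}

/* Hypothetical chan_setup() use: channel resources belong to the SCMI
 * instance device, even when setup runs for a protocol device. */
static struct device *channel_owner(struct device *dev,
				    struct device *instance_dev)
{
	return is_scmi_protocol_device(dev) ? instance_dev : dev;
}
```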

Signed-off-by: Peter Hilber <[email protected]>
---
drivers/firmware/arm_scmi/bus.c | 5 +++++
drivers/firmware/arm_scmi/common.h | 2 ++
2 files changed, 7 insertions(+)

diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
index 784cf0027da3..06148e972d1a 100644
--- a/drivers/firmware/arm_scmi/bus.c
+++ b/drivers/firmware/arm_scmi/bus.c
@@ -134,6 +134,11 @@ static struct bus_type scmi_bus_type = {
.remove = scmi_dev_remove,
};

+bool is_scmi_protocol_device(struct device *dev)
+{
+ return dev->bus == &scmi_bus_type;
+}
+
int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
const char *mod_name)
{
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 61b22cdeaeeb..9488c682a51d 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -231,6 +231,8 @@ struct scmi_protocol {
const struct scmi_protocol_events *events;
};

+bool is_scmi_protocol_device(struct device *dev);
+
int __init scmi_bus_init(void);
void __exit scmi_bus_exit(void);

--
2.25.1


2021-05-11 00:31:14

by Peter Hilber

Subject: [RFC PATCH v3 10/12] dt-bindings: arm: Add virtio transport for SCMI

From: Igor Skalkin <[email protected]>

Document the properties for arm,scmi-virtio compatible nodes. The
backing virtio SCMI device is described in patch [1].

[1] https://lists.oasis-open.org/archives/virtio-comment/202005/msg00096.html

Signed-off-by: Igor Skalkin <[email protected]>
[ Peter: Adapted patch for submission to upstream. ]
Co-developed-by: Peter Hilber <[email protected]>
Signed-off-by: Peter Hilber <[email protected]>
---
.../devicetree/bindings/arm/arm,scmi.txt | 35 +++++++++++++++++--
1 file changed, 33 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/arm/arm,scmi.txt b/Documentation/devicetree/bindings/arm/arm,scmi.txt
index 667d58e0a659..5d209ba666f6 100644
--- a/Documentation/devicetree/bindings/arm/arm,scmi.txt
+++ b/Documentation/devicetree/bindings/arm/arm,scmi.txt
@@ -13,6 +13,9 @@ the device tree.
Required properties:

The scmi node with the following properties shall be under the /firmware/ node.
+Some properties are specific to a transport type.
+
+shmem-based transports (mailbox, smc/hvc):

- compatible : shall be "arm,scmi" or "arm,scmi-smc" for smc/hvc transports
- mboxes: List of phandle and mailbox channel specifiers. It should contain
@@ -21,6 +24,15 @@ The scmi node with the following properties shall be under the /firmware/ node.
supported.
- shmem : List of phandle pointing to the shared memory(SHM) area as per
generic mailbox client binding.
+
+Virtio transport:
+
+- compatible : shall be "arm,scmi-virtio".
+
+The virtio transport only supports a single device.
+
+Additional required properties:
+
- #address-cells : should be '1' if the device has sub-nodes, maps to
protocol identifier for a given sub-node.
- #size-cells : should be '0' as 'reg' property doesn't have any size
@@ -50,7 +62,8 @@ Each protocol supported shall have a sub-node with corresponding compatible
as described in the following sections. If the platform supports dedicated
communication channel for a particular protocol, the 3 properties namely:
mboxes, mbox-names and shmem shall be present in the sub-node corresponding
-to that protocol.
+to that protocol. The virtio transport does not support dedicated communication
+channels.

Clock/Performance bindings for the clocks/OPPs based on SCMI Message Protocol
------------------------------------------------------------
@@ -129,7 +142,8 @@ Required sub-node properties:
[5] Documentation/devicetree/bindings/reset/reset.txt
[6] Documentation/devicetree/bindings/regulator/regulator.yaml

-Example:
+Example (mailbox transport):
+----------------------------

sram@50000000 {
compatible = "mmio-sram";
@@ -237,3 +251,20 @@ thermal-zones {
...
};
};
+
+Example (virtio transport):
+---------------------------
+
+virtio_mmio@4b001000 {
+ compatible = "virtio,mmio";
+ ...
+};
+
+firmware {
+ ...
+ scmi {
+ compatible = "arm,scmi-virtio";
+ ...
+
+The rest is similar to the mailbox transport example, with the
+mailbox/shmem-specific properties omitted.
--
2.25.1


2021-05-11 00:31:18

by Peter Hilber

Subject: [RFC PATCH v3 05/12] firmware: arm_scmi: Add per-device transport private info

The scmi-virtio transport will link a supplier device to the arm-scmi
device in the link_supplier() op. The transport should then save a
pointer to the linked device.

To enable this, add a transport private info to the scmi_info. (The
scmi_info is already reachable through the arm-scmi device driver_data.)
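A minimal model of how a transport might use these accessors from its
link_supplier() op (the structures are simplified stand-ins reached
through driver_data, and vio_link_supplier() is a hypothetical example,
not part of the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins; the real scmi_info lives in driver.c and is
 * reached through the arm-scmi device's driver_data. */
struct scmi_info { void *transport_info; };
struct device { void *driver_data; };

static int scmi_set_transport_info(struct device *dev, void *transport_info)
{
	struct scmi_info *info = dev->driver_data;

	if (!info)
		return -1;	/* the patch returns -EBADR here */
	info->transport_info = transport_info;
	return 0;
}

static void *scmi_get_transport_info(struct device *dev)
{
	struct scmi_info *info = dev->driver_data;

	return info ? info->transport_info : NULL;
}

/* Hypothetical transport usage: link_supplier() saves the supplier
 * device so later ops (e.g. chan_setup) can retrieve it. */
static int vio_link_supplier(struct device *scmi_dev, struct device *vdev)
{
	return scmi_set_transport_info(scmi_dev, vdev);
}
```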

Signed-off-by: Peter Hilber <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 2 ++
drivers/firmware/arm_scmi/driver.c | 35 ++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index d60e3c26821d..61b22cdeaeeb 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -346,6 +346,8 @@ extern const struct scmi_desc scmi_mailbox_desc;
extern const struct scmi_desc scmi_smc_desc;
#endif

+int scmi_set_transport_info(struct device *dev, void *transport_info);
+void *scmi_get_transport_info(struct device *dev);
void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr);
void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id);

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index df526ff37c6d..581b6c9b3781 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -127,6 +127,7 @@ struct scmi_protocol_instance {
* @active_protocols: IDR storing device_nodes for protocols actually defined
* in the DT and confirmed as implemented by fw.
* @notify_priv: Pointer to private data structure specific to notifications.
+ * @transport_info: Transport private info
* @node: List head
* @users: Number of users of this instance
*/
@@ -145,6 +146,7 @@ struct scmi_info {
u8 *protocols_imp;
struct idr active_protocols;
void *notify_priv;
+ void *transport_info;
struct list_head node;
int users;
};
@@ -382,6 +384,39 @@ void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
}
}

+/**
+ * scmi_set_transport_info() - Set transport private info
+ *
+ * @dev: SCMI instance device
+ * @transport_info: transport private info
+ *
+ * Return: 0 on success, otherwise error.
+ */
+int scmi_set_transport_info(struct device *dev, void *transport_info)
+{
+ struct scmi_info *info = dev_get_drvdata(dev);
+
+ if (!info)
+ return -EBADR;
+
+ info->transport_info = transport_info;
+ return 0;
+}
+
+/**
+ * scmi_get_transport_info() - Get transport private info
+ *
+ * @dev: SCMI instance device
+ *
+ * Return: transport private info on success, otherwise NULL.
+ */
+void *scmi_get_transport_info(struct device *dev)
+{
+ struct scmi_info *info = dev_get_drvdata(dev);
+
+ return info ? info->transport_info : NULL;
+}
+
/**
* xfer_put() - Release a transmit message
*
--
2.25.1


2021-05-19 08:10:35

by Rob Herring

Subject: Re: [RFC PATCH v3 10/12] dt-bindings: arm: Add virtio transport for SCMI

On Tue, May 11, 2021 at 02:20:38AM +0200, Peter Hilber wrote:
> From: Igor Skalkin <[email protected]>
>
> Document the properties for arm,scmi-virtio compatible nodes. The
> backing virtio SCMI device is described in patch [1].
>
> [1] https://lists.oasis-open.org/archives/virtio-comment/202005/msg00096.html
>
> Signed-off-by: Igor Skalkin <[email protected]>
> [ Peter: Adapted patch for submission to upstream. ]
> Co-developed-by: Peter Hilber <[email protected]>
> Signed-off-by: Peter Hilber <[email protected]>
> ---
> .../devicetree/bindings/arm/arm,scmi.txt | 35 +++++++++++++++++--
> 1 file changed, 33 insertions(+), 2 deletions(-)

Seems like it may not be perfectly clear what properties apply or not
for the different transports. Can you convert this to DT schema first.

>
> diff --git a/Documentation/devicetree/bindings/arm/arm,scmi.txt b/Documentation/devicetree/bindings/arm/arm,scmi.txt
> index 667d58e0a659..5d209ba666f6 100644
> --- a/Documentation/devicetree/bindings/arm/arm,scmi.txt
> +++ b/Documentation/devicetree/bindings/arm/arm,scmi.txt
> @@ -13,6 +13,9 @@ the device tree.
> Required properties:
>
> The scmi node with the following properties shall be under the /firmware/ node.
> +Some properties are specific to a transport type.
> +
> +shmem-based transports (mailbox, smc/hvc):
>
> - compatible : shall be "arm,scmi" or "arm,scmi-smc" for smc/hvc transports
> - mboxes: List of phandle and mailbox channel specifiers. It should contain
> @@ -21,6 +24,15 @@ The scmi node with the following properties shall be under the /firmware/ node.
> supported.
> - shmem : List of phandle pointing to the shared memory(SHM) area as per
> generic mailbox client binding.
> +
> +Virtio transport:
> +
> +- compatible : shall be "arm,scmi-virtio".
> +
> +The virtio transport only supports a single device.
> +
> +Additional required properties:
> +
> - #address-cells : should be '1' if the device has sub-nodes, maps to
> protocol identifier for a given sub-node.
> - #size-cells : should be '0' as 'reg' property doesn't have any size
> @@ -50,7 +62,8 @@ Each protocol supported shall have a sub-node with corresponding compatible
> as described in the following sections. If the platform supports dedicated
> communication channel for a particular protocol, the 3 properties namely:
> mboxes, mbox-names and shmem shall be present in the sub-node corresponding
> -to that protocol.
> +to that protocol. The virtio transport does not support dedicated communication
> +channels.
>
> Clock/Performance bindings for the clocks/OPPs based on SCMI Message Protocol
> ------------------------------------------------------------
> @@ -129,7 +142,8 @@ Required sub-node properties:
> [5] Documentation/devicetree/bindings/reset/reset.txt
> [6] Documentation/devicetree/bindings/regulator/regulator.yaml
>
> -Example:
> +Example (mailbox transport):
> +----------------------------
>
> sram@50000000 {
> compatible = "mmio-sram";
> @@ -237,3 +251,20 @@ thermal-zones {
> ...
> };
> };
> +
> +Example (virtio transport):
> +---------------------------
> +
> +virtio_mmio@4b001000 {
> + compatible = "virtio,mmio";
> + ...
> +};
> +
> +firmware {
> + ...
> + scmi {
> + compatible = "arm,scmi-virtio";
> + ...
> +
> +The rest is similar to the mailbox transport example, when omitting the
> +mailbox/shmem-specific properties.
> --
> 2.25.1
>
>

2021-05-19 16:26:18

by Cristian Marussi

Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

Hi Peter,

On Tue, May 11, 2021 at 02:20:39AM +0200, Peter Hilber wrote:
> From: Igor Skalkin <[email protected]>
>
> This transport enables accessing an SCMI platform as a virtio device.
>
> Implement an SCMI virtio driver according to the virtio SCMI device spec
> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
>
> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
> at most one Rx channel (virtio eventq, P2A channel).
>
> The following feature bit defined in [1] is not implemented:
> VIRTIO_SCMI_F_SHARED_MEMORY.
>
> After the preparatory patches, this implements the virtio transport as
> paraphrased:
>
> Only support a single arm-scmi device (which is consistent with the SCMI
> spec). scmi-virtio init is called from arm-scmi module init. During the
> arm-scmi probing, link to the first probed scmi-virtio device. Defer
> arm-scmi probing if no scmi-virtio device is bound yet.
>
> For simplicity, restrict the number of messages which can be pending
> simultaneously according to the virtqueue capacity. (The virtqueue sizes
> are negotiated with the virtio device.)
>
> As soon as Rx channel message buffers are allocated or have been read
> out by the arm-scmi driver, feed them to the virtio device.
>
> Since some virtio devices may not have the short response time exhibited
> by SCMI platforms using other transports, set a generous response
> timeout.
>
> Limitations:
>
> - Polling is not supported.
>
> - The timeout for delayed responses has not been adjusted.
>

I want to give you some feedback/update while I'm reworking this series.

My main concerns currently arise from the logic of the interaction with
the core transport layer and the race resolution logic that you added in
the final patch, but I understand that, as said, this stems from the
current assumptions built into the core about transports.

So I tried to radically simplify these interactions by modifying the core
accordingly, and I now have something working (even though still a bit
dirty): my plan would be to post, in the next days, an updated RFC V4 with
some core changes upfront, plus your V3 with patches 7,8,12 dropped and a
cumulative patch on virtio transport patch 11 (to better highlight the
differences) and get some feedback at first (and testing, if you can :D).

I'm fine with the message abstractions introduced in patch 09.

I still have not addressed the probing sequence issues (as discussed),
nor included some work I'm doing on atomicity and polling of the transport
(and I've just seen Rob email about DT), so such RFC V4 will be anyway
somehow transitional, in order to share and get your feedback and see if
I missed something along the way.

Anyway, in a nutshell, I think that the main issues mentioned above derive
from the fact that the virtio transport is forced to cope with the assumption,
currently present in the core model, that a transport needs to handle msg
payloads and hdrs in two different steps, while virtio clearly is ready
to go as soon as the resp/notif/dresp messages are received from the
vqueues. This leads to the need to pass transport-specific ancillary
data (like msg_handle) back and forth, and complicates handling and
syncing the states of the in-flight messages between the core and the
virtio transport.

So...I'm making fetch_response/notification optional to the core :D, such
that if a transport like virtio does not offer such ops, the new assumption
in the core is that, when scmi_rx_callback(cinfo, msg_hdr) is called,
a transport like virtio would have already completely filled up the xfer
and there's no need to call fetch_* since all is ready to go; in order to
do that, I added a couple of helpers in the core for the transport to be
able to grab, based only in msg_hdr, the proper pending xfer (for resp/
dresp since they are tied uniquely to the seq/token in the hdr matching the
original sent cmd) or have a freshly allocated xfer assigned (for notif):
this way the transport RX callback (like vio_complete) can now effectively
fillup completely the xfer and release the vqueue message before even
calling scmi_rx_callback. (and also I added a bit of additional sync logic
in and around xfer to avoid races against stale and possibly reused xfers.)

This seemed to me a bit less complicated and with fewer core/transport sync
issue sthan the actual solution so I gave it a try.

As of now it works for me in my minimal emulated setup for cmd/resp/dresp/
notif using sensors proto via scmi_iio/hwmon as a testbed (like it was
working indeed with your V3 original series, so this is conforting :D),
but I want to make more test on a real virtio backend and cleanup a bit
the series before posting an intermediate V4 RFC as said.

Thanks,
Cristian

> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
>
> Signed-off-by: Igor Skalkin <[email protected]>
> [ Peter: Adapted patch for submission to upstream. ]
> Co-developed-by: Peter Hilber <[email protected]>
> Signed-off-by: Peter Hilber <[email protected]>
> ---
> MAINTAINERS | 1 +
> drivers/firmware/Kconfig | 12 +
> drivers/firmware/arm_scmi/Makefile | 1 +
> drivers/firmware/arm_scmi/common.h | 3 +
> drivers/firmware/arm_scmi/driver.c | 3 +
> drivers/firmware/arm_scmi/virtio.c | 523 +++++++++++++++++++++++++++++
> include/uapi/linux/virtio_ids.h | 1 +
> include/uapi/linux/virtio_scmi.h | 25 ++
> 8 files changed, 569 insertions(+)
> create mode 100644 drivers/firmware/arm_scmi/virtio.c
> create mode 100644 include/uapi/linux/virtio_scmi.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index bd7aff0c120f..449c336872f3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -17705,6 +17705,7 @@ F: drivers/regulator/scmi-regulator.c
> F: drivers/reset/reset-scmi.c
> F: include/linux/sc[mp]i_protocol.h
> F: include/trace/events/scmi.h
> +F: include/uapi/linux/virtio_scmi.h
>
> SYSTEM RESET/SHUTDOWN DRIVERS
> M: Sebastian Reichel <[email protected]>
> diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
> index e8377b12e4d0..7e9eafdd9b63 100644
> --- a/drivers/firmware/Kconfig
> +++ b/drivers/firmware/Kconfig
> @@ -39,6 +39,18 @@ config ARM_SCMI_HAVE_MSG
> This declares whether a message passing based transport for SCMI is
> available.
>
> +# This config option includes the dependencies of ARM_SCMI_PROTOCOL so that
> +# this config doesn't show up when SCMI wouldn't be available.
> +config VIRTIO_SCMI
> + bool "Virtio transport for SCMI"
> + select ARM_SCMI_HAVE_MSG
> + depends on VIRTIO && (ARM || ARM64 || COMPILE_TEST)
> + help
> + This enables the virtio based transport for SCMI.
> +
> + If you want to use the ARM SCMI protocol between the virtio guest and
> + a host providing a virtio SCMI device, answer Y.
> +
> config ARM_SCMI_POWER_DOMAIN
> tristate "SCMI power domain driver"
> depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
> diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
> index f6b4acb8abdb..db1787606fb2 100644
> --- a/drivers/firmware/arm_scmi/Makefile
> +++ b/drivers/firmware/arm_scmi/Makefile
> @@ -5,6 +5,7 @@ scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
> scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
> scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
> scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o
> +scmi-transport-$(CONFIG_VIRTIO_SCMI) += virtio.o
> scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
> scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
> $(scmi-transport-y)
> diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
> index 4cb6571c7aaf..bada06cfd33d 100644
> --- a/drivers/firmware/arm_scmi/common.h
> +++ b/drivers/firmware/arm_scmi/common.h
> @@ -349,6 +349,9 @@ extern const struct scmi_desc scmi_mailbox_desc;
> #ifdef CONFIG_HAVE_ARM_SMCCC
> extern const struct scmi_desc scmi_smc_desc;
> #endif
> +#ifdef CONFIG_VIRTIO_SCMI
> +extern const struct scmi_desc scmi_virtio_desc;
> +#endif
>
> int scmi_set_transport_info(struct device *dev, void *transport_info);
> void *scmi_get_transport_info(struct device *dev);
> diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
> index e04e7c8e6928..a31187385470 100644
> --- a/drivers/firmware/arm_scmi/driver.c
> +++ b/drivers/firmware/arm_scmi/driver.c
> @@ -1637,6 +1637,9 @@ static const struct of_device_id scmi_of_match[] = {
> #endif
> #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
> { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
> +#endif
> +#ifdef CONFIG_VIRTIO_SCMI
> + { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
> #endif
> { /* Sentinel */ },
> };
> diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
> new file mode 100644
> index 000000000000..20972adf6dc7
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/virtio.c
> @@ -0,0 +1,523 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Virtio Transport driver for Arm System Control and Management Interface
> + * (SCMI).
> + *
> + * Copyright (C) 2020 OpenSynergy.
> + */
> +
> +/**
> + * DOC: Theory of Operation
> + *
> + * The scmi-virtio transport implements a driver for the virtio SCMI device.
> + *
> + * There is one Tx channel (virtio cmdq, A2P channel) and at most one Rx
> + * channel (virtio eventq, P2A channel). Each channel is implemented through a
> + * virtqueue. Access to each virtqueue is protected by spinlocks.
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/of.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <uapi/linux/virtio_ids.h>
> +#include <uapi/linux/virtio_scmi.h>
> +
> +#include "common.h"
> +
> +#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
> +#define VIRTIO_SCMI_MAX_PDU_SIZE \
> + (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
> +#define DESCRIPTORS_PER_TX_MSG 2
> +
> +/**
> + * struct scmi_vio_channel - Transport channel information
> + *
> + * @lock: Protects access to all members except ready.
> + * @ready_lock: Protects access to ready. If required, it must be taken before
> + * lock.
> + * @vqueue: Associated virtqueue
> + * @cinfo: SCMI Tx or Rx channel
> + * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
> + * @is_rx: Whether channel is an Rx channel
> + * @ready: Whether transport user is ready to hear about channel
> + */
> +struct scmi_vio_channel {
> + spinlock_t lock;
> + spinlock_t ready_lock;
> + struct virtqueue *vqueue;
> + struct scmi_chan_info *cinfo;
> + struct list_head free_list;
> + u8 is_rx;
> + u8 ready;
> +};
> +
> +/**
> + * struct scmi_vio_msg - Transport PDU information
> + *
> + * @request: SDU used for commands
> + * @input: SDU used for (delayed) responses and notifications
> + * @list: List which scmi_vio_msg may be part of
> + * @rx_len: Input SDU size in bytes, once input has been received
> + */
> +struct scmi_vio_msg {
> + struct scmi_msg_payld *request;
> + struct scmi_msg_payld *input;
> + struct list_head list;
> + unsigned int rx_len;
> +};
> +
> +static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
> +{
> + return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
> +}
> +
> +static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
> + struct scmi_vio_msg *msg)
> +{
> + struct scatterlist sg_in;
> + int rc;
> + unsigned long flags;
> +
> + sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> +
> + rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC);
> + if (rc)
> + dev_err_once(vioch->cinfo->dev,
> + "%s() failed to add to virtqueue (%d)\n", __func__,
> + rc);
> + else
> + virtqueue_kick(vioch->vqueue);
> +
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + return rc;
> +}
> +
> +static void scmi_vio_complete_cb(struct virtqueue *vqueue)
> +{
> + unsigned long ready_flags;
> + unsigned long flags;
> + unsigned int length;
> + struct scmi_vio_channel *vioch;
> + struct scmi_vio_msg *msg;
> + bool cb_enabled = true;
> +
> + if (WARN_ON_ONCE(!vqueue->vdev->priv))
> + return;
> + vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
> +
> + for (;;) {
> + spin_lock_irqsave(&vioch->ready_lock, ready_flags);
> +
> + if (!vioch->ready) {
> + if (!cb_enabled)
> + (void)virtqueue_enable_cb(vqueue);
> + goto unlock_ready_out;
> + }
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> + if (cb_enabled) {
> + virtqueue_disable_cb(vqueue);
> + cb_enabled = false;
> + }
> + msg = virtqueue_get_buf(vqueue, &length);
> + if (!msg) {
> + if (virtqueue_enable_cb(vqueue))
> + goto unlock_out;
> + else
> + cb_enabled = true;
> + }
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + if (msg) {
> + msg->rx_len = length;
> +
> + /*
> + * Hold the ready_lock during the callback to avoid
> + * races when the arm-scmi driver is unbinding while
> + * the virtio device is not quiesced yet.
> + */
> + scmi_rx_callback(vioch->cinfo,
> + msg_read_header(msg->input), msg);
> + }
> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
> + }
> +
> +unlock_out:
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +unlock_ready_out:
> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
> +}
> +
> +static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
> +
> +static vq_callback_t *scmi_vio_complete_callbacks[] = {
> + scmi_vio_complete_cb,
> + scmi_vio_complete_cb
> +};
> +
> +static unsigned int virtio_get_max_msg(bool tx,
> + struct scmi_chan_info *base_cinfo)
> +{
> + struct scmi_vio_channel *vioch = base_cinfo->transport_info;
> + unsigned int ret;
> +
> + ret = virtqueue_get_vring_size(vioch->vqueue);
> +
> + /* Tx messages need multiple descriptors. */
> + if (tx)
> + ret /= DESCRIPTORS_PER_TX_MSG;
> +
> + if (ret > MSG_TOKEN_MAX) {
> + dev_info_once(
> + base_cinfo->dev,
> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
> + ret = MSG_TOKEN_MAX;
> + }
> +
> + return ret;
> +}
> +
> +static int scmi_vio_match_any_dev(struct device *dev, const void *data)
> +{
> + return 1;
> +}
> +
> +static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
> +
> +static int virtio_link_supplier(struct device *dev)
> +{
> + struct device *vdev = driver_find_device(
> + &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
> +
> + if (!vdev) {
> + dev_notice_once(
> + dev,
> + "Deferring probe after not finding a bound scmi-virtio device\n");
> + return -EPROBE_DEFER;
> + }
> +
> + /* Add device link for remove order and sysfs link. */
> + if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
> + put_device(vdev);
> + dev_err(dev, "Adding link to supplier virtio device failed\n");
> + return -ECANCELED;
> + }
> +
> + put_device(vdev);
> + return scmi_set_transport_info(dev, dev_to_virtio(vdev));
> +}
> +
> +static bool virtio_chan_available(struct device *dev, int idx)
> +{
> + struct virtio_device *vdev;
> +
> + /* scmi-virtio doesn't support per-protocol channels */
> + if (is_scmi_protocol_device(dev))
> + return false;
> +
> + vdev = scmi_get_transport_info(dev);
> + if (!vdev)
> + return false;
> +
> + switch (idx) {
> + case VIRTIO_SCMI_VQ_TX:
> + return true;
> + case VIRTIO_SCMI_VQ_RX:
> + return scmi_vio_have_vq_rx(vdev);
> + default:
> + return false;
> + }
> +}
> +
> +static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
> + bool tx)
> +{
> + unsigned long flags;
> + struct virtio_device *vdev;
> + struct scmi_vio_channel *vioch;
> + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
> + int max_msg;
> + int i;
> +
> + if (!virtio_chan_available(dev, index))
> + return -ENODEV;
> +
> + vdev = scmi_get_transport_info(dev);
> + vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> + cinfo->transport_info = vioch;
> + vioch->cinfo = cinfo;
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + max_msg = virtio_get_max_msg(tx, cinfo);
> +
> + for (i = 0; i < max_msg; i++) {
> + struct scmi_vio_msg *msg;
> +
> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
> + if (!msg)
> + return -ENOMEM;
> +
> + if (tx) {
> + msg->request = devm_kzalloc(cinfo->dev,
> + VIRTIO_SCMI_MAX_PDU_SIZE,
> + GFP_KERNEL);
> + if (!msg->request)
> + return -ENOMEM;
> + }
> +
> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
> + GFP_KERNEL);
> + if (!msg->input)
> + return -ENOMEM;
> +
> + if (tx) {
> + spin_lock_irqsave(&vioch->lock, flags);
> + list_add_tail(&msg->list, &vioch->free_list);
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + } else {
> + scmi_vio_feed_vq_rx(vioch, msg);
> + }
> + }
> +
> + spin_lock_irqsave(&vioch->ready_lock, flags);
> + vioch->ready = true;
> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> +
> + return 0;
> +}
> +
> +static int virtio_chan_free(int id, void *p, void *data)
> +{
> + unsigned long flags;
> + struct scmi_chan_info *cinfo = p;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> +
> + spin_lock_irqsave(&vioch->ready_lock, flags);
> + vioch->ready = false;
> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> +
> + scmi_free_channel(cinfo, data, id);
> + return 0;
> +}
> +
> +static int virtio_send_message(struct scmi_chan_info *cinfo,
> + struct scmi_xfer *xfer)
> +{
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> + struct scatterlist sg_out;
> + struct scatterlist sg_in;
> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
> + unsigned long flags;
> + int rc;
> + struct scmi_vio_msg *msg;
> +
> + /*
> + * TODO: For now, we don't support polling. But it should not be
> + * difficult to add support.
> + */
> + if (xfer->hdr.poll_completion)
> + return -EINVAL;
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> +
> + if (list_empty(&vioch->free_list)) {
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + return -EBUSY;
> + }
> +
> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
> + list_del(&msg->list);
> +
> + msg_tx_prepare(msg->request, xfer);
> +
> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
> +
> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
> + if (rc) {
> + list_add(&msg->list, &vioch->free_list);
> + dev_err_once(vioch->cinfo->dev,
> + "%s() failed to add to virtqueue (%d)\n", __func__,
> + rc);
> + } else {
> + virtqueue_kick(vioch->vqueue);
> + }
> +
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + return rc;
> +}
> +
> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
> + struct scmi_xfer *xfer, void *msg_handle)
> +{
> + struct scmi_vio_msg *msg = msg_handle;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + msg_fetch_response(msg->input, msg->rx_len, xfer);
> +}
> +
> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
> + size_t max_len, struct scmi_xfer *xfer,
> + void *msg_handle)
> +{
> + struct scmi_vio_msg *msg = msg_handle;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
> +}
> +
> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
> +{
> +}
> +
> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
> + struct scmi_xfer *xfer)
> +{
> + return false;
> +}
> +
> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
> +{
> + unsigned long flags;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> + struct scmi_vio_msg *msg = msg_handle;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + if (vioch->is_rx) {
> + scmi_vio_feed_vq_rx(vioch, msg);
> + } else {
> + spin_lock_irqsave(&vioch->lock, flags);
> + list_add(&msg->list, &vioch->free_list);
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + }
> +}
> +
> +static const struct scmi_transport_ops scmi_virtio_ops = {
> + .link_supplier = virtio_link_supplier,
> + .chan_available = virtio_chan_available,
> + .chan_setup = virtio_chan_setup,
> + .chan_free = virtio_chan_free,
> + .get_max_msg = virtio_get_max_msg,
> + .send_message = virtio_send_message,
> + .fetch_response = virtio_fetch_response,
> + .fetch_notification = virtio_fetch_notification,
> + .clear_channel = dummy_clear_channel,
> + .poll_done = dummy_poll_done,
> + .drop_message = virtio_drop_message,
> +};
> +
> +static int scmi_vio_probe(struct virtio_device *vdev)
> +{
> + struct device *dev = &vdev->dev;
> + struct scmi_vio_channel *channels;
> + bool have_vq_rx;
> + int vq_cnt;
> + int i;
> + int ret;
> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
> +
> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
> +
> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
> + if (!channels)
> + return -ENOMEM;
> +
> + if (have_vq_rx)
> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
> +
> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
> + scmi_vio_vqueue_names, NULL);
> + if (ret) {
> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
> + return ret;
> + }
> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
> +
> + for (i = 0; i < vq_cnt; i++) {
> + spin_lock_init(&channels[i].lock);
> + spin_lock_init(&channels[i].ready_lock);
> + INIT_LIST_HEAD(&channels[i].free_list);
> + channels[i].vqueue = vqs[i];
> + }
> +
> + vdev->priv = channels;
> +
> + return 0;
> +}
> +
> +static void scmi_vio_remove(struct virtio_device *vdev)
> +{
> + vdev->config->reset(vdev);
> + vdev->config->del_vqs(vdev);
> +}
> +
> +static unsigned int features[] = {
> + VIRTIO_SCMI_F_P2A_CHANNELS,
> +};
> +
> +static const struct virtio_device_id id_table[] = {
> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
> + { 0 }
> +};
> +
> +static struct virtio_driver virtio_scmi_driver = {
> + .driver.name = "scmi-virtio",
> + .driver.owner = THIS_MODULE,
> + .feature_table = features,
> + .feature_table_size = ARRAY_SIZE(features),
> + .id_table = id_table,
> + .probe = scmi_vio_probe,
> + .remove = scmi_vio_remove,
> +};
> +
> +static int __init virtio_scmi_init(void)
> +{
> + return register_virtio_driver(&virtio_scmi_driver);
> +}
> +
> +static void __exit virtio_scmi_exit(void)
> +{
> + unregister_virtio_driver(&virtio_scmi_driver);
> +}
> +
> +const struct scmi_desc scmi_virtio_desc = {
> + .init = virtio_scmi_init,
> + .exit = virtio_scmi_exit,
> + .ops = &scmi_virtio_ops,
> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
> +};
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index f0c35ce8628c..c146fe30e589 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -56,5 +56,6 @@
> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
>
> #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
> new file mode 100644
> index 000000000000..732b01504c35
> --- /dev/null
> +++ b/include/uapi/linux/virtio_scmi.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
> +/*
> + * Copyright (C) 2020 OpenSynergy GmbH
> + */
> +
> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
> +#define _UAPI_LINUX_VIRTIO_SCMI_H
> +
> +#include <linux/virtio_types.h>
> +
> +/* Feature bits */
> +
> +/* Device implements some SCMI notifications, or delayed responses. */
> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
> +
> +/* Device implements any SCMI statistics shared memory region */
> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
> +
> +/* Virtqueues */
> +
> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
> +
> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
> --
> 2.25.1
>
>

2021-05-26 14:42:02

by Cristian Marussi

Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

Hi Peter,

as anticipated, I'm adding some new SCMI core mechanisms that should help
simplify the virtio-scmi series.

Such core work is still in progress (and to be properly reviewed) and it
is at:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

but in the meantime I have an initial, working (for me at least :D)
rework of your V3 virtio-scmi series. The rework is still in progress and
needs cleaning up (I have not yet addressed the probing sequence or
polling mode), and I am holding it back for now since Rob also asked
about the DT txt-to-yaml conversion. But if you want to have a look in
the meantime, you can find the whole V4 transitional series, rebased on
top of my core changes with some changes on top, at:

https://gitlab.arm.com/linux-arm/linux-cm/-/commits/scmi_virtio_trans_V4_rework/

where:

- I dropped V3 patches 7, 8 and 12
- the virtio changes I applied to make use of my core changes are all
embedded in the last patch (just for now):

[RFC] firmware: arm_scmi: make virtio-scmi use delegated xfers

This is definitely not the final version, so you may want to just wait
for a real V4; it is only meant to give an idea of the direction I'm
trying to follow.

Thanks,
Cristian

On Tue, May 11, 2021 at 02:20:39AM +0200, Peter Hilber wrote:
> From: Igor Skalkin <[email protected]>
>
> This transport enables accessing an SCMI platform as a virtio device.
>
> Implement an SCMI virtio driver according to the virtio SCMI device spec
> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
>
> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
> at most one Rx channel (virtio eventq, P2A channel).
>
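As a side note on the cmdq/eventq split above: the patch's uapi header fixes the virtqueue indices (cmdq at index 0, eventq at index 1), and channel setup selects between them by direction. A minimal standalone sketch of that mapping (plain C; `chan_to_vq_index()` is an illustrative name, not the kernel function):

```c
#include <assert.h>

/* Virtqueue indices as defined in the patch's virtio_scmi.h. */
#define VIRTIO_SCMI_VQ_TX 0 /* cmdq, A2P channel */
#define VIRTIO_SCMI_VQ_RX 1 /* eventq, P2A channel */

/* Illustrative helper mirroring the index selection done in
 * virtio_chan_setup(): Tx channels use the cmdq, Rx the eventq. */
static int chan_to_vq_index(int tx)
{
	return tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
}
```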
> The following feature bit defined in [1] is not implemented:
> VIRTIO_SCMI_F_SHARED_MEMORY.
>
> After the preparatory patches, this implements the virtio transport,
> summarized as follows:
>
> Only support a single arm-scmi device (which is consistent with the SCMI
> spec). scmi-virtio init is called from arm-scmi module init. During the
> arm-scmi probing, link to the first probed scmi-virtio device. Defer
> arm-scmi probing if no scmi-virtio device is bound yet.
>
> For simplicity, restrict the number of messages which can be pending
> simultaneously according to the virtqueue capacity. (The virtqueue sizes
> are negotiated with the virtio device.)
>
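The restriction above is plain arithmetic on the negotiated vring size: each Tx message needs two descriptors (one for the request SDU, one for the response SDU), and the SCMI sequence-number space caps pending messages regardless of queue depth. A standalone sketch of the computation done by the patch's virtio_get_max_msg() (plain C; `max_pending_msgs()` is an illustrative name, and the MSG_TOKEN_MAX value of 1024 is an assumption based on SCMI's 10-bit sequence number):

```c
#include <assert.h>

#define DESCRIPTORS_PER_TX_MSG 2 /* request + response descriptor */
#define MSG_TOKEN_MAX 1024       /* assumed 10-bit SCMI token space */

/* Mirrors the logic of virtio_get_max_msg(): divide the vring size by
 * the descriptors needed per Tx message, then clamp to the number of
 * distinguishable message tokens. */
static unsigned int max_pending_msgs(unsigned int vring_size, int tx)
{
	unsigned int n = vring_size;

	if (tx)
		n /= DESCRIPTORS_PER_TX_MSG;
	return n > MSG_TOKEN_MAX ? MSG_TOKEN_MAX : n;
}
```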
> As soon as Rx channel message buffers are allocated or have been read
> out by the arm-scmi driver, feed them to the virtio device.
>
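The Rx buffer handling described above is a simple ownership cycle: each eventq buffer is either queued at the device, waiting for a notification or delayed response to land in it, or held by the driver while the SCMI core reads it out, after which it is re-queued. A toy model of that cycle (plain C with illustrative names; the real code does this via virtqueue_add_inbuf() in scmi_vio_feed_vq_rx() and the drop_message op):

```c
#include <assert.h>

enum rx_owner { DEVICE_OWNED, DRIVER_OWNED };

struct rx_buf {
	enum rx_owner owner;
};

/* chan_setup / drop_message: hand the buffer (back) to the device. */
static void feed_to_device(struct rx_buf *b)
{
	b->owner = DEVICE_OWNED;
}

/* Completion callback: the device returns a filled buffer, which is
 * then passed to the SCMI core for reading out. */
static void complete_to_driver(struct rx_buf *b)
{
	b->owner = DRIVER_OWNED;
}
```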
> Since some virtio devices may not have the short response time exhibited
> by SCMI platforms using other transports, set a generous response
> timeout.
>
> Limitations:
>
> - Polling is not supported.
>
> - The timeout for delayed responses has not been adjusted.
>
> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
>
> Signed-off-by: Igor Skalkin <[email protected]>
> [ Peter: Adapted patch for submission to upstream. ]
> Co-developed-by: Peter Hilber <[email protected]>
> Signed-off-by: Peter Hilber <[email protected]>
> ---
> MAINTAINERS | 1 +
> drivers/firmware/Kconfig | 12 +
> drivers/firmware/arm_scmi/Makefile | 1 +
> drivers/firmware/arm_scmi/common.h | 3 +
> drivers/firmware/arm_scmi/driver.c | 3 +
> drivers/firmware/arm_scmi/virtio.c | 523 +++++++++++++++++++++++++++++
> include/uapi/linux/virtio_ids.h | 1 +
> include/uapi/linux/virtio_scmi.h | 25 ++
> 8 files changed, 569 insertions(+)
> create mode 100644 drivers/firmware/arm_scmi/virtio.c
> create mode 100644 include/uapi/linux/virtio_scmi.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index bd7aff0c120f..449c336872f3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -17705,6 +17705,7 @@ F: drivers/regulator/scmi-regulator.c
> F: drivers/reset/reset-scmi.c
> F: include/linux/sc[mp]i_protocol.h
> F: include/trace/events/scmi.h
> +F: include/uapi/linux/virtio_scmi.h
>
> SYSTEM RESET/SHUTDOWN DRIVERS
> M: Sebastian Reichel <[email protected]>
> diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
> index e8377b12e4d0..7e9eafdd9b63 100644
> --- a/drivers/firmware/Kconfig
> +++ b/drivers/firmware/Kconfig
> @@ -39,6 +39,18 @@ config ARM_SCMI_HAVE_MSG
> This declares whether a message passing based transport for SCMI is
> available.
>
> +# This config option includes the dependencies of ARM_SCMI_PROTOCOL so that
> +# this config doesn't show up when SCMI wouldn't be available.
> +config VIRTIO_SCMI
> + bool "Virtio transport for SCMI"
> + select ARM_SCMI_HAVE_MSG
> + depends on VIRTIO && (ARM || ARM64 || COMPILE_TEST)
> + help
> + This enables the virtio based transport for SCMI.
> +
> + If you want to use the ARM SCMI protocol between the virtio guest and
> + a host providing a virtio SCMI device, answer Y.
> +
> config ARM_SCMI_POWER_DOMAIN
> tristate "SCMI power domain driver"
> depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
> + struct scmi_xfer *xfer)
> +{
> + return false;
> +}
> +
> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
> +{
> + unsigned long flags;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> + struct scmi_vio_msg *msg = msg_handle;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + if (vioch->is_rx) {
> + scmi_vio_feed_vq_rx(vioch, msg);
> + } else {
> + spin_lock_irqsave(&vioch->lock, flags);
> + list_add(&msg->list, &vioch->free_list);
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + }
> +}
> +
> +static const struct scmi_transport_ops scmi_virtio_ops = {
> + .link_supplier = virtio_link_supplier,
> + .chan_available = virtio_chan_available,
> + .chan_setup = virtio_chan_setup,
> + .chan_free = virtio_chan_free,
> + .get_max_msg = virtio_get_max_msg,
> + .send_message = virtio_send_message,
> + .fetch_response = virtio_fetch_response,
> + .fetch_notification = virtio_fetch_notification,
> + .clear_channel = dummy_clear_channel,
> + .poll_done = dummy_poll_done,
> + .drop_message = virtio_drop_message,
> +};
> +
> +static int scmi_vio_probe(struct virtio_device *vdev)
> +{
> + struct device *dev = &vdev->dev;
> + struct scmi_vio_channel *channels;
> + bool have_vq_rx;
> + int vq_cnt;
> + int i;
> + int ret;
> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
> +
> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
> +
> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
> + if (!channels)
> + return -ENOMEM;
> +
> + if (have_vq_rx)
> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
> +
> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
> + scmi_vio_vqueue_names, NULL);
> + if (ret) {
> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
> + return ret;
> + }
> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
> +
> + for (i = 0; i < vq_cnt; i++) {
> + spin_lock_init(&channels[i].lock);
> + spin_lock_init(&channels[i].ready_lock);
> + INIT_LIST_HEAD(&channels[i].free_list);
> + channels[i].vqueue = vqs[i];
> + }
> +
> + vdev->priv = channels;
> +
> + return 0;
> +}
> +
> +static void scmi_vio_remove(struct virtio_device *vdev)
> +{
> + vdev->config->reset(vdev);
> + vdev->config->del_vqs(vdev);
> +}
> +
> +static unsigned int features[] = {
> + VIRTIO_SCMI_F_P2A_CHANNELS,
> +};
> +
> +static const struct virtio_device_id id_table[] = {
> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
> + { 0 }
> +};
> +
> +static struct virtio_driver virtio_scmi_driver = {
> + .driver.name = "scmi-virtio",
> + .driver.owner = THIS_MODULE,
> + .feature_table = features,
> + .feature_table_size = ARRAY_SIZE(features),
> + .id_table = id_table,
> + .probe = scmi_vio_probe,
> + .remove = scmi_vio_remove,
> +};
> +
> +static int __init virtio_scmi_init(void)
> +{
> + return register_virtio_driver(&virtio_scmi_driver);
> +}
> +
> +static void __exit virtio_scmi_exit(void)
> +{
> + unregister_virtio_driver(&virtio_scmi_driver);
> +}
> +
> +const struct scmi_desc scmi_virtio_desc = {
> + .init = virtio_scmi_init,
> + .exit = virtio_scmi_exit,
> + .ops = &scmi_virtio_ops,
> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
> +};
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index f0c35ce8628c..c146fe30e589 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -56,5 +56,6 @@
> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
>
> #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
> new file mode 100644
> index 000000000000..732b01504c35
> --- /dev/null
> +++ b/include/uapi/linux/virtio_scmi.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
> +/*
> + * Copyright (C) 2020 OpenSynergy GmbH
> + */
> +
> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
> +#define _UAPI_LINUX_VIRTIO_SCMI_H
> +
> +#include <linux/virtio_types.h>
> +
> +/* Feature bits */
> +
> +/* Device implements some SCMI notifications, or delayed responses. */
> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
> +
> +/* Device implements any SCMI statistics shared memory region */
> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
> +
> +/* Virtqueues */
> +
> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
> +
> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
> --
> 2.25.1
>
>

2021-06-01 14:55:55

by Vincent Guittot

[permalink] [raw]
Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

On Tue, 11 May 2021 at 02:28, Peter Hilber <[email protected]> wrote:
>
> From: Igor Skalkin <[email protected]>
>
> This transport enables accessing an SCMI platform as a virtio device.
>
> Implement an SCMI virtio driver according to the virtio SCMI device spec
> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
>
> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
> at most one Rx channel (virtio eventq, P2A channel).
>
> The following feature bit defined in [1] is not implemented:
> VIRTIO_SCMI_F_SHARED_MEMORY.
>
> After the preparatory patches, this implements the virtio transport as
> paraphrased:
>
> Only support a single arm-scmi device (which is consistent with the SCMI
> spec). scmi-virtio init is called from arm-scmi module init. During the
> arm-scmi probing, link to the first probed scmi-virtio device. Defer
> arm-scmi probing if no scmi-virtio device is bound yet.
>
> For simplicity, restrict the number of messages which can be pending
> simultaneously according to the virtqueue capacity. (The virtqueue sizes
> are negotiated with the virtio device.)
>
> As soon as Rx channel message buffers are allocated or have been read
> out by the arm-scmi driver, feed them to the virtio device.
>
> Since some virtio devices may not have the short response time exhibited
> by SCMI platforms using other transports, set a generous response
> timeout.
>
> Limitations:
>
> - Polling is not supported.
>
> - The timeout for delayed responses has not been adjusted.
>
> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
>
> Signed-off-by: Igor Skalkin <[email protected]>
> [ Peter: Adapted patch for submission to upstream. ]
> Co-developed-by: Peter Hilber <[email protected]>
> Signed-off-by: Peter Hilber <[email protected]>
> ---
> MAINTAINERS | 1 +
> drivers/firmware/Kconfig | 12 +
> drivers/firmware/arm_scmi/Makefile | 1 +
> drivers/firmware/arm_scmi/common.h | 3 +
> drivers/firmware/arm_scmi/driver.c | 3 +
> drivers/firmware/arm_scmi/virtio.c | 523 +++++++++++++++++++++++++++++
> include/uapi/linux/virtio_ids.h | 1 +
> include/uapi/linux/virtio_scmi.h | 25 ++
> 8 files changed, 569 insertions(+)
> create mode 100644 drivers/firmware/arm_scmi/virtio.c
> create mode 100644 include/uapi/linux/virtio_scmi.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index bd7aff0c120f..449c336872f3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -17705,6 +17705,7 @@ F: drivers/regulator/scmi-regulator.c
> F: drivers/reset/reset-scmi.c
> F: include/linux/sc[mp]i_protocol.h
> F: include/trace/events/scmi.h
> +F: include/uapi/linux/virtio_scmi.h
>
> SYSTEM RESET/SHUTDOWN DRIVERS
> M: Sebastian Reichel <[email protected]>
> diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
> index e8377b12e4d0..7e9eafdd9b63 100644
> --- a/drivers/firmware/Kconfig
> +++ b/drivers/firmware/Kconfig
> @@ -39,6 +39,18 @@ config ARM_SCMI_HAVE_MSG
> This declares whether a message passing based transport for SCMI is
> available.
>
> +# This config option includes the dependencies of ARM_SCMI_PROTOCOL so that
> +# this config doesn't show up when SCMI wouldn't be available.
> +config VIRTIO_SCMI
> + bool "Virtio transport for SCMI"
> + select ARM_SCMI_HAVE_MSG
> + depends on VIRTIO && (ARM || ARM64 || COMPILE_TEST)
> + help
> + This enables the virtio based transport for SCMI.
> +
> + If you want to use the ARM SCMI protocol between the virtio guest and
> + a host providing a virtio SCMI device, answer Y.
> +
> config ARM_SCMI_POWER_DOMAIN
> tristate "SCMI power domain driver"
> depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
> diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
> index f6b4acb8abdb..db1787606fb2 100644
> --- a/drivers/firmware/arm_scmi/Makefile
> +++ b/drivers/firmware/arm_scmi/Makefile
> @@ -5,6 +5,7 @@ scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
> scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
> scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
> scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o
> +scmi-transport-$(CONFIG_VIRTIO_SCMI) += virtio.o
> scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
> scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
> $(scmi-transport-y)
> diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
> index 4cb6571c7aaf..bada06cfd33d 100644
> --- a/drivers/firmware/arm_scmi/common.h
> +++ b/drivers/firmware/arm_scmi/common.h
> @@ -349,6 +349,9 @@ extern const struct scmi_desc scmi_mailbox_desc;
> #ifdef CONFIG_HAVE_ARM_SMCCC
> extern const struct scmi_desc scmi_smc_desc;
> #endif
> +#ifdef CONFIG_VIRTIO_SCMI
> +extern const struct scmi_desc scmi_virtio_desc;
> +#endif
>
> int scmi_set_transport_info(struct device *dev, void *transport_info);
> void *scmi_get_transport_info(struct device *dev);
> diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
> index e04e7c8e6928..a31187385470 100644
> --- a/drivers/firmware/arm_scmi/driver.c
> +++ b/drivers/firmware/arm_scmi/driver.c
> @@ -1637,6 +1637,9 @@ static const struct of_device_id scmi_of_match[] = {
> #endif
> #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
> { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
> +#endif
> +#ifdef CONFIG_VIRTIO_SCMI
> + { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
> #endif
> { /* Sentinel */ },
> };
> diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
> new file mode 100644
> index 000000000000..20972adf6dc7
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/virtio.c
> @@ -0,0 +1,523 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Virtio Transport driver for Arm System Control and Management Interface
> + * (SCMI).
> + *
> + * Copyright (C) 2020 OpenSynergy.
> + */
> +
> +/**
> + * DOC: Theory of Operation
> + *
> + * The scmi-virtio transport implements a driver for the virtio SCMI device.
> + *
> + * There is one Tx channel (virtio cmdq, A2P channel) and at most one Rx
> + * channel (virtio eventq, P2A channel). Each channel is implemented through a
> + * virtqueue. Access to each virtqueue is protected by spinlocks.
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/of.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <uapi/linux/virtio_ids.h>
> +#include <uapi/linux/virtio_scmi.h>
> +
> +#include "common.h"
> +
> +#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
> +#define VIRTIO_SCMI_MAX_PDU_SIZE \
> + (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
> +#define DESCRIPTORS_PER_TX_MSG 2
> +
> +/**
> + * struct scmi_vio_channel - Transport channel information
> + *
> + * @lock: Protects access to all members except ready.
> + * @ready_lock: Protects access to ready. If required, it must be taken before
> + * lock.
> + * @vqueue: Associated virtqueue
> + * @cinfo: SCMI Tx or Rx channel
> + * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
> + * @is_rx: Whether channel is an Rx channel
> + * @ready: Whether transport user is ready to hear about channel
> + */
> +struct scmi_vio_channel {
> + spinlock_t lock;
> + spinlock_t ready_lock;
> + struct virtqueue *vqueue;
> + struct scmi_chan_info *cinfo;
> + struct list_head free_list;
> + u8 is_rx;
> + u8 ready;
> +};
> +
> +/**
> + * struct scmi_vio_msg - Transport PDU information
> + *
> + * @request: SDU used for commands
> + * @input: SDU used for (delayed) responses and notifications
> + * @list: List which scmi_vio_msg may be part of
> + * @rx_len: Input SDU size in bytes, once input has been received
> + */
> +struct scmi_vio_msg {
> + struct scmi_msg_payld *request;
> + struct scmi_msg_payld *input;
> + struct list_head list;
> + unsigned int rx_len;
> +};
> +
> +static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
> +{
> + return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
> +}
> +
> +static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
> + struct scmi_vio_msg *msg)
> +{
> + struct scatterlist sg_in;
> + int rc;
> + unsigned long flags;
> +
> + sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> +
> + rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC);
> + if (rc)
> + dev_err_once(vioch->cinfo->dev,
> + "%s() failed to add to virtqueue (%d)\n", __func__,
> + rc);
> + else
> + virtqueue_kick(vioch->vqueue);
> +
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + return rc;
> +}
> +
> +static void scmi_vio_complete_cb(struct virtqueue *vqueue)
> +{
> + unsigned long ready_flags;
> + unsigned long flags;
> + unsigned int length;
> + struct scmi_vio_channel *vioch;
> + struct scmi_vio_msg *msg;
> + bool cb_enabled = true;
> +
> + if (WARN_ON_ONCE(!vqueue->vdev->priv))
> + return;
> + vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
> +
> + for (;;) {
> + spin_lock_irqsave(&vioch->ready_lock, ready_flags);
> +
> + if (!vioch->ready) {
> + if (!cb_enabled)
> + (void)virtqueue_enable_cb(vqueue);
> + goto unlock_ready_out;
> + }
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> + if (cb_enabled) {
> + virtqueue_disable_cb(vqueue);
> + cb_enabled = false;
> + }
> + msg = virtqueue_get_buf(vqueue, &length);
> + if (!msg) {
> + if (virtqueue_enable_cb(vqueue))
> + goto unlock_out;
> + else
> + cb_enabled = true;
> + }
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + if (msg) {
> + msg->rx_len = length;
> +
> + /*
> + * Hold the ready_lock during the callback to avoid
> + * races when the arm-scmi driver is unbinding while
> + * the virtio device is not quiesced yet.
> + */
> + scmi_rx_callback(vioch->cinfo,
> + msg_read_header(msg->input), msg);
> + }
> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
> + }
> +
> +unlock_out:
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +unlock_ready_out:
> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
> +}
> +
> +static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
> +
> +static vq_callback_t *scmi_vio_complete_callbacks[] = {
> + scmi_vio_complete_cb,
> + scmi_vio_complete_cb
> +};
> +
> +static unsigned int virtio_get_max_msg(bool tx,
> + struct scmi_chan_info *base_cinfo)
> +{
> + struct scmi_vio_channel *vioch = base_cinfo->transport_info;
> + unsigned int ret;
> +
> + ret = virtqueue_get_vring_size(vioch->vqueue);
> +
> + /* Tx messages need multiple descriptors. */
> + if (tx)
> + ret /= DESCRIPTORS_PER_TX_MSG;
> +
> + if (ret > MSG_TOKEN_MAX) {

__scmi_xfer_info_init() returns an error for info->max_msg >=
MSG_TOKEN_MAX, so the check should be (ret >= MSG_TOKEN_MAX).


> + dev_info_once(
> + base_cinfo->dev,
> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
> + ret = MSG_TOKEN_MAX;

should be ret = MSG_TOKEN_MAX - 1;

> + }
> +
> + return ret;
> +}
> +
> +static int scmi_vio_match_any_dev(struct device *dev, const void *data)
> +{
> + return 1;
> +}
> +
> +static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
> +
> +static int virtio_link_supplier(struct device *dev)
> +{
> + struct device *vdev = driver_find_device(
> + &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
> +
> + if (!vdev) {
> + dev_notice_once(
> + dev,
> + "Deferring probe after not finding a bound scmi-virtio device\n");
> + return -EPROBE_DEFER;
> + }
> +
> + /* Add device link for remove order and sysfs link. */
> + if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
> + put_device(vdev);
> + dev_err(dev, "Adding link to supplier virtio device failed\n");
> + return -ECANCELED;
> + }
> +
> + put_device(vdev);
> + return scmi_set_transport_info(dev, dev_to_virtio(vdev));
> +}
> +
> +static bool virtio_chan_available(struct device *dev, int idx)
> +{
> + struct virtio_device *vdev;
> +
> + /* scmi-virtio doesn't support per-protocol channels */
> + if (is_scmi_protocol_device(dev))
> + return false;
> +
> + vdev = scmi_get_transport_info(dev);
> + if (!vdev)
> + return false;
> +
> + switch (idx) {
> + case VIRTIO_SCMI_VQ_TX:
> + return true;
> + case VIRTIO_SCMI_VQ_RX:
> + return scmi_vio_have_vq_rx(vdev);
> + default:
> + return false;
> + }
> +}
> +
> +static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
> + bool tx)
> +{
> + unsigned long flags;
> + struct virtio_device *vdev;
> + struct scmi_vio_channel *vioch;
> + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
> + int max_msg;
> + int i;
> +
> + if (!virtio_chan_available(dev, index))
> + return -ENODEV;
> +
> + vdev = scmi_get_transport_info(dev);
> + vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> + cinfo->transport_info = vioch;
> + vioch->cinfo = cinfo;
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + max_msg = virtio_get_max_msg(tx, cinfo);
> +
> + for (i = 0; i < max_msg; i++) {
> + struct scmi_vio_msg *msg;
> +
> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
> + if (!msg)
> + return -ENOMEM;
> +
> + if (tx) {
> + msg->request = devm_kzalloc(cinfo->dev,
> + VIRTIO_SCMI_MAX_PDU_SIZE,
> + GFP_KERNEL);
> + if (!msg->request)
> + return -ENOMEM;
> + }
> +
> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
> + GFP_KERNEL);
> + if (!msg->input)
> + return -ENOMEM;
> +
> + if (tx) {
> + spin_lock_irqsave(&vioch->lock, flags);
> + list_add_tail(&msg->list, &vioch->free_list);
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + } else {
> + scmi_vio_feed_vq_rx(vioch, msg);
> + }
> + }
> +
> + spin_lock_irqsave(&vioch->ready_lock, flags);
> + vioch->ready = true;
> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> +
> + return 0;
> +}
> +
> +static int virtio_chan_free(int id, void *p, void *data)
> +{
> + unsigned long flags;
> + struct scmi_chan_info *cinfo = p;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> +
> + spin_lock_irqsave(&vioch->ready_lock, flags);
> + vioch->ready = false;
> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> +
> + scmi_free_channel(cinfo, data, id);
> + return 0;
> +}
> +
> +static int virtio_send_message(struct scmi_chan_info *cinfo,
> + struct scmi_xfer *xfer)
> +{
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> + struct scatterlist sg_out;
> + struct scatterlist sg_in;
> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
> + unsigned long flags;
> + int rc;
> + struct scmi_vio_msg *msg;
> +
> + /*
> + * TODO: For now, we don't support polling. But it should not be
> + * difficult to add support.
> + */
> + if (xfer->hdr.poll_completion)
> + return -EINVAL;
> +
> + spin_lock_irqsave(&vioch->lock, flags);
> +
> + if (list_empty(&vioch->free_list)) {
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + return -EBUSY;
> + }
> +
> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
> + list_del(&msg->list);
> +
> + msg_tx_prepare(msg->request, xfer);
> +
> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
> +
> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
> + if (rc) {
> + list_add(&msg->list, &vioch->free_list);
> + dev_err_once(vioch->cinfo->dev,
> + "%s() failed to add to virtqueue (%d)\n", __func__,
> + rc);
> + } else {
> + virtqueue_kick(vioch->vqueue);
> + }
> +
> + spin_unlock_irqrestore(&vioch->lock, flags);
> +
> + return rc;
> +}
> +
> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
> + struct scmi_xfer *xfer, void *msg_handle)
> +{
> + struct scmi_vio_msg *msg = msg_handle;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + msg_fetch_response(msg->input, msg->rx_len, xfer);
> +}
> +
> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
> + size_t max_len, struct scmi_xfer *xfer,
> + void *msg_handle)
> +{
> + struct scmi_vio_msg *msg = msg_handle;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
> +}
> +
> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
> +{
> +}
> +
> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
> + struct scmi_xfer *xfer)
> +{
> + return false;
> +}
> +
> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
> +{
> + unsigned long flags;
> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> + struct scmi_vio_msg *msg = msg_handle;
> +
> + if (!msg) {
> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> + "Ignoring %s() call with NULL msg_handle\n",
> + __func__);
> + return;
> + }
> +
> + if (vioch->is_rx) {
> + scmi_vio_feed_vq_rx(vioch, msg);
> + } else {
> + spin_lock_irqsave(&vioch->lock, flags);
> + list_add(&msg->list, &vioch->free_list);
> + spin_unlock_irqrestore(&vioch->lock, flags);
> + }
> +}
> +
> +static const struct scmi_transport_ops scmi_virtio_ops = {
> + .link_supplier = virtio_link_supplier,
> + .chan_available = virtio_chan_available,
> + .chan_setup = virtio_chan_setup,
> + .chan_free = virtio_chan_free,
> + .get_max_msg = virtio_get_max_msg,
> + .send_message = virtio_send_message,
> + .fetch_response = virtio_fetch_response,
> + .fetch_notification = virtio_fetch_notification,
> + .clear_channel = dummy_clear_channel,
> + .poll_done = dummy_poll_done,
> + .drop_message = virtio_drop_message,
> +};
> +
> +static int scmi_vio_probe(struct virtio_device *vdev)
> +{
> + struct device *dev = &vdev->dev;
> + struct scmi_vio_channel *channels;
> + bool have_vq_rx;
> + int vq_cnt;
> + int i;
> + int ret;
> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
> +
> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
> +
> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
> + if (!channels)
> + return -ENOMEM;
> +
> + if (have_vq_rx)
> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
> +
> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
> + scmi_vio_vqueue_names, NULL);
> + if (ret) {
> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
> + return ret;
> + }
> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
> +
> + for (i = 0; i < vq_cnt; i++) {
> + spin_lock_init(&channels[i].lock);
> + spin_lock_init(&channels[i].ready_lock);
> + INIT_LIST_HEAD(&channels[i].free_list);
> + channels[i].vqueue = vqs[i];
> + }
> +
> + vdev->priv = channels;
> +
> + return 0;
> +}
> +
> +static void scmi_vio_remove(struct virtio_device *vdev)
> +{
> + vdev->config->reset(vdev);
> + vdev->config->del_vqs(vdev);
> +}
> +
> +static unsigned int features[] = {
> + VIRTIO_SCMI_F_P2A_CHANNELS,
> +};
> +
> +static const struct virtio_device_id id_table[] = {
> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
> + { 0 }
> +};
> +
> +static struct virtio_driver virtio_scmi_driver = {
> + .driver.name = "scmi-virtio",
> + .driver.owner = THIS_MODULE,
> + .feature_table = features,
> + .feature_table_size = ARRAY_SIZE(features),
> + .id_table = id_table,
> + .probe = scmi_vio_probe,
> + .remove = scmi_vio_remove,
> +};
> +
> +static int __init virtio_scmi_init(void)
> +{
> + return register_virtio_driver(&virtio_scmi_driver);
> +}
> +
> +static void __exit virtio_scmi_exit(void)
> +{
> + unregister_virtio_driver(&virtio_scmi_driver);
> +}
> +
> +const struct scmi_desc scmi_virtio_desc = {
> + .init = virtio_scmi_init,
> + .exit = virtio_scmi_exit,
> + .ops = &scmi_virtio_ops,
> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
> +};
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index f0c35ce8628c..c146fe30e589 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -56,5 +56,6 @@
> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
>
> #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
> new file mode 100644
> index 000000000000..732b01504c35
> --- /dev/null
> +++ b/include/uapi/linux/virtio_scmi.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
> +/*
> + * Copyright (C) 2020 OpenSynergy GmbH
> + */
> +
> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
> +#define _UAPI_LINUX_VIRTIO_SCMI_H
> +
> +#include <linux/virtio_types.h>
> +
> +/* Feature bits */
> +
> +/* Device implements some SCMI notifications, or delayed responses. */
> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
> +
> +/* Device implements any SCMI statistics shared memory region */
> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
> +
> +/* Virtqueues */
> +
> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
> +
> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
> --
> 2.25.1
>
>

2021-06-02 09:45:21

by Peter Hilber

[permalink] [raw]
Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

On 01.06.21 16:53, Vincent Guittot wrote:
> On Tue, 11 May 2021 at 02:28, Peter Hilber <[email protected]> wrote:
>>
>> From: Igor Skalkin <[email protected]>
>>
>> This transport enables accessing an SCMI platform as a virtio device.
>>
>> Implement an SCMI virtio driver according to the virtio SCMI device spec
>> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
>>
>> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
>> at most one Rx channel (virtio eventq, P2A channel).
>>
>> The following feature bit defined in [1] is not implemented:
>> VIRTIO_SCMI_F_SHARED_MEMORY.
>>
>> After the preparatory patches, this implements the virtio transport as
>> paraphrased:
>>
>> Only support a single arm-scmi device (which is consistent with the SCMI
>> spec). scmi-virtio init is called from arm-scmi module init. During the
>> arm-scmi probing, link to the first probed scmi-virtio device. Defer
>> arm-scmi probing if no scmi-virtio device is bound yet.
>>
>> For simplicity, restrict the number of messages which can be pending
>> simultaneously according to the virtqueue capacity. (The virtqueue sizes
>> are negotiated with the virtio device.)
>>
>> As soon as Rx channel message buffers are allocated or have been read
>> out by the arm-scmi driver, feed them to the virtio device.
>>
>> Since some virtio devices may not have the short response time exhibited
>> by SCMI platforms using other transports, set a generous response
>> timeout.
>>
>> Limitations:
>>
>> - Polling is not supported.
>>
>> - The timeout for delayed responses has not been adjusted.
>>
>> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
>> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
>>
>> Signed-off-by: Igor Skalkin <[email protected]>
>> [ Peter: Adapted patch for submission to upstream. ]
>> Co-developed-by: Peter Hilber <[email protected]>
>> Signed-off-by: Peter Hilber <[email protected]>
>> ---
>> MAINTAINERS | 1 +
>> drivers/firmware/Kconfig | 12 +
>> drivers/firmware/arm_scmi/Makefile | 1 +
>> drivers/firmware/arm_scmi/common.h | 3 +
>> drivers/firmware/arm_scmi/driver.c | 3 +
>> drivers/firmware/arm_scmi/virtio.c | 523 +++++++++++++++++++++++++++++
>> include/uapi/linux/virtio_ids.h | 1 +
>> include/uapi/linux/virtio_scmi.h | 25 ++
>> 8 files changed, 569 insertions(+)
>> create mode 100644 drivers/firmware/arm_scmi/virtio.c
>> create mode 100644 include/uapi/linux/virtio_scmi.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index bd7aff0c120f..449c336872f3 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -17705,6 +17705,7 @@ F: drivers/regulator/scmi-regulator.c
>> F: drivers/reset/reset-scmi.c
>> F: include/linux/sc[mp]i_protocol.h
>> F: include/trace/events/scmi.h
>> +F: include/uapi/linux/virtio_scmi.h
>>
>> SYSTEM RESET/SHUTDOWN DRIVERS
>> M: Sebastian Reichel <[email protected]>
>> diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
>> index e8377b12e4d0..7e9eafdd9b63 100644
>> --- a/drivers/firmware/Kconfig
>> +++ b/drivers/firmware/Kconfig
>> @@ -39,6 +39,18 @@ config ARM_SCMI_HAVE_MSG
>> This declares whether a message passing based transport for SCMI is
>> available.
>>
>> +# This config option includes the dependencies of ARM_SCMI_PROTOCOL so that
>> +# this config doesn't show up when SCMI wouldn't be available.
>> +config VIRTIO_SCMI
>> + bool "Virtio transport for SCMI"
>> + select ARM_SCMI_HAVE_MSG
>> + depends on VIRTIO && (ARM || ARM64 || COMPILE_TEST)
>> + help
>> + This enables the virtio based transport for SCMI.
>> +
>> + If you want to use the ARM SCMI protocol between the virtio guest and
>> + a host providing a virtio SCMI device, answer Y.
>> +
>> config ARM_SCMI_POWER_DOMAIN
>> tristate "SCMI power domain driver"
>> depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
>> diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
>> index f6b4acb8abdb..db1787606fb2 100644
>> --- a/drivers/firmware/arm_scmi/Makefile
>> +++ b/drivers/firmware/arm_scmi/Makefile
>> @@ -5,6 +5,7 @@ scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
>> scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
>> scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
>> scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o
>> +scmi-transport-$(CONFIG_VIRTIO_SCMI) += virtio.o
>> scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
>> scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
>> $(scmi-transport-y)
>> diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
>> index 4cb6571c7aaf..bada06cfd33d 100644
>> --- a/drivers/firmware/arm_scmi/common.h
>> +++ b/drivers/firmware/arm_scmi/common.h
>> @@ -349,6 +349,9 @@ extern const struct scmi_desc scmi_mailbox_desc;
>> #ifdef CONFIG_HAVE_ARM_SMCCC
>> extern const struct scmi_desc scmi_smc_desc;
>> #endif
>> +#ifdef CONFIG_VIRTIO_SCMI
>> +extern const struct scmi_desc scmi_virtio_desc;
>> +#endif
>>
>> int scmi_set_transport_info(struct device *dev, void *transport_info);
>> void *scmi_get_transport_info(struct device *dev);
>> diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
>> index e04e7c8e6928..a31187385470 100644
>> --- a/drivers/firmware/arm_scmi/driver.c
>> +++ b/drivers/firmware/arm_scmi/driver.c
>> @@ -1637,6 +1637,9 @@ static const struct of_device_id scmi_of_match[] = {
>> #endif
>> #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
>> { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
>> +#endif
>> +#ifdef CONFIG_VIRTIO_SCMI
>> + { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
>> #endif
>> { /* Sentinel */ },
>> };
>> diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
>> new file mode 100644
>> index 000000000000..20972adf6dc7
>> --- /dev/null
>> +++ b/drivers/firmware/arm_scmi/virtio.c
>> @@ -0,0 +1,523 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Virtio Transport driver for Arm System Control and Management Interface
>> + * (SCMI).
>> + *
>> + * Copyright (C) 2020 OpenSynergy.
>> + */
>> +
>> +/**
>> + * DOC: Theory of Operation
>> + *
>> + * The scmi-virtio transport implements a driver for the virtio SCMI device.
>> + *
>> + * There is one Tx channel (virtio cmdq, A2P channel) and at most one Rx
>> + * channel (virtio eventq, P2A channel). Each channel is implemented through a
>> + * virtqueue. Access to each virtqueue is protected by spinlocks.
>> + */
>> +
>> +#include <linux/errno.h>
>> +#include <linux/of.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/module.h>
>> +#include <linux/slab.h>
>> +#include <linux/virtio.h>
>> +#include <linux/virtio_config.h>
>> +#include <uapi/linux/virtio_ids.h>
>> +#include <uapi/linux/virtio_scmi.h>
>> +
>> +#include "common.h"
>> +
>> +#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
>> +#define VIRTIO_SCMI_MAX_PDU_SIZE \
>> + (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
>> +#define DESCRIPTORS_PER_TX_MSG 2
>> +
>> +/**
>> + * struct scmi_vio_channel - Transport channel information
>> + *
>> + * @lock: Protects access to all members except ready.
>> + * @ready_lock: Protects access to ready. If required, it must be taken before
>> + * lock.
>> + * @vqueue: Associated virtqueue
>> + * @cinfo: SCMI Tx or Rx channel
>> + * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
>> + * @is_rx: Whether channel is an Rx channel
>> + * @ready: Whether transport user is ready to hear about channel
>> + */
>> +struct scmi_vio_channel {
>> + spinlock_t lock;
>> + spinlock_t ready_lock;
>> + struct virtqueue *vqueue;
>> + struct scmi_chan_info *cinfo;
>> + struct list_head free_list;
>> + u8 is_rx;
>> + u8 ready;
>> +};
>> +
>> +/**
>> + * struct scmi_vio_msg - Transport PDU information
>> + *
>> + * @request: SDU used for commands
>> + * @input: SDU used for (delayed) responses and notifications
>> + * @list: List which scmi_vio_msg may be part of
>> + * @rx_len: Input SDU size in bytes, once input has been received
>> + */
>> +struct scmi_vio_msg {
>> + struct scmi_msg_payld *request;
>> + struct scmi_msg_payld *input;
>> + struct list_head list;
>> + unsigned int rx_len;
>> +};
>> +
>> +static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
>> +{
>> + return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
>> +}
>> +
>> +static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
>> + struct scmi_vio_msg *msg)
>> +{
>> + struct scatterlist sg_in;
>> + int rc;
>> + unsigned long flags;
>> +
>> + sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> +
>> + rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC);
>> + if (rc)
>> + dev_err_once(vioch->cinfo->dev,
>> + "%s() failed to add to virtqueue (%d)\n", __func__,
>> + rc);
>> + else
>> + virtqueue_kick(vioch->vqueue);
>> +
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + return rc;
>> +}
>> +
>> +static void scmi_vio_complete_cb(struct virtqueue *vqueue)
>> +{
>> + unsigned long ready_flags;
>> + unsigned long flags;
>> + unsigned int length;
>> + struct scmi_vio_channel *vioch;
>> + struct scmi_vio_msg *msg;
>> + bool cb_enabled = true;
>> +
>> + if (WARN_ON_ONCE(!vqueue->vdev->priv))
>> + return;
>> + vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
>> +
>> + for (;;) {
>> + spin_lock_irqsave(&vioch->ready_lock, ready_flags);
>> +
>> + if (!vioch->ready) {
>> + if (!cb_enabled)
>> + (void)virtqueue_enable_cb(vqueue);
>> + goto unlock_ready_out;
>> + }
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + if (cb_enabled) {
>> + virtqueue_disable_cb(vqueue);
>> + cb_enabled = false;
>> + }
>> + msg = virtqueue_get_buf(vqueue, &length);
>> + if (!msg) {
>> + if (virtqueue_enable_cb(vqueue))
>> + goto unlock_out;
>> + else
>> + cb_enabled = true;
>> + }
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + if (msg) {
>> + msg->rx_len = length;
>> +
>> + /*
>> + * Hold the ready_lock during the callback to avoid
>> + * races when the arm-scmi driver is unbinding while
>> + * the virtio device is not quiesced yet.
>> + */
>> + scmi_rx_callback(vioch->cinfo,
>> + msg_read_header(msg->input), msg);
>> + }
>> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
>> + }
>> +
>> +unlock_out:
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +unlock_ready_out:
>> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
>> +}
>> +
>> +static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
>> +
>> +static vq_callback_t *scmi_vio_complete_callbacks[] = {
>> + scmi_vio_complete_cb,
>> + scmi_vio_complete_cb
>> +};
>> +
>> +static unsigned int virtio_get_max_msg(bool tx,
>> + struct scmi_chan_info *base_cinfo)
>> +{
>> + struct scmi_vio_channel *vioch = base_cinfo->transport_info;
>> + unsigned int ret;
>> +
>> + ret = virtqueue_get_vring_size(vioch->vqueue);
>> +
>> + /* Tx messages need multiple descriptors. */
>> + if (tx)
>> + ret /= DESCRIPTORS_PER_TX_MSG;
>> +
>> + if (ret > MSG_TOKEN_MAX) {
>
> __scmi_xfer_info_init() returns error for info->max_msg >=
> MSG_TOKEN_MAX so it should be (ret >= MSG_TOKEN_MAX
>

I think it would be better to use the > comparison in
__scmi_xfer_info_init(), too. That should work in my understanding. I
think that change was originally in my branch, but I lost it somehow.
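
(For illustration, the capping logic under discussion reduces to the following standalone sketch. The helper name cap_max_msg() and the standalone defines are assumptions for this example; in the kernel, MSG_TOKEN_MAX derives from the 10-bit sequence-number (token) field of the SCMI message header.)

```c
#include <assert.h>

/*
 * Standalone sketch of the bound discussed above: the number of
 * simultaneously pending messages per channel, derived from the
 * virtqueue size, must not exceed the number of distinct tokens.
 */
#define MSG_TOKEN_MAX		(1 << 10)
#define DESCRIPTORS_PER_TX_MSG	2	/* request + response buffer */

/* Hypothetical helper mirroring virtio_get_max_msg() from the patch. */
unsigned int cap_max_msg(unsigned int vring_size, int tx)
{
	unsigned int ret = vring_size;

	/* Each Tx message occupies two descriptors. */
	if (tx)
		ret /= DESCRIPTORS_PER_TX_MSG;

	/* No more messages can be pending than distinct tokens exist. */
	if (ret > MSG_TOKEN_MAX)
		ret = MSG_TOKEN_MAX;

	return ret;
}
```

Whether the cap should be MSG_TOKEN_MAX or MSG_TOKEN_MAX - 1 depends on the comparison used in __scmi_xfer_info_init(), which is exactly the open point in this thread.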

>
>> + dev_info_once(
>> + base_cinfo->dev,
>> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
>> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
>> + ret = MSG_TOKEN_MAX;
>
> should be ret = MSG_TOKEN_MAX-1;
>
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int scmi_vio_match_any_dev(struct device *dev, const void *data)
>> +{
>> + return 1;
>> +}
>> +
>> +static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
>> +
>> +static int virtio_link_supplier(struct device *dev)
>> +{
>> + struct device *vdev = driver_find_device(
>> + &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
>> +
>> + if (!vdev) {
>> + dev_notice_once(
>> + dev,
>> + "Deferring probe after not finding a bound scmi-virtio device\n");
>> + return -EPROBE_DEFER;
>> + }
>> +
>> + /* Add device link for remove order and sysfs link. */
>> + if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
>> + put_device(vdev);
>> + dev_err(dev, "Adding link to supplier virtio device failed\n");
>> + return -ECANCELED;
>> + }
>> +
>> + put_device(vdev);
>> + return scmi_set_transport_info(dev, dev_to_virtio(vdev));
>> +}
>> +
>> +static bool virtio_chan_available(struct device *dev, int idx)
>> +{
>> + struct virtio_device *vdev;
>> +
>> + /* scmi-virtio doesn't support per-protocol channels */
>> + if (is_scmi_protocol_device(dev))
>> + return false;
>> +
>> + vdev = scmi_get_transport_info(dev);
>> + if (!vdev)
>> + return false;
>> +
>> + switch (idx) {
>> + case VIRTIO_SCMI_VQ_TX:
>> + return true;
>> + case VIRTIO_SCMI_VQ_RX:
>> + return scmi_vio_have_vq_rx(vdev);
>> + default:
>> + return false;
>> + }
>> +}
>> +
>> +static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
>> + bool tx)
>> +{
>> + unsigned long flags;
>> + struct virtio_device *vdev;
>> + struct scmi_vio_channel *vioch;
>> + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
>> + int max_msg;
>> + int i;
>> +
>> + if (!virtio_chan_available(dev, index))
>> + return -ENODEV;
>> +
>> + vdev = scmi_get_transport_info(dev);
>> + vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + cinfo->transport_info = vioch;
>> + vioch->cinfo = cinfo;
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + max_msg = virtio_get_max_msg(tx, cinfo);
>> +
>> + for (i = 0; i < max_msg; i++) {
>> + struct scmi_vio_msg *msg;
>> +
>> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
>> + if (!msg)
>> + return -ENOMEM;
>> +
>> + if (tx) {
>> + msg->request = devm_kzalloc(cinfo->dev,
>> + VIRTIO_SCMI_MAX_PDU_SIZE,
>> + GFP_KERNEL);
>> + if (!msg->request)
>> + return -ENOMEM;
>> + }
>> +
>> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
>> + GFP_KERNEL);
>> + if (!msg->input)
>> + return -ENOMEM;
>> +
>> + if (tx) {
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + list_add_tail(&msg->list, &vioch->free_list);
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> + } else {
>> + scmi_vio_feed_vq_rx(vioch, msg);
>> + }
>> + }
>> +
>> + spin_lock_irqsave(&vioch->ready_lock, flags);
>> + vioch->ready = true;
>> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
>> +
>> + return 0;
>> +}
>> +
>> +static int virtio_chan_free(int id, void *p, void *data)
>> +{
>> + unsigned long flags;
>> + struct scmi_chan_info *cinfo = p;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> +
>> + spin_lock_irqsave(&vioch->ready_lock, flags);
>> + vioch->ready = false;
>> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
>> +
>> + scmi_free_channel(cinfo, data, id);
>> + return 0;
>> +}
>> +
>> +static int virtio_send_message(struct scmi_chan_info *cinfo,
>> + struct scmi_xfer *xfer)
>> +{
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> + struct scatterlist sg_out;
>> + struct scatterlist sg_in;
>> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
>> + unsigned long flags;
>> + int rc;
>> + struct scmi_vio_msg *msg;
>> +
>> + /*
>> + * TODO: For now, we don't support polling. But it should not be
>> + * difficult to add support.
>> + */
>> + if (xfer->hdr.poll_completion)
>> + return -EINVAL;
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> +
>> + if (list_empty(&vioch->free_list)) {
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> + return -EBUSY;
>> + }
>> +
>> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
>> + list_del(&msg->list);
>> +
>> + msg_tx_prepare(msg->request, xfer);
>> +
>> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
>> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
>> +
>> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
>> + if (rc) {
>> + list_add(&msg->list, &vioch->free_list);
>> + dev_err_once(vioch->cinfo->dev,
>> + "%s() failed to add to virtqueue (%d)\n", __func__,
>> + rc);
>> + } else {
>> + virtqueue_kick(vioch->vqueue);
>> + }
>> +
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + return rc;
>> +}
>> +
>> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
>> + struct scmi_xfer *xfer, void *msg_handle)
>> +{
>> + struct scmi_vio_msg *msg = msg_handle;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> +
>> + if (!msg) {
>> + dev_dbg_once(&vioch->vqueue->vdev->dev,
>> + "Ignoring %s() call with NULL msg_handle\n",
>> + __func__);
>> + return;
>> + }
>> +
>> + msg_fetch_response(msg->input, msg->rx_len, xfer);
>> +}
>> +
>> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
>> + size_t max_len, struct scmi_xfer *xfer,
>> + void *msg_handle)
>> +{
>> + struct scmi_vio_msg *msg = msg_handle;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> +
>> + if (!msg) {
>> + dev_dbg_once(&vioch->vqueue->vdev->dev,
>> + "Ignoring %s() call with NULL msg_handle\n",
>> + __func__);
>> + return;
>> + }
>> +
>> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
>> +}
>> +
>> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
>> +{
>> +}
>> +
>> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
>> + struct scmi_xfer *xfer)
>> +{
>> + return false;
>> +}
>> +
>> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
>> +{
>> + unsigned long flags;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> + struct scmi_vio_msg *msg = msg_handle;
>> +
>> + if (!msg) {
>> + dev_dbg_once(&vioch->vqueue->vdev->dev,
>> + "Ignoring %s() call with NULL msg_handle\n",
>> + __func__);
>> + return;
>> + }
>> +
>> + if (vioch->is_rx) {
>> + scmi_vio_feed_vq_rx(vioch, msg);
>> + } else {
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + list_add(&msg->list, &vioch->free_list);
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> + }
>> +}
>> +
>> +static const struct scmi_transport_ops scmi_virtio_ops = {
>> + .link_supplier = virtio_link_supplier,
>> + .chan_available = virtio_chan_available,
>> + .chan_setup = virtio_chan_setup,
>> + .chan_free = virtio_chan_free,
>> + .get_max_msg = virtio_get_max_msg,
>> + .send_message = virtio_send_message,
>> + .fetch_response = virtio_fetch_response,
>> + .fetch_notification = virtio_fetch_notification,
>> + .clear_channel = dummy_clear_channel,
>> + .poll_done = dummy_poll_done,
>> + .drop_message = virtio_drop_message,
>> +};
>> +
>> +static int scmi_vio_probe(struct virtio_device *vdev)
>> +{
>> + struct device *dev = &vdev->dev;
>> + struct scmi_vio_channel *channels;
>> + bool have_vq_rx;
>> + int vq_cnt;
>> + int i;
>> + int ret;
>> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
>> +
>> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
>> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
>> +
>> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
>> + if (!channels)
>> + return -ENOMEM;
>> +
>> + if (have_vq_rx)
>> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
>> +
>> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
>> + scmi_vio_vqueue_names, NULL);
>> + if (ret) {
>> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
>> + return ret;
>> + }
>> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
>> +
>> + for (i = 0; i < vq_cnt; i++) {
>> + spin_lock_init(&channels[i].lock);
>> + spin_lock_init(&channels[i].ready_lock);
>> + INIT_LIST_HEAD(&channels[i].free_list);
>> + channels[i].vqueue = vqs[i];
>> + }
>> +
>> + vdev->priv = channels;
>> +
>> + return 0;
>> +}
>> +
>> +static void scmi_vio_remove(struct virtio_device *vdev)
>> +{
>> + vdev->config->reset(vdev);
>> + vdev->config->del_vqs(vdev);
>> +}
>> +
>> +static unsigned int features[] = {
>> + VIRTIO_SCMI_F_P2A_CHANNELS,
>> +};
>> +
>> +static const struct virtio_device_id id_table[] = {
>> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
>> + { 0 }
>> +};
>> +
>> +static struct virtio_driver virtio_scmi_driver = {
>> + .driver.name = "scmi-virtio",
>> + .driver.owner = THIS_MODULE,
>> + .feature_table = features,
>> + .feature_table_size = ARRAY_SIZE(features),
>> + .id_table = id_table,
>> + .probe = scmi_vio_probe,
>> + .remove = scmi_vio_remove,
>> +};
>> +
>> +static int __init virtio_scmi_init(void)
>> +{
>> + return register_virtio_driver(&virtio_scmi_driver);
>> +}
>> +
>> +static void __exit virtio_scmi_exit(void)
>> +{
>> + unregister_virtio_driver(&virtio_scmi_driver);
>> +}
>> +
>> +const struct scmi_desc scmi_virtio_desc = {
>> + .init = virtio_scmi_init,
>> + .exit = virtio_scmi_exit,
>> + .ops = &scmi_virtio_ops,
>> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
>> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
>> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
>> +};
>> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
>> index f0c35ce8628c..c146fe30e589 100644
>> --- a/include/uapi/linux/virtio_ids.h
>> +++ b/include/uapi/linux/virtio_ids.h
>> @@ -56,5 +56,6 @@
>> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
>> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
>> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
>> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
>>
>> #endif /* _LINUX_VIRTIO_IDS_H */
>> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
>> new file mode 100644
>> index 000000000000..732b01504c35
>> --- /dev/null
>> +++ b/include/uapi/linux/virtio_scmi.h
>> @@ -0,0 +1,25 @@
>> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
>> +/*
>> + * Copyright (C) 2020 OpenSynergy GmbH
>> + */
>> +
>> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
>> +#define _UAPI_LINUX_VIRTIO_SCMI_H
>> +
>> +#include <linux/virtio_types.h>
>> +
>> +/* Feature bits */
>> +
>> +/* Device implements some SCMI notifications, or delayed responses. */
>> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
>> +
>> +/* Device implements any SCMI statistics shared memory region */
>> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
>> +
>> +/* Virtqueues */
>> +
>> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
>> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
>> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
>> +
>> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
>> --
>> 2.25.1
>>
>>
>


2021-06-02 09:47:53

by Vincent Guittot

[permalink] [raw]
Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

On Wed, 2 Jun 2021 at 10:25, Peter Hilber <[email protected]> wrote:
>
> On 01.06.21 16:53, Vincent Guittot wrote:
> > On Tue, 11 May 2021 at 02:28, Peter Hilber <[email protected]> wrote:
> >>
> >> From: Igor Skalkin <[email protected]>
> >>
> >> This transport enables accessing an SCMI platform as a virtio device.
> >>
> >> Implement an SCMI virtio driver according to the virtio SCMI device spec
> >> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
> >>
> >> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
> >> at most one Rx channel (virtio eventq, P2A channel).
> >>
> >> The following feature bit defined in [1] is not implemented:
> >> VIRTIO_SCMI_F_SHARED_MEMORY.
> >>
> >> After the preparatory patches, this implements the virtio transport as
> >> paraphrased:
> >>
> >> Only support a single arm-scmi device (which is consistent with the SCMI
> >> spec). scmi-virtio init is called from arm-scmi module init. During the
> >> arm-scmi probing, link to the first probed scmi-virtio device. Defer
> >> arm-scmi probing if no scmi-virtio device is bound yet.
> >>
> >> For simplicity, restrict the number of messages which can be pending
> >> simultaneously according to the virtqueue capacity. (The virtqueue sizes
> >> are negotiated with the virtio device.)
> >>
> >> As soon as Rx channel message buffers are allocated or have been read
> >> out by the arm-scmi driver, feed them to the virtio device.
> >>
> >> Since some virtio devices may not have the short response time exhibited
> >> by SCMI platforms using other transports, set a generous response
> >> timeout.
> >>
> >> Limitations:
> >>
> >> - Polling is not supported.
> >>
> >> - The timeout for delayed responses has not been adjusted.
> >>
> >> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
> >> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
> >>
> >> [...]
> >> + /* Tx messages need multiple descriptors. */
> >> + if (tx)
> >> + ret /= DESCRIPTORS_PER_TX_MSG;
> >> +
> >> + if (ret > MSG_TOKEN_MAX) {
> >
> > __scmi_xfer_info_init() returns error for info->max_msg >=
> > MSG_TOKEN_MAX so it should be (ret >= MSG_TOKEN_MAX
> >
>
> I think it would be better to use the > comparison in
> __scmi_xfer_info_init(), too. That should work in my understanding. I

yes should work as well

> think that change was originally in my branch, but I lost it somehow.
>
> >
> >> + dev_info_once(
> >> + base_cinfo->dev,
> >> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
> >> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
> >> + ret = MSG_TOKEN_MAX;
> >
> > should be ret = MSG_TOKEN_MAX-1;
> >
> >> + }
> >> +
> >> + return ret;
> >> +}
> >> [...]
> >> + vioch->cinfo = cinfo;
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + max_msg = virtio_get_max_msg(tx, cinfo);
> >> +
> >> + for (i = 0; i < max_msg; i++) {
> >> + struct scmi_vio_msg *msg;
> >> +
> >> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
> >> + if (!msg)
> >> + return -ENOMEM;
> >> +
> >> + if (tx) {
> >> + msg->request = devm_kzalloc(cinfo->dev,
> >> + VIRTIO_SCMI_MAX_PDU_SIZE,
> >> + GFP_KERNEL);
> >> + if (!msg->request)
> >> + return -ENOMEM;
> >> + }
> >> +
> >> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
> >> + GFP_KERNEL);
> >> + if (!msg->input)
> >> + return -ENOMEM;
> >> +
> >> + if (tx) {
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + list_add_tail(&msg->list, &vioch->free_list);
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + } else {
> >> + scmi_vio_feed_vq_rx(vioch, msg);
> >> + }
> >> + }
> >> +
> >> + spin_lock_irqsave(&vioch->ready_lock, flags);
> >> + vioch->ready = true;
> >> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int virtio_chan_free(int id, void *p, void *data)
> >> +{
> >> + unsigned long flags;
> >> + struct scmi_chan_info *cinfo = p;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + spin_lock_irqsave(&vioch->ready_lock, flags);
> >> + vioch->ready = false;
> >> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> >> +
> >> + scmi_free_channel(cinfo, data, id);
> >> + return 0;
> >> +}
> >> +
> >> +static int virtio_send_message(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer)
> >> +{
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> + struct scatterlist sg_out;
> >> + struct scatterlist sg_in;
> >> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
> >> + unsigned long flags;
> >> + int rc;
> >> + struct scmi_vio_msg *msg;
> >> +
> >> + /*
> >> + * TODO: For now, we don't support polling. But it should not be
> >> + * difficult to add support.
> >> + */
> >> + if (xfer->hdr.poll_completion)
> >> + return -EINVAL;
> >> +
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> +
> >> + if (list_empty(&vioch->free_list)) {
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + return -EBUSY;
> >> + }
> >> +
> >> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
> >> + list_del(&msg->list);
> >> +
> >> + msg_tx_prepare(msg->request, xfer);
> >> +
> >> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
> >> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
> >> +
> >> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
> >> + if (rc) {
> >> + list_add(&msg->list, &vioch->free_list);
> >> + dev_err_once(vioch->cinfo->dev,
> >> + "%s() failed to add to virtqueue (%d)\n", __func__,
> >> + rc);
> >> + } else {
> >> + virtqueue_kick(vioch->vqueue);
> >> + }
> >> +
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + return rc;
> >> +}
> >> +
> >> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer, void *msg_handle)
> >> +{
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + msg_fetch_response(msg->input, msg->rx_len, xfer);
> >> +}
> >> +
> >> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
> >> + size_t max_len, struct scmi_xfer *xfer,
> >> + void *msg_handle)
> >> +{
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
> >> +}
> >> +
> >> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
> >> +{
> >> +}
> >> +
> >> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer)
> >> +{
> >> + return false;
> >> +}
> >> +
> >> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
> >> +{
> >> + unsigned long flags;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + if (vioch->is_rx) {
> >> + scmi_vio_feed_vq_rx(vioch, msg);
> >> + } else {
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + list_add(&msg->list, &vioch->free_list);
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + }
> >> +}
> >> +
> >> +static const struct scmi_transport_ops scmi_virtio_ops = {
> >> + .link_supplier = virtio_link_supplier,
> >> + .chan_available = virtio_chan_available,
> >> + .chan_setup = virtio_chan_setup,
> >> + .chan_free = virtio_chan_free,
> >> + .get_max_msg = virtio_get_max_msg,
> >> + .send_message = virtio_send_message,
> >> + .fetch_response = virtio_fetch_response,
> >> + .fetch_notification = virtio_fetch_notification,
> >> + .clear_channel = dummy_clear_channel,
> >> + .poll_done = dummy_poll_done,
> >> + .drop_message = virtio_drop_message,
> >> +};
> >> +
> >> +static int scmi_vio_probe(struct virtio_device *vdev)
> >> +{
> >> + struct device *dev = &vdev->dev;
> >> + struct scmi_vio_channel *channels;
> >> + bool have_vq_rx;
> >> + int vq_cnt;
> >> + int i;
> >> + int ret;
> >> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
> >> +
> >> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
> >> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
> >> +
> >> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
> >> + if (!channels)
> >> + return -ENOMEM;
> >> +
> >> + if (have_vq_rx)
> >> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
> >> +
> >> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
> >> + scmi_vio_vqueue_names, NULL);
> >> + if (ret) {
> >> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
> >> + return ret;
> >> + }
> >> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
> >> +
> >> + for (i = 0; i < vq_cnt; i++) {
> >> + spin_lock_init(&channels[i].lock);
> >> + spin_lock_init(&channels[i].ready_lock);
> >> + INIT_LIST_HEAD(&channels[i].free_list);
> >> + channels[i].vqueue = vqs[i];
> >> + }
> >> +
> >> + vdev->priv = channels;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static void scmi_vio_remove(struct virtio_device *vdev)
> >> +{
> >> + vdev->config->reset(vdev);
> >> + vdev->config->del_vqs(vdev);
> >> +}
> >> +
> >> +static unsigned int features[] = {
> >> + VIRTIO_SCMI_F_P2A_CHANNELS,
> >> +};
> >> +
> >> +static const struct virtio_device_id id_table[] = {
> >> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
> >> + { 0 }
> >> +};
> >> +
> >> +static struct virtio_driver virtio_scmi_driver = {
> >> + .driver.name = "scmi-virtio",
> >> + .driver.owner = THIS_MODULE,
> >> + .feature_table = features,
> >> + .feature_table_size = ARRAY_SIZE(features),
> >> + .id_table = id_table,
> >> + .probe = scmi_vio_probe,
> >> + .remove = scmi_vio_remove,
> >> +};
> >> +
> >> +static int __init virtio_scmi_init(void)
> >> +{
> >> + return register_virtio_driver(&virtio_scmi_driver);
> >> +}
> >> +
> >> +static void __exit virtio_scmi_exit(void)
> >> +{
> >> + unregister_virtio_driver(&virtio_scmi_driver);
> >> +}
> >> +
> >> +const struct scmi_desc scmi_virtio_desc = {
> >> + .init = virtio_scmi_init,
> >> + .exit = virtio_scmi_exit,
> >> + .ops = &scmi_virtio_ops,
> >> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
> >> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
> >> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
> >> +};
> >> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> >> index f0c35ce8628c..c146fe30e589 100644
> >> --- a/include/uapi/linux/virtio_ids.h
> >> +++ b/include/uapi/linux/virtio_ids.h
> >> @@ -56,5 +56,6 @@
> >> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> >> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
> >> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
> >> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
> >>
> >> #endif /* _LINUX_VIRTIO_IDS_H */
> >> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
> >> new file mode 100644
> >> index 000000000000..732b01504c35
> >> --- /dev/null
> >> +++ b/include/uapi/linux/virtio_scmi.h
> >> @@ -0,0 +1,25 @@
> >> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
> >> +/*
> >> + * Copyright (C) 2020 OpenSynergy GmbH
> >> + */
> >> +
> >> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
> >> +#define _UAPI_LINUX_VIRTIO_SCMI_H
> >> +
> >> +#include <linux/virtio_types.h>
> >> +
> >> +/* Feature bits */
> >> +
> >> +/* Device implements some SCMI notifications, or delayed responses. */
> >> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
> >> +
> >> +/* Device implements any SCMI statistics shared memory region */
> >> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
> >> +
> >> +/* Virtqueues */
> >> +
> >> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
> >> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
> >> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
> >> +
> >> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
> >> --
> >> 2.25.1
> >>
> >>
> >
>
>

2021-06-02 11:52:33

by Cristian Marussi

[permalink] [raw]
Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

Hi,

On Wed, Jun 02, 2021 at 10:25:36AM +0200, Peter Hilber wrote:
> On 01.06.21 16:53, Vincent Guittot wrote:
> > On Tue, 11 May 2021 at 02:28, Peter Hilber <[email protected]> wrote:
> >>
> >> From: Igor Skalkin <[email protected]>
> >>
> >> This transport enables accessing an SCMI platform as a virtio device.
> >>
> >> Implement an SCMI virtio driver according to the virtio SCMI device spec
> >> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
> >>
> >> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
> >> at most one Rx channel (virtio eventq, P2A channel).
> >>
> >> The following feature bit defined in [1] is not implemented:
> >> VIRTIO_SCMI_F_SHARED_MEMORY.
> >>
> >> After the preparatory patches, this implements the virtio transport as
> >> paraphrased:
> >>
> >> Only support a single arm-scmi device (which is consistent with the SCMI
> >> spec). scmi-virtio init is called from arm-scmi module init. During the
> >> arm-scmi probing, link to the first probed scmi-virtio device. Defer
> >> arm-scmi probing if no scmi-virtio device is bound yet.
> >>
> >> For simplicity, restrict the number of messages which can be pending
> >> simultaneously according to the virtqueue capacity. (The virtqueue sizes
> >> are negotiated with the virtio device.)
> >>
> >> As soon as Rx channel message buffers are allocated or have been read
> >> out by the arm-scmi driver, feed them to the virtio device.
> >>
> >> Since some virtio devices may not have the short response time exhibited
> >> by SCMI platforms using other transports, set a generous response
> >> timeout.
> >>
> >> Limitations:
> >>
> >> - Polling is not supported.
> >>
> >> - The timeout for delayed responses has not been adjusted.
> >>
> >> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
> >> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
> >>
> >> Signed-off-by: Igor Skalkin <[email protected]>
> >> [ Peter: Adapted patch for submission to upstream. ]
> >> Co-developed-by: Peter Hilber <[email protected]>
> >> Signed-off-by: Peter Hilber <[email protected]>

[snip]

> >> +
> >> +static unsigned int virtio_get_max_msg(bool tx,
> >> + struct scmi_chan_info *base_cinfo)
> >> +{
> >> + struct scmi_vio_channel *vioch = base_cinfo->transport_info;
> >> + unsigned int ret;
> >> +
> >> + ret = virtqueue_get_vring_size(vioch->vqueue);
> >> +
> >> + /* Tx messages need multiple descriptors. */
> >> + if (tx)
> >> + ret /= DESCRIPTORS_PER_TX_MSG;
> >> +
> >> + if (ret > MSG_TOKEN_MAX) {
> >
> > __scmi_xfer_info_init() returns an error for info->max_msg >=
> > MSG_TOKEN_MAX, so it should be (ret >= MSG_TOKEN_MAX)
> >
>
> I think it would be better to use the > comparison in
> __scmi_xfer_info_init(), too. That should work in my understanding. I
> think that change was originally in my branch, but I lost it somehow.
>

I'll include these proposed fixes in my temp rework for V4.

Thanks,
Cristian

> >
> >> + dev_info_once(
> >> + base_cinfo->dev,
> >> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
> >> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
> >> + ret = MSG_TOKEN_MAX;
> >
> > should be ret = MSG_TOKEN_MAX-1;
> >
> >> + }
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +static int scmi_vio_match_any_dev(struct device *dev, const void *data)
> >> +{
> >> + return 1;
> >> +}
> >> +
> >> +static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
> >> +
> >> +static int virtio_link_supplier(struct device *dev)
> >> +{
> >> + struct device *vdev = driver_find_device(
> >> + &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
> >> +
> >> + if (!vdev) {
> >> + dev_notice_once(
> >> + dev,
> >> + "Deferring probe after not finding a bound scmi-virtio device\n");
> >> + return -EPROBE_DEFER;
> >> + }
> >> +
> >> + /* Add device link for remove order and sysfs link. */
> >> + if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
> >> + put_device(vdev);
> >> + dev_err(dev, "Adding link to supplier virtio device failed\n");
> >> + return -ECANCELED;
> >> + }
> >> +
> >> + put_device(vdev);
> >> + return scmi_set_transport_info(dev, dev_to_virtio(vdev));
> >> +}
> >> +
> >> +static bool virtio_chan_available(struct device *dev, int idx)
> >> +{
> >> + struct virtio_device *vdev;
> >> +
> >> + /* scmi-virtio doesn't support per-protocol channels */
> >> + if (is_scmi_protocol_device(dev))
> >> + return false;
> >> +
> >> + vdev = scmi_get_transport_info(dev);
> >> + if (!vdev)
> >> + return false;
> >> +
> >> + switch (idx) {
> >> + case VIRTIO_SCMI_VQ_TX:
> >> + return true;
> >> + case VIRTIO_SCMI_VQ_RX:
> >> + return scmi_vio_have_vq_rx(vdev);
> >> + default:
> >> + return false;
> >> + }
> >> +}
> >> +
> >> +static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
> >> + bool tx)
> >> +{
> >> + unsigned long flags;
> >> + struct virtio_device *vdev;
> >> + struct scmi_vio_channel *vioch;
> >> + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
> >> + int max_msg;
> >> + int i;
> >> +
> >> + if (!virtio_chan_available(dev, index))
> >> + return -ENODEV;
> >> +
> >> + vdev = scmi_get_transport_info(dev);
> >> + vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
> >> +
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + cinfo->transport_info = vioch;
> >> + vioch->cinfo = cinfo;
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + max_msg = virtio_get_max_msg(tx, cinfo);
> >> +
> >> + for (i = 0; i < max_msg; i++) {
> >> + struct scmi_vio_msg *msg;
> >> +
> >> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
> >> + if (!msg)
> >> + return -ENOMEM;
> >> +
> >> + if (tx) {
> >> + msg->request = devm_kzalloc(cinfo->dev,
> >> + VIRTIO_SCMI_MAX_PDU_SIZE,
> >> + GFP_KERNEL);
> >> + if (!msg->request)
> >> + return -ENOMEM;
> >> + }
> >> +
> >> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
> >> + GFP_KERNEL);
> >> + if (!msg->input)
> >> + return -ENOMEM;
> >> +
> >> + if (tx) {
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + list_add_tail(&msg->list, &vioch->free_list);
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + } else {
> >> + scmi_vio_feed_vq_rx(vioch, msg);
> >> + }
> >> + }
> >> +
> >> + spin_lock_irqsave(&vioch->ready_lock, flags);
> >> + vioch->ready = true;
> >> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int virtio_chan_free(int id, void *p, void *data)
> >> +{
> >> + unsigned long flags;
> >> + struct scmi_chan_info *cinfo = p;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + spin_lock_irqsave(&vioch->ready_lock, flags);
> >> + vioch->ready = false;
> >> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> >> +
> >> + scmi_free_channel(cinfo, data, id);
> >> + return 0;
> >> +}
> >> +
> >> +static int virtio_send_message(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer)
> >> +{
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> + struct scatterlist sg_out;
> >> + struct scatterlist sg_in;
> >> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
> >> + unsigned long flags;
> >> + int rc;
> >> + struct scmi_vio_msg *msg;
> >> +
> >> + /*
> >> + * TODO: For now, we don't support polling. But it should not be
> >> + * difficult to add support.
> >> + */
> >> + if (xfer->hdr.poll_completion)
> >> + return -EINVAL;
> >> +
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> +
> >> + if (list_empty(&vioch->free_list)) {
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + return -EBUSY;
> >> + }
> >> +
> >> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
> >> + list_del(&msg->list);
> >> +
> >> + msg_tx_prepare(msg->request, xfer);
> >> +
> >> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
> >> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
> >> +
> >> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
> >> + if (rc) {
> >> + list_add(&msg->list, &vioch->free_list);
> >> + dev_err_once(vioch->cinfo->dev,
> >> + "%s() failed to add to virtqueue (%d)\n", __func__,
> >> + rc);
> >> + } else {
> >> + virtqueue_kick(vioch->vqueue);
> >> + }
> >> +
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + return rc;
> >> +}
> >> +
> >> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer, void *msg_handle)
> >> +{
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + msg_fetch_response(msg->input, msg->rx_len, xfer);
> >> +}
> >> +
> >> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
> >> + size_t max_len, struct scmi_xfer *xfer,
> >> + void *msg_handle)
> >> +{
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
> >> +}
> >> +
> >> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
> >> +{
> >> +}
> >> +
> >> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer)
> >> +{
> >> + return false;
> >> +}
> >> +
> >> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
> >> +{
> >> + unsigned long flags;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + if (vioch->is_rx) {
> >> + scmi_vio_feed_vq_rx(vioch, msg);
> >> + } else {
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + list_add(&msg->list, &vioch->free_list);
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + }
> >> +}
> >> +
> >> +static const struct scmi_transport_ops scmi_virtio_ops = {
> >> + .link_supplier = virtio_link_supplier,
> >> + .chan_available = virtio_chan_available,
> >> + .chan_setup = virtio_chan_setup,
> >> + .chan_free = virtio_chan_free,
> >> + .get_max_msg = virtio_get_max_msg,
> >> + .send_message = virtio_send_message,
> >> + .fetch_response = virtio_fetch_response,
> >> + .fetch_notification = virtio_fetch_notification,
> >> + .clear_channel = dummy_clear_channel,
> >> + .poll_done = dummy_poll_done,
> >> + .drop_message = virtio_drop_message,
> >> +};
> >> +
> >> +static int scmi_vio_probe(struct virtio_device *vdev)
> >> +{
> >> + struct device *dev = &vdev->dev;
> >> + struct scmi_vio_channel *channels;
> >> + bool have_vq_rx;
> >> + int vq_cnt;
> >> + int i;
> >> + int ret;
> >> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
> >> +
> >> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
> >> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
> >> +
> >> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
> >> + if (!channels)
> >> + return -ENOMEM;
> >> +
> >> + if (have_vq_rx)
> >> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
> >> +
> >> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
> >> + scmi_vio_vqueue_names, NULL);
> >> + if (ret) {
> >> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
> >> + return ret;
> >> + }
> >> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
> >> +
> >> + for (i = 0; i < vq_cnt; i++) {
> >> + spin_lock_init(&channels[i].lock);
> >> + spin_lock_init(&channels[i].ready_lock);
> >> + INIT_LIST_HEAD(&channels[i].free_list);
> >> + channels[i].vqueue = vqs[i];
> >> + }
> >> +
> >> + vdev->priv = channels;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static void scmi_vio_remove(struct virtio_device *vdev)
> >> +{
> >> + vdev->config->reset(vdev);
> >> + vdev->config->del_vqs(vdev);
> >> +}
> >> +
> >> +static unsigned int features[] = {
> >> + VIRTIO_SCMI_F_P2A_CHANNELS,
> >> +};
> >> +
> >> +static const struct virtio_device_id id_table[] = {
> >> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
> >> + { 0 }
> >> +};
> >> +
> >> +static struct virtio_driver virtio_scmi_driver = {
> >> + .driver.name = "scmi-virtio",
> >> + .driver.owner = THIS_MODULE,
> >> + .feature_table = features,
> >> + .feature_table_size = ARRAY_SIZE(features),
> >> + .id_table = id_table,
> >> + .probe = scmi_vio_probe,
> >> + .remove = scmi_vio_remove,
> >> +};
> >> +
> >> +static int __init virtio_scmi_init(void)
> >> +{
> >> + return register_virtio_driver(&virtio_scmi_driver);
> >> +}
> >> +
> >> +static void __exit virtio_scmi_exit(void)
> >> +{
> >> + unregister_virtio_driver(&virtio_scmi_driver);
> >> +}
> >> +
> >> +const struct scmi_desc scmi_virtio_desc = {
> >> + .init = virtio_scmi_init,
> >> + .exit = virtio_scmi_exit,
> >> + .ops = &scmi_virtio_ops,
> >> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
> >> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
> >> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
> >> +};
> >> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> >> index f0c35ce8628c..c146fe30e589 100644
> >> --- a/include/uapi/linux/virtio_ids.h
> >> +++ b/include/uapi/linux/virtio_ids.h
> >> @@ -56,5 +56,6 @@
> >> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> >> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
> >> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
> >> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
> >>
> >> #endif /* _LINUX_VIRTIO_IDS_H */
> >> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
> >> new file mode 100644
> >> index 000000000000..732b01504c35
> >> --- /dev/null
> >> +++ b/include/uapi/linux/virtio_scmi.h
> >> @@ -0,0 +1,25 @@
> >> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
> >> +/*
> >> + * Copyright (C) 2020 OpenSynergy GmbH
> >> + */
> >> +
> >> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
> >> +#define _UAPI_LINUX_VIRTIO_SCMI_H
> >> +
> >> +#include <linux/virtio_types.h>
> >> +
> >> +/* Feature bits */
> >> +
> >> +/* Device implements some SCMI notifications, or delayed responses. */
> >> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
> >> +
> >> +/* Device implements any SCMI statistics shared memory region */
> >> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
> >> +
> >> +/* Virtqueues */
> >> +
> >> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
> >> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
> >> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
> >> +
> >> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
> >> --
> >> 2.25.1
> >>
> >>
> >
>
>

2021-06-04 09:21:01

by Peter Hilber

[permalink] [raw]
Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

On 26.05.21 16:40, Cristian Marussi wrote:
> Hi Peter,
>
> as anticipated I'm adding some new SCMI core mechanisms that should help
> simplifying the virtio-scmi series.
>
> Such core work is still in progress (and to be properly reviewed) and it
> is at:
>
> https://lore.kernel.org/linux-arm-kernel/[email protected]/
>
> but in the meantime I have an initial but working (for me at least :D)
> rework of your V3 virtio-scmi series; the rework is still in progress and to be
> cleaned up (nor have I addressed the probing sequence or polling mode), and I am
> anyway holding it for now since Rob asked about DT txt-to-yaml conversion
> too, BUT if you can or want to have a look in the meantime, you can find the
> whole V4 transitional series rebased on top of my core changes with some
> changes on top at:
>
> https://gitlab.arm.com/linux-arm/linux-cm/-/commits/scmi_virtio_trans_V4_rework/
>
> where:
>
> - I dropped V3 patches 7,8,12
> - the virtio changes I applied to make use of my core changes are all
> embedded in the last patch (just for now):
>
> [RFC] firmware: arm_scmi: make virtio-scmi use delegated xfers
>
> Definitely not the final version, so you may want to just wait for a
> real V4, but just to give an idea of the direction I'm trying to follow
> if you want.
>
> Thanks,
> Cristian
>

Hi Cristian,

I had a look at the concepts in the linked branch. The following race
condition seems to not be addressed ATM:

- concurrent not delayed and delayed response (or inverted order)

The virtio device will send the normal and the delayed response through
different virtqueues. In my understanding, sending them in a particular
order does not guarantee that the receiver can restore that order. I
think the virtio transport cannot handle this race condition in general
without interpreting message headers (which it shouldn't).

Also, it might be documented that after a response timeout, do_xfer
should not be called on the xfer any more (since a belated response to
the timed out message might corrupt the new message).

Best regards,

Peter

> On Tue, May 11, 2021 at 02:20:39AM +0200, Peter Hilber wrote:
>> From: Igor Skalkin <[email protected]>
>>
>> This transport enables accessing an SCMI platform as a virtio device.
>>
>> Implement an SCMI virtio driver according to the virtio SCMI device spec
>> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
>>
>> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
>> at most one Rx channel (virtio eventq, P2A channel).
>>
>> The following feature bit defined in [1] is not implemented:
>> VIRTIO_SCMI_F_SHARED_MEMORY.
>>
>> After the preparatory patches, this implements the virtio transport as
>> paraphrased:
>>
>> Only support a single arm-scmi device (which is consistent with the SCMI
>> spec). scmi-virtio init is called from arm-scmi module init. During the
>> arm-scmi probing, link to the first probed scmi-virtio device. Defer
>> arm-scmi probing if no scmi-virtio device is bound yet.
>>
>> For simplicity, restrict the number of messages which can be pending
>> simultaneously according to the virtqueue capacity. (The virtqueue sizes
>> are negotiated with the virtio device.)
>>
>> As soon as Rx channel message buffers are allocated or have been read
>> out by the arm-scmi driver, feed them to the virtio device.
>>
>> Since some virtio devices may not have the short response time exhibited
>> by SCMI platforms using other transports, set a generous response
>> timeout.
>>
>> Limitations:
>>
>> - Polling is not supported.
>>
>> - The timeout for delayed responses has not been adjusted.
>>
>> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/virtio-scmi.tex
>> [2] https://www.oasis-open.org/committees/ballot.php?id=3496
>>
>> Signed-off-by: Igor Skalkin <[email protected]>
>> [ Peter: Adapted patch for submission to upstream. ]
>> Co-developed-by: Peter Hilber <[email protected]>
>> Signed-off-by: Peter Hilber <[email protected]>
>> ---
>> MAINTAINERS | 1 +
>> drivers/firmware/Kconfig | 12 +
>> drivers/firmware/arm_scmi/Makefile | 1 +
>> drivers/firmware/arm_scmi/common.h | 3 +
>> drivers/firmware/arm_scmi/driver.c | 3 +
>> drivers/firmware/arm_scmi/virtio.c | 523 +++++++++++++++++++++++++++++
>> include/uapi/linux/virtio_ids.h | 1 +
>> include/uapi/linux/virtio_scmi.h | 25 ++
>> 8 files changed, 569 insertions(+)
>> create mode 100644 drivers/firmware/arm_scmi/virtio.c
>> create mode 100644 include/uapi/linux/virtio_scmi.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index bd7aff0c120f..449c336872f3 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -17705,6 +17705,7 @@ F: drivers/regulator/scmi-regulator.c
>> F: drivers/reset/reset-scmi.c
>> F: include/linux/sc[mp]i_protocol.h
>> F: include/trace/events/scmi.h
>> +F: include/uapi/linux/virtio_scmi.h
>>
>> SYSTEM RESET/SHUTDOWN DRIVERS
>> M: Sebastian Reichel <[email protected]>
>> diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
>> index e8377b12e4d0..7e9eafdd9b63 100644
>> --- a/drivers/firmware/Kconfig
>> +++ b/drivers/firmware/Kconfig
>> @@ -39,6 +39,18 @@ config ARM_SCMI_HAVE_MSG
>> This declares whether a message passing based transport for SCMI is
>> available.
>>
>> +# This config option includes the dependencies of ARM_SCMI_PROTOCOL so that
>> +# this config doesn't show up when SCMI wouldn't be available.
>> +config VIRTIO_SCMI
>> + bool "Virtio transport for SCMI"
>> + select ARM_SCMI_HAVE_MSG
>> + depends on VIRTIO && (ARM || ARM64 || COMPILE_TEST)
>> + help
>> + This enables the virtio based transport for SCMI.
>> +
>> + If you want to use the ARM SCMI protocol between the virtio guest and
>> + a host providing a virtio SCMI device, answer Y.
>> +
>> config ARM_SCMI_POWER_DOMAIN
>> tristate "SCMI power domain driver"
>> depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
>> diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
>> index f6b4acb8abdb..db1787606fb2 100644
>> --- a/drivers/firmware/arm_scmi/Makefile
>> +++ b/drivers/firmware/arm_scmi/Makefile
>> @@ -5,6 +5,7 @@ scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
>> scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
>> scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
>> scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o
>> +scmi-transport-$(CONFIG_VIRTIO_SCMI) += virtio.o
>> scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
>> scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
>> $(scmi-transport-y)
>> diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
>> index 4cb6571c7aaf..bada06cfd33d 100644
>> --- a/drivers/firmware/arm_scmi/common.h
>> +++ b/drivers/firmware/arm_scmi/common.h
>> @@ -349,6 +349,9 @@ extern const struct scmi_desc scmi_mailbox_desc;
>> #ifdef CONFIG_HAVE_ARM_SMCCC
>> extern const struct scmi_desc scmi_smc_desc;
>> #endif
>> +#ifdef CONFIG_VIRTIO_SCMI
>> +extern const struct scmi_desc scmi_virtio_desc;
>> +#endif
>>
>> int scmi_set_transport_info(struct device *dev, void *transport_info);
>> void *scmi_get_transport_info(struct device *dev);
>> diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
>> index e04e7c8e6928..a31187385470 100644
>> --- a/drivers/firmware/arm_scmi/driver.c
>> +++ b/drivers/firmware/arm_scmi/driver.c
>> @@ -1637,6 +1637,9 @@ static const struct of_device_id scmi_of_match[] = {
>> #endif
>> #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
>> { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
>> +#endif
>> +#ifdef CONFIG_VIRTIO_SCMI
>> + { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
>> #endif
>> { /* Sentinel */ },
>> };
>> diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
>> new file mode 100644
>> index 000000000000..20972adf6dc7
>> --- /dev/null
>> +++ b/drivers/firmware/arm_scmi/virtio.c
>> @@ -0,0 +1,523 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Virtio Transport driver for Arm System Control and Management Interface
>> + * (SCMI).
>> + *
>> + * Copyright (C) 2020 OpenSynergy.
>> + */
>> +
>> +/**
>> + * DOC: Theory of Operation
>> + *
>> + * The scmi-virtio transport implements a driver for the virtio SCMI device.
>> + *
>> + * There is one Tx channel (virtio cmdq, A2P channel) and at most one Rx
>> + * channel (virtio eventq, P2A channel). Each channel is implemented through a
>> + * virtqueue. Access to each virtqueue is protected by spinlocks.
>> + */
>> +
>> +#include <linux/errno.h>
>> +#include <linux/of.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/module.h>
>> +#include <linux/slab.h>
>> +#include <linux/virtio.h>
>> +#include <linux/virtio_config.h>
>> +#include <uapi/linux/virtio_ids.h>
>> +#include <uapi/linux/virtio_scmi.h>
>> +
>> +#include "common.h"
>> +
>> +#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
>> +#define VIRTIO_SCMI_MAX_PDU_SIZE \
>> + (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
>> +#define DESCRIPTORS_PER_TX_MSG 2
>> +
>> +/**
>> + * struct scmi_vio_channel - Transport channel information
>> + *
>> + * @lock: Protects access to all members except ready.
>> + * @ready_lock: Protects access to ready. If required, it must be taken before
>> + * lock.
>> + * @vqueue: Associated virtqueue
>> + * @cinfo: SCMI Tx or Rx channel
>> + * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only
>> + * @is_rx: Whether channel is an Rx channel
>> + * @ready: Whether transport user is ready to hear about channel
>> + */
>> +struct scmi_vio_channel {
>> + spinlock_t lock;
>> + spinlock_t ready_lock;
>> + struct virtqueue *vqueue;
>> + struct scmi_chan_info *cinfo;
>> + struct list_head free_list;
>> + u8 is_rx;
>> + u8 ready;
>> +};
>> +
>> +/**
>> + * struct scmi_vio_msg - Transport PDU information
>> + *
>> + * @request: SDU used for commands
>> + * @input: SDU used for (delayed) responses and notifications
>> + * @list: List which scmi_vio_msg may be part of
>> + * @rx_len: Input SDU size in bytes, once input has been received
>> + */
>> +struct scmi_vio_msg {
>> + struct scmi_msg_payld *request;
>> + struct scmi_msg_payld *input;
>> + struct list_head list;
>> + unsigned int rx_len;
>> +};
>> +
>> +static bool scmi_vio_have_vq_rx(struct virtio_device *vdev)
>> +{
>> + return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS);
>> +}
>> +
>> +static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
>> + struct scmi_vio_msg *msg)
>> +{
>> + struct scatterlist sg_in;
>> + int rc;
>> + unsigned long flags;
>> +
>> + sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> +
>> + rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC);
>> + if (rc)
>> + dev_err_once(vioch->cinfo->dev,
>> + "%s() failed to add to virtqueue (%d)\n", __func__,
>> + rc);
>> + else
>> + virtqueue_kick(vioch->vqueue);
>> +
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + return rc;
>> +}
>> +
>> +static void scmi_vio_complete_cb(struct virtqueue *vqueue)
>> +{
>> + unsigned long ready_flags;
>> + unsigned long flags;
>> + unsigned int length;
>> + struct scmi_vio_channel *vioch;
>> + struct scmi_vio_msg *msg;
>> + bool cb_enabled = true;
>> +
>> + if (WARN_ON_ONCE(!vqueue->vdev->priv))
>> + return;
>> + vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index];
>> +
>> + for (;;) {
>> + spin_lock_irqsave(&vioch->ready_lock, ready_flags);
>> +
>> + if (!vioch->ready) {
>> + if (!cb_enabled)
>> + (void)virtqueue_enable_cb(vqueue);
>> + goto unlock_ready_out;
>> + }
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + if (cb_enabled) {
>> + virtqueue_disable_cb(vqueue);
>> + cb_enabled = false;
>> + }
>> + msg = virtqueue_get_buf(vqueue, &length);
>> + if (!msg) {
>> + if (virtqueue_enable_cb(vqueue))
>> + goto unlock_out;
>> + else
>> + cb_enabled = true;
>> + }
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + if (msg) {
>> + msg->rx_len = length;
>> +
>> + /*
>> + * Hold the ready_lock during the callback to avoid
>> + * races when the arm-scmi driver is unbinding while
>> + * the virtio device is not quiesced yet.
>> + */
>> + scmi_rx_callback(vioch->cinfo,
>> + msg_read_header(msg->input), msg);
>> + }
>> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
>> + }
>> +
>> +unlock_out:
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +unlock_ready_out:
>> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
>> +}
>> +
>> +static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
>> +
>> +static vq_callback_t *scmi_vio_complete_callbacks[] = {
>> + scmi_vio_complete_cb,
>> + scmi_vio_complete_cb
>> +};
>> +
>> +static unsigned int virtio_get_max_msg(bool tx,
>> + struct scmi_chan_info *base_cinfo)
>> +{
>> + struct scmi_vio_channel *vioch = base_cinfo->transport_info;
>> + unsigned int ret;
>> +
>> + ret = virtqueue_get_vring_size(vioch->vqueue);
>> +
>> + /* Tx messages need multiple descriptors. */
>> + if (tx)
>> + ret /= DESCRIPTORS_PER_TX_MSG;
>> +
>> + if (ret > MSG_TOKEN_MAX) {
>> + dev_info_once(
>> + base_cinfo->dev,
>> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
>> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
>> + ret = MSG_TOKEN_MAX;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int scmi_vio_match_any_dev(struct device *dev, const void *data)
>> +{
>> + return 1;
>> +}
>> +
>> +static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
>> +
>> +static int virtio_link_supplier(struct device *dev)
>> +{
>> + struct device *vdev = driver_find_device(
>> + &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
>> +
>> + if (!vdev) {
>> + dev_notice_once(
>> + dev,
>> + "Deferring probe after not finding a bound scmi-virtio device\n");
>> + return -EPROBE_DEFER;
>> + }
>> +
>> + /* Add device link for remove order and sysfs link. */
>> + if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
>> + put_device(vdev);
>> + dev_err(dev, "Adding link to supplier virtio device failed\n");
>> + return -ECANCELED;
>> + }
>> +
>> + put_device(vdev);
>> + return scmi_set_transport_info(dev, dev_to_virtio(vdev));
>> +}
>> +
>> +static bool virtio_chan_available(struct device *dev, int idx)
>> +{
>> + struct virtio_device *vdev;
>> +
>> + /* scmi-virtio doesn't support per-protocol channels */
>> + if (is_scmi_protocol_device(dev))
>> + return false;
>> +
>> + vdev = scmi_get_transport_info(dev);
>> + if (!vdev)
>> + return false;
>> +
>> + switch (idx) {
>> + case VIRTIO_SCMI_VQ_TX:
>> + return true;
>> + case VIRTIO_SCMI_VQ_RX:
>> + return scmi_vio_have_vq_rx(vdev);
>> + default:
>> + return false;
>> + }
>> +}
>> +
>> +static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
>> + bool tx)
>> +{
>> + unsigned long flags;
>> + struct virtio_device *vdev;
>> + struct scmi_vio_channel *vioch;
>> + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
>> + int max_msg;
>> + int i;
>> +
>> + if (!virtio_chan_available(dev, index))
>> + return -ENODEV;
>> +
>> + vdev = scmi_get_transport_info(dev);
>> + vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + cinfo->transport_info = vioch;
>> + vioch->cinfo = cinfo;
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + max_msg = virtio_get_max_msg(tx, cinfo);
>> +
>> + for (i = 0; i < max_msg; i++) {
>> + struct scmi_vio_msg *msg;
>> +
>> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
>> + if (!msg)
>> + return -ENOMEM;
>> +
>> + if (tx) {
>> + msg->request = devm_kzalloc(cinfo->dev,
>> + VIRTIO_SCMI_MAX_PDU_SIZE,
>> + GFP_KERNEL);
>> + if (!msg->request)
>> + return -ENOMEM;
>> + }
>> +
>> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
>> + GFP_KERNEL);
>> + if (!msg->input)
>> + return -ENOMEM;
>> +
>> + if (tx) {
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + list_add_tail(&msg->list, &vioch->free_list);
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> + } else {
>> + scmi_vio_feed_vq_rx(vioch, msg);
>> + }
>> + }
>> +
>> + spin_lock_irqsave(&vioch->ready_lock, flags);
>> + vioch->ready = true;
>> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
>> +
>> + return 0;
>> +}
>> +
>> +static int virtio_chan_free(int id, void *p, void *data)
>> +{
>> + unsigned long flags;
>> + struct scmi_chan_info *cinfo = p;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> +
>> + spin_lock_irqsave(&vioch->ready_lock, flags);
>> + vioch->ready = false;
>> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
>> +
>> + scmi_free_channel(cinfo, data, id);
>> + return 0;
>> +}
>> +
>> +static int virtio_send_message(struct scmi_chan_info *cinfo,
>> + struct scmi_xfer *xfer)
>> +{
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> + struct scatterlist sg_out;
>> + struct scatterlist sg_in;
>> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
>> + unsigned long flags;
>> + int rc;
>> + struct scmi_vio_msg *msg;
>> +
>> + /*
>> + * TODO: For now, we don't support polling. But it should not be
>> + * difficult to add support.
>> + */
>> + if (xfer->hdr.poll_completion)
>> + return -EINVAL;
>> +
>> + spin_lock_irqsave(&vioch->lock, flags);
>> +
>> + if (list_empty(&vioch->free_list)) {
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> + return -EBUSY;
>> + }
>> +
>> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
>> + list_del(&msg->list);
>> +
>> + msg_tx_prepare(msg->request, xfer);
>> +
>> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
>> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
>> +
>> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
>> + if (rc) {
>> + list_add(&msg->list, &vioch->free_list);
>> + dev_err_once(vioch->cinfo->dev,
>> + "%s() failed to add to virtqueue (%d)\n", __func__,
>> + rc);
>> + } else {
>> + virtqueue_kick(vioch->vqueue);
>> + }
>> +
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> +
>> + return rc;
>> +}
>> +
>> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
>> + struct scmi_xfer *xfer, void *msg_handle)
>> +{
>> + struct scmi_vio_msg *msg = msg_handle;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> +
>> + if (!msg) {
>> + dev_dbg_once(&vioch->vqueue->vdev->dev,
>> + "Ignoring %s() call with NULL msg_handle\n",
>> + __func__);
>> + return;
>> + }
>> +
>> + msg_fetch_response(msg->input, msg->rx_len, xfer);
>> +}
>> +
>> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
>> + size_t max_len, struct scmi_xfer *xfer,
>> + void *msg_handle)
>> +{
>> + struct scmi_vio_msg *msg = msg_handle;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> +
>> + if (!msg) {
>> + dev_dbg_once(&vioch->vqueue->vdev->dev,
>> + "Ignoring %s() call with NULL msg_handle\n",
>> + __func__);
>> + return;
>> + }
>> +
>> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
>> +}
>> +
>> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
>> +{
>> +}
>> +
>> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
>> + struct scmi_xfer *xfer)
>> +{
>> + return false;
>> +}
>> +
>> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
>> +{
>> + unsigned long flags;
>> + struct scmi_vio_channel *vioch = cinfo->transport_info;
>> + struct scmi_vio_msg *msg = msg_handle;
>> +
>> + if (!msg) {
>> + dev_dbg_once(&vioch->vqueue->vdev->dev,
>> + "Ignoring %s() call with NULL msg_handle\n",
>> + __func__);
>> + return;
>> + }
>> +
>> + if (vioch->is_rx) {
>> + scmi_vio_feed_vq_rx(vioch, msg);
>> + } else {
>> + spin_lock_irqsave(&vioch->lock, flags);
>> + list_add(&msg->list, &vioch->free_list);
>> + spin_unlock_irqrestore(&vioch->lock, flags);
>> + }
>> +}
>> +
>> +static const struct scmi_transport_ops scmi_virtio_ops = {
>> + .link_supplier = virtio_link_supplier,
>> + .chan_available = virtio_chan_available,
>> + .chan_setup = virtio_chan_setup,
>> + .chan_free = virtio_chan_free,
>> + .get_max_msg = virtio_get_max_msg,
>> + .send_message = virtio_send_message,
>> + .fetch_response = virtio_fetch_response,
>> + .fetch_notification = virtio_fetch_notification,
>> + .clear_channel = dummy_clear_channel,
>> + .poll_done = dummy_poll_done,
>> + .drop_message = virtio_drop_message,
>> +};
>> +
>> +static int scmi_vio_probe(struct virtio_device *vdev)
>> +{
>> + struct device *dev = &vdev->dev;
>> + struct scmi_vio_channel *channels;
>> + bool have_vq_rx;
>> + int vq_cnt;
>> + int i;
>> + int ret;
>> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
>> +
>> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
>> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
>> +
>> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
>> + if (!channels)
>> + return -ENOMEM;
>> +
>> + if (have_vq_rx)
>> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
>> +
>> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
>> + scmi_vio_vqueue_names, NULL);
>> + if (ret) {
>> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
>> + return ret;
>> + }
>> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
>> +
>> + for (i = 0; i < vq_cnt; i++) {
>> + spin_lock_init(&channels[i].lock);
>> + spin_lock_init(&channels[i].ready_lock);
>> + INIT_LIST_HEAD(&channels[i].free_list);
>> + channels[i].vqueue = vqs[i];
>> + }
>> +
>> + vdev->priv = channels;
>> +
>> + return 0;
>> +}
>> +
>> +static void scmi_vio_remove(struct virtio_device *vdev)
>> +{
>> + vdev->config->reset(vdev);
>> + vdev->config->del_vqs(vdev);
>> +}
>> +
>> +static unsigned int features[] = {
>> + VIRTIO_SCMI_F_P2A_CHANNELS,
>> +};
>> +
>> +static const struct virtio_device_id id_table[] = {
>> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
>> + { 0 }
>> +};
>> +
>> +static struct virtio_driver virtio_scmi_driver = {
>> + .driver.name = "scmi-virtio",
>> + .driver.owner = THIS_MODULE,
>> + .feature_table = features,
>> + .feature_table_size = ARRAY_SIZE(features),
>> + .id_table = id_table,
>> + .probe = scmi_vio_probe,
>> + .remove = scmi_vio_remove,
>> +};
>> +
>> +static int __init virtio_scmi_init(void)
>> +{
>> + return register_virtio_driver(&virtio_scmi_driver);
>> +}
>> +
>> +static void __exit virtio_scmi_exit(void)
>> +{
>> + unregister_virtio_driver(&virtio_scmi_driver);
>> +}
>> +
>> +const struct scmi_desc scmi_virtio_desc = {
>> + .init = virtio_scmi_init,
>> + .exit = virtio_scmi_exit,
>> + .ops = &scmi_virtio_ops,
>> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
>> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
>> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
>> +};
>> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
>> index f0c35ce8628c..c146fe30e589 100644
>> --- a/include/uapi/linux/virtio_ids.h
>> +++ b/include/uapi/linux/virtio_ids.h
>> @@ -56,5 +56,6 @@
>> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
>> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
>> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
>> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
>>
>> #endif /* _LINUX_VIRTIO_IDS_H */
>> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
>> new file mode 100644
>> index 000000000000..732b01504c35
>> --- /dev/null
>> +++ b/include/uapi/linux/virtio_scmi.h
>> @@ -0,0 +1,25 @@
>> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
>> +/*
>> + * Copyright (C) 2020 OpenSynergy GmbH
>> + */
>> +
>> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
>> +#define _UAPI_LINUX_VIRTIO_SCMI_H
>> +
>> +#include <linux/virtio_types.h>
>> +
>> +/* Feature bits */
>> +
>> +/* Device implements some SCMI notifications, or delayed responses. */
>> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
>> +
>> +/* Device implements any SCMI statistics shared memory region */
>> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
>> +
>> +/* Virtqueues */
>> +
>> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
>> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
>> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
>> +
>> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
>> --
>> 2.25.1
>>
>>
>


2021-06-04 11:57:44

by Cristian Marussi

[permalink] [raw]
Subject: Re: [RFC PATCH v3 11/12] firmware: arm_scmi: Add virtio transport

Hi Peter,

first of all thanks a lot for having a look at this draft work.

On Fri, Jun 04, 2021 at 11:19:02AM +0200, Peter Hilber wrote:
> On 26.05.21 16:40, Cristian Marussi wrote:
> > Hi Peter,
> >
> > as anticipated I'm adding some new SCMI core mechanisms that should help
> > simplifying the virtio-scmi series.
> >
> > Such core work is still in progress (and to be properly reviewed) and it
> > is at:
> >
> > https://lore.kernel.org/linux-arm-kernel/[email protected]/
> >
> > but in the meantime I have an initial but working (for me at least :D)
> > rework of your V3 virtio-scmi series; the rework is still in progress and
> > needs cleaning up (nor have I addressed the probing sequence or polling
> > mode yet), and I am anyway holding it for now since Rob asked about the
> > DT txt-to-yaml conversion too. BUT if you want to have a look in the
> > meantime, you can find the whole V4 transitional series, rebased on top
> > of my core changes with some changes on top, at:
> >
> > https://gitlab.arm.com/linux-arm/linux-cm/-/commits/scmi_virtio_trans_V4_rework/
> >
> > where:
> >
> > - I dropped V3 patches 7,8,12
> > - the virtio changes I applied to make use of my core changes are all
> > embedded in the last patch (just for now):
> >
> > [RFC] firmware: arm_scmi: make virtio-scmi use delegated xfers
> >
> > Definitely not the final version, so you may want to just wait for a
> > real V4, but it should give an idea of the direction I'm trying to
> > follow.
> >
> > Thanks,
> > Cristian
> >
>
> Hi Cristian,
>
> I had a look at the concepts in the linked branch. The following race
> condition seems to not be addressed ATM:
>
> - a concurrent non-delayed and delayed response (or inverted order)
>
> The virtio device will send the non-delayed and the delayed response
> through different virtqueues. Sending them in a particular order does not,
> in my understanding, entail that the receiver is able to restore this
> order. I think the virtio transport cannot handle this race condition in
> general without interpreting message headers (which it shouldn't).

The only valid assumption in SCMI comms should be that the platform is
constrained to send a delayed_response ONLY after the corresponding
sync-response has been sent. But you are indeed right that, even when the
platform obeys that constraint, with this specific transport it could
happen, even if only as a corner case, that such messages, being delivered
through different vqueues, are processed concurrently or in inverted
order, so we should take care of this situation somehow.

I agree that the virtio transport should not peek inside the message
headers and interpret them, but the core can do it when
virtio::complete_cb() looks for matching xfers, so I could detect this
out-of-order condition (in the core...adding some state info) and take
one of 2 alternative actions:

1. identify and discard such (rare?) out-of-order messages (the least
desirable solution, I think)

2. assume that an expected dresp received ahead of a resp implies the
related resp was OK (missing/lost?), so maybe warn, BUT proceed to
process the dresp normally while forcibly completing the waiters for the
missing sync-resp (with any future late arrival for that missing
sync-resp seq_num discarded by the core...assuming seqs are now unique
and monotonically increasing..more on this below). This would also
require better serialization to avoid the concurrent case.

This is just off the top of my head, I have not thought it through
thoroughly; I'll investigate 2. and see if it can be kept reasonably
simple and included properly in V4.

>
> Also, it might be documented that after a response timeout, do_xfer
> should not be called on the xfer any more (since a belated response to
> the timed-out message might corrupt the new message).
>

So this is an interesting point, because it is something I sort of wanted
to get feedback about; indeed, the new monotonic sequence numbers are
assigned when an xfer is acquired, so that, in fact, if core protocols
then reuse the same xfer in a loop to send multiple similar requests,
there is still a possibility of a late response being associated with a
new message (having the same seq), even though presumably the first
timeout would lead to such a set of commands-in-loop being aborted.

I knew that (:D) and I indeed had some (non-posted) patches where the
monotonically increasing token/seq is instead assigned brand new on each
do_xfer rather than when the xfer is acquired... I reverted to the current
solution because it was a bit simpler and I was not sure it was worth
trying to protect against the above 'loop' scenario as well. I'll rethink
this.

Thanks for the feedback,
Cristian

> Best regards,
>
> Peter
>
> > On Tue, May 11, 2021 at 02:20:39AM +0200, Peter Hilber wrote:
> >> From: Igor Skalkin <[email protected]>
> >>
> >> This transport enables accessing an SCMI platform as a virtio device.
> >>
> >> Implement an SCMI virtio driver according to the virtio SCMI device spec
> >> [1]. Virtio device id 32 has been reserved for the SCMI device [2].
> >>
> >> The virtio transport has one Tx channel (virtio cmdq, A2P channel) and
> >> at most one Rx channel (virtio eventq, P2A channel).
> >>
> >> The following feature bit defined in [1] is not implemented:
> >> VIRTIO_SCMI_F_SHARED_MEMORY.
> >>
> >> After the preparatory patches, this implements the virtio transport as
> >> paraphrased:
> >>
> >> Only support a single arm-scmi device (which is consistent with the SCMI
> >> spec). scmi-virtio init is called from arm-scmi module init. During the
> >> arm-scmi probing, link to the first probed scmi-virtio device. Defer
> >> arm-scmi probing if no scmi-virtio device is bound yet.
> >>
> >> For simplicity, restrict the number of messages which can be pending
> >> simultaneously according to the virtqueue capacity. (The virtqueue sizes
> >> are negotiated with the virtio device.)
> >>
> >> As soon as Rx channel message buffers are allocated or have been read
> >> out by the arm-scmi driver, feed them to the virtio device.
> >>
> >> [snip]
> >> + if (virtqueue_enable_cb(vqueue))
> >> + goto unlock_out;
> >> + else
> >> + cb_enabled = true;
> >> + }
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + if (msg) {
> >> + msg->rx_len = length;
> >> +
> >> + /*
> >> + * Hold the ready_lock during the callback to avoid
> >> + * races when the arm-scmi driver is unbinding while
> >> + * the virtio device is not quiesced yet.
> >> + */
> >> + scmi_rx_callback(vioch->cinfo,
> >> + msg_read_header(msg->input), msg);
> >> + }
> >> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
> >> + }
> >> +
> >> +unlock_out:
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +unlock_ready_out:
> >> + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
> >> +}
> >> +
> >> +static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" };
> >> +
> >> +static vq_callback_t *scmi_vio_complete_callbacks[] = {
> >> + scmi_vio_complete_cb,
> >> + scmi_vio_complete_cb
> >> +};
> >> +
> >> +static unsigned int virtio_get_max_msg(bool tx,
> >> + struct scmi_chan_info *base_cinfo)
> >> +{
> >> + struct scmi_vio_channel *vioch = base_cinfo->transport_info;
> >> + unsigned int ret;
> >> +
> >> + ret = virtqueue_get_vring_size(vioch->vqueue);
> >> +
> >> + /* Tx messages need multiple descriptors. */
> >> + if (tx)
> >> + ret /= DESCRIPTORS_PER_TX_MSG;
> >> +
> >> + if (ret > MSG_TOKEN_MAX) {
> >> + dev_info_once(
> >> + base_cinfo->dev,
> >> + "Only %ld messages can be pending simultaneously, while the %s virtqueue could hold %d\n",
> >> + MSG_TOKEN_MAX, tx ? "tx" : "rx", ret);
> >> + ret = MSG_TOKEN_MAX;
> >> + }
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +static int scmi_vio_match_any_dev(struct device *dev, const void *data)
> >> +{
> >> + return 1;
> >> +}
> >> +
> >> +static struct virtio_driver virtio_scmi_driver; /* Forward declaration */
> >> +
> >> +static int virtio_link_supplier(struct device *dev)
> >> +{
> >> + struct device *vdev = driver_find_device(
> >> + &virtio_scmi_driver.driver, NULL, NULL, scmi_vio_match_any_dev);
> >> +
> >> + if (!vdev) {
> >> + dev_notice_once(
> >> + dev,
> >> + "Deferring probe after not finding a bound scmi-virtio device\n");
> >> + return -EPROBE_DEFER;
> >> + }
> >> +
> >> + /* Add device link for remove order and sysfs link. */
> >> + if (!device_link_add(dev, vdev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
> >> + put_device(vdev);
> >> + dev_err(dev, "Adding link to supplier virtio device failed\n");
> >> + return -ECANCELED;
> >> + }
> >> +
> >> + put_device(vdev);
> >> + return scmi_set_transport_info(dev, dev_to_virtio(vdev));
> >> +}
> >> +
> >> +static bool virtio_chan_available(struct device *dev, int idx)
> >> +{
> >> + struct virtio_device *vdev;
> >> +
> >> + /* scmi-virtio doesn't support per-protocol channels */
> >> + if (is_scmi_protocol_device(dev))
> >> + return false;
> >> +
> >> + vdev = scmi_get_transport_info(dev);
> >> + if (!vdev)
> >> + return false;
> >> +
> >> + switch (idx) {
> >> + case VIRTIO_SCMI_VQ_TX:
> >> + return true;
> >> + case VIRTIO_SCMI_VQ_RX:
> >> + return scmi_vio_have_vq_rx(vdev);
> >> + default:
> >> + return false;
> >> + }
> >> +}
> >> +
> >> +static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
> >> + bool tx)
> >> +{
> >> + unsigned long flags;
> >> + struct virtio_device *vdev;
> >> + struct scmi_vio_channel *vioch;
> >> + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX;
> >> + int max_msg;
> >> + int i;
> >> +
> >> + if (!virtio_chan_available(dev, index))
> >> + return -ENODEV;
> >> +
> >> + vdev = scmi_get_transport_info(dev);
> >> + vioch = &((struct scmi_vio_channel *)vdev->priv)[index];
> >> +
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + cinfo->transport_info = vioch;
> >> + vioch->cinfo = cinfo;
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + max_msg = virtio_get_max_msg(tx, cinfo);
> >> +
> >> + for (i = 0; i < max_msg; i++) {
> >> + struct scmi_vio_msg *msg;
> >> +
> >> + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
> >> + if (!msg)
> >> + return -ENOMEM;
> >> +
> >> + if (tx) {
> >> + msg->request = devm_kzalloc(cinfo->dev,
> >> + VIRTIO_SCMI_MAX_PDU_SIZE,
> >> + GFP_KERNEL);
> >> + if (!msg->request)
> >> + return -ENOMEM;
> >> + }
> >> +
> >> + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
> >> + GFP_KERNEL);
> >> + if (!msg->input)
> >> + return -ENOMEM;
> >> +
> >> + if (tx) {
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + list_add_tail(&msg->list, &vioch->free_list);
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + } else {
> >> + scmi_vio_feed_vq_rx(vioch, msg);
> >> + }
> >> + }
> >> +
> >> + spin_lock_irqsave(&vioch->ready_lock, flags);
> >> + vioch->ready = true;
> >> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int virtio_chan_free(int id, void *p, void *data)
> >> +{
> >> + unsigned long flags;
> >> + struct scmi_chan_info *cinfo = p;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + spin_lock_irqsave(&vioch->ready_lock, flags);
> >> + vioch->ready = false;
> >> + spin_unlock_irqrestore(&vioch->ready_lock, flags);
> >> +
> >> + scmi_free_channel(cinfo, data, id);
> >> + return 0;
> >> +}
> >> +
> >> +static int virtio_send_message(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer)
> >> +{
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> + struct scatterlist sg_out;
> >> + struct scatterlist sg_in;
> >> + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in };
> >> + unsigned long flags;
> >> + int rc;
> >> + struct scmi_vio_msg *msg;
> >> +
> >> + /*
> >> + * TODO: For now, we don't support polling. But it should not be
> >> + * difficult to add support.
> >> + */
> >> + if (xfer->hdr.poll_completion)
> >> + return -EINVAL;
> >> +
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> +
> >> + if (list_empty(&vioch->free_list)) {
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + return -EBUSY;
> >> + }
> >> +
> >> + msg = list_first_entry(&vioch->free_list, typeof(*msg), list);
> >> + list_del(&msg->list);
> >> +
> >> + msg_tx_prepare(msg->request, xfer);
> >> +
> >> + sg_init_one(&sg_out, msg->request, msg_command_size(xfer));
> >> + sg_init_one(&sg_in, msg->input, msg_response_size(xfer));
> >> +
> >> + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC);
> >> + if (rc) {
> >> + list_add(&msg->list, &vioch->free_list);
> >> + dev_err_once(vioch->cinfo->dev,
> >> + "%s() failed to add to virtqueue (%d)\n", __func__,
> >> + rc);
> >> + } else {
> >> + virtqueue_kick(vioch->vqueue);
> >> + }
> >> +
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> +
> >> + return rc;
> >> +}
> >> +
> >> +static void virtio_fetch_response(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer, void *msg_handle)
> >> +{
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + msg_fetch_response(msg->input, msg->rx_len, xfer);
> >> +}
> >> +
> >> +static void virtio_fetch_notification(struct scmi_chan_info *cinfo,
> >> + size_t max_len, struct scmi_xfer *xfer,
> >> + void *msg_handle)
> >> +{
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer);
> >> +}
> >> +
> >> +static void dummy_clear_channel(struct scmi_chan_info *cinfo)
> >> +{
> >> +}
> >> +
> >> +static bool dummy_poll_done(struct scmi_chan_info *cinfo,
> >> + struct scmi_xfer *xfer)
> >> +{
> >> + return false;
> >> +}
> >> +
> >> +static void virtio_drop_message(struct scmi_chan_info *cinfo, void *msg_handle)
> >> +{
> >> + unsigned long flags;
> >> + struct scmi_vio_channel *vioch = cinfo->transport_info;
> >> + struct scmi_vio_msg *msg = msg_handle;
> >> +
> >> + if (!msg) {
> >> + dev_dbg_once(&vioch->vqueue->vdev->dev,
> >> + "Ignoring %s() call with NULL msg_handle\n",
> >> + __func__);
> >> + return;
> >> + }
> >> +
> >> + if (vioch->is_rx) {
> >> + scmi_vio_feed_vq_rx(vioch, msg);
> >> + } else {
> >> + spin_lock_irqsave(&vioch->lock, flags);
> >> + list_add(&msg->list, &vioch->free_list);
> >> + spin_unlock_irqrestore(&vioch->lock, flags);
> >> + }
> >> +}
> >> +
> >> +static const struct scmi_transport_ops scmi_virtio_ops = {
> >> + .link_supplier = virtio_link_supplier,
> >> + .chan_available = virtio_chan_available,
> >> + .chan_setup = virtio_chan_setup,
> >> + .chan_free = virtio_chan_free,
> >> + .get_max_msg = virtio_get_max_msg,
> >> + .send_message = virtio_send_message,
> >> + .fetch_response = virtio_fetch_response,
> >> + .fetch_notification = virtio_fetch_notification,
> >> + .clear_channel = dummy_clear_channel,
> >> + .poll_done = dummy_poll_done,
> >> + .drop_message = virtio_drop_message,
> >> +};
> >> +
> >> +static int scmi_vio_probe(struct virtio_device *vdev)
> >> +{
> >> + struct device *dev = &vdev->dev;
> >> + struct scmi_vio_channel *channels;
> >> + bool have_vq_rx;
> >> + int vq_cnt;
> >> + int i;
> >> + int ret;
> >> + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
> >> +
> >> + have_vq_rx = scmi_vio_have_vq_rx(vdev);
> >> + vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
> >> +
> >> + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL);
> >> + if (!channels)
> >> + return -ENOMEM;
> >> +
> >> + if (have_vq_rx)
> >> + channels[VIRTIO_SCMI_VQ_RX].is_rx = true;
> >> +
> >> + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks,
> >> + scmi_vio_vqueue_names, NULL);
> >> + if (ret) {
> >> + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt);
> >> + return ret;
> >> + }
> >> + dev_info(dev, "Found %d virtqueue(s)\n", vq_cnt);
> >> +
> >> + for (i = 0; i < vq_cnt; i++) {
> >> + spin_lock_init(&channels[i].lock);
> >> + spin_lock_init(&channels[i].ready_lock);
> >> + INIT_LIST_HEAD(&channels[i].free_list);
> >> + channels[i].vqueue = vqs[i];
> >> + }
> >> +
> >> + vdev->priv = channels;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static void scmi_vio_remove(struct virtio_device *vdev)
> >> +{
> >> + vdev->config->reset(vdev);
> >> + vdev->config->del_vqs(vdev);
> >> +}
> >> +
> >> +static unsigned int features[] = {
> >> + VIRTIO_SCMI_F_P2A_CHANNELS,
> >> +};
> >> +
> >> +static const struct virtio_device_id id_table[] = {
> >> + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID },
> >> + { 0 }
> >> +};
> >> +
> >> +static struct virtio_driver virtio_scmi_driver = {
> >> + .driver.name = "scmi-virtio",
> >> + .driver.owner = THIS_MODULE,
> >> + .feature_table = features,
> >> + .feature_table_size = ARRAY_SIZE(features),
> >> + .id_table = id_table,
> >> + .probe = scmi_vio_probe,
> >> + .remove = scmi_vio_remove,
> >> +};
> >> +
> >> +static int __init virtio_scmi_init(void)
> >> +{
> >> + return register_virtio_driver(&virtio_scmi_driver);
> >> +}
> >> +
> >> +static void __exit virtio_scmi_exit(void)
> >> +{
> >> + unregister_virtio_driver(&virtio_scmi_driver);
> >> +}
> >> +
> >> +const struct scmi_desc scmi_virtio_desc = {
> >> + .init = virtio_scmi_init,
> >> + .exit = virtio_scmi_exit,
> >> + .ops = &scmi_virtio_ops,
> >> + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */
> >> + .max_msg = 0, /* overridden by virtio_get_max_msg() */
> >> + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE,
> >> +};
> >> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> >> index f0c35ce8628c..c146fe30e589 100644
> >> --- a/include/uapi/linux/virtio_ids.h
> >> +++ b/include/uapi/linux/virtio_ids.h
> >> @@ -56,5 +56,6 @@
> >> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> >> #define VIRTIO_ID_BT 28 /* virtio bluetooth */
> >> #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */
> >> +#define VIRTIO_ID_SCMI 32 /* virtio SCMI */
> >>
> >> #endif /* _LINUX_VIRTIO_IDS_H */
> >> diff --git a/include/uapi/linux/virtio_scmi.h b/include/uapi/linux/virtio_scmi.h
> >> new file mode 100644
> >> index 000000000000..732b01504c35
> >> --- /dev/null
> >> +++ b/include/uapi/linux/virtio_scmi.h
> >> @@ -0,0 +1,25 @@
> >> +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
> >> +/*
> >> + * Copyright (C) 2020 OpenSynergy GmbH
> >> + */
> >> +
> >> +#ifndef _UAPI_LINUX_VIRTIO_SCMI_H
> >> +#define _UAPI_LINUX_VIRTIO_SCMI_H
> >> +
> >> +#include <linux/virtio_types.h>
> >> +
> >> +/* Feature bits */
> >> +
> >> +/* Device implements some SCMI notifications, or delayed responses. */
> >> +#define VIRTIO_SCMI_F_P2A_CHANNELS 0
> >> +
> >> +/* Device implements any SCMI statistics shared memory region */
> >> +#define VIRTIO_SCMI_F_SHARED_MEMORY 1
> >> +
> >> +/* Virtqueues */
> >> +
> >> +#define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
> >> +#define VIRTIO_SCMI_VQ_RX 1 /* eventq */
> >> +#define VIRTIO_SCMI_VQ_MAX_CNT 2
> >> +
> >> +#endif /* _UAPI_LINUX_VIRTIO_SCMI_H */
> >> --
> >> 2.25.1
> >>
> >>
> >
>
>

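The Tx path in the patch above (virtio_send_message / virtio_drop_message) relies on a pre-allocated pool of messages parked on a per-channel free list, rather than allocating per transfer. A minimal user-space sketch of that pattern follows; the names, the singly linked list, and the lack of locking are simplifications for illustration — the kernel code uses list_head and holds vioch->lock (a spinlock) around every list operation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for struct scmi_vio_msg (fields simplified). */
struct vio_msg {
	struct vio_msg *next;	/* free-list linkage; kernel uses list_head */
	void *request;		/* Tx SDU buffer */
	void *input;		/* Rx SDU buffer */
};

struct vio_channel {
	struct vio_msg *free_list;	/* unused messages, Tx channels only */
};

/* chan_setup: pre-allocate max_msg messages and park them on the free list. */
static void chan_setup(struct vio_channel *ch, struct vio_msg *pool, int max_msg)
{
	ch->free_list = NULL;
	for (int i = 0; i < max_msg; i++) {
		pool[i].next = ch->free_list;
		ch->free_list = &pool[i];
	}
}

/* send_message side: take a message from the free list, or report "busy". */
static struct vio_msg *msg_get(struct vio_channel *ch)
{
	struct vio_msg *msg = ch->free_list;

	if (msg)
		ch->free_list = msg->next;
	return msg;	/* NULL mirrors the -EBUSY return in the patch */
}

/* drop_message side: return a completed Tx message for reuse. */
static void msg_put(struct vio_channel *ch, struct vio_msg *msg)
{
	msg->next = ch->free_list;
	ch->free_list = msg;
}
```

Because the pool size is derived from the virtqueue ring size (capped at MSG_TOKEN_MAX), exhausting the free list corresponds exactly to the virtqueue having no room for another two-descriptor Tx message.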
2021-06-21 04:11:21

by Jassi Brar

[permalink] [raw]
Subject: Re: [RFC PATCH v3 01/12] firmware: arm_scmi, smccc, mailbox: Make shmem based transports optional

On Mon, May 10, 2021 at 7:22 PM Peter Hilber
<[email protected]> wrote:

.....

> --- a/drivers/mailbox/Kconfig
> +++ b/drivers/mailbox/Kconfig
> @@ -1,6 +1,7 @@
> # SPDX-License-Identifier: GPL-2.0-only
> menuconfig MAILBOX
> bool "Mailbox Hardware Support"
> + select ARM_SCMI_HAVE_SHMEM
> help
> Mailbox is a framework to control hardware communication between
> on-chip processors through queued messages and interrupt driven
>
Isn't this too generic?
Not all platforms with a mailbox controller use SCMI as the protocol.

thnx.

2021-06-21 08:57:05

by Sudeep Holla

[permalink] [raw]
Subject: Re: [RFC PATCH v3 01/12] firmware: arm_scmi, smccc, mailbox: Make shmem based transports optional

On Sun, Jun 20, 2021 at 11:09:21PM -0500, Jassi Brar wrote:
> On Mon, May 10, 2021 at 7:22 PM Peter Hilber
> <[email protected]> wrote:
>
> .....
>
> > --- a/drivers/mailbox/Kconfig
> > +++ b/drivers/mailbox/Kconfig
> > @@ -1,6 +1,7 @@
> > # SPDX-License-Identifier: GPL-2.0-only
> > menuconfig MAILBOX
> > bool "Mailbox Hardware Support"
> > + select ARM_SCMI_HAVE_SHMEM
> > help
> > Mailbox is a framework to control hardware communication between
> > on-chip processors through queued messages and interrupt driven
> >
> Isn't this too generic?
> Not all platforms with a mailbox controller use SCMI as the protocol.
>

Yikes! I agree Jassi, this looks like a super hack.

--
Regards,
Sudeep

2021-06-21 09:04:56

by Cristian Marussi

[permalink] [raw]
Subject: Re: [RFC PATCH v3 01/12] firmware: arm_scmi, smccc, mailbox: Make shmem based transports optional

Hi,

On Mon, Jun 21, 2021 at 09:54:31AM +0100, Sudeep Holla wrote:
> On Sun, Jun 20, 2021 at 11:09:21PM -0500, Jassi Brar wrote:
> > On Mon, May 10, 2021 at 7:22 PM Peter Hilber
> > <[email protected]> wrote:
> >
> > .....
> >
> > > --- a/drivers/mailbox/Kconfig
> > > +++ b/drivers/mailbox/Kconfig
> > > @@ -1,6 +1,7 @@
> > > # SPDX-License-Identifier: GPL-2.0-only
> > > menuconfig MAILBOX
> > > bool "Mailbox Hardware Support"
> > > + select ARM_SCMI_HAVE_SHMEM
> > > help
> > > Mailbox is a framework to control hardware communication between
> > > on-chip processors through queued messages and interrupt driven
> > >
> > Isn't this too generic?
> > Not all platforms with a mailbox controller use SCMI as the protocol.
> >
>
> Yikes! I agree Jassi, this looks like a super hack.
>

Yes indeed, I still have to rework this part of the series.

(@Jassi: I'm reworking this series starting from the work done by Peter up to
V3, since it needed some core SCMI transport rework to ease implementation of
the virtio-scmi transport)

Thanks,
Cristian

> --
> Regards,
> Sudeep
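For illustration, one way to address the "too generic" objection is to invert the dependency: rather than having MAILBOX select ARM_SCMI_HAVE_SHMEM for every platform, let the SCMI mailbox transport option pull in both its mailbox dependency and the shmem helper. A hypothetical Kconfig sketch of that direction (option name and help text are illustrative, not from the series):

```kconfig
# Sketch: each shmem-based SCMI transport selects the helper it needs,
# instead of MAILBOX selecting it unconditionally.
config ARM_SCMI_TRANSPORT_MAILBOX
	bool "SCMI transport based on Mailbox"
	depends on ARM_SCMI_PROTOCOL && MAILBOX
	select ARM_SCMI_HAVE_SHMEM
	default y
	help
	  Enable the mailbox based transport for SCMI. The shared-memory
	  message layout helper is only built when a transport that
	  actually uses it is enabled.
```

With this shape, platforms that have a mailbox controller but do not speak SCMI never build the SCMI shmem code, which is what the objection above asks for.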