2022-06-09 07:37:39

by Neeraj Upadhyay

Subject: [RFC 0/3] SCMI Vhost and Virtio backend implementation

This RFC series provides an ARM System Control and Management Interface (SCMI)
protocol backend implementation for the Virtio transport. The purpose of this
feature is to provide para-virtualized interfaces from guest VMs to various
hardware blocks, such as clocks and regulators. This allows the guest VMs to
communicate their resource needs to the host, in the absence of direct
access to those resources.

1. Architecture overview
---------------------

The diagram below shows the overall software architecture of SCMI communication
between the guest VM and the host software. In this diagram, the guest is a
Linux VM, and the host runs Linux with KVM.

       GUEST VM                       HOST
+--------------------+    +---------------------+   +--------------+
|    a. Device A     |    |     k. Device B     |   |     PLL      |
|  (Clock consumer)  |    |   (Clock consumer)  |   |              |
+--------------------+    +---------------------+   +--------------+
          |                          |                     ^
          v                          v                     |
+--------------------+    +---------------------+    +-----------------+
| b. Clock Framework |    | j. Clock Framework  | -->| l. Clock Driver |
+--------------------+    +---------------------+    +-----------------+
          |                          ^
          v                          |
+--------------------+    +------------------------+
|   c. SCMI Clock    |    | i. SCMI Virtio Backend |
+--------------------+    +------------------------+
          |                          ^
          v                          |
+--------------------+    +----------------------+
|   d. SCMI Virtio   |    |     h. SCMI Vhost    |<-----------+
+--------------------+    +----------------------+            |
          |                          ^                        |
          v                          |                        |
+-------------------------------------------------+    +-----------------+
|                 e. Virtio Infra                  |    |     g. VMM      |
+-------------------------------------------------+    +-----------------+
          |                          ^                        ^
          v                          |                        |
+-------------------------------------------------+           |
|                  f. Hypervisor                   |-----------+
+-------------------------------------------------+

a. Device A             This is the client kernel driver in the guest VM,
                        for ex. a display driver, which uses standard
                        clock framework APIs to vote for a clock.

b. Clock Framework      Underlying kernel clock framework on the
                        guest.

c. SCMI Clock           SCMI interface based clock driver.

d. SCMI Virtio          Underlying SCMI framework, using Virtio as
                        the transport driver.

e. Virtio Infra         Virtio drivers on the guest VM. These drivers
                        initiate virtqueue requests over the Virtio
                        transport (MMIO/PCI), and forward responses
                        to the callbacks registered by SCMI Virtio.

f. Hypervisor           Hosted hypervisor (for ex. KVM), which traps
                        and forwards requests on virtqueue ring
                        buffers to the VMM.

g. VMM                  Virtual Machine Monitor, running in host userspace,
                        which manages the lifecycle of guest VMs, and forwards
                        guest initiated virtqueue requests as IOCTLs to the
                        Vhost driver on the host.

h. SCMI Vhost           In-kernel driver, which handles SCMI virtqueue
                        requests from guest VMs. This driver forwards the
                        requests to the SCMI Virtio backend driver, and
                        returns the response from the backend over the
                        virtqueue ring buffers.

i. SCMI Virtio Backend  SCMI backend, which handles the incoming SCMI
                        messages from the SCMI Vhost driver, and forwards
                        them to the backend protocols, like the clock and
                        voltage protocols. The backend protocols use the
                        host APIs for those resources, like the clock APIs
                        provided by the clock framework, to vote/request
                        for the resource. The return value from the host
                        API is parceled into an SCMI response message and
                        returned to the SCMI Vhost driver. The SCMI Vhost
                        driver, in turn, returns the response over the
                        virtqueue response buffers.

j. Clock Framework      Clock framework on the host, which is used by
                        clients/drivers running on the host, to vote/request
                        for clocks.

k. Device B             Native driver running on the host, which acts as a
                        consumer of one of the clocks.

l. Clock Driver         Underlying clock driver, which programs the
                        corresponding hardware PLL for a clock request, or
                        forwards the request to an SCP controller over the
                        SCMI channel between the host and the controller.


2. SCMI Vhost and Virtio backend
--------------------------------

The description below covers a few key aspects of handling SCMI
requests received over the Virtio channel, at the host.

2.1 VMM Support
---------------

The VMM needs to provide support for SCMI vhost device setup. The
steps below outline what the VMM needs to do.

a. VMM invokes `open()` on `/dev/vhost-scmi`, when a new VM is started.

b. VMM calls the ioctls below on the SCMI Vhost fd for the VM, to set up
the virtqueue ring buffers, the eventfd, and the irqfd. The P2A and
SHARED_MEMORY SCMI features should not be set.

ioctl(sdev->vhost_fd, VHOST_SET_OWNER);
ioctl(sdev->vhost_fd, VHOST_GET_FEATURES, &features);
ioctl(sdev->vhost_fd, VHOST_SET_FEATURES, &features);
ioctl(sdev->vhost_fd, VHOST_SET_MEM_TABLE, mem);

ioctl(sdev->vhost_fd, VHOST_SET_VRING_KICK, &file);
ioctl(sdev->vhost_fd, VHOST_SET_VRING_CALL, &file);
ioctl(sdev->vhost_fd, VHOST_SET_VRING_NUM, &state);
ioctl(sdev->vhost_fd, VHOST_SET_VRING_BASE, &state);
ioctl(sdev->vhost_fd, VHOST_SET_VRING_ADDR, &addr);

ioctl(sdev->vhost_fd, VHOST_SCMI_SET_RUNNING, &on);

c. VMM invokes `close()` on the fd corresponding to `open()` call above
for the VM, when that VM shuts down or crashes.

2.2 Client Handle
-------------------

Each guest VM client is identified using a client handle, which is
declared as below:


struct scmi_vio_client_h {
        const void *handle;
};

``->handle`` is an opaque pointer, which is initialized by the SCMI Vhost
driver for each guest VM's SCMI Virtio connection.

Client handles are allocated using ``scmi_vio_get_client_h()``
and freed using ``scmi_vio_put_client_h()``.

``scmi_vio_get_client_h()`` encapsulates the ``scmi_vio_client_h``
handle in a ``scmi_vio_client_info`` structure, which is declared as:

::
   struct scmi_vio_client_info {
           struct scmi_vio_client_h client_h;
           void *priv;
   };

The ``->priv`` member provides a way for the next software layer (SCMI VIO
backend) to save per-VM information, using the ``scmi_vio_set_client_priv()``
and ``scmi_vio_get_client_priv()`` APIs. The backend uses this information
to maintain per-VM bookkeeping, such as the active requests for each
protocol. During VM teardown, this bookkeeping can be used to release any
requests/votes still active for that VM.


Below is a pictorial representation of how the handle information is
mapped in each software component on the host.


     SCMI Vhost           SCMI VIO backend             SCMI Backend Protocols

+----------------+     +----------------------+    +-->+---------------------+
|     *priv      | +---| backend_protocol_map |    |   |    Protocol 0x10    |
+----------------+ |   +----------------------+    |   |     Client data     |
|    Client_h    | |   |       Client_h       |    |   +---------------------+
+----------------+ |   +----------------------+    |   |      Client_h       |
                   |                               |   +---------------------+
                   |                               |
                   |a. Backend stores an IDR map   |b. IDR member for a protocol
                   |   in the priv member          |   points to the private
                   |                               |   data maintained by that
                   |                               |   protocol, for the client.
                   +-->+------------------------------+
                       | 0x10 | protocol_0x10-priv    |--+
                       +------------------------------+
                       | 0x14 | protocol_0x14-priv    |----->+---------------------+
                       +------------------------------+      |    Protocol 0x14    |
                       | ....                         |      |     Client data     |
                       +------------------------------+      +---------------------+
                                                             |      Client_h       |
                                                             +---------------------+

2.3 Communication between Vhost and backend
-------------------------------------------

a. (Creation) During VM creation, the VMM calls ``open()`` on the
``/dev/vhost-scmi`` node, which is exposed by the SCMI Vhost driver.

As part of the ``open`` call, the SCMI Vhost driver initializes the host
side Virtio interface for the guest VM. This initialization includes:

* Setting up the Vhost virtqueues and tx/rx handlers, and registering
them with the underlying Vhost framework.
* Allocating a client handle for the VM.
* A ``scmi_vio_be_open()`` call into the SCMI Virtio backend driver.

``scmi_vio_be_open()`` initializes all active backend protocols as follows:

* Allocates a new client handle, encapsulating the original client
handle from Vhost layer.
* Calls ``->open`` for the protocol, with the new handle allocated
for that protocol. As part of the ``->open`` call, protocol callback
stores its own bookkeeping information into the client handle's
private data.
* Allocates an IDR entry and stores the protocol-id -> protocol-client-handle
mapping in the ``backend_protocol_map``.

b. (Message request handling) The SCMI Vhost driver polls on the eventfd
for a guest VM for any SCMI request messages. On incoming SCMI requests
from the client Virtio (guest VM), it does the following:

* Retrieve the request/response descriptor entries from the descriptor
table, for the virtqueues set up for the VM's SCMI Virtio transport.

* Copy the request message from the descriptor entry's (addr, length)
information into the request buffer maintained by Vhost for that VM,
and forward the message to the client, by calling
``scmi_vio_be_request()`` with the client handle for the VM, and
request and response buffers information.

* The ``scmi_vio_be_request()`` function unpacks the message header
from the request buffer, identifies the protocol, and forwards the
request and response payload buffers to the protocol specific
``->msg_handle()``.

* The backend protocol layer calls the host side framework API to request
the resource, like any other consumer driver running on the host.
For ex. a ``clk_prepare_enable()`` call, for the ``CLOCK_CONFIG_SET``
clock protocol SCMI request message from the client. The return
value from the host framework (the ``clk_prepare_enable()`` API in
this example) is remapped to an SCMI status code, and returned to
the SCMI VIO backend driver.

* The Backend VIO driver packs the response status code and payload
into the response buffer and returns from the ``scmi_vio_be_request()``
call.

* On return from the ``scmi_vio_be_request()`` call, the SCMI Vhost driver
copies the response buffer to the virtqueue descriptor (addr, length)
entry for the response. It then signals the used entry to the vhost
framework, which results in a request completion interrupt being
signaled over the irqfd to the VM.

c. (Teardown) As part of VM shutdown, VMM calls ``close()`` on the
``/dev/vhost-scmi`` file handle for the VM.

The ``->release()`` callback handler in SCMI Vhost does the following:

* Flush all inflight requests for the VM.
* Cleanup Vhost dev resources for the VM.
* Call ``scmi_vio_be_close()`` with the client handle as argument.

``scmi_vio_be_close()`` does the following cleanup:

* Call ``->close`` for all active protocols, with the client handle
stored in the IDR map for that client-protocol mapping.
As part of the ``->close()`` handler, the protocol releases the resources
for that VM. For example, unvoting any voted clocks or regulators.

* Destroy the client handles for those protocols, and free the
private IDR map for the client.

Neeraj Upadhyay (3):
dt-bindings: arm: Add document for SCMI Virtio backend device
firmware: Add ARM SCMI Virtio backend implementation
vhost/scmi: Add Host kernel accelerator for Virtio SCMI

.../firmware/arm,scmi-vio-backend.yaml | 85 +++
drivers/firmware/Kconfig | 9 +
drivers/firmware/arm_scmi/Makefile | 1 +
drivers/firmware/arm_scmi/base.c | 12 -
drivers/firmware/arm_scmi/common.h | 29 +
drivers/firmware/arm_scmi/msg.c | 11 -
drivers/firmware/arm_scmi/virtio.c | 3 -
.../firmware/arm_scmi/virtio_backend/Makefile | 5 +
.../arm_scmi/virtio_backend/backend.c | 516 ++++++++++++++++++
.../arm_scmi/virtio_backend/backend.h | 20 +
.../virtio_backend/backend_protocol.h | 93 ++++
.../firmware/arm_scmi/virtio_backend/base.c | 474 ++++++++++++++++
.../arm_scmi/virtio_backend/client_handle.c | 71 +++
.../arm_scmi/virtio_backend/client_handle.h | 24 +
.../firmware/arm_scmi/virtio_backend/common.h | 53 ++
drivers/vhost/Kconfig | 10 +
drivers/vhost/Makefile | 3 +
drivers/vhost/scmi.c | 466 ++++++++++++++++
include/linux/scmi_vio_backend.h | 31 ++
include/uapi/linux/vhost.h | 3 +
20 files changed, 1893 insertions(+), 26 deletions(-)
create mode 100644 Documentation/devicetree/bindings/firmware/arm,scmi-vio-backend.yaml
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/Makefile
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/backend.c
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/backend.h
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/backend_protocol.h
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/base.c
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/client_handle.c
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/client_handle.h
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/common.h
create mode 100644 drivers/vhost/scmi.c
create mode 100644 include/linux/scmi_vio_backend.h

--
2.17.1


2022-06-09 08:03:19

by Neeraj Upadhyay

Subject: [RFC 2/3] firmware: Add ARM SCMI Virtio backend implementation

Add an implementation of the SCMI Virtio backend, for handling
SCMI requests received over the Virtio transport. All SCMI
requests from a guest are associated with a unique
client handle. The SCMI request packet received from
the SCMI Virtio layer is unpacked and, depending on the
protocol id in the message header, is forwarded to the
matching protocol, if that protocol is implemented. The
response packet is packed with the payload returned by the
protocol layer, and a message header is added to it.

Available protocols are specified using child device tree
nodes of the parent SCMI Virtio backend node. The current
patch only implements the base protocol.

Notifications and delayed response are not supported at this
point.

As all incoming SCMI requests from a guest are associated
with a unique handle, which is also forwarded to the
protocol layer, on the channel close call for a guest, the
protocols can do the required cleanup of the resources
allocated for that particular guest.

Co-developed-by: Srinivas Ramana <[email protected]>
Signed-off-by: Neeraj Upadhyay <[email protected]>
Signed-off-by: Srinivas Ramana <[email protected]>
---
drivers/firmware/Kconfig | 9 +
drivers/firmware/arm_scmi/Makefile | 1 +
drivers/firmware/arm_scmi/base.c | 12 -
drivers/firmware/arm_scmi/common.h | 15 +
.../firmware/arm_scmi/virtio_backend/Makefile | 5 +
.../arm_scmi/virtio_backend/backend.c | 516 ++++++++++++++++++
.../arm_scmi/virtio_backend/backend.h | 20 +
.../virtio_backend/backend_protocol.h | 93 ++++
.../firmware/arm_scmi/virtio_backend/base.c | 474 ++++++++++++++++
.../arm_scmi/virtio_backend/client_handle.c | 71 +++
.../arm_scmi/virtio_backend/client_handle.h | 24 +
.../firmware/arm_scmi/virtio_backend/common.h | 53 ++
include/linux/scmi_vio_backend.h | 31 ++
13 files changed, 1312 insertions(+), 12 deletions(-)
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/Makefile
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/backend.c
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/backend.h
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/backend_protocol.h
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/base.c
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/client_handle.c
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/client_handle.h
create mode 100644 drivers/firmware/arm_scmi/virtio_backend/common.h
create mode 100644 include/linux/scmi_vio_backend.h

diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
index e5cfb01353d8..5adafaa320cc 100644
--- a/drivers/firmware/Kconfig
+++ b/drivers/firmware/Kconfig
@@ -8,6 +8,15 @@ menu "Firmware Drivers"

source "drivers/firmware/arm_scmi/Kconfig"

+config ARM_SCMI_VIRTIO_BACKEND
+ tristate "SCMI Message Protocol Virtio Backend device"
+ depends on ARM || ARM64 || COMPILE_TEST
+ help
+ ARM System Control and Management Interface (SCMI) protocol
+ platform device for SCMI Virtio transport. This component
+ provides platform side implementation of SCMI protocols
+ for client communication over the Virtio transport.
+
config ARM_SCPI_PROTOCOL
tristate "ARM System Control and Power Interface (SCPI) Message Protocol"
depends on ARM || ARM64 || COMPILE_TEST
diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
index 8d4afadda38c..00d35d1053c0 100644
--- a/drivers/firmware/arm_scmi/Makefile
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -12,6 +12,7 @@ scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
$(scmi-transport-y)
obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-module.o
obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
+obj-y += virtio_backend/

ifeq ($(CONFIG_THUMB2_KERNEL)$(CONFIG_CC_IS_CLANG),yy)
# The use of R7 in the SMCCC conflicts with the compiler's use of R7 as a frame
diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
index f5219334fd3a..2ad019135afb 100644
--- a/drivers/firmware/arm_scmi/base.c
+++ b/drivers/firmware/arm_scmi/base.c
@@ -16,18 +16,6 @@
#define SCMI_BASE_NUM_SOURCES 1
#define SCMI_BASE_MAX_CMD_ERR_COUNT 1024

-enum scmi_base_protocol_cmd {
- BASE_DISCOVER_VENDOR = 0x3,
- BASE_DISCOVER_SUB_VENDOR = 0x4,
- BASE_DISCOVER_IMPLEMENT_VERSION = 0x5,
- BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
- BASE_DISCOVER_AGENT = 0x7,
- BASE_NOTIFY_ERRORS = 0x8,
- BASE_SET_DEVICE_PERMISSIONS = 0x9,
- BASE_SET_PROTOCOL_PERMISSIONS = 0xa,
- BASE_RESET_AGENT_CONFIGURATION = 0xb,
-};
-
struct scmi_msg_resp_base_attributes {
u8 num_protocols;
u8 num_agents;
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 4fda84bfab42..91cf3ffeb0e8 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -39,6 +39,18 @@ enum scmi_common_cmd {
PROTOCOL_MESSAGE_ATTRIBUTES = 0x2,
};

+enum scmi_base_protocol_cmd {
+ BASE_DISCOVER_VENDOR = 0x3,
+ BASE_DISCOVER_SUB_VENDOR = 0x4,
+ BASE_DISCOVER_IMPLEMENT_VERSION = 0x5,
+ BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
+ BASE_DISCOVER_AGENT = 0x7,
+ BASE_NOTIFY_ERRORS = 0x8,
+ BASE_SET_DEVICE_PERMISSIONS = 0x9,
+ BASE_SET_PROTOCOL_PERMISSIONS = 0xa,
+ BASE_RESET_AGENT_CONFIGURATION = 0xb,
+};
+
/**
* struct scmi_msg_resp_prot_version - Response for a message
*
@@ -68,6 +80,9 @@ struct scmi_msg_resp_prot_version {
#define MSG_TOKEN_ID_MASK GENMASK(27, 18)
#define MSG_XTRACT_TOKEN(hdr) FIELD_GET(MSG_TOKEN_ID_MASK, (hdr))
#define MSG_TOKEN_MAX (MSG_XTRACT_TOKEN(MSG_TOKEN_ID_MASK) + 1)
+#define MSG_HDR_SZ 4
+#define MSG_STATUS_SZ 4
+

/*
* Size of @pending_xfers hashtable included in @scmi_xfers_info; ideally, in
diff --git a/drivers/firmware/arm_scmi/virtio_backend/Makefile b/drivers/firmware/arm_scmi/virtio_backend/Makefile
new file mode 100644
index 000000000000..a77d40e35883
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0-only
+scmi-vio-backend-y = backend.o client_handle.o
+scmi-vio-backend-protocols-y = base.o
+scmi-vio-backend-objs := $(scmi-vio-backend-y) $(scmi-vio-backend-protocols-y)
+obj-$(CONFIG_ARM_SCMI_VIRTIO_BACKEND) += scmi-vio-backend.o
diff --git a/drivers/firmware/arm_scmi/virtio_backend/backend.c b/drivers/firmware/arm_scmi/virtio_backend/backend.c
new file mode 100644
index 000000000000..b5e2c6b44b68
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/backend.c
@@ -0,0 +1,516 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#define pr_fmt(fmt) "SCMI Virtio BE: " fmt
+
+#include <linux/device.h>
+#include <linux/export.h>
+#include <linux/idr.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/scmi_protocol.h>
+#include <linux/scmi_vio_backend.h>
+#include <linux/slab.h>
+#include <linux/bitfield.h>
+
+#include "common.h"
+#include "backend.h"
+#include "client_handle.h"
+#include "../common.h"
+#include "backend_protocol.h"
+
+
+/**
+ * scmi_vio_client_priv - Structure representing client specific
+ * private data maintained by backend.
+ *
+ * @backend_protocol_map: IDR used by Virtio backend, to maintain per protocol
+ * private data. This map is initialized with protocol data,
+ * on scmi_vio_be_open() call during client channel setup.
+ * The per protocol pointer is initialized by each registered
+ * backend protocol, in its open() call, and can be used to
+ * reference client specific bookkeeping information, which
+ * is maintained by that protocol. This can be used to release
+ * client specific protocol resources during close call for
+ * it.
+ */
+struct scmi_vio_client_priv {
+ struct idr backend_protocol_map;
+};
+
+struct scmi_vio_be_info {
+ struct device *dev;
+ struct idr registered_protocols;
+ struct idr active_protocols;
+ struct rw_semaphore active_protocols_rwsem;
+};
+
+static struct scmi_vio_be_info scmi_vio_be_info;
+
+void *scmi_vio_get_protocol_priv(
+ const struct scmi_vio_protocol_h *__maybe_unused prot_h,
+ const struct scmi_vio_client_h *client_h)
+{
+ return scmi_vio_get_client_priv(client_h);
+}
+
+void scmi_vio_set_protocol_priv(
+ const struct scmi_vio_protocol_h *__maybe_unused prot_h,
+ const struct scmi_vio_client_h *client_h,
+ void *priv)
+{
+ scmi_vio_set_client_priv(client_h, priv);
+}
+
+int scmi_vio_protocol_register(const struct scmi_vio_protocol *proto)
+{
+ int ret;
+
+ if (!proto) {
+ pr_err("Invalid protocol\n");
+ return -EINVAL;
+ }
+
+ if (!proto->prot_init_fn || !proto->prot_exit_fn) {
+ pr_err("Missing protocol init/exit fn %#x\n", proto->id);
+ return -EINVAL;
+ }
+
+ ret = idr_alloc(&scmi_vio_be_info.registered_protocols, (void *)proto,
+ proto->id, proto->id + 1, GFP_ATOMIC);
+ if (ret != proto->id) {
+ pr_err(
+ "Idr allocation failed for %#x - err %d\n",
+ proto->id, ret);
+ return ret;
+ }
+
+ pr_info("Registered Protocol %#x\n", proto->id);
+
+ return 0;
+}
+
+void scmi_vio_protocol_unregister(const struct scmi_vio_protocol *proto)
+{
+ idr_remove(&scmi_vio_be_info.registered_protocols, proto->id);
+ pr_info("Unregistered Protocol %#x\n", proto->id);
+}
+
+int scmi_vio_implemented_protocols(
+ const struct scmi_vio_protocol_h *__maybe_unused prot_h,
+ const struct scmi_vio_client_h *__maybe_unused client_h,
+ u8 *imp_protocols, int *imp_proto_num)
+{
+ const struct scmi_vio_protocol_h *protocol_h;
+ const struct scmi_vio_protocol_h *base_protocol_h;
+ int active_protocols_num = 0;
+ int proto_num = *imp_proto_num;
+ unsigned int id = 0;
+ int ret = 0;
+
+ base_protocol_h = idr_find(&scmi_vio_be_info.active_protocols,
+ SCMI_PROTOCOL_BASE);
+ if (prot_h != base_protocol_h)
+ return -EINVAL;
+
+ /*
+ * SCMI agents, identified by client_h, may not have access to
+ * all protocols?
+ */
+ idr_for_each_entry(&scmi_vio_be_info.active_protocols, protocol_h, id) {
+ if (id == SCMI_PROTOCOL_BASE)
+ continue;
+ if (active_protocols_num >= proto_num) {
+ pr_err("Number of active protocols %d > max num: %d\n",
+ (active_protocols_num + 1), proto_num);
+ ret = -ENOMEM;
+ goto imp_protocols_exit;
+ }
+ imp_protocols[active_protocols_num] = id;
+ active_protocols_num++;
+ }
+imp_protocols_exit:
+ *imp_proto_num = active_protocols_num;
+ return ret;
+}
+
+static int scmi_vio_protocol_init(struct device *dev, u8 protocol_id)
+{
+ int ret = 0, ret2 = 0;
+ struct scmi_vio_protocol *protocol_info;
+ struct scmi_vio_protocol_h *protocol_h;
+
+ down_write(&scmi_vio_be_info.active_protocols_rwsem);
+ protocol_info = idr_find(&scmi_vio_be_info.registered_protocols, protocol_id);
+ if (!protocol_info) {
+ pr_err("Protocol %#x not registered; protocol init failed\n",
+ protocol_id);
+ ret = -EINVAL;
+ goto out_protocol_release_rwsem;
+ }
+
+ protocol_h = kzalloc(sizeof(*protocol_h), GFP_KERNEL);
+ if (!protocol_h) {
+ ret = -ENOMEM;
+ goto out_protocol_release_rwsem;
+ }
+
+ protocol_h->dev = dev;
+ ret = protocol_info->prot_init_fn(protocol_h);
+ if (ret) {
+ pr_err("Protocol %#x init function returned: %d\n",
+ protocol_id, ret);
+ kfree(protocol_h);
+ protocol_h = NULL;
+ goto out_protocol_release_rwsem;
+ }
+
+ /* Register this protocol in active protocols list */
+ ret = idr_alloc(&scmi_vio_be_info.active_protocols,
+ (void *)protocol_h,
+ protocol_id, protocol_id + 1, GFP_ATOMIC);
+ if (ret != protocol_id) {
+ pr_err(
+ "Allocation failed for active protocol %#x - err %d\n",
+ protocol_id, ret);
+ ret2 = protocol_info->prot_exit_fn(protocol_h);
+ if (ret2)
+ pr_err("Protocol %#x exit function returned: %d\n",
+ protocol_id, ret2);
+ kfree(protocol_h);
+ protocol_h = NULL;
+ goto out_protocol_release_rwsem;
+ }
+
+ ret = 0;
+out_protocol_release_rwsem:
+ up_write(&scmi_vio_be_info.active_protocols_rwsem);
+ return ret;
+}
+
+static int scmi_vio_protocol_exit(struct device *dev)
+{
+ int ret = 0;
+ unsigned int id = 0;
+ struct scmi_vio_protocol *protocol_info;
+ const struct scmi_vio_protocol_h *protocol_h;
+
+ down_write(&scmi_vio_be_info.active_protocols_rwsem);
+ idr_for_each_entry(&scmi_vio_be_info.active_protocols, protocol_h, id) {
+ protocol_info = idr_find(&scmi_vio_be_info.registered_protocols, id);
+ ret = protocol_info->prot_exit_fn(protocol_h);
+ if (ret) {
+ pr_err("Protocol %#x exit function returned: %d\n",
+ id, ret);
+ }
+ kfree(protocol_h);
+ idr_remove(&scmi_vio_be_info.active_protocols, id);
+ }
+ up_write(&scmi_vio_be_info.active_protocols_rwsem);
+
+ return ret;
+}
+
+int scmi_vio_be_open(const struct scmi_vio_client_h *client_h)
+{
+ int tmp_id, ret = 0, close_ret = 0;
+ const struct scmi_vio_protocol_h *protocol_h;
+ struct scmi_vio_protocol *vio_be_protocol;
+ unsigned int id = 0;
+ struct scmi_vio_client_h *proto_client_h;
+ struct scmi_vio_client_priv *client_priv;
+ u8 open_protocols[MAX_PROTOCOLS_IMP];
+ u8 open_protocols_num = 0;
+
+ client_priv = kzalloc(sizeof(*client_priv), GFP_KERNEL);
+
+ if (!client_priv)
+ return -ENOMEM;
+
+ idr_init(&client_priv->backend_protocol_map);
+
+ down_read(&scmi_vio_be_info.active_protocols_rwsem);
+ idr_for_each_entry(&scmi_vio_be_info.active_protocols,
+ protocol_h, id) {
+ vio_be_protocol = idr_find(&scmi_vio_be_info.registered_protocols, id);
+ proto_client_h = scmi_vio_get_client_h(client_h);
+ if (!proto_client_h) {
+ ret = -ENOMEM;
+ goto vio_proto_open_fail;
+ }
+ ret = vio_be_protocol->prot_ops->open(proto_client_h);
+ if (ret) {
+ pr_err("->open() failed for id: %d ret: %d\n",
+ id, ret);
+ goto vio_proto_open_fail;
+ }
+ tmp_id = idr_alloc(&client_priv->backend_protocol_map,
+ (void *)proto_client_h,
+ id, id + 1, GFP_ATOMIC);
+ if (tmp_id != id) {
+ pr_err("Failed to allocate client handle for %#x\n",
+ id);
+ close_ret = vio_be_protocol->prot_ops->close(
+ proto_client_h);
+ if (close_ret) {
+ pr_err("->close() failed for id: %d ret: %d\n",
+ id, close_ret);
+ }
+ scmi_vio_put_client_h(proto_client_h);
+ ret = -EINVAL;
+ goto vio_proto_open_fail;
+ }
+ open_protocols[open_protocols_num++] = (u8)id;
+ }
+ up_read(&scmi_vio_be_info.active_protocols_rwsem);
+ scmi_vio_set_client_priv(client_h, client_priv);
+
+ return 0;
+
+vio_proto_open_fail:
+ for ( ; open_protocols_num > 0; open_protocols_num--) {
+ id = open_protocols[open_protocols_num-1];
+ vio_be_protocol = idr_find(&scmi_vio_be_info.registered_protocols, id);
+ proto_client_h = idr_find(&client_priv->backend_protocol_map, id);
+ close_ret = vio_be_protocol->prot_ops->close(proto_client_h);
+ WARN(close_ret, "->close() failed for id: %d ret: %d\n",
+ id, close_ret);
+ scmi_vio_put_client_h(proto_client_h);
+ }
+
+ up_read(&scmi_vio_be_info.active_protocols_rwsem);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(scmi_vio_be_open);
+
+int scmi_vio_be_close(const struct scmi_vio_client_h *client_h)
+{
+ const struct scmi_vio_protocol_h *protocol_h;
+ struct scmi_vio_protocol *vio_be_protocol;
+ unsigned int id = 0;
+ struct scmi_vio_client_priv *client_priv =
+ scmi_vio_get_client_priv(client_h);
+ struct scmi_vio_client_h *proto_client_h;
+ int ret = 0, close_ret = 0;
+
+ if (!client_priv) {
+ pr_err("->close() failed: priv data not available\n");
+ return -EINVAL;
+ }
+
+ down_read(&scmi_vio_be_info.active_protocols_rwsem);
+ idr_for_each_entry(&scmi_vio_be_info.active_protocols,
+ protocol_h, id) {
+ vio_be_protocol = idr_find(&scmi_vio_be_info.registered_protocols, id);
+ proto_client_h = idr_find(&client_priv->backend_protocol_map, id);
+ /* We might have failed to alloc idr, in scmi_vio_be_open() */
+ if (!proto_client_h)
+ continue;
+ close_ret = vio_be_protocol->prot_ops->close(proto_client_h);
+ WARN(close_ret, "->close() failed for id: %d ret: %d\n",
+ id, close_ret);
+ /* Return first failure return code */
+ ret = ret ? : close_ret;
+ scmi_vio_put_client_h(proto_client_h);
+ }
+ up_read(&scmi_vio_be_info.active_protocols_rwsem);
+
+ idr_destroy(&client_priv->backend_protocol_map);
+ kfree(client_priv);
+ scmi_vio_set_client_priv(client_h, NULL);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(scmi_vio_be_close);
+
+int scmi_vio_be_request(const struct scmi_vio_client_h *client_h,
+ const struct scmi_vio_be_msg *req,
+ struct scmi_vio_be_msg *resp)
+{
+ int ret = 0;
+ struct scmi_vio_be_msg_hdr msg_header;
+ u32 msg;
+ struct scmi_vio_be_payload_buf req_payload, resp_payload;
+ struct scmi_vio_protocol *vio_be_protocol;
+ const struct scmi_vio_protocol_h *protocol_h;
+ struct scmi_vio_client_priv *client_priv =
+ scmi_vio_get_client_priv(client_h);
+ struct scmi_vio_client_h *proto_client_h;
+ int scmi_error = 0;
+
+ /* Unpack message header */
+ if (!req || req->msg_sz < MSG_HDR_SZ ||
+ !resp || resp->msg_sz <
+ (MSG_HDR_SZ + MSG_STATUS_SZ)) {
+ pr_err("Invalid request/response message size\n");
+ return -EINVAL;
+ }
+
+ down_read(&scmi_vio_be_info.active_protocols_rwsem);
+ msg = le32_to_cpu(*(__le32 *)req->msg_payld);
+ msg_header.protocol_id = MSG_XTRACT_PROT_ID(msg);
+ msg_header.msg_id = MSG_XTRACT_ID(msg);
+ msg_header.type = MSG_XTRACT_TYPE(msg);
+ msg_header.seq = MSG_XTRACT_TOKEN(msg);
+
+ *(__le32 *)resp->msg_payld = cpu_to_le32(msg);
+
+ resp_payload.payload =
+ (void *)((unsigned long)(uintptr_t)resp->msg_payld
+ + MSG_HDR_SZ);
+
+ if (!client_priv) {
+ pr_err("Private map not initialized\n");
+ scmi_error = SCMI_VIO_BE_NOT_FOUND;
+ goto protocol_find_fail;
+ }
+
+ protocol_h = idr_find(&scmi_vio_be_info.active_protocols,
+ msg_header.protocol_id);
+ if (unlikely(!protocol_h)) {
+ pr_err("Invalid protocol id in request header :%#x\n",
+ msg_header.protocol_id);
+ scmi_error = SCMI_VIO_BE_NOT_FOUND;
+ goto protocol_find_fail;
+ }
+
+ proto_client_h = idr_find(&client_priv->backend_protocol_map,
+ msg_header.protocol_id);
+ if (!proto_client_h) {
+ pr_err("Frontend handle not present for protocol : %#x\n",
+ msg_header.protocol_id);
+ scmi_error = SCMI_VIO_BE_NOT_FOUND;
+ goto protocol_find_fail;
+ }
+
+ vio_be_protocol = idr_find(&scmi_vio_be_info.registered_protocols,
+ msg_header.protocol_id);
+ if (!vio_be_protocol) {
+ pr_err("Protocol %#x is not registered\n",
+ msg_header.protocol_id);
+ scmi_error = SCMI_VIO_BE_NOT_FOUND;
+ goto protocol_find_fail;
+ }
+
+ req_payload.payload =
+ (void *)((unsigned long)(uintptr_t)req->msg_payld +
+ MSG_HDR_SZ);
+ req_payload.payload_size = req->msg_sz - MSG_HDR_SZ;
+
+ resp_payload.payload_size = resp->msg_sz - MSG_HDR_SZ;
+
+ ret = vio_be_protocol->prot_ops->msg_handle(
+ proto_client_h, &msg_header, &req_payload, &resp_payload);
+ if (ret) {
+ pr_err("Protocol %#x ->msg_handle() failed err: %d\n",
+ msg_header.protocol_id, ret);
+ scmi_error = scmi_vio_linux_err_remap(ret);
+ goto protocol_find_fail;
+ }
+
+ up_read(&scmi_vio_be_info.active_protocols_rwsem);
+
+ resp->msg_sz = MSG_HDR_SZ + resp_payload.payload_size;
+
+ return 0;
+
+protocol_find_fail:
+ up_read(&scmi_vio_be_info.active_protocols_rwsem);
+ *((__le32 *)(resp_payload.payload)) = cpu_to_le32(scmi_error);
+ resp->msg_sz = MSG_HDR_SZ + MSG_STATUS_SZ;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(scmi_vio_be_request);
+
+static int scmi_vio_be_probe(struct platform_device *pdev)
+{
+ int ret;
+ struct device *dev = &pdev->dev;
+ struct device_node *child, *np = dev->of_node;
+
+ scmi_vio_be_info.dev = dev;
+
+ platform_set_drvdata(pdev, &scmi_vio_be_info);
+
+ /*
+ * Base protocol is always implemented. So init it.
+ */
+ ret = scmi_vio_protocol_init(dev, SCMI_PROTOCOL_BASE);
+ if (ret) {
+ dev_err(dev, "Base protocol init failed with err: %d\n", ret);
+ return -EPROBE_DEFER;
+ }
+
+ for_each_available_child_of_node(np, child) {
+ u32 prot_id;
+
+ if (of_property_read_u32(child, "reg", &prot_id))
+ continue;
+
+ if (!FIELD_FIT(MSG_PROTOCOL_ID_MASK, prot_id)) {
+ dev_err(dev, "Out of range protocol %d\n", prot_id);
+ continue;
+ }
+
+ ret = scmi_vio_protocol_init(dev, prot_id);
+ if (ret == -EPROBE_DEFER)
+ goto vio_be_protocol_exit;
+ }
+
+ return 0;
+
+vio_be_protocol_exit:
+ scmi_vio_protocol_exit(dev);
+
+ return ret;
+}
+
+static int scmi_vio_be_remove(struct platform_device *pdev)
+{
+ scmi_vio_protocol_exit(&pdev->dev);
+
+ return 0;
+}
+
+static const struct of_device_id scmi_vio_be_of_match[] = {
+ { .compatible = "arm,scmi-vio-backend" },
+ { },
+};
+
+MODULE_DEVICE_TABLE(of, scmi_vio_be_of_match);
+
+static struct platform_driver scmi_vio_be_driver = {
+ .driver = {
+ .name = "arm-scmi-virtio-backend",
+ .of_match_table = scmi_vio_be_of_match,
+ },
+ .probe = scmi_vio_be_probe,
+ .remove = scmi_vio_be_remove,
+};
+
+static int __init scmi_vio_be_init(void)
+{
+ idr_init(&scmi_vio_be_info.registered_protocols);
+ idr_init(&scmi_vio_be_info.active_protocols);
+ init_rwsem(&scmi_vio_be_info.active_protocols_rwsem);
+ scmi_vio_base_register();
+
+ return platform_driver_register(&scmi_vio_be_driver);
+}
+module_init(scmi_vio_be_init);
+
+static void __exit scmi_vio_be_exit(void)
+{
+ platform_driver_unregister(&scmi_vio_be_driver);
+ scmi_vio_base_unregister();
+ idr_destroy(&scmi_vio_be_info.active_protocols);
+ idr_destroy(&scmi_vio_be_info.registered_protocols);
+}
+module_exit(scmi_vio_be_exit);
+
+MODULE_ALIAS("platform:scmi-vio-backend");
+MODULE_DESCRIPTION("ARM SCMI protocol Virtio Backend driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/firmware/arm_scmi/virtio_backend/backend.h b/drivers/firmware/arm_scmi/virtio_backend/backend.h
new file mode 100644
index 000000000000..9c6d353682dd
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/backend.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef _SCMI_VIO_BE_H
+#define _SCMI_VIO_BE_H
+
+void *scmi_vio_get_client_priv(
+ const struct scmi_vio_client_h *client_h);
+
+void scmi_vio_set_client_priv(
+ const struct scmi_vio_client_h *client_h,
+ void *priv);
+
+#define DECLARE_SCMI_VIO_REGISTER_UNREGISTER(protocol) \
+ int __init scmi_vio_##protocol##_register(void); \
+ void __exit scmi_vio_##protocol##_unregister(void)
+
+DECLARE_SCMI_VIO_REGISTER_UNREGISTER(base);
+#endif /* _SCMI_VIO_BE_H */
diff --git a/drivers/firmware/arm_scmi/virtio_backend/backend_protocol.h b/drivers/firmware/arm_scmi/virtio_backend/backend_protocol.h
new file mode 100644
index 000000000000..a11f24301ff2
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/backend_protocol.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _SCMI_VIO_BE_PROTOCOL_H
+#define _SCMI_VIO_BE_PROTOCOL_H
+
+#include <linux/bitfield.h>
+
+#define SCMI_VIO_BE_SUCCESS 0
+#define SCMI_VIO_BE_NOT_SUPPORTED -1
+#define SCMI_VIO_BE_INVALID_PARAMETERS -2
+#define SCMI_VIO_BE_DENIED -3
+#define SCMI_VIO_BE_NOT_FOUND -4
+#define SCMI_VIO_BE_OUT_OF_RANGE -5
+#define SCMI_VIO_BE_BUSY -6
+#define SCMI_VIO_BE_COMMS_ERROR -7
+#define SCMI_VIO_BE_GENERIC_ERROR -8
+#define SCMI_VIO_BE_HARDWARE_ERROR -9
+#define SCMI_VIO_BE_PROTOCOL_ERROR -10
+
+#define SCMI_VIO_BE_VER_MINOR_MASK GENMASK(15, 0)
+#define SCMI_VIO_BE_VER_MAJOR_MASK GENMASK(31, 16)
+
+#define SCMI_VIO_BE_PROTO_MAJOR_VERSION(val) \
+ ((u32)FIELD_PREP(SCMI_VIO_BE_VER_MAJOR_MASK, (val)))
+#define SCMI_VIO_BE_PROTO_MINOR_VERSION(val) \
+ ((u32)FIELD_PREP(SCMI_VIO_BE_VER_MINOR_MASK, (val)))
+
+#define SCMI_VIO_BE_PROTO_VERSION(major, minor) \
+ (SCMI_VIO_BE_PROTO_MAJOR_VERSION((major)) | \
+ SCMI_VIO_BE_PROTO_MINOR_VERSION((minor)))
+
+static int __maybe_unused scmi_vio_linux_err_remap(const int errno)
+{
+ switch (errno) {
+ case 0:
+ return SCMI_VIO_BE_SUCCESS;
+ case -EOPNOTSUPP:
+ return SCMI_VIO_BE_NOT_SUPPORTED;
+ case -EINVAL:
+ return SCMI_VIO_BE_INVALID_PARAMETERS;
+ case -EACCES:
+ return SCMI_VIO_BE_DENIED;
+ case -ENOENT:
+ return SCMI_VIO_BE_NOT_FOUND;
+ case -ERANGE:
+ return SCMI_VIO_BE_OUT_OF_RANGE;
+ case -EBUSY:
+ return SCMI_VIO_BE_BUSY;
+ case -ECOMM:
+ return SCMI_VIO_BE_COMMS_ERROR;
+ case -EIO:
+ return SCMI_VIO_BE_GENERIC_ERROR;
+ case -EREMOTEIO:
+ return SCMI_VIO_BE_HARDWARE_ERROR;
+ case -EPROTO:
+ return SCMI_VIO_BE_PROTOCOL_ERROR;
+ default:
+ return SCMI_VIO_BE_GENERIC_ERROR;
+ }
+}
+
+int scmi_vio_protocol_register(const struct scmi_vio_protocol *proto);
+void scmi_vio_protocol_unregister(const struct scmi_vio_protocol *proto);
+
+void *scmi_vio_get_protocol_priv(
+ const struct scmi_vio_protocol_h *__maybe_unused prot_handle,
+ const struct scmi_vio_client_h *client_h);
+void scmi_vio_set_protocol_priv(
+ const struct scmi_vio_protocol_h *__maybe_unused prot_handle,
+ const struct scmi_vio_client_h *client_h,
+ void *priv);
+
+int scmi_vio_implemented_protocols(
+ const struct scmi_vio_protocol_h *__maybe_unused prot_handle,
+ const struct scmi_vio_client_h *__maybe_unused client_h,
+ u8 *imp_protocols, int *imp_proto_num);
+
+#define DEFINE_SCMI_VIO_PROT_REG_UNREG(name, proto) \
+static const struct scmi_vio_protocol *__this_proto = &(proto); \
+ \
+int __init scmi_vio_##name##_register(void) \
+{ \
+ return scmi_vio_protocol_register(__this_proto); \
+} \
+ \
+void __exit scmi_vio_##name##_unregister(void) \
+{ \
+ scmi_vio_protocol_unregister(__this_proto); \
+}
+#endif /* _SCMI_VIO_BE_PROTOCOL_H */
diff --git a/drivers/firmware/arm_scmi/virtio_backend/base.c b/drivers/firmware/arm_scmi/virtio_backend/base.c
new file mode 100644
index 000000000000..465b609dcdab
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/base.c
@@ -0,0 +1,474 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/device.h>
+#include <linux/export.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/scmi_protocol.h>
+#include <linux/scmi_vio_backend.h>
+#include <linux/slab.h>
+#include <linux/bitfield.h>
+#include <linux/byteorder/generic.h>
+
+#include "common.h"
+#include "backend.h"
+#include "client_handle.h"
+
+#include "../common.h"
+#include "backend_protocol.h"
+
+#define SCMI_VIO_BASE_NUM_AGENTS 1
+#define SCMI_VIO_BASE_AGENT_MASK GENMASK(15, 8)
+#define SCMI_VIO_BASE_NUM_AGENT(num) \
+ ((u32)FIELD_PREP(SCMI_VIO_BASE_AGENT_MASK, (num)))
+#define SCMI_VIO_BASE_NUM_PROT_MASK GENMASK(7, 0)
+#define SCMI_VIO_BASE_NUM_PROT(num) \
+ ((u32)FIELD_PREP(SCMI_VIO_BASE_NUM_PROT_MASK, (num)))
+#define SCMI_VIO_BASE_NUM_AGENT_PROTOCOL(num) \
+ (SCMI_VIO_BASE_NUM_AGENT(SCMI_VIO_BASE_NUM_AGENTS) | \
+ SCMI_VIO_BASE_NUM_PROT(num))
+
+#define SCMI_VIO_BASE_MAX_VENDORID_STR 16
+#define SCMI_VIO_BASE_IMP_VER 1
+
+/*
+ * Maximum number of protocols returned in a single call to
+ * BASE_DISCOVER_LIST_PROTOCOLS.
+ */
+
+struct scmi_vio_base_info {
+ const struct scmi_vio_protocol_h *prot_h;
+};
+
+static struct scmi_vio_base_info *scmi_vio_base_proto_info;
+
+struct scmi_vio_base_client_info {
+ const struct scmi_vio_client_h *client_h;
+ u8 *implemented_protocols;
+ int implemented_proto_num;
+};
+
+struct scmi_vio_base_version_resp {
+ __le32 status;
+ __le32 version;
+} __packed;
+
+struct scmi_vio_base_attr_resp {
+ __le32 status;
+ __le32 attributes;
+} __packed;
+
+struct scmi_vio_base_msg_attr_resp {
+ __le32 status;
+ __le32 attributes;
+} __packed;
+
+struct scmi_vio_base_imp_version_resp {
+ __le32 status;
+ __le32 imp_version;
+} __packed;
+
+struct scmi_vio_base_list_protocols_resp {
+ __le32 num_protocols;
+ u8 protocols[];
+} __packed;
+
+static int scmi_vio_base_open(
+ const struct scmi_vio_client_h *client_h)
+{
+ struct scmi_vio_base_client_info *base_client_info =
+ kzalloc(sizeof(*base_client_info),
+ GFP_KERNEL);
+ if (!base_client_info)
+ return -ENOMEM;
+ base_client_info->client_h = client_h;
+ base_client_info->implemented_protocols = kcalloc(
+ MAX_PROTOCOLS_IMP, sizeof(u8),
+ GFP_KERNEL);
+ if (!base_client_info->implemented_protocols) {
+ kfree(base_client_info);
+ return -ENOMEM;
+ }
+ base_client_info->implemented_proto_num = MAX_PROTOCOLS_IMP;
+ scmi_vio_implemented_protocols(
+ scmi_vio_base_proto_info->prot_h,
+ client_h, base_client_info->implemented_protocols,
+ &base_client_info->implemented_proto_num);
+ scmi_vio_set_protocol_priv(
+ scmi_vio_base_proto_info->prot_h,
+ client_h, base_client_info);
+
+ return 0;
+}
+
+static s32 scmi_vio_base_version_get(u32 *version)
+{
+ *version = SCMI_VIO_BE_PROTO_VERSION(2, 0);
+ return SCMI_VIO_BE_SUCCESS;
+}
+
+static s32 scmi_vio_base_attributes_get(int num_protocols, u32 *attributes)
+{
+ *attributes = SCMI_VIO_BASE_NUM_AGENT_PROTOCOL(num_protocols);
+ return SCMI_VIO_BE_SUCCESS;
+}
+
+static s32 scmi_vio_base_msg_attibutes_get(u32 msg_id, u32 *attributes)
+{
+ *attributes = 0;
+ switch (msg_id) {
+ case PROTOCOL_VERSION:
+ case PROTOCOL_ATTRIBUTES:
+ case PROTOCOL_MESSAGE_ATTRIBUTES:
+ case BASE_DISCOVER_VENDOR:
+ case BASE_DISCOVER_IMPLEMENT_VERSION:
+ case BASE_DISCOVER_LIST_PROTOCOLS:
+ return SCMI_VIO_BE_SUCCESS;
+ case BASE_DISCOVER_AGENT:
+ case BASE_NOTIFY_ERRORS:
+ case BASE_SET_DEVICE_PERMISSIONS:
+ case BASE_SET_PROTOCOL_PERMISSIONS:
+ case BASE_RESET_AGENT_CONFIGURATION:
+ return SCMI_VIO_BE_NOT_FOUND;
+ default:
+ return SCMI_VIO_BE_NOT_FOUND;
+ }
+}
+
+static s32 scmi_vio_base_vendor_get(char *vendor_id)
+{
+ char vendor[SCMI_VIO_BASE_MAX_VENDORID_STR] = "SCMI-VIO-BE";
+
+ strscpy(vendor_id, vendor, SCMI_VIO_BASE_MAX_VENDORID_STR);
+ return SCMI_VIO_BE_SUCCESS;
+}
+
+static s32 scmi_vio_base_imp_version_get(u32 *imp_version)
+{
+ *imp_version = SCMI_VIO_BASE_IMP_VER;
+ return SCMI_VIO_BE_SUCCESS;
+}
+
+static s32 scmi_vio_protocol_list_get(
+ const u8 *implemented_protocols, const int num_protocols_imp,
+ u32 skip, u32 *num_protocols,
+ u8 *protocols, int protocols_len)
+{
+ int i = 0;
+
+ if (skip >= num_protocols_imp) {
+ *num_protocols = 0;
+ return SCMI_VIO_BE_INVALID_PARAMETERS;
+ }
+ while (i < protocols_len && skip < num_protocols_imp) {
+ protocols[i] = implemented_protocols[skip];
+ i++;
+ skip++;
+ }
+ *num_protocols = i;
+ return SCMI_VIO_BE_SUCCESS;
+}
+
+static int debug_resp_size(
+ struct scmi_vio_be_payload_buf *resp_payload_buf,
+ u8 msg_id, size_t required_size)
+{
+ if (resp_payload_buf->payload_size < required_size) {
+ pr_err("Invalid response buffer size : %#zx required: %#zx for protocol: %#x msg: %#x\n",
+ resp_payload_buf->payload_size, required_size,
+ SCMI_PROTOCOL_BASE, msg_id);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int handle_protocol_ver(
+ struct scmi_vio_be_payload_buf *resp_payload_buf)
+{
+ int err;
+ s32 ret;
+ u32 version = 0;
+ size_t msg_resp_size;
+ struct scmi_vio_base_version_resp *ver_response;
+
+ msg_resp_size = sizeof(version) + sizeof(ret);
+ err = debug_resp_size(resp_payload_buf, PROTOCOL_VERSION,
+ msg_resp_size);
+ if (err)
+ return err;
+ ret = scmi_vio_base_version_get(&version);
+ ver_response = (struct scmi_vio_base_version_resp *)resp_payload_buf->payload;
+ ver_response->status = cpu_to_le32(ret);
+ ver_response->version = cpu_to_le32(version);
+ resp_payload_buf->payload_size = msg_resp_size;
+
+ return 0;
+}
+
+static int handle_protocol_attrs(int num_protocols,
+ struct scmi_vio_be_payload_buf *resp_payload_buf)
+{
+ int err;
+ s32 ret;
+ u32 attributes = 0;
+ size_t msg_resp_size;
+ struct scmi_vio_base_attr_resp *attr_response;
+
+ msg_resp_size = sizeof(attributes) + sizeof(ret);
+ err = debug_resp_size(resp_payload_buf, PROTOCOL_ATTRIBUTES,
+ msg_resp_size);
+ if (err)
+ return err;
+ ret = scmi_vio_base_attributes_get(num_protocols, &attributes);
+ attr_response = (struct scmi_vio_base_attr_resp *)resp_payload_buf->payload;
+ attr_response->status = cpu_to_le32(ret);
+ attr_response->attributes = cpu_to_le32(attributes);
+ resp_payload_buf->payload_size = msg_resp_size;
+
+ return 0;
+}
+
+static int handle_msg_attrs(
+ const struct scmi_vio_be_payload_buf *req_payload_buf,
+ struct scmi_vio_be_payload_buf *resp_payload_buf)
+{
+ int err;
+ s32 ret;
+ u32 req_msg_id = 0;
+ size_t msg_resp_size;
+ u32 msg_attributes = 0;
+ struct scmi_vio_base_msg_attr_resp *msg_attr_response;
+
+ msg_resp_size = sizeof(msg_attributes) + sizeof(ret);
+ err = debug_resp_size(
+ resp_payload_buf, PROTOCOL_MESSAGE_ATTRIBUTES,
+ msg_resp_size);
+ if (err)
+ return err;
+ if (req_payload_buf->payload_size < sizeof(req_msg_id)) {
+ pr_err("Invalid request buffer size : %#zx required: %#zx for protocol: %#x msg: %#x\n",
+ req_payload_buf->payload_size, sizeof(req_msg_id),
+ SCMI_PROTOCOL_BASE, PROTOCOL_MESSAGE_ATTRIBUTES);
+ return -EINVAL;
+ }
+ req_msg_id = le32_to_cpu(*((__le32 *)req_payload_buf->payload));
+ ret = scmi_vio_base_msg_attibutes_get(req_msg_id, &msg_attributes);
+ msg_attr_response =
+ (struct scmi_vio_base_msg_attr_resp *)resp_payload_buf->payload;
+ msg_attr_response->status = cpu_to_le32(ret);
+ msg_attr_response->attributes = cpu_to_le32(msg_attributes);
+ resp_payload_buf->payload_size = msg_resp_size;
+
+ return 0;
+}
+
+static int handle_discover_vendor(
+ struct scmi_vio_be_payload_buf *resp_payload_buf)
+{
+ int err;
+ s32 ret;
+ size_t vendor_id_sz;
+ size_t msg_resp_size;
+ void *vendor_payload;
+ char vendor[SCMI_VIO_BASE_MAX_VENDORID_STR];
+
+ msg_resp_size = (size_t)SCMI_VIO_BASE_MAX_VENDORID_STR +
+ sizeof(ret);
+ err = debug_resp_size(
+ resp_payload_buf, BASE_DISCOVER_VENDOR,
+ msg_resp_size);
+ if (err)
+ return err;
+ ret = scmi_vio_base_vendor_get(vendor);
+ WARN_ON(strlen(vendor) >= SCMI_VIO_BASE_MAX_VENDORID_STR);
+ vendor_id_sz = min_t(size_t, strlen(vendor) + 1,
+ SCMI_VIO_BASE_MAX_VENDORID_STR);
+ *(__le32 *)resp_payload_buf->payload = cpu_to_le32(ret);
+ vendor_payload = resp_payload_buf->payload + sizeof(ret);
+ memcpy(vendor_payload, vendor, vendor_id_sz);
+ resp_payload_buf->payload_size = vendor_id_sz + sizeof(ret);
+
+ return 0;
+}
+
+static int handle_discover_imp_ver(
+ struct scmi_vio_be_payload_buf *resp_payload_buf)
+{
+ int err;
+ s32 ret;
+ u32 imp_version = 0;
+ size_t msg_resp_size;
+ struct scmi_vio_base_imp_version_resp *imp_ver_response;
+
+ msg_resp_size = sizeof(imp_version) + sizeof(ret);
+ err = debug_resp_size(resp_payload_buf,
+ BASE_DISCOVER_IMPLEMENT_VERSION,
+ msg_resp_size);
+ if (err)
+ return err;
+ ret = scmi_vio_base_imp_version_get(&imp_version);
+ imp_ver_response =
+ (struct scmi_vio_base_imp_version_resp *)resp_payload_buf->payload;
+ imp_ver_response->status = cpu_to_le32(ret);
+ imp_ver_response->imp_version = cpu_to_le32(imp_version);
+ resp_payload_buf->payload_size = msg_resp_size;
+
+ return 0;
+}
+
+static int handle_discover_list_protocols(
+ const struct scmi_vio_be_payload_buf *req_payload_buf,
+ struct scmi_vio_be_payload_buf *resp_payload_buf,
+ u8 *implemented_protocols, int num_protocols_imp)
+{
+ int err;
+ s32 ret;
+ u32 num_protocols = 0;
+ size_t msg_resp_size;
+ u32 protocol_skip_count;
+ void *protocol_list_payload;
+ void *protocol_num_payload;
+ u8 proto_list[SCMI_VIO_BASE_PROTO_LIST_LEN];
+
+ msg_resp_size = sizeof(u8) * SCMI_VIO_BASE_PROTO_LIST_LEN +
+ sizeof(ret);
+ err = debug_resp_size(
+ resp_payload_buf, BASE_DISCOVER_LIST_PROTOCOLS,
+ msg_resp_size);
+ if (err)
+ return err;
+ if (req_payload_buf->payload_size < sizeof(protocol_skip_count)) {
+ pr_err("Invalid request buffer size : %#zx required: %#zx for protocol: %#x msg: %#x\n",
+ req_payload_buf->payload_size, sizeof(protocol_skip_count),
+ SCMI_PROTOCOL_BASE, BASE_DISCOVER_LIST_PROTOCOLS);
+ return -EINVAL;
+ }
+ protocol_skip_count =
+ le32_to_cpu((*(__le32 *)req_payload_buf->payload));
+ ret = scmi_vio_protocol_list_get(
+ implemented_protocols, num_protocols_imp,
+ protocol_skip_count, &num_protocols,
+ proto_list, SCMI_VIO_BASE_PROTO_LIST_LEN);
+ *(__le32 *)resp_payload_buf->payload = cpu_to_le32(ret);
+ protocol_num_payload = resp_payload_buf->payload + sizeof(ret);
+ *(__le32 *)protocol_num_payload = cpu_to_le32(num_protocols);
+ if (num_protocols > 0) {
+ protocol_list_payload = resp_payload_buf->payload +
+ sizeof(ret) + sizeof(num_protocols);
+ memcpy(protocol_list_payload, proto_list, num_protocols);
+ }
+ resp_payload_buf->payload_size =
+ num_protocols * sizeof(u8) + sizeof(num_protocols) + sizeof(ret);
+
+ return 0;
+}
+
+static int scmi_vio_base_msg_handle(
+ const struct scmi_vio_client_h *client_h,
+ const struct scmi_vio_be_msg_hdr *msg_hdr,
+ const struct scmi_vio_be_payload_buf *req_payload_buf,
+ struct scmi_vio_be_payload_buf *resp_payload_buf)
+{
+ int err;
+ size_t msg_resp_size;
+ struct scmi_vio_base_client_info *base_client_info;
+
+ base_client_info = (struct scmi_vio_base_client_info *)
+ scmi_vio_get_protocol_priv(
+ scmi_vio_base_proto_info->prot_h,
+ client_h);
+ WARN_ON(client_h != base_client_info->client_h);
+
+ WARN_ON(msg_hdr->protocol_id != SCMI_PROTOCOL_BASE);
+ switch (msg_hdr->msg_id) {
+ case PROTOCOL_VERSION:
+ return handle_protocol_ver(resp_payload_buf);
+ case PROTOCOL_ATTRIBUTES:
+ return handle_protocol_attrs(
+ base_client_info->implemented_proto_num,
+ resp_payload_buf);
+ case PROTOCOL_MESSAGE_ATTRIBUTES:
+ return handle_msg_attrs(req_payload_buf, resp_payload_buf);
+ case BASE_DISCOVER_VENDOR:
+ return handle_discover_vendor(resp_payload_buf);
+ case BASE_DISCOVER_IMPLEMENT_VERSION:
+ return handle_discover_imp_ver(resp_payload_buf);
+ case BASE_DISCOVER_LIST_PROTOCOLS:
+ return handle_discover_list_protocols(
+ req_payload_buf, resp_payload_buf,
+ base_client_info->implemented_protocols,
+ base_client_info->implemented_proto_num);
+ default:
+ pr_err("Msg id %#x not supported for base protocol\n",
+ msg_hdr->msg_id);
+ msg_resp_size = sizeof(u32);
+ err = debug_resp_size(resp_payload_buf, msg_hdr->msg_id,
+ msg_resp_size);
+ if (err)
+ return err;
+ *(__le32 *)resp_payload_buf->payload =
+ cpu_to_le32(SCMI_VIO_BE_NOT_FOUND);
+ resp_payload_buf->payload_size = sizeof(u32);
+ break;
+ }
+
+ return 0;
+}
+
+
+static int scmi_vio_base_close(
+ const struct scmi_vio_client_h *client_h)
+{
+ struct scmi_vio_base_client_info *base_client_info;
+
+ base_client_info = (struct scmi_vio_base_client_info *)
+ scmi_vio_get_protocol_priv(
+ scmi_vio_base_proto_info->prot_h,
+ client_h);
+ WARN_ON(client_h != base_client_info->client_h);
+ kfree(base_client_info->implemented_protocols);
+ kfree(base_client_info);
+ return 0;
+}
+
+const struct scmi_vio_protocol_ops scmi_vio_base_ops = {
+ .open = scmi_vio_base_open,
+ .msg_handle = scmi_vio_base_msg_handle,
+ .close = scmi_vio_base_close,
+};
+
+int scmi_vio_base_init_fn(
+ const struct scmi_vio_protocol_h *prot_h)
+{
+ scmi_vio_base_proto_info = kzalloc(
+ sizeof(*scmi_vio_base_proto_info),
+ GFP_KERNEL);
+ if (!scmi_vio_base_proto_info)
+ return -ENOMEM;
+ scmi_vio_base_proto_info->prot_h = prot_h;
+ return 0;
+}
+
+int scmi_vio_base_exit_fn(
+ const struct scmi_vio_protocol_h *prot_h)
+{
+ kfree(scmi_vio_base_proto_info);
+ scmi_vio_base_proto_info = NULL;
+
+ return 0;
+}
+
+const struct scmi_vio_protocol scmi_vio_base_protocol = {
+ .id = SCMI_PROTOCOL_BASE,
+ .prot_init_fn = scmi_vio_base_init_fn,
+ .prot_exit_fn = scmi_vio_base_exit_fn,
+ .prot_ops = &scmi_vio_base_ops,
+};
+
+DEFINE_SCMI_VIO_PROT_REG_UNREG(base, scmi_vio_base_protocol);
diff --git a/drivers/firmware/arm_scmi/virtio_backend/client_handle.c b/drivers/firmware/arm_scmi/virtio_backend/client_handle.c
new file mode 100644
index 000000000000..1d53ec12e459
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/client_handle.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+
+#include <linux/slab.h>
+#include <linux/scmi_vio_backend.h>
+
+#define to_client_info(clienth) \
+ container_of((clienth), struct scmi_vio_client_info, \
+ client_h)
+
+/*
+ * scmi_vio_client_info - Structure representing information
+ * associated with a client handle.
+ *
+ * @client_h: Opaque handle used as an agent/client identifier.
+ * @priv: Private data maintained by the SCMI Virtio backend
+ * for an agent/client. The backend can use this pointer
+ * to keep per-protocol private information. It is
+ * typically populated by the backend on a client's
+ * open() call, via scmi_vio_set_client_priv().
+ */
+struct scmi_vio_client_info {
+ struct scmi_vio_client_h client_h;
+ void *priv;
+};
+
+struct scmi_vio_client_h *scmi_vio_get_client_h(
+ const void *handle)
+{
+ struct scmi_vio_client_info *client_info =
+ kzalloc(sizeof(*client_info), GFP_KERNEL);
+
+ if (!client_info)
+ return NULL;
+ client_info->client_h.handle = handle;
+ return &client_info->client_h;
+}
+EXPORT_SYMBOL_GPL(scmi_vio_get_client_h);
+
+void scmi_vio_put_client_h(
+ const struct scmi_vio_client_h *client_h)
+{
+ struct scmi_vio_client_info *client_info =
+ to_client_info(client_h);
+
+ kfree(client_info);
+}
+EXPORT_SYMBOL_GPL(scmi_vio_put_client_h);
+
+void *scmi_vio_get_client_priv(
+ const struct scmi_vio_client_h *client_h)
+{
+ struct scmi_vio_client_info *client_info =
+ to_client_info(client_h);
+
+ return client_info->priv;
+}
+
+void scmi_vio_set_client_priv(
+ const struct scmi_vio_client_h *client_h,
+ void *priv)
+{
+ struct scmi_vio_client_info *client_info =
+ to_client_info(client_h);
+
+ client_info->priv = priv;
+}
diff --git a/drivers/firmware/arm_scmi/virtio_backend/client_handle.h b/drivers/firmware/arm_scmi/virtio_backend/client_handle.h
new file mode 100644
index 000000000000..2cb2dcbb8481
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/client_handle.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef _SCMI_VIO_BE_CLIENT_HANDLE_H
+#define _SCMI_VIO_BE_CLIENT_HANDLE_H
+
+/**
+ * scmi_vio_get_client_h: Used by callers to construct a unique
+ * client handle, wrapping the passed opaque handle.
+ */
+struct scmi_vio_client_h *scmi_vio_get_client_h(
+ const void *handle);
+
+
+/**
+ * scmi_vio_put_client_h: Used by callers to release the client
+ * handle and any private data associated with it.
+ */
+void scmi_vio_put_client_h(
+ const struct scmi_vio_client_h *client_h);
+
+#endif /*_SCMI_VIO_BE_CLIENT_HANDLE_H */
diff --git a/drivers/firmware/arm_scmi/virtio_backend/common.h b/drivers/firmware/arm_scmi/virtio_backend/common.h
new file mode 100644
index 000000000000..7797a4c1e7a2
--- /dev/null
+++ b/drivers/firmware/arm_scmi/virtio_backend/common.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _SCMI_VIO_BE_COMMON_H
+#define _SCMI_VIO_BE_COMMON_H
+
+struct scmi_vio_protocol_h {
+ struct device *dev;
+};
+
+struct scmi_vio_be_payload_buf {
+ void *payload;
+ size_t payload_size;
+};
+
+struct scmi_vio_be_msg_hdr {
+ u8 msg_id;
+ u8 protocol_id;
+ u8 type;
+ u16 seq;
+};
+
+/**
+ * struct scmi_vio_protocol_ops - operations
+ * supported by SCMI Virtio backend protocol drivers.
+ *
+ * @open: Notify the protocol driver about a new client.
+ * @close: Notify the protocol driver about an exiting client.
+ * @msg_handle: Unpack a request and pack the response to be
+ * sent back over the client channel.
+ */
+struct scmi_vio_protocol_ops {
+ int (*open)(const struct scmi_vio_client_h *client_h);
+ int (*msg_handle)(const struct scmi_vio_client_h *client_h,
+ const struct scmi_vio_be_msg_hdr *msg_hdr,
+ const struct scmi_vio_be_payload_buf *req_payload_buf,
+ struct scmi_vio_be_payload_buf *resp_payload_buf);
+ int (*close)(const struct scmi_vio_client_h *client_h);
+};
+
+typedef int (*scmi_vio_prot_init_fn_t)(const struct scmi_vio_protocol_h *);
+typedef int (*scmi_vio_prot_exit_fn_t)(const struct scmi_vio_protocol_h *);
+
+struct scmi_vio_protocol {
+ const u8 id;
+ const scmi_vio_prot_init_fn_t prot_init_fn;
+ const scmi_vio_prot_exit_fn_t prot_exit_fn;
+ const struct scmi_vio_protocol_ops *prot_ops;
+};
+
+#endif /* _SCMI_VIO_BE_COMMON_H */
diff --git a/include/linux/scmi_vio_backend.h b/include/linux/scmi_vio_backend.h
new file mode 100644
index 000000000000..1f33122c6856
--- /dev/null
+++ b/include/linux/scmi_vio_backend.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _LINUX_SCMI_VIO_BACKEND_H
+#define _LINUX_SCMI_VIO_BACKEND_H
+
+#include <linux/idr.h>
+
+/**
+ * scmi_vio_client_h - Structure encapsulating a unique handle
+ * identifying the client connection to the SCMI Virtio backend.
+ */
+struct scmi_vio_client_h {
+ const void *handle;
+};
+
+struct scmi_vio_be_msg {
+ struct scmi_msg_payld *msg_payld;
+ size_t msg_sz;
+};
+
+int scmi_vio_be_open(const struct scmi_vio_client_h *client_h);
+int scmi_vio_be_close(const struct scmi_vio_client_h *client_h);
+int scmi_vio_be_request(const struct scmi_vio_client_h *client_h,
+ const struct scmi_vio_be_msg *req,
+ struct scmi_vio_be_msg *resp);
+
+#endif /* _LINUX_SCMI_VIO_BACKEND_H */
--
2.17.1

2022-06-09 08:06:12

by Neeraj Upadhyay

[permalink] [raw]
Subject: [RFC 3/3] vhost/scmi: Add Host kernel accelerator for Virtio SCMI

Add Vhost implementation for SCMI over Virtio transport.
The SCMI Vhost driver adds a misc device (/dev/vhost-scmi)
that exposes the SCMI Virtio channel capabilities to userspace:

- Set up cmdq, eventq.
- VIRTIO_SCMI_F_P2A_CHANNELS feature is not negotiated, as
notifications and delayed responses are not implemented
at present.
- VIRTIO_SCMI_F_SHARED_MEMORY feature is not negotiated.

1. cmdq

All cmd requests on the cmdq for a guest are processed sequentially,
so all outstanding SCMI requests from a guest are handled in order,
and the response for the current request is sent before the next
request is handled. This behavior may change in the future; however,
compliance with the Virtio SCMI spec will be ensured. SCMI
requests/responses from different guests can be handled concurrently.

Each SCMI request is forwarded to the SCMI Virtio backend device, and
the response from SCMI backend is put into the response buffer of the
cmdq entry for that request.

Any error during request handling, such as a failure to read the
request message, results in signalling a response of length 0 to the
frontend. As a future enhancement, we can explore adding a frontend
capability to associate such failures with outstanding requests and
return an error to the upper layers immediately (rather than waiting
for the channel timeout).

2. eventq

The eventq is unused, as the VIRTIO_SCMI_F_P2A_CHANNELS feature is
not negotiated.

3. Error handling

On any event that triggers release of a guest's SCMI channel (guest
VM crash, VMM crash), all in-flight SCMI requests are waited upon;
however, outstanding SCMI requests that have not yet been forwarded
to the SCMI backend device are dropped.

Co-developed-by: Srinivas Ramana <[email protected]>
Signed-off-by: Neeraj Upadhyay <[email protected]>
Signed-off-by: Srinivas Ramana <[email protected]>
---
drivers/firmware/arm_scmi/common.h | 14 +
drivers/firmware/arm_scmi/msg.c | 11 -
drivers/firmware/arm_scmi/virtio.c | 3 -
drivers/vhost/Kconfig | 10 +
drivers/vhost/Makefile | 3 +
drivers/vhost/scmi.c | 466 +++++++++++++++++++++++++++++
include/uapi/linux/vhost.h | 3 +
7 files changed, 496 insertions(+), 14 deletions(-)
create mode 100644 drivers/vhost/scmi.c

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 91cf3ffeb0e8..833575b7f5e2 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -156,6 +156,17 @@ struct scmi_msg {
size_t len;
};

+/*
+ * struct scmi_msg_payld - Transport SDU layout
+ *
+ * The SCMI specification requires all parameters, message headers, return
+ * arguments or any protocol data to be expressed in little endian format only.
+ */
+struct scmi_msg_payld {
+ __le32 msg_header;
+ __le32 msg_payload[];
+};
+
/**
* struct scmi_xfer - Structure representing a message flow
*
@@ -483,6 +494,9 @@ struct scmi_msg_payld;

/* Maximum overhead of message w.r.t. struct scmi_desc.max_msg_size */
#define SCMI_MSG_MAX_PROT_OVERHEAD (2 * sizeof(__le32))
+#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
+#define VIRTIO_SCMI_MAX_PDU_SIZE \
+ (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)

size_t msg_response_size(struct scmi_xfer *xfer);
size_t msg_command_size(struct scmi_xfer *xfer);
diff --git a/drivers/firmware/arm_scmi/msg.c b/drivers/firmware/arm_scmi/msg.c
index d33a704e5814..613c0a9c4e63 100644
--- a/drivers/firmware/arm_scmi/msg.c
+++ b/drivers/firmware/arm_scmi/msg.c
@@ -12,17 +12,6 @@

#include "common.h"

-/*
- * struct scmi_msg_payld - Transport SDU layout
- *
- * The SCMI specification requires all parameters, message headers, return
- * arguments or any protocol data to be expressed in little endian format only.
- */
-struct scmi_msg_payld {
- __le32 msg_header;
- __le32 msg_payload[];
-};
-
/**
* msg_command_size() - Actual size of transport SDU for command.
*
diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c
index 14709dbc96a1..f09100481c80 100644
--- a/drivers/firmware/arm_scmi/virtio.c
+++ b/drivers/firmware/arm_scmi/virtio.c
@@ -30,9 +30,6 @@
#include "common.h"

#define VIRTIO_MAX_RX_TIMEOUT_MS 60000
-#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
-#define VIRTIO_SCMI_MAX_PDU_SIZE \
- (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
#define DESCRIPTORS_PER_TX_MSG 2

/**
diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 587fbae06182..c2e9b0e026c3 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -74,6 +74,16 @@ config VHOST_VDPA
To compile this driver as a module, choose M here: the module
will be called vhost_vdpa.

+config VHOST_SCMI
+ tristate "Host kernel accelerator for Virtio SCMI"
+ select VHOST
+ help
+ This kernel module can be loaded in host kernel to accelerate
+ guest SCMI over Virtio transport.
+
+ To compile this driver as a module, choose M here: the module will
+ be called vhost_scmi.
+
config VHOST_CROSS_ENDIAN_LEGACY
bool "Cross-endian support for vhost"
default n
diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
index f3e1897cce85..16862ba89cb4 100644
--- a/drivers/vhost/Makefile
+++ b/drivers/vhost/Makefile
@@ -13,6 +13,9 @@ obj-$(CONFIG_VHOST_RING) += vringh.o
obj-$(CONFIG_VHOST_VDPA) += vhost_vdpa.o
vhost_vdpa-y := vdpa.o

+obj-$(CONFIG_VHOST_SCMI) += vhost_scmi.o
+vhost_scmi-y := scmi.o
+
obj-$(CONFIG_VHOST) += vhost.o

obj-$(CONFIG_VHOST_IOTLB) += vhost_iotlb.o
diff --git a/drivers/vhost/scmi.c b/drivers/vhost/scmi.c
new file mode 100644
index 000000000000..4ed0e6419ab5
--- /dev/null
+++ b/drivers/vhost/scmi.c
@@ -0,0 +1,466 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/eventfd.h>
+#include <linux/vhost.h>
+#include <linux/miscdevice.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/mutex.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/sched/signal.h>
+#include <linux/vmalloc.h>
+#include <uapi/linux/virtio_scmi.h>
+#include <linux/scmi_vio_backend.h>
+
+#include "vhost.h"
+#include "../firmware/arm_scmi/common.h"
+#include "../firmware/arm_scmi/virtio_backend/client_handle.h"
+
+enum {
+ VHOST_SCMI_VQ_TX = 0,
+ VHOST_SCMI_VQ_RX = 1,
+ VHOST_SCMI_VQ_NUM = 2,
+};
+
+/*
+ * P2A_CHANNELS and SHARED_MEMORY features are negotiated based
+ * on whether the backend device supports these features.
+ */
+enum {
+ VHOST_SCMI_FEATURES = VHOST_FEATURES,
+};
+
+struct vhost_scmi {
+ struct vhost_dev dev;
+ struct vhost_virtqueue vqs[VHOST_SCMI_VQ_NUM];
+ /* Single request/response msg per VQ */
+ void *req_msg_payld;
+ void *resp_msg_payld;
+ struct scmi_vio_client_h *client_handle;
+ atomic_t release_fe_channels;
+};
+
+static int vhost_scmi_handle_request(struct vhost_virtqueue *vq,
+ struct vhost_scmi *vh_scmi, const struct iovec *iov,
+ size_t req_sz, size_t *resp_sz)
+{
+ int ret;
+ struct iov_iter req_iter;
+
+ struct scmi_vio_be_msg req_msg_payld = {
+ .msg_payld = vh_scmi->req_msg_payld,
+ .msg_sz = req_sz,
+ };
+ struct scmi_vio_be_msg resp_msg_payld = {
+ .msg_payld = vh_scmi->resp_msg_payld,
+ .msg_sz = *resp_sz,
+ };
+
+ /* Clear request and response buffers */
+ memset(vh_scmi->req_msg_payld, 0, VIRTIO_SCMI_MAX_PDU_SIZE);
+ memset(vh_scmi->resp_msg_payld, 0, VIRTIO_SCMI_MAX_PDU_SIZE);
+
+ iov_iter_init(&req_iter, READ, iov, 1, req_sz);
+
+ if (unlikely(!copy_from_iter_full(
+ vh_scmi->req_msg_payld, req_sz, &req_iter))) {
+ vq_err(vq, "Faulted on SCMI request copy\n");
+ return -EFAULT;
+ }
+
+ ret = scmi_vio_be_request(vh_scmi->client_handle, &req_msg_payld,
+ &resp_msg_payld);
+ *resp_sz = resp_msg_payld.msg_sz;
+
+ return ret;
+}
+
+static int vhost_scmi_send_resp(struct vhost_scmi *vh_scmi,
+ const struct iovec *iov, size_t resp_sz)
+{
+ void *resp = iov->iov_base;
+
+ return copy_to_user(resp, vh_scmi->resp_msg_payld, resp_sz);
+}
+
+static void handle_scmi_rx_kick(struct vhost_work *work)
+{
+ pr_err("%s: unexpected call for rx kick\n", __func__);
+}
+
+static void handle_scmi_tx(struct vhost_scmi *vh_scmi)
+{
+ struct vhost_virtqueue *vq = &vh_scmi->vqs[VHOST_SCMI_VQ_TX];
+ unsigned int out, in;
+ int head;
+ void *private;
+ size_t req_sz, resp_sz = 0, orig_resp_sz;
+ int ret = 0;
+
+ mutex_lock(&vq->mutex);
+ private = vhost_vq_get_backend(vq);
+ if (!private) {
+ mutex_unlock(&vq->mutex);
+ return;
+ }
+
+ vhost_disable_notify(&vh_scmi->dev, vq);
+
+ for (;;) {
+ /*
+ * Skip descriptor processing, if teardown has started for the client.
+ * Enforce visibility by using atomic_add_return().
+ */
+ if (unlikely(atomic_add_return(0, &vh_scmi->release_fe_channels)))
+ break;
+
+ head = vhost_get_vq_desc(vq, vq->iov,
+ ARRAY_SIZE(vq->iov),
+ &out, &in,
+ NULL, NULL);
+ /* On error, stop handling until the next kick. */
+ if (unlikely(head < 0))
+ break;
+ /*
+ * Nothing new? Check if any new entry is available -
+ * vhost_enable_notify() returns non-zero.
+ * Otherwise wait for eventfd to tell us that client
+ * refilled.
+ */
+ if (head == vq->num) {
+ if (unlikely(vhost_enable_notify(&vh_scmi->dev, vq))) {
+ vhost_disable_notify(&vh_scmi->dev, vq);
+ continue;
+ }
+ break;
+ }
+
+ /*
+ * Each new scmi request over Virtio transport is a descriptor
+ * chain, consisting of 2 descriptors. First descriptor is
+ * the request desc and the second one is for response.
+ * vhost_get_vq_desc() checks the order of these 2 descriptors
+ * in the chain. Check for correct number of each:
+ * "in" = Number of response descriptors.
+ * "out" = Number of request descriptors.
+ *
+ * Note: All descriptor entries, which result in vhost error,
+ * are skipped. This will result in SCMI Virtio clients
+ * for these descs to timeout. At this layer, we do not have
+ * enough information, to populate the response descriptor.
+ * As a possible future enhancement, the client could handle
+ * 0-length response descriptors, and we could signal a
+ * 0-length response descriptor here on vhost errors.
+ */
+ if (in != 1 || out != 1) {
+ vq_err(vq, "Unexpected req(%d)/resp(%d) buffers\n",
+ out, in);
+ continue;
+ }
+
+ req_sz = iov_length(vq->iov, out);
+ resp_sz = orig_resp_sz = iov_length(&vq->iov[out], in);
+ /* Sanitize request and response buffer size */
+ if (!req_sz || !resp_sz
+ || req_sz > VIRTIO_SCMI_MAX_PDU_SIZE
+ || orig_resp_sz > VIRTIO_SCMI_MAX_PDU_SIZE) {
+ vq_err(vq,
+ "Unexpected len for SCMI req(%#zx)/resp(%#zx)\n",
+ req_sz, orig_resp_sz);
+ goto error_scmi_tx;
+ }
+
+ ret = vhost_scmi_handle_request(vq, vh_scmi, vq->iov, req_sz, &resp_sz);
+ if (ret) {
+ vq_err(vq, "Handle request failed with error : %d\n", ret);
+ goto error_scmi_tx;
+ }
+
+ if (resp_sz > orig_resp_sz) {
+ vq_err(vq,
+ "Unexpected response size: %#zx orig: %#zx\n",
+ resp_sz, orig_resp_sz);
+ resp_sz = orig_resp_sz;
+ }
+
+ ret = vhost_scmi_send_resp(vh_scmi, &vq->iov[out], resp_sz);
+ if (ret) {
+ vq_err(vq, "Send response failed with error : %d\n",
+ ret);
+ goto error_scmi_tx;
+ }
+ goto scmi_tx_signal;
+error_scmi_tx:
+ resp_sz = 0;
+scmi_tx_signal:
+ vhost_add_used_and_signal(&vh_scmi->dev, vq, head, resp_sz);
+ /*
+ * Implement vhost_exceeds_weight(), to avoid flooding of SCMI
+ * requests and starvation of other virtio clients?
+ */
+ }
+
+ mutex_unlock(&vq->mutex);
+}
+
+static void handle_scmi_tx_kick(struct vhost_work *work)
+{
+ struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
+ poll.work);
+ struct vhost_scmi *vh_scmi = container_of(vq->dev,
+ struct vhost_scmi, dev);
+
+ handle_scmi_tx(vh_scmi);
+}
+
+static int vhost_scmi_open(struct inode *inode, struct file *f)
+{
+ struct vhost_scmi *vh_scmi = kzalloc(sizeof(*vh_scmi), GFP_KERNEL);
+ struct vhost_dev *dev;
+ struct vhost_virtqueue **vqs;
+ int ret = -ENOMEM;
+
+ if (!vh_scmi)
+ return ret;
+
+ vqs = kcalloc(VHOST_SCMI_VQ_NUM, sizeof(*vqs), GFP_KERNEL);
+ if (!vqs)
+ goto free_vh_scmi;
+
+ vh_scmi->req_msg_payld = kzalloc(VIRTIO_SCMI_MAX_PDU_SIZE,
+ GFP_KERNEL);
+ if (!vh_scmi->req_msg_payld)
+ goto free_vqs;
+
+ vh_scmi->resp_msg_payld = kzalloc(VIRTIO_SCMI_MAX_PDU_SIZE,
+ GFP_KERNEL);
+ if (!vh_scmi->resp_msg_payld)
+ goto free_req_msg;
+
+ vh_scmi->client_handle = scmi_vio_get_client_h(vh_scmi);
+ if (!vh_scmi->client_handle)
+ goto free_resp_msg;
+
+ dev = &vh_scmi->dev;
+ vqs[VHOST_SCMI_VQ_RX] = &vh_scmi->vqs[VHOST_SCMI_VQ_RX];
+ vqs[VHOST_SCMI_VQ_TX] = &vh_scmi->vqs[VHOST_SCMI_VQ_TX];
+ vh_scmi->vqs[VHOST_SCMI_VQ_RX].handle_kick = handle_scmi_rx_kick;
+ vh_scmi->vqs[VHOST_SCMI_VQ_TX].handle_kick = handle_scmi_tx_kick;
+ /* Use kworker and disable weight checks */
+ vhost_dev_init(dev, vqs, VHOST_SCMI_VQ_NUM, UIO_MAXIOV,
+ 0, 0, true, NULL);
+
+ f->private_data = vh_scmi;
+ ret = scmi_vio_be_open(vh_scmi->client_handle);
+ if (ret) {
+ pr_err("SCMI Virtio backend open() failed with error %d\n", ret);
+ goto free_client_handle;
+ }
+ return ret;
+
+free_client_handle:
+ scmi_vio_put_client_h(vh_scmi->client_handle);
+free_resp_msg:
+ kfree(vh_scmi->resp_msg_payld);
+free_req_msg:
+ kfree(vh_scmi->req_msg_payld);
+free_vqs:
+ kfree(vqs);
+free_vh_scmi:
+ kfree(vh_scmi);
+ return ret;
+}
+
+static int vhost_scmi_start(struct vhost_scmi *vh_scmi)
+{
+ struct vhost_virtqueue *vq;
+ size_t i;
+ int ret;
+
+ mutex_lock(&vh_scmi->dev.mutex);
+
+ ret = vhost_dev_check_owner(&vh_scmi->dev);
+ if (ret)
+ goto err;
+
+ for (i = 0; i < ARRAY_SIZE(vh_scmi->vqs); i++) {
+ vq = &vh_scmi->vqs[i];
+
+ mutex_lock(&vq->mutex);
+
+ if (!vhost_vq_access_ok(vq)) {
+ ret = -EFAULT;
+ goto err_vq;
+ }
+
+ if (!vhost_vq_get_backend(vq)) {
+ vhost_vq_set_backend(vq, vh_scmi);
+ ret = vhost_vq_init_access(vq);
+ if (ret)
+ goto err_vq;
+ }
+
+ mutex_unlock(&vq->mutex);
+ }
+
+ mutex_unlock(&vh_scmi->dev.mutex);
+ return 0;
+
+err_vq:
+ mutex_unlock(&vq->mutex);
+ for (i = 0; i < ARRAY_SIZE(vh_scmi->vqs); i++) {
+ vq = &vh_scmi->vqs[i];
+
+ mutex_lock(&vq->mutex);
+ vhost_vq_set_backend(vq, NULL);
+ mutex_unlock(&vq->mutex);
+ }
+err:
+ mutex_unlock(&vh_scmi->dev.mutex);
+ return ret;
+}
+
+static int vhost_scmi_stop(struct vhost_scmi *vh_scmi)
+{
+ int ret = 0, i;
+ struct vhost_virtqueue *vq;
+
+ mutex_lock(&vh_scmi->dev.mutex);
+ ret = vhost_dev_check_owner(&vh_scmi->dev);
+ if (ret)
+ goto err;
+ for (i = 0; i < ARRAY_SIZE(vh_scmi->vqs); i++) {
+ vq = &vh_scmi->vqs[i];
+ mutex_lock(&vq->mutex);
+ vhost_vq_set_backend(vq, NULL);
+ mutex_unlock(&vq->mutex);
+ }
+err:
+ mutex_unlock(&vh_scmi->dev.mutex);
+ return ret;
+}
+
+static void vhost_scmi_flush_vq(struct vhost_scmi *vh_scmi, int index)
+{
+ vhost_poll_flush(&vh_scmi->vqs[index].poll);
+}
+
+static void vhost_scmi_flush(struct vhost_scmi *vh_scmi)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vh_scmi->vqs); i++)
+ vhost_scmi_flush_vq(vh_scmi, i);
+}
+
+static int vhost_scmi_release(struct inode *inode, struct file *f)
+{
+ struct vhost_scmi *vh_scmi = f->private_data;
+ int ret = 0;
+
+ atomic_set(&vh_scmi->release_fe_channels, 1);
+ vhost_scmi_stop(vh_scmi);
+ vhost_scmi_flush(vh_scmi);
+ vhost_dev_stop(&vh_scmi->dev);
+ vhost_dev_cleanup(&vh_scmi->dev);
+ /*
+ * We do an extra flush before freeing memory,
+ * since jobs can re-queue themselves.
+ */
+
+ vhost_scmi_flush(vh_scmi);
+ ret = scmi_vio_be_close(vh_scmi->client_handle);
+ if (ret)
+ pr_err("SCMI Virtio backend close() failed with error %d\n", ret);
+ scmi_vio_put_client_h(vh_scmi->client_handle);
+ kfree(vh_scmi->resp_msg_payld);
+ kfree(vh_scmi->req_msg_payld);
+ kfree(vh_scmi->dev.vqs);
+ kfree(vh_scmi);
+
+ return ret;
+}
+
+static int vhost_scmi_set_features(struct vhost_scmi *vh_scmi, u64 features)
+{
+ struct vhost_virtqueue *vq;
+ int i;
+
+ mutex_lock(&vh_scmi->dev.mutex);
+ if ((features & (1 << VHOST_F_LOG_ALL)) &&
+ !vhost_log_access_ok(&vh_scmi->dev)) {
+ mutex_unlock(&vh_scmi->dev.mutex);
+ return -EFAULT;
+ }
+
+ for (i = 0; i < VHOST_SCMI_VQ_NUM; ++i) {
+ vq = &vh_scmi->vqs[i];
+ mutex_lock(&vq->mutex);
+ vq->acked_features = features;
+ mutex_unlock(&vq->mutex);
+ }
+ mutex_unlock(&vh_scmi->dev.mutex);
+ return 0;
+}
+
+static long vhost_scmi_ioctl(struct file *f, unsigned int ioctl,
+ unsigned long arg)
+{
+ struct vhost_scmi *vh_scmi = f->private_data;
+ void __user *argp = (void __user *)arg;
+ u64 __user *featurep = argp;
+ u64 features;
+ int r, start;
+
+ switch (ioctl) {
+ case VHOST_GET_FEATURES:
+ features = VHOST_SCMI_FEATURES;
+ if (copy_to_user(featurep, &features, sizeof(features)))
+ return -EFAULT;
+ return 0;
+ case VHOST_SET_FEATURES:
+ if (copy_from_user(&features, featurep, sizeof(features)))
+ return -EFAULT;
+ if (features & ~VHOST_SCMI_FEATURES)
+ return -EOPNOTSUPP;
+ return vhost_scmi_set_features(vh_scmi, features);
+ case VHOST_SCMI_SET_RUNNING:
+ if (copy_from_user(&start, argp, sizeof(start)))
+ return -EFAULT;
+ if (start)
+ return vhost_scmi_start(vh_scmi);
+ else
+ return vhost_scmi_stop(vh_scmi);
+ default:
+ mutex_lock(&vh_scmi->dev.mutex);
+ r = vhost_dev_ioctl(&vh_scmi->dev, ioctl, argp);
+ if (r == -ENOIOCTLCMD)
+ r = vhost_vring_ioctl(&vh_scmi->dev, ioctl, argp);
+ vhost_scmi_flush(vh_scmi);
+ mutex_unlock(&vh_scmi->dev.mutex);
+ return r;
+ }
+}
+
+static const struct file_operations vhost_scmi_fops = {
+ .owner = THIS_MODULE,
+ .release = vhost_scmi_release,
+ .unlocked_ioctl = vhost_scmi_ioctl,
+ .compat_ioctl = compat_ptr_ioctl,
+ .open = vhost_scmi_open,
+ .llseek = noop_llseek,
+};
+
+static struct miscdevice vhost_scmi_misc = {
+ MISC_DYNAMIC_MINOR,
+ "vhost-scmi",
+ &vhost_scmi_fops,
+};
+module_misc_device(vhost_scmi_misc);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Host kernel accelerator for SCMI Virtio");
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index 5d99e7c242a2..e5ad915bed84 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -157,4 +157,7 @@
/* Get the count of all virtqueues */
#define VHOST_VDPA_GET_VQS_COUNT _IOR(VHOST_VIRTIO, 0x80, __u32)

+/* VHOST_SCMI specific defines */
+#define VHOST_SCMI_SET_RUNNING _IOW(VHOST_VIRTIO, 0x81, int)
+
#endif
--
2.17.1

2022-06-13 19:43:54

by Cristian Marussi

Subject: Re: [RFC 0/3] SCMI Vhost and Virtio backend implementation

+CC: Souvik

On Thu, Jun 09, 2022 at 12:49:53PM +0530, Neeraj Upadhyay wrote:
> This RFC series, provides ARM System Control and Management Interface (SCMI)
> protocol backend implementation for Virtio transport. The purpose of this

Hi Neeraj,

Thanks for this work. I only glanced through the series at first to
grasp a general understanding of it (without going into much detail for
now), and I have a few questions/concerns that I've noted down below.

I focused mainly on the backend server aims/functionalities/issues, ignoring
at first the vhost-scmi entry point, since the vhost-scmi accelerator is just
a (more-or-less) standard means of configuring and grabbing SCMI traffic
from the VMs into the Host Kernel, and so I found it more interesting to
first understand what we can do with such traffic.
(IOW the vhost-scmi layer is welcome, but it remains to be seen what to do with it...)

> feature is to provide para-virtualized interfaces to guest VMs, to various
> hardware blocks like clocks, regulators. This allows the guest VMs to
> communicate their resource needs to the host, in the absence of direct
> access to those resources.

In an SCMI stack the agents (like VMs) issue requests to an SCMI platform
backend that is in charge of policing and harmonizing such requests,
eventually denying some of them (possibly malicious) while allowing others
(possibly harmonizing/merging such reqs); with your solution, basically, the
SCMI backend in the Kernel marshals/conveys all such SCMI requests to the
proper Linux Kernel subsystem that is usually in charge of them, using
dedicated protocol handlers that basically translate SCMI requests into
Linux API calls on the Host. (I may have oversimplified or missed
something...)
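To make the point concrete, here is a minimal sketch of such a
request-to-Linux-API translation for a CLOCK_RATE_SET command. This is NOT
code from the series: the function names, struct layout, and the stubbed
clock framework are all invented for illustration; only the SCMI status
values follow the spec.

```c
/* Hypothetical sketch: an SCMI CLOCK_RATE_SET request translated into a
 * host clock-framework call. All names and the clk_set_rate_stub() are
 * invented for illustration, not taken from the series. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SCMI_SUCCESS		0
#define SCMI_NOT_FOUND		(-4)
#define SCMI_PROTOCOL_ERROR	(-10)

#define NUM_HOST_CLKS		4

/* Simplified wire layout of a CLOCK_RATE_SET payload. */
struct scmi_clk_rate_set {
	uint32_t flags;
	uint32_t clock_id;
	uint32_t rate_lo;
	uint32_t rate_hi;
};

/* Stand-in for the host clock framework's clk_set_rate(). */
static uint64_t host_clk_rates[NUM_HOST_CLKS];

static int clk_set_rate_stub(uint32_t id, uint64_t rate)
{
	if (id >= NUM_HOST_CLKS)
		return -1;
	host_clk_rates[id] = rate;
	return 0;
}

/* Protocol handler: validate the payload, call the host API, and return
 * the int32 SCMI status that would go into the response payload. */
int32_t scmi_be_clock_rate_set(const void *payload, size_t len)
{
	struct scmi_clk_rate_set req;
	uint64_t rate;

	if (len < sizeof(req))
		return SCMI_PROTOCOL_ERROR;
	memcpy(&req, payload, sizeof(req));
	rate = ((uint64_t)req.rate_hi << 32) | req.rate_lo;
	if (clk_set_rate_stub(req.clock_id, rate))
		return SCMI_NOT_FOUND;
	return SCMI_SUCCESS;
}
```

The real backend would of course dispatch through per-protocol handler
tables and honour whatever policy layer sits in front of the host API.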

At the price of a bit of overhead and code duplication introduced by
this SCMI Backend, you can indeed leverage the existing mechanisms for
resource accounting and sharing included in such Linux subsystems (like the
Clock framework), and that's nice and useful, BUT how do you police/filter
(possibly dynamically, as VMs come and go) what these VMs can see and do
with these resources?

... MORE importantly, how do you protect the Host (or another VM) from
unacceptable (or possibly malicious) requests conveyed from one VM's request
vqueue into the Linux subsystems (like clocks)?

I saw you have added a good deal of DT bindings for the backend
describing protocols, so you could expose only some protocols via
the backend (if I get it right), but you cannot anyway selectively expose
only a subset of resources to the different agents; so, if you expose the
clock protocol, that will be visible to any VM, and an agent could potentially
kill the Host or mount some clock-related attack acting on the right clock.
(I mean you cannot describe in the Host DT a number X of clocks to be
supported by the Host Linux Clock framework BUT then selectively expose to
the SCMI agents only a subset Y < X to shield the Host from misbehaviour...
...at least not in a dynamic way that avoids baking a fixed policy into
the backend... or maybe I'm missing how you can do that; in such a case
please explain...)

Moreover, in a normal SCMI stack the server resides out of reach of the
OSPM agents, since the server, wherever it sits, has the last word and can
deny and block unreasonable/malicious requests while harmonizing others: this
means the typical SCMI platform fw is configured in such a way that it clearly
defines a set of policies to be enforced between the accesses of the various
agents. (And it can reside in the trusted codebase given its 'reduced'
size... even though these policies are probably not so dynamically
modifiable there either at the moment...)

With your approach of a Linux-Kernel-based SCMI platform backend you are
certainly using all the good and well-proven mechanisms offered by the
Kernel to share and coordinate access to such resources, which is good
(... even though Linux is not so small in terms of codebase to be used as
a TCB, to tell the truth :D), BUT I don't see the same level of policing
or filtering applied anywhere in the proposed RFCs, especially to protect
the Host, which in the end is supposed to use the same Linux subsystems and
possibly share some of those resources for its own needs.

I saw the basic Base protocol implementation you provided to expose the
supported backend protocols to the VMs; it would be useful to see how
you plan to handle something like the Clock protocol you mention in the
example below. (If you already have a Clock protocol backend as WIP,
it would be interesting to see it...)

Another issue/criticality that comes to my mind is how you gather, in
general, basic resource states/descriptors from the existing Linux subsystems
(even leaving out any policing concerns): as an example, how do you gather
from the Host Clock framework the list of available clocks and their rate
descriptors that you're going to expose to a specific VM, once the latter
issues the related SCMI commands to get to know which SCMI Clock domains are
available?
(...and I mean in a dynamic way, not using a builtin per-platform baked set of
resources known to be made available... I doubt that any sort of DT
description would be accepted in this regard...)

>
> 1. Architecture overview
> ---------------------
>
> Below diagram shows the overall software architecture of SCMI communication
> between guest VM and the host software. In this diagram, guest is a linux
> VM; also, host uses KVM linux.
>
> GUEST VM HOST
> +--------------------+ +---------------------+ +--------------+
> | a. Device A | | k. Device B | | PLL |
> | (Clock consumer) | | (Clock consumer) | | |
> +--------------------+ +---------------------+ +--------------+
> | | ^
> v v |
> +--------------------+ +---------------------+ +-----------------+
> | b. Clock Framework | | j. Clock Framework | -->| l. Clock Driver |
> +-- -----------------+ +---------------------+ +-----------------+
> | ^
> v |
> +--------------------+ +------------------------+
> | c. SCMI Clock | | i. SCMI Virtio Backend |
> +--------------------+ +------------------------+
> | ^
> v |
> +--------------------+ +----------------------+
> | d. SCMI Virtio | | h. SCMI Vhost |<-----------+
> +--------------------+ +----------------------+ |
> | ^ |
> v | |
> +-------------------------------------------------+ +-----------------+
> | e. Virtio Infra | | g. VMM |
> +-------------------------------------------------+ +-----------------+
> | ^ ^
> v | |
> +-------------------------------------------------+ |
> | f. Hypervisor |-------------
> +-------------------------------------------------+
>

Looking at the above schema and thinking out loud about where any dynamic
policing of the resources can fit (... and trying desperately NOT to push
that into the Kernel too :P...) ... I think XEN was trying something similar
(with a real backend SCMI platform FW at the end of the pipe, though, I think...),
and in their case the per-VM resource allocation was performed using SCMI
BASE_SET_DEVICE_PERMISSIONS commands issued by the Hypervisor/VMM itself,
I think, or by a Dom0 elected as a trusted agent and so allowed to configure
such resource partitioning...

https://www.mail-archive.com/[email protected]/msg113868.html

...maybe a similar approach, with some sort of SCMI Trusted Agent living within
the VMM and in charge of directing such resource partitioning between
VMs by issuing BASE_SET_DEVICE_PERMISSIONS towards the Kernel SCMI Virtio
Backend, could help keep at least the policy bits related to the VMs out of
the kernel/DTs, and possibly dynamically configurable following the VMs' lifecycle.

Even though, in our case, ALL the resource management by device ID would have to
happen in the Kernel SCMI backend in the end, given that that is where the SCMI
platform resides, at least you could keep the effective policy out of
kernel space, doing something like:

1. VMM/TrustedAgent queries the Kernel_SCMI_Virtio_backend for available resources

2. VMM/TrustedAgent decides resource allocation between VMs (and/or possibly the Host,
based on some configured policy)

3. VMM/TrustedAgent issues BASE_SET_DEVICE_PERMISSIONS/PROTOCOLS to the
Kernel_SCMI_Virtio_backend

4. Kernel_SCMI_Virtio_backend enforces resource partitioning and sharing
when processing subsequent VM SCMI requests coming via Vhost-SCMI

...where the TrustedAgent here could be (I guess) the VMM or the Host, or
both with different levels of privilege, if you don't want the VMM to be able
to configure resource access for the whole Host.
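Just to picture step 4 above — and this is purely a hypothetical sketch,
with all names invented, not code from the series or from any existing SCMI
backend — the enforcement could boil down to a per-agent device-permission
table that the trusted agent fills in and the backend consults before
forwarding a VM's request to the host subsystem:

```c
/* Hypothetical sketch of per-agent resource filtering, in the spirit of
 * BASE_SET_DEVICE_PERMISSIONS. All names are invented for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_AGENTS	8
#define MAX_CLOCKS	32

/* Bit N of clk_perm[agent] set => that agent may touch clock N. */
static uint32_t clk_perm[MAX_AGENTS];

/* Step 3: the trusted agent (VMM/Host) configures permissions. */
void scmi_be_set_clk_permission(unsigned int agent, unsigned int clock_id,
				bool allow)
{
	if (agent >= MAX_AGENTS || clock_id >= MAX_CLOCKS)
		return;
	if (allow)
		clk_perm[agent] |= 1u << clock_id;
	else
		clk_perm[agent] &= ~(1u << clock_id);
}

/* Step 4: enforced on every subsequent request from that agent, before
 * the SCMI command is translated into a host API call. */
bool scmi_be_clk_request_allowed(unsigned int agent, unsigned int clock_id)
{
	if (agent >= MAX_AGENTS || clock_id >= MAX_CLOCKS)
		return false;
	return clk_perm[agent] & (1u << clock_id);
}
```

A denied lookup would then produce an SCMI DENIED status back to the VM
instead of ever reaching the Clock framework, which is where the Host
protection would come from.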

> a. Device A This is the client kernel driver in guest VM,
> for ex. diplay driver, which uses standard
> clock framework APIs to vote for a clock.
>
> b. Clock Framework Underlying kernel clock framework on
> guest.
>
> c. SCMI Clock SCMI interface based clock driver.
>
> d. SCMI Virtio Underlying SCMI framework, using Virtio as
> transport driver.
>
> e. Virtio Infra Virtio drivers on guest VM. These drivers
> initiate virtqueue requests over Virtio
> transport (MMIO/PCI), and forwards response
> to SCMI Virtio registered callbacks.
>
> f. Hypervisor Hosted Hypervisor (KVM for ex.), which traps
> and forwards requests on virtqueue ring
> buffers to the VMM.
>
> g. VMM Virtual Machine Monitor, running on host userspace,
> which manages the lifecycle of guest VMs, and forwards
> guest initiated virtqueue requests as IOCTLs to the
> Vhost driver on host.
>
> h. SCMI Vhost In kernel driver, which handles SCMI virtqueue
> requests from guest VMs. This driver forwards the
> requests to SCMI Virtio backend driver, and returns
> the response from backend, over the virtqueue ring
> buffers.
>
> i. SCMI Virtio Backend SCMI backend, which handles the incoming SCMI messages
> from SCMI Vhost driver, and forwards them to the
> backend protocols like clock and voltage protocols.
> The backend protocols uses the host apis for those
> resources like clock APIs provided by clock framework,
> to vote/request for the resource. The response from
> the host api is parceled into a SCMI response message,
> and is returned to the SCMI Vhost driver. The SCMI
> Vhost driver in turn, returns the reponse over the
> Virtqueue reponse buffers.
>

Last but not least, this SCMI Virtio Backend layer, in charge of
processing incoming SCMI packets, interfacing with the final
Linux-subsystem backends and building SCMI replies from Linux, will
introduce a certain level of code/function duplication, given that the
same basic SCMI processing capabilities have already been baked into the
SCMI stacks found in SCP and in TF-A (... and maybe a few other
proprietary backends)...

... but this is maybe something to be addressed in general in a
different context, not something that can be addressed by this series.

Sorry for the usual flood of words :P ... I'll have a more in-depth
review of the series in the next days; for now I just wanted to share my
concerns and (maybe wrong) understanding, and see what you or Sudeep and
Souvik think about it.

Thanks,
Cristian

2022-06-30 14:45:51

by Vincent Guittot

Subject: Re: [RFC 0/3] SCMI Vhost and Virtio backend implementation

Hi Neeraj and Cristian,

On Mon, 13 Jun 2022 at 19:20, Cristian Marussi <[email protected]> wrote:
>
> +CC: Souvik
>
> On Thu, Jun 09, 2022 at 12:49:53PM +0530, Neeraj Upadhyay wrote:
> > This RFC series, provides ARM System Control and Management Interface (SCMI)
> > protocol backend implementation for Virtio transport. The purpose of this
>
> Hi Neeraj,
>
> Thanks for this work, I only glanced through the series at first to
> grasp a general understanding of it (without goind into much details for
> now) and I'd have a few questions/concerns that I'll noted down below.
>
> I focused mainly on the backend server aims/functionalities/issues ignoring
> at first the vhost-scmi entry-point since the vost-scmi accelerator is just
> a (more-or-less) standard means of configuring and grabbing SCMI traffic
> from the VMs into the Host Kernel and so I found more interesting at first
> to understand what we can do with such traffic at first.
> (IOW the vhost-scmi layer is welcome but remain to see what to do with it...)
>
> > feature is to provide para-virtualized interfaces to guest VMs, to various
> > hardware blocks like clocks, regulators. This allows the guest VMs to
> > communicate their resource needs to the host, in the absence of direct
> > access to those resources.

IIUC, you want to leverage the drivers already developed in the
kernel for those resources, instead of developing a dedicated SCMI
server. The main concern is that this also provides full access to
these resources from userspace without any control, which is a concern
also described by Cristian below. It would be good to describe how you
want to manage resource availability and permission access. This is
the main open point in your RFC so far.

>
> In an SCMI stack the agents (like VMs) issue requests to an SCMI platform
> backend that is in charge of policying and armonizing such requests
> eventually denying some of these (possibly malicious) while allowing others
> (possibly armonizing/merging such reqs); with your solution basically the
> SCMI backend in Kernel marshals/conveys all of such SCMI requests to the
> proper Linux Kernel subsystem that is usually in charge of it, using
> dedicated protocol handlers that basically translates SCMI requests to
> Linux APIs calls to the Host. (I may have oversimplified or missed
> something...)
>
> At the price of a bit of overhead and code-duplication introduced by
> this SCMI Backend you can indeed leverage the existing mechanisms for
> resource accounting and sharing included in such Linux subsystems (like
> Clock framework), and that's nice and useful, BUT how do you policy/filter
> (possibly dinamically as VMs come and go) what these VMs can see and do
> with these resources ?
>
> ... MORE importantly how do you protect the Host (or another VM) from
> unacceptable (or possibly malicious) requests conveyed from one VM request
> vqueue into the Linux subsystems (like clocks) ?
>
> I saw you have added a good deal of DT bindings for the backend
> describing protocols, so you could just expose only some protocols via
> the backend (if I get it right) but you cannot anyway selectively expose
> only a subset of resources to the different agents, so, if you expose the
> clock protocol, that will be visible by any VMs and an agent could potentially
> kill the Host or mount some clock related attack acting on the right clock.
> (I mean you cannot describe in the Host DT a number X of clocks to be
> supported by the Host Linux Clock framework BUT then expose selectively to
> the SCMI agents only a subset Y < X to shield the Host from misbehaviour...
> ...at least not in a dynamic way avoiding to bake a fixed policy into
> the backend...or maybe I'm missing how you can do that, in such a case
> please explain...)
>
> Moreover, in a normal SCMI stack the server resides out of reach from the
> OSPM agents since the server, wherever it sits, has the last word and can
> deny and block unreasonable/malicious requests while armonizing others: this
> means the typical SCMI platform fw is configured in such a way that clearly
> defines a set of policies to be enforced between the access of the various
> agents. (and it can reside in the trusted codebase given its 'reduced'
> size...even though this policies are probably at the moment not so
> dynamically modificable there either...)
>
> With your approach of a Linux Kernel based SCMI platform backend you are
> certainly using all the good and well proven mechanisms offered by the
> Kernel to share and co-ordinate access to such resources, which is good
> (.. even though Linux is not so small in term of codebase to be used as
> a TCB to tell the truth :D), BUT I don't see the same level of policying
> or filtering applied anywhere in the proposed RFCs, especially to protect
> the Host which at the end is supposed to use the same Linux subsystems and
> possibly share some of those resources for its own needs.
>
> I saw the Base protocol basic implementation you provided to expose the
> supported backend protocols to the VMs, it would be useful to see how
> you plan to handle something like the Clock protocol you mention in the
> example below. (if you have Clock protocol backend that as WIP already
> would be interesting to see it...)
>
> Another issue/criticality that comes to my mind is how do you gather in
> general basic resources states/descriptors from the existing Linux subsystems
> (even leaving out any policying concerns): as an example, how do you gather
> from the Host Clock framework the list of available clocks and their rates
> descriptors that you're going expose to a specific VMs once this latter will
> issue the related SCMI commands to get to know which SCMI Clock domain are
> available ?
> (...and I mean in a dynamic way not using a builtin per-platform baked set of
> resources known to be made available... I doubt that any sort of DT
> description would be accepted in this regards ...)
>
> >
> > 1. Architecture overview
> > ---------------------
> >
> > Below diagram shows the overall software architecture of SCMI communication
> > between guest VM and the host software. In this diagram, guest is a linux
> > VM; also, host uses KVM linux.
> >
> > GUEST VM HOST
> > +--------------------+ +---------------------+ +--------------+
> > | a. Device A | | k. Device B | | PLL |
> > | (Clock consumer) | | (Clock consumer) | | |
> > +--------------------+ +---------------------+ +--------------+
> > | | ^
> > v v |
> > +--------------------+ +---------------------+ +-----------------+
> > | b. Clock Framework | | j. Clock Framework | -->| l. Clock Driver |
> > +-- -----------------+ +---------------------+ +-----------------+
> > | ^
> > v |
> > +--------------------+ +------------------------+
> > | c. SCMI Clock | | i. SCMI Virtio Backend |
> > +--------------------+ +------------------------+
> > | ^
> > v |
> > +--------------------+ +----------------------+
> > | d. SCMI Virtio | | h. SCMI Vhost |<-----------+
> > +--------------------+ +----------------------+ |
> > | ^ |
> > v | |
> > +-------------------------------------------------+ +-----------------+
> > | e. Virtio Infra | | g. VMM |
> > +-------------------------------------------------+ +-----------------+
> > | ^ ^
> > v | |
> > +-------------------------------------------------+ |
> > | f. Hypervisor |-------------
> > +-------------------------------------------------+
> >
>
> Looking at the above schema and thinking out loud where any dynamic
> policying against the resources can fit (..and trying desperately NOT to push
> that into the Kernel too :P...) ... I think that XEN was trying something similar
> (with a real backend SCMI platform FW at the end of the pipe though I think...) and
> in their case the per-VMs resource allocation was performed using SCMI
> BASE_SET_DEVICE_PERMISSIONS commands issued by the Hypervisor/VMM itself
> I think or by a Dom0 elected as a trusted agent and so allowed to configure
> such resource partitioning ...
>
> https://www.mail-archive.com/[email protected]/msg113868.html
>
> ...maybe a similar approach, with some sort of SCMI Trusted Agent living within
> the VMM and in charge of directing such resources' partitioning between
> VMs by issuing BASE_SET_DEVICE_PERMISSIONS towards the Kernel SCMI Virtio
> Backend, could help keeping at least the policy bits related to the VMs out of
> the kernel/DTs and possibly dynamically configurable following VMs lifecycle.
>
> Even though, in our case ALL the resource management by device ID would have to
> happen in the Kernel SCMI backend at the end, given that is where the SCMI
> platform resides indeed, BUT at least you could keep the effective policy out of
> kernel space, doing something like:
>
> 1. VMM/TrustedAgent query Kernel_SCMI_Virtio_backend for available resources
>
> 2. VMM/TrustedAg decides resources allocation between VMs (and/or possibly the Host
> based on some configured policy)
>
> 3. VMM/TrustedAgent issues BASE_SET_DEVICE_PERMISSIONS/PROTOCOLS to the
> Kernel_SCMI_Virtio_backend
>
> 4. Kernel_SCMI_Virtio_backend enforces resource partioning and sharing
> when processing subsequent VMs SCMI requests coming via Vhost-SCMI
>
> ...where the TrustedAgent here could be (I guess) the VMM or the Host or
> both with different level of privilege if you don't want the VMM to be able
> to configure resources access for the whole Host.
>
> > a. Device A This is the client kernel driver in guest VM,
> > for ex. display driver, which uses standard
> > clock framework APIs to vote for a clock.
> >
> > b. Clock Framework Underlying kernel clock framework on
> > guest.
> >
> > c. SCMI Clock SCMI interface based clock driver.
> >
> > d. SCMI Virtio Underlying SCMI framework, using Virtio as
> > transport driver.
> >
> > e. Virtio Infra Virtio drivers on guest VM. These drivers
> > initiate virtqueue requests over Virtio
> > transport (MMIO/PCI), and forward responses
> > to SCMI Virtio registered callbacks.
> >
> > f. Hypervisor Hosted Hypervisor (KVM for ex.), which traps
> > and forwards requests on virtqueue ring
> > buffers to the VMM.
> >
> > g. VMM Virtual Machine Monitor, running on host userspace,
> > which manages the lifecycle of guest VMs, and forwards
> > guest initiated virtqueue requests as IOCTLs to the
> > Vhost driver on host.
> >
> > h. SCMI Vhost In kernel driver, which handles SCMI virtqueue
> > requests from guest VMs. This driver forwards the
> > requests to SCMI Virtio backend driver, and returns
> > the response from backend, over the virtqueue ring
> > buffers.
> >
> > i. SCMI Virtio Backend SCMI backend, which handles the incoming SCMI messages
> > from SCMI Vhost driver, and forwards them to the
> > backend protocols like clock and voltage protocols.
> > The backend protocols use the host APIs for those
> > resources, like the clock APIs provided by the clock
> > framework, to vote/request for the resource. The
> > response from the host API is parceled into an SCMI
> > response message and returned to the SCMI Vhost
> > driver. The SCMI Vhost driver, in turn, returns the
> > response over the virtqueue response buffers.
> >
>
> Last but not least, this SCMI Virtio Backend layer in charge of
> processing incoming SCMI packets, interfacing with the Linux subsystems
> final backend and building SCMI replies from Linux will introduce a
> certain level of code/funcs duplication, given that these same basic SCMI
> processing capabilities have already been baked into the SCMI stacks found in
> SCP and in TF-A (.. and maybe a few other proprietary backends)...
>
> ... but this is something maybe to be addressed in general in a
> different context not something that can be addressed by this series.
>
> Sorry for the usual flood of words :P ... I'll have a more in-depth
> review of the series in the next days, for now I wanted just to share my
> concerns and (maybe wrong) understanding and see what you or Sudeep and
> Souvik think about.
>
> Thanks,
> Cristian
>

2022-07-09 03:41:52

by Mike Tipton

[permalink] [raw]
Subject: Re: [RFC 0/3] SCMI Vhost and Virtio backend implementation

I'll let Neeraj respond to more of the core backend details and policy
enforcement options, but I can provide some details for our prototype
clock protocol handler. Note that it's a pretty simple proof-of-concept
handler that's implemented entirely outside of the common clock
framework. It operates as just another client to the framework. This
approach has some limitations. And a more full-featured implementation
could benefit from being implemented in the clock framework itself. But
that level of support hasn't been necessary for our purposes yet.

On 6/13/2022 10:20 AM, Cristian Marussi wrote:
> +CC: Souvik
>
> On Thu, Jun 09, 2022 at 12:49:53PM +0530, Neeraj Upadhyay wrote:
>> This RFC series, provides ARM System Control and Management Interface (SCMI)
>> protocol backend implementation for Virtio transport. The purpose of this
>
> Hi Neeraj,
>
> Thanks for this work, I only glanced through the series at first to
> grasp a general understanding of it (without going into much detail for
> now) and I'd have a few questions/concerns that I've noted down below.
>
> I focused mainly on the backend server aims/functionalities/issues, ignoring
> at first the vhost-scmi entry-point since the vhost-scmi accelerator is just
> a (more-or-less) standard means of configuring and grabbing SCMI traffic
> from the VMs into the Host Kernel, and so I found it more interesting at first
> to understand what we can do with such traffic.
> (IOW the vhost-scmi layer is welcome but it remains to be seen what to do with it...)
>
>> feature is to provide para-virtualized interfaces to guest VMs, to various
>> hardware blocks like clocks, regulators. This allows the guest VMs to
>> communicate their resource needs to the host, in the absence of direct
>> access to those resources.
>
> In an SCMI stack the agents (like VMs) issue requests to an SCMI platform
> backend that is in charge of policing and harmonizing such requests,
> eventually denying some of these (possibly malicious) while allowing others
> (possibly harmonizing/merging such reqs); with your solution basically the
> SCMI backend in the Kernel marshals/conveys all of such SCMI requests to the
> proper Linux Kernel subsystem that is usually in charge of it, using
> dedicated protocol handlers that basically translate SCMI requests into
> Linux API calls on the Host. (I may have oversimplified or missed
> something...)
>
> At the price of a bit of overhead and code-duplication introduced by
> this SCMI Backend you can indeed leverage the existing mechanisms for
> resource accounting and sharing included in such Linux subsystems (like
> Clock framework), and that's nice and useful, BUT how do you police/filter
> (possibly dynamically as VMs come and go) what these VMs can see and do
> with these resources ?
>

Currently, our only level of filtering is for which clocks we choose to
expose over SCMI. Those chosen clocks are exposed to all VMs equally.
The clock protocol handler exposes a registration function, which we
call from our clock drivers. Which clocks we register is currently
hardcoded in the drivers themselves. We often want to register all the
clocks in a given driver, since we have separate drivers for each clock
controller and many clock controllers are already dedicated to a
particular core or subsystem. So if that core or subsystem needs to be
controlled by a VM, then we give the VM all of its clocks. This can mean
exposing a large number of clocks (in the hundreds).
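The registration flow described here could be sketched as below. This is an illustrative user-space model, not the RFC's actual interface: the function name `scmi_clk_backend_register()` and the entry layout are assumptions, and a real handler would store `struct clk` handles from the host clock framework.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_EXPOSED_CLKS 512

struct exposed_clk {
	const char *name;	/* host clock framework name */
	void *clk;		/* opaque struct clk handle in a real driver */
};

static struct exposed_clk exposed[MAX_EXPOSED_CLKS];
static size_t num_exposed;

/*
 * Hypothetical registration hook, called from each clock-controller
 * driver for every clock it chooses to expose. Returns the SCMI clock
 * domain id assigned to this clock, or -1 if the table is full.
 */
static int scmi_clk_backend_register(const char *name, void *clk)
{
	if (num_exposed >= MAX_EXPOSED_CLKS)
		return -1;
	exposed[num_exposed].name = name;
	exposed[num_exposed].clk = clk;
	return (int)num_exposed++;
}

/* Resolve an SCMI clock domain id back to the registered entry. */
static struct exposed_clk *scmi_clk_lookup(unsigned int domain_id)
{
	return domain_id < num_exposed ? &exposed[domain_id] : NULL;
}
```

Under this scheme a driver exposing its whole controller just calls the register function once per clock, which matches the "hundreds of clocks" scale mentioned above.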


> ... MORE importantly how do you protect the Host (or another VM) from
> unacceptable (or possibly malicious) requests conveyed from one VM request
> vqueue into the Linux subsystems (like clocks) ?
>

The clock protocol handler tracks its own reference counts for each
clock that's been registered with it. It'll only enable clocks through
the host framework when the reference count increases from 0 -> 1, and
it'll only disable clocks through host framework when the reference
count decreases from 1 -> 0. And since the clock framework has its own
internal reference counts, then it's not possible for a VM to disable
clocks that the host itself has enabled.

We don't support frequency aggregation, so a VM could override the
frequency request of another VM or of the host. We could support max
aggregation across VMs, so that a VM couldn't reduce the frequency below
what another VM has requested. But without clock framework changes, we
can't aggregate with the local host clients. So a VM could reduce the
frequency below what the host has requested.

Generally speaking we don't expect more than one entity (VM or host) to
control a given clock at a time. But all we can currently enforce is
that clocks only turn off when *all* entities (including the host)
request them to be off.
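The gating logic described above, plus the proposed (not yet implemented) max-rate aggregation, could be modeled as follows. This is a user-space sketch with illustrative names; the real handler would call clk_prepare_enable(), clk_disable_unprepare() and clk_set_rate() at the marked points, and host-side requests would remain invisible to the aggregation, as noted.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_AGENTS 8	/* per-VM agents; layout is an assumption */

struct backend_clk {
	int enable_count[MAX_AGENTS];		/* per-agent enable votes */
	unsigned long rate_req[MAX_AGENTS];	/* per-agent rate requests, 0 = none */
	bool host_enabled;			/* what gets forwarded to the host */
	unsigned long host_rate;
};

static int total_votes(struct backend_clk *c)
{
	int i, total = 0;

	for (i = 0; i < MAX_AGENTS; i++)
		total += c->enable_count[i];
	return total;
}

/* Forward an enable to the host only on the aggregate 0 -> 1 transition. */
static void vm_clk_enable(struct backend_clk *c, int agent)
{
	c->enable_count[agent]++;
	if (total_votes(c) == 1)
		c->host_enabled = true;		/* real code: clk_prepare_enable() */
}

/* Forward a disable to the host only on the aggregate 1 -> 0 transition. */
static void vm_clk_disable(struct backend_clk *c, int agent)
{
	if (c->enable_count[agent] > 0)
		c->enable_count[agent]--;
	if (total_votes(c) == 0)
		c->host_enabled = false;	/* real code: clk_disable_unprepare() */
}

/*
 * Proposed max aggregation: a VM can raise the effective rate, but can
 * never drop it below another VM's outstanding request.
 */
static void vm_clk_set_rate(struct backend_clk *c, int agent, unsigned long rate)
{
	unsigned long max = 0;
	int i;

	c->rate_req[agent] = rate;
	for (i = 0; i < MAX_AGENTS; i++)
		if (c->rate_req[i] > max)
			max = c->rate_req[i];
	c->host_rate = max;			/* real code: clk_set_rate() */
}
```

Because the host clock framework keeps its own internal reference counts, the enable/disable half of this already protects host-enabled clocks; only the rate half would need the framework changes mentioned above to also cover local host clients.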


> I saw you have added a good deal of DT bindings for the backend
> describing protocols, so you could just expose only some protocols via
> the backend (if I get it right) but you cannot anyway selectively expose
> only a subset of resources to the different agents, so, if you expose the
> clock protocol, that will be visible by any VMs and an agent could potentially
> kill the Host or mount some clock related attack acting on the right clock.
> (I mean you cannot describe in the Host DT a number X of clocks to be
> supported by the Host Linux Clock framework BUT then expose selectively to
> the SCMI agents only a subset Y < X to shield the Host from misbehaviour...
> ...at least not in a dynamic way avoiding to bake a fixed policy into
> the backend...or maybe I'm missing how you can do that, in such a case
> please explain...)
>
> Moreover, in a normal SCMI stack the server resides out of reach from the
> OSPM agents since the server, wherever it sits, has the last word and can
> deny and block unreasonable/malicious requests while harmonizing others: this
> means the typical SCMI platform fw is configured in such a way that clearly
> defines a set of policies to be enforced between the access of the various
> agents. (and it can reside in the trusted codebase given its 'reduced'
> size...even though these policies are probably at the moment not so
> dynamically modifiable there either...)
>
> With your approach of a Linux Kernel based SCMI platform backend you are
> certainly using all the good and well proven mechanisms offered by the
> Kernel to share and co-ordinate access to such resources, which is good
> (.. even though Linux is not so small in terms of codebase to be used as
> a TCB to tell the truth :D), BUT I don't see the same level of policing
> or filtering applied anywhere in the proposed RFCs, especially to protect
> the Host which at the end is supposed to use the same Linux subsystems and
> possibly share some of those resources for its own needs.
>
> I saw the Base protocol basic implementation you provided to expose the
> supported backend protocols to the VMs, it would be useful to see how
> you plan to handle something like the Clock protocol you mention in the
> example below. (if you have a Clock protocol backend as WIP already, it
> would be interesting to see it...)
>
> Another issue/criticality that comes to my mind is how do you gather in
> general basic resources states/descriptors from the existing Linux subsystems
> (even leaving out any policing concerns): as an example, how do you gather
> from the Host Clock framework the list of available clocks and their rate
> descriptors that you're going to expose to a specific VM once the latter
> issues the related SCMI commands to get to know which SCMI Clock domains are
> available ?
> (...and I mean in a dynamic way not using a builtin per-platform baked set of
> resources known to be made available... I doubt that any sort of DT
> description would be accepted in this regards ...)
>

As mentioned, the list of clocks we choose to expose is currently
hardcoded in the clock drivers outside of the clock framework. There is
no dynamic policy in place.

For supported rates, we currently just implement the
CLOCK_DESCRIBE_RATES command using rate ranges, rather than lists of
discrete rates (num_rates_flags[12] = 1). And we just communicate the
full u32 range 0..U32_MAX with step_size=1. We do this for simplicity.
Many of our clocks only support a small list of discrete rates (though
some support large ranges). If a VM requests a rate not aligned to these
discrete rates, then we'll just round up to what the host supports. We
currently operate under the assumption that the VM knows what it needs
and doesn't need to query the specific supported rates from the host.
That's fine for our current use cases, at least. Publishing
clock-specific rate lists and/or proper ranges would be more complicated
and require some amount of clock framework changes to get this information.
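Concretely, the simplified range reply described above could be encoded like this. The field layout follows the SCMI spec's CLOCK_DESCRIBE_RATES return values (bit 12 of num_rates_flags selecting the {lowest, highest, step} range format, rates as 64-bit values split into two 32-bit words); the struct and function names are illustrative, not taken from the prototype.

```c
#include <assert.h>
#include <stdint.h>

/* One SCMI clock rate: a 64-bit value as {low word, high word}. */
struct scmi_rate { uint32_t low; uint32_t high; };

/*
 * CLOCK_DESCRIBE_RATES return payload when the range format bit is set:
 * rates[] holds a {lowest, highest, step} triplet.
 */
struct scmi_describe_rates_resp {
	uint32_t num_rates_flags;
	struct scmi_rate rates[3];	/* lowest, highest, step */
};

#define RATES_RANGE_FMT	(1u << 12)	/* num_rates_flags[12] = 1 */
#define NUM_RATES(n)	((n) & 0xfff)	/* num_rates_flags[11:0] */

/*
 * Build the simplified reply described above: advertise the full u32
 * range 0..U32_MAX with step_size = 1, leaving the host to round the
 * guest's request up to a rate it actually supports.
 */
static void fill_describe_rates(struct scmi_describe_rates_resp *r)
{
	r->num_rates_flags = RATES_RANGE_FMT | 3;
	r->rates[0] = (struct scmi_rate){ .low = 0,          .high = 0 }; /* lowest  */
	r->rates[1] = (struct scmi_rate){ .low = UINT32_MAX, .high = 0 }; /* highest */
	r->rates[2] = (struct scmi_rate){ .low = 1,          .high = 0 }; /* step    */
}
```

Publishing per-clock lists or real ranges would replace fill_describe_rates() with a query into the clock framework, which is the framework change alluded to above.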


>>
>> 1. Architecture overview
>> ---------------------
>>
>> Below diagram shows the overall software architecture of SCMI communication
>> between guest VM and the host software. In this diagram, guest is a linux
>> VM; also, host uses KVM linux.
>>
>> GUEST VM HOST
>> +--------------------+ +---------------------+ +--------------+
>> | a. Device A | | k. Device B | | PLL |
>> | (Clock consumer) | | (Clock consumer) | | |
>> +--------------------+ +---------------------+ +--------------+
>> | | ^
>> v v |
>> +--------------------+ +---------------------+ +-----------------+
>> | b. Clock Framework | | j. Clock Framework | -->| l. Clock Driver |
>> +-- -----------------+ +---------------------+ +-----------------+
>> | ^
>> v |
>> +--------------------+ +------------------------+
>> | c. SCMI Clock | | i. SCMI Virtio Backend |
>> +--------------------+ +------------------------+
>> | ^
>> v |
>> +--------------------+ +----------------------+
>> | d. SCMI Virtio | | h. SCMI Vhost |<-----------+
>> +--------------------+ +----------------------+ |
>> | ^ |
>> v | |
>> +-------------------------------------------------+ +-----------------+
>> | e. Virtio Infra | | g. VMM |
>> +-------------------------------------------------+ +-----------------+
>> | ^ ^
>> v | |
>> +-------------------------------------------------+ |
>> | f. Hypervisor |-------------
>> +-------------------------------------------------+
>>
>
> Looking at the above schema and thinking out loud where any dynamic
> policing of the resources can fit (..and trying desperately NOT to push
> that into the Kernel too :P...) ... I think that XEN was trying something similar
> (with a real backend SCMI platform FW at the end of the pipe though I think...) and
> in their case the per-VMs resource allocation was performed using SCMI
> BASE_SET_DEVICE_PERMISSIONS commands issued by the Hypervisor/VMM itself
> I think or by a Dom0 elected as a trusted agent and so allowed to configure
> such resource partitioning ...
>
> https://www.mail-archive.com/[email protected]/msg113868.html
>
> ...maybe a similar approach, with some sort of SCMI Trusted Agent living within
> the VMM and in charge of directing such resources' partitioning between
> VMs by issuing BASE_SET_DEVICE_PERMISSIONS towards the Kernel SCMI Virtio
> Backend, could help keeping at least the policy bits related to the VMs out of
> the kernel/DTs and possibly dynamically configurable following VMs lifecycle.
>
> Even though, in our case ALL the resource management by device ID would have to
> happen in the Kernel SCMI backend at the end, given that is where the SCMI
> platform resides indeed, BUT at least you could keep the effective policy out of
> kernel space, doing something like:
>
> 1. VMM/TrustedAgent queries Kernel_SCMI_Virtio_backend for available resources
>
> 2. VMM/TrustedAgent decides resource allocation between VMs (and/or possibly the Host
> based on some configured policy)
>
> 3. VMM/TrustedAgent issues BASE_SET_DEVICE_PERMISSIONS/PROTOCOLS to the
> Kernel_SCMI_Virtio_backend
>
> 4. Kernel_SCMI_Virtio_backend enforces resource partitioning and sharing
> when processing subsequent VMs SCMI requests coming via Vhost-SCMI
>
> ...where the TrustedAgent here could be (I guess) the VMM or the Host or
> both with different level of privilege if you don't want the VMM to be able
> to configure resources access for the whole Host.
>
>> a. Device A This is the client kernel driver in guest VM,
>> for ex. display driver, which uses standard
>> clock framework APIs to vote for a clock.
>>
>> b. Clock Framework Underlying kernel clock framework on
>> guest.
>>
>> c. SCMI Clock SCMI interface based clock driver.
>>
>> d. SCMI Virtio Underlying SCMI framework, using Virtio as
>> transport driver.
>>
>> e. Virtio Infra Virtio drivers on guest VM. These drivers
>> initiate virtqueue requests over Virtio
>> transport (MMIO/PCI), and forward responses
>> to SCMI Virtio registered callbacks.
>>
>> f. Hypervisor Hosted Hypervisor (KVM for ex.), which traps
>> and forwards requests on virtqueue ring
>> buffers to the VMM.
>>
>> g. VMM Virtual Machine Monitor, running on host userspace,
>> which manages the lifecycle of guest VMs, and forwards
>> guest initiated virtqueue requests as IOCTLs to the
>> Vhost driver on host.
>>
>> h. SCMI Vhost In kernel driver, which handles SCMI virtqueue
>> requests from guest VMs. This driver forwards the
>> requests to SCMI Virtio backend driver, and returns
>> the response from backend, over the virtqueue ring
>> buffers.
>>
>> i. SCMI Virtio Backend SCMI backend, which handles the incoming SCMI messages
>> from SCMI Vhost driver, and forwards them to the
>> backend protocols like clock and voltage protocols.
>> The backend protocols use the host APIs for those
>> resources, like the clock APIs provided by the clock
>> framework, to vote/request for the resource. The
>> response from the host API is parceled into an SCMI
>> response message and returned to the SCMI Vhost
>> driver. The SCMI Vhost driver, in turn, returns the
>> response over the virtqueue response buffers.
>>
>
> Last but not least, this SCMI Virtio Backend layer in charge of
> processing incoming SCMI packets, interfacing with the Linux subsystems
> final backend and building SCMI replies from Linux will introduce a
> certain level of code/funcs duplication, given that these same basic SCMI
> processing capabilities have already been baked into the SCMI stacks found in
> SCP and in TF-A (.. and maybe a few other proprietary backends)...
>
> ... but this is something maybe to be addressed in general in a
> different context not something that can be addressed by this series.
>
> Sorry for the usual flood of words :P ... I'll have a more in-depth
> review of the series in the next days, for now I wanted just to share my
> concerns and (maybe wrong) understanding and see what you or Sudeep and
> Souvik think about.
>
> Thanks,
> Cristian
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

2022-07-27 04:26:51

by Neeraj Upadhyay

[permalink] [raw]
Subject: Re: [RFC 0/3] SCMI Vhost and Virtio backend implementation

Hi Cristian,

Thanks for your feedback! Sorry, it took long before replying. Few
thoughts inline to your comments.

On 6/13/2022 10:50 PM, Cristian Marussi wrote:
> +CC: Souvik
>
> On Thu, Jun 09, 2022 at 12:49:53PM +0530, Neeraj Upadhyay wrote:
>> This RFC series, provides ARM System Control and Management Interface (SCMI)
>> protocol backend implementation for Virtio transport. The purpose of this
>
> Hi Neeraj,
>
> Thanks for this work, I only glanced through the series at first to
> grasp a general understanding of it (without going into much detail for
> now) and I'd have a few questions/concerns that I've noted down below.
>
> I focused mainly on the backend server aims/functionalities/issues, ignoring
> at first the vhost-scmi entry-point since the vhost-scmi accelerator is just
> a (more-or-less) standard means of configuring and grabbing SCMI traffic
> from the VMs into the Host Kernel, and so I found it more interesting at first
> to understand what we can do with such traffic.
> (IOW the vhost-scmi layer is welcome but it remains to be seen what to do with it...)
>
>> feature is to provide para-virtualized interfaces to guest VMs, to various
>> hardware blocks like clocks, regulators. This allows the guest VMs to
>> communicate their resource needs to the host, in the absence of direct
>> access to those resources.
>
> In an SCMI stack the agents (like VMs) issue requests to an SCMI platform
> backend that is in charge of policing and harmonizing such requests,
> eventually denying some of these (possibly malicious) while allowing others
> (possibly harmonizing/merging such reqs); with your solution basically the
> SCMI backend in the Kernel marshals/conveys all of such SCMI requests to the
> proper Linux Kernel subsystem that is usually in charge of it, using
> dedicated protocol handlers that basically translate SCMI requests into
> Linux API calls on the Host. (I may have oversimplified or missed
> something...)
>
> At the price of a bit of overhead and code-duplication introduced by
> this SCMI Backend you can indeed leverage the existing mechanisms for
> resource accounting and sharing included in such Linux subsystems (like
> Clock framework), and that's nice and useful, BUT how do you police/filter
> (possibly dynamically as VMs come and go) what these VMs can see and do
> with these resources ?
>
> ... MORE importantly how do you protect the Host (or another VM) from
> unacceptable (or possibly malicious) requests conveyed from one VM request
> vqueue into the Linux subsystems (like clocks) ?
>
> I saw you have added a good deal of DT bindings for the backend
> describing protocols, so you could just expose only some protocols via
> the backend (if I get it right) but you cannot anyway selectively expose
> only a subset of resources to the different agents, so, if you expose the
> clock protocol, that will be visible by any VMs and an agent could potentially
> kill the Host or mount some clock related attack acting on the right clock.
> (I mean you cannot describe in the Host DT a number X of clocks to be
> supported by the Host Linux Clock framework BUT then expose selectively to
> the SCMI agents only a subset Y < X to shield the Host from misbehaviour...
> ...at least not in a dynamic way avoiding to bake a fixed policy into
> the backend...or maybe I'm missing how you can do that, in such a case
> please explain...)
>
> Moreover, in a normal SCMI stack the server resides out of reach from the
> OSPM agents since the server, wherever it sits, has the last word and can
> deny and block unreasonable/malicious requests while harmonizing others: this
> means the typical SCMI platform fw is configured in such a way that clearly
> defines a set of policies to be enforced between the access of the various
> agents. (and it can reside in the trusted codebase given its 'reduced'
> size...even though these policies are probably at the moment not so
> dynamically modifiable there either...)
>
> With your approach of a Linux Kernel based SCMI platform backend you are
> certainly using all the good and well proven mechanisms offered by the
> Kernel to share and co-ordinate access to such resources, which is good
> (.. even though Linux is not so small in terms of codebase to be used as
> a TCB to tell the truth :D), BUT I don't see the same level of policing
> or filtering applied anywhere in the proposed RFCs, especially to protect
> the Host which at the end is supposed to use the same Linux subsystems and
> possibly share some of those resources for its own needs.
>
> I saw the Base protocol basic implementation you provided to expose the
> supported backend protocols to the VMs, it would be useful to see how
> you plan to handle something like the Clock protocol you mention in the
> example below. (if you have a Clock protocol backend as WIP already, it
> would be interesting to see it...)
>
> Another issue/criticality that comes to my mind is how do you gather in
> general basic resources states/descriptors from the existing Linux subsystems
> (even leaving out any policing concerns): as an example, how do you gather
> from the Host Clock framework the list of available clocks and their rate
> descriptors that you're going to expose to a specific VM once the latter
> issues the related SCMI commands to get to know which SCMI Clock domains are
> available ?
> (...and I mean in a dynamic way not using a builtin per-platform baked set of
> resources known to be made available... I doubt that any sort of DT
> description would be accepted in this regards ...)
>
>>
>> 1. Architecture overview
>> ---------------------
>>
>> Below diagram shows the overall software architecture of SCMI communication
>> between guest VM and the host software. In this diagram, guest is a linux
>> VM; also, host uses KVM linux.
>>
>> GUEST VM HOST
>> +--------------------+ +---------------------+ +--------------+
>> | a. Device A | | k. Device B | | PLL |
>> | (Clock consumer) | | (Clock consumer) | | |
>> +--------------------+ +---------------------+ +--------------+
>> | | ^
>> v v |
>> +--------------------+ +---------------------+ +-----------------+
>> | b. Clock Framework | | j. Clock Framework | -->| l. Clock Driver |
>> +-- -----------------+ +---------------------+ +-----------------+
>> | ^
>> v |
>> +--------------------+ +------------------------+
>> | c. SCMI Clock | | i. SCMI Virtio Backend |
>> +--------------------+ +------------------------+
>> | ^
>> v |
>> +--------------------+ +----------------------+
>> | d. SCMI Virtio | | h. SCMI Vhost |<-----------+
>> +--------------------+ +----------------------+ |
>> | ^ |
>> v | |
>> +-------------------------------------------------+ +-----------------+
>> | e. Virtio Infra | | g. VMM |
>> +-------------------------------------------------+ +-----------------+
>> | ^ ^
>> v | |
>> +-------------------------------------------------+ |
>> | f. Hypervisor |-------------
>> +-------------------------------------------------+
>>
>
> Looking at the above schema and thinking out loud where any dynamic
> policing of the resources can fit (..and trying desperately NOT to push
> that into the Kernel too :P...) ... I think that XEN was trying something similar
> (with a real backend SCMI platform FW at the end of the pipe though I think...) and
> in their case the per-VMs resource allocation was performed using SCMI
> BASE_SET_DEVICE_PERMISSIONS commands issued by the Hypervisor/VMM itself
> I think or by a Dom0 elected as a trusted agent and so allowed to configure
> such resource partitioning ...
>
> https://www.mail-archive.com/[email protected]/msg113868.html
>
> ...maybe a similar approach, with some sort of SCMI Trusted Agent living within
> the VMM and in charge of directing such resources' partitioning between
> VMs by issuing BASE_SET_DEVICE_PERMISSIONS towards the Kernel SCMI Virtio
> Backend, could help keeping at least the policy bits related to the VMs out of
> the kernel/DTs and possibly dynamically configurable following VMs lifecycle.
>
> Even though, in our case ALL the resource management by device ID would have to
> happen in the Kernel SCMI backend at the end, given that is where the SCMI
> platform resides indeed, BUT at least you could keep the effective policy out of
> kernel space, doing something like:
>
> 1. VMM/TrustedAgent queries Kernel_SCMI_Virtio_backend for available resources
>
> 2. VMM/TrustedAgent decides resource allocation between VMs (and/or possibly the Host
> based on some configured policy)
>
> 3. VMM/TrustedAgent issues BASE_SET_DEVICE_PERMISSIONS/PROTOCOLS to the
> Kernel_SCMI_Virtio_backend
>
> 4. Kernel_SCMI_Virtio_backend enforces resource partitioning and sharing
> when processing subsequent VMs SCMI requests coming via Vhost-SCMI
>
> ...where the TrustedAgent here could be (I guess) the VMM or the Host or
> both with different level of privilege if you don't want the VMM to be able
> to configure resources access for the whole Host.
>

Thanks for sharing your thoughts on this. Some thoughts in response:

One of the challenges in device ID based resource management appears to
be mapping these devices to SCMI protocol resources (clocks,
regulators), and providing a means for the VMM/TrustedAgent (userspace)
to query and identify devices (to maintain policy information) and
request those SCMI devices for each VM.


The SCMI spec does not cover the discovery of device IDs, or how they
are mapped to protocol resources like clock and voltage IDs.

Going through previous discussions (thanks Vincent for sharing this
link!) [1], it looks like there have been discussions around similar
concepts, where a device node contains a <vendor>,scmi_devid device
property to map a device to the corresponding SCMI device. Those
discussions also mention some ongoing work in the SCMI spec on
device IDs. Putting some thoughts here on managing device IDs in
Kernel_SCMI_Virtio_backend. Looking for inputs on this.


1. Device representation in device tree

Alternative 1

Add an arm,scmi-devid property to device nodes, similar to the approach
in [1]. The device management software component of
Kernel_SCMI_Virtio_backend parses the device tree to get information
about these devices and maps them to protocol resources, by checking
the "clocks" and "-supply" regulator properties and finding the
corresponding SCMI clock / voltage ID for each.

With this approach, we would also need to maintain a name (using
arm,scmi-devname) in addition to the ID for each node? One problem
with this approach is that device IDs are not maintained in a
centralized place and are spread across the device nodes. How do we
assign these IDs to the various nodes, i.e. what is the correct device
ID for, let's say, the usb node, and how can this be enforced? Maybe we
do not need to maintain the device ID in the device tree and only
maintain arm,scmi-devname, and the device management sw component
dynamically assigns an incremental device ID to each device node which
has the arm,scmi-devname property. However, this means the device ID
for a node is not fixed and device policy needs to use device names,
which might be difficult to maintain?

Another problem looks to be the tight coupling between the resource
properties in a device node and its corresponding SCMI device. Parsing
the specific resource properties like "clocks" and "-supply" might
become cumbersome to extend to other resources (we would need to
identify which property, and its representation, to use for each
resource provided by an SCMI protocol). What if we want to map an SCMI
device to only a subset of, let's say, the clocks for a device node,
and not the full set? Do we need that facility?


Alternative 2

Maintain arm,scmi-devid property for SCMI devices defined within scmi
backend node.


// 1. Use phandle for a host device, to get device specific resources.

scmi-vio-backend {
compatible = "arm,scmi-vio-backend";

devices {

device@1 {
arm,scmi-devid = <1>;
arm,scmi-devname = "USB";
arm,scmi-basedev = <&usb_device>;
};
};
};

OR


// 2. Use phandles of specific clocks/regulators within SCMI device.

scmi-vio-backend {
compatible = "arm,scmi-vio-backend";

devices {

device@1 {
arm,scmi-devid = <1>;
arm,scmi-devname = "USB";
clocks = <&clock_phandle ...>;
*-supply = <&regulator_phandle>;
};
};
};

OR

// 3. Use SCMI protocol specific clock and voltage IDs in SCMI device.

scmi-vio-backend {
compatible = "arm,scmi-vio-backend";

devices {

device@1 {
arm,scmi-devid = <1>;
arm,scmi-devname = "USB";
arm,scmi-clock-ids = <clock_id1 clock_id2 ...>;
arm,scmi-voltage-ids = <voltage_id1 voltage_id2 ...>;
};
};
};


2. Resource discovery and policy management within VMM/TrustedAgent

a. VMM/TrustedAgent assigns agent ID to a VM using
SCMI_ASSIGN_AGENT_INFO ioctl to SCMI vhost. The same ID and name
mapping is returned by the BASE_DISCOVER_AGENT SCMI message.

b. VMM/TrustedAgent does SCMI_GET_DEVICE_ATTRIBUTES ioctl to get the
# of devices.

c. VMM/TrustedAgent does SCMI_GET_DEVICES ioctl to get the list of
all device IDs.

d. VMM/TrustedAgent does SCMI_GET_DEVICE_INFO to get the name for a
device ID.

e. VMM/TrustedAgent does BASE_SET_DEVICE_PERMISSIONS using ioctl to
allow/revoke permissions for an agent id (which maps to a VM), for
a device. VMM/TrustedAgent would need to maintain information
about which device IDs a VM is allowed to access. These policies
could be platform specific.
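The ioctl names in steps a-e are proposals in this thread; their argument structures are not defined here. The sketch below therefore models only the kernel-side state those ioctls would read and write, i.e. the per-agent, per-device permission table that BASE_SET_DEVICE_PERMISSIONS updates and that the backend would consult on every SCMI request arriving over vhost. All names and the default-deny layout are assumptions for illustration.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_AGENTS	8
#define MAX_DEVICES	64

struct scmi_device {
	const char *name;		/* e.g. "USB", from arm,scmi-devname */
	bool allowed[MAX_AGENTS];	/* per-agent access, default deny */
};

static struct scmi_device devices[MAX_DEVICES];
static unsigned int num_devices;

/*
 * Backend side of BASE_SET_DEVICE_PERMISSIONS (step e): the trusted
 * agent grants or revokes one agent's access to one device.
 */
static int set_device_permission(unsigned int dev, unsigned int agent,
				 bool allow)
{
	if (dev >= num_devices || agent >= MAX_AGENTS)
		return -1;
	devices[dev].allowed[agent] = allow;
	return 0;
}

/*
 * Enforcement point: every SCMI request conveyed from a VM via
 * Vhost-SCMI is checked against the table before the backend touches
 * any host subsystem on its behalf.
 */
static bool request_permitted(unsigned int dev, unsigned int agent)
{
	return dev < num_devices && agent < MAX_AGENTS &&
	       devices[dev].allowed[agent];
}
```

Keeping the table default-deny means a newly created VM can see nothing until the VMM/TrustedAgent explicitly grants it devices, which matches the goal of keeping policy out of the kernel and DTs.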


Thanks
Neeraj

[1]
https://lore.kernel.org/lkml/[email protected]/

>> a. Device A This is the client kernel driver in guest VM,
>> for ex. display driver, which uses standard
>> clock framework APIs to vote for a clock.
>>
>> b. Clock Framework Underlying kernel clock framework on
>> guest.
>>
>> c. SCMI Clock SCMI interface based clock driver.
>>
>> d. SCMI Virtio Underlying SCMI framework, using Virtio as
>> transport driver.
>>
>> e. Virtio Infra Virtio drivers on guest VM. These drivers
>> initiate virtqueue requests over Virtio
>> transport (MMIO/PCI), and forward responses
>> to SCMI Virtio registered callbacks.
>>
>> f. Hypervisor Hosted Hypervisor (KVM for ex.), which traps
>> and forwards requests on virtqueue ring
>> buffers to the VMM.
>>
>> g. VMM Virtual Machine Monitor, running on host userspace,
>> which manages the lifecycle of guest VMs, and forwards
>> guest initiated virtqueue requests as IOCTLs to the
>> Vhost driver on host.
>>
>> h. SCMI Vhost In kernel driver, which handles SCMI virtqueue
>> requests from guest VMs. This driver forwards the
>> requests to SCMI Virtio backend driver, and returns
>> the response from backend, over the virtqueue ring
>> buffers.
>>
>> i. SCMI Virtio Backend SCMI backend, which handles the incoming SCMI messages
>> from SCMI Vhost driver, and forwards them to the
>> backend protocols like clock and voltage protocols.
>> The backend protocols use the host APIs for those
>> resources, like the clock APIs provided by the clock
>> framework, to vote/request for the resource. The
>> response from the host API is parceled into an SCMI
>> response message and returned to the SCMI Vhost
>> driver. The SCMI Vhost driver, in turn, returns the
>> response over the virtqueue response buffers.
>>
>
> Last but not least, this SCMI Virtio Backend layer in charge of
> processing incoming SCMI packets, interfacing with the Linux subsystems
> final backend and building SCMI replies from Linux will introduce a
> certain level of code/funcs duplication, given that these same basic SCMI
> processing capabilities have already been baked into the SCMI stacks found in
> SCP and in TF-A (.. and maybe a few other proprietary backends)...
>
> ... but this is something maybe to be addressed in general in a
> different context not something that can be addressed by this series.
>
> Sorry for the usual flood of words :P ... I'll have a more in-depth
> review of the series in the next days, for now I wanted just to share my
> concerns and (maybe wrong) understanding and see what you or Sudeep and
> Souvik think about.
>
> Thanks,
> Cristian
>