2022-04-06 12:05:05

by Alex Bennée

[permalink] [raw]
Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi,

This is another attempt to come up with an RPMB API for the kernel.
The last discussion of this was in the thread:

Subject: [RFC PATCH 0/5] RPMB internal and user-space API + WIP virtio-rpmb frontend
Date: Wed, 3 Mar 2021 13:54:55 +0000
Message-Id: <[email protected]>

The series provides the RPMB subsystem, a new chardev API driven
by ioctls, and a full multi-block-capable virtio-rpmb driver. You can
find a working vhost-user backend in my QEMU branch here:

https://github.com/stsquad/qemu/commits/virtio/vhost-user-rpmb-v2

The branch is a little messy but I'll be posting a cleaned up version
in the following weeks. The only real changes to the backend are the
multi-block awareness and some tweaks to deal with QEMU internals
handling VirtIO config space messages which weren't previously
exercised. The test.sh script in tools/rpmb works through the various
transactions but isn't comprehensive.

Changes since the last posting:

- frame construction is mostly back in userspace

The previous discussion showed there wasn't any appetite for using
the kernel's keyctl() interface so userspace yet again takes
responsibility for constructing most* frames. Currently these are
all pure virtio-rpmb frames but the code is written so we can plug
in additional frame types. The virtio-rpmb driver does some
validation and in some cases (* read-blocks) constructs the request
frame in the driver. It would take someone implementing a driver for
another RPMB device type to see if this makes sense.

- user-space interface is still split across several ioctls

Although three of the ioctls share the common rpmb_ioc_reqresp_cmd
structure, this split means commands like capacity, write_count and
read_blocks can each have their own structure associated with the
command.

As before I shall follow up with the QEMU based vhost-user backend and
hopefully a rust-vmm re-implementation. However, I have no direct
interest in implementing the interfaces to real hardware. I leave that
to people who have access to such things and are willing to take up
the maintainer burden if this is merged.

Regards,

Alex


Alex Bennée (4):
rpmb: add Replay Protected Memory Block (RPMB) subsystem
char: rpmb: provide a user space interface
rpmb: create virtio rpmb frontend driver
tools rpmb: add RPMB access tool

.../userspace-api/ioctl/ioctl-number.rst | 1 +
MAINTAINERS | 9 +
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/rpmb/Kconfig | 28 +
drivers/rpmb/Makefile | 9 +
drivers/rpmb/cdev.c | 309 +++++
drivers/rpmb/core.c | 439 +++++++
drivers/rpmb/rpmb-cdev.h | 17 +
drivers/rpmb/virtio_rpmb.c | 518 ++++++++
include/linux/rpmb.h | 182 +++
include/uapi/linux/rpmb.h | 99 ++
include/uapi/linux/virtio_rpmb.h | 54 +
tools/Makefile | 16 +-
tools/rpmb/.gitignore | 2 +
tools/rpmb/Makefile | 41 +
tools/rpmb/key | 1 +
tools/rpmb/rpmb.c | 1083 +++++++++++++++++
tools/rpmb/test.sh | 22 +
19 files changed, 2828 insertions(+), 5 deletions(-)
create mode 100644 drivers/rpmb/Kconfig
create mode 100644 drivers/rpmb/Makefile
create mode 100644 drivers/rpmb/cdev.c
create mode 100644 drivers/rpmb/core.c
create mode 100644 drivers/rpmb/rpmb-cdev.h
create mode 100644 drivers/rpmb/virtio_rpmb.c
create mode 100644 include/linux/rpmb.h
create mode 100644 include/uapi/linux/rpmb.h
create mode 100644 include/uapi/linux/virtio_rpmb.h
create mode 100644 tools/rpmb/.gitignore
create mode 100644 tools/rpmb/Makefile
create mode 100644 tools/rpmb/key
create mode 100644 tools/rpmb/rpmb.c
create mode 100755 tools/rpmb/test.sh

--
2.30.2


2022-04-06 12:19:41

by Alex Bennée

Subject: [PATCH v2 1/4] rpmb: add Replay Protected Memory Block (RPMB) subsystem

A number of storage technologies support a specialised hardware
partition designed to be resistant to replay attacks. The underlying
HW protocols differ but the operations are common. The RPMB partition
cannot be accessed via the standard block layer, but only by a set of
specific commands: WRITE, READ, GET_WRITE_COUNTER, and PROGRAM_KEY.
Such a partition provides authenticated and replay-protected access,
making it suitable as secure storage.

The RPMB layer aims to provide an in-kernel API for Trusted Execution
Environment (TEE) devices that are capable of securely computing a
block frame signature. When a TEE device wishes to store
replay-protected data, it asks the storage device, via the RPMB
layer, to store the data.

A TEE device driver can claim the RPMB interface, for example, via
class_interface_register(). The RPMB layer provides a series of
operations for interacting with the device.

* program_key - a one-time operation for setting up a new device
* get_capacity - introspect the device capacity
* get_write_count - check the write counter
* write_blocks - write a series of blocks to the RPMB device
* read_blocks - read a series of blocks from the RPMB device

The detailed implementation of the access is left to the TEE
device driver itself.
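
The ops-table dispatch behind these operations can be illustrated
outside the kernel with a small userspace mock: a backend fills in an
rpmb_ops-style table and the core validates then forwards calls,
returning -EOPNOTSUPP for missing handlers. The names mirror the
patch, but the types here are simplified stand-ins, not the kernel
structs:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's rpmb_ops / rpmb_dev. */
struct mock_rpmb_ops {
	int (*get_capacity)(void *parent, unsigned char target);
};

struct mock_rpmb_dev {
	void *parent;
	unsigned char target;
	const struct mock_rpmb_ops *ops;
};

/* Mirrors the shape of rpmb_get_capacity(): validate the device,
 * then dispatch through the ops table. */
static int mock_rpmb_get_capacity(struct mock_rpmb_dev *rdev)
{
	if (!rdev)
		return -EINVAL;
	if (!rdev->ops || !rdev->ops->get_capacity)
		return -EOPNOTSUPP;
	return rdev->ops->get_capacity(rdev->parent, rdev->target);
}

/* A backend reporting 128 units of 128K, i.e. a full-size device. */
static int backend_capacity(void *parent, unsigned char target)
{
	(void)parent;
	(void)target;
	return 128;
}

static const struct mock_rpmb_ops backend_ops = {
	.get_capacity = backend_capacity,
};
```

The same pattern extends to the other four operations; a real backend
registers the table once and the core serialises access under the
device lock.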

[This is based-on Thomas Winkler's proposed API from:

https://lore.kernel.org/linux-mmc/[email protected]/

The principal difference is that the framing details and HW specific
bits (JEDEC vs NVMe frames) are left to the lower level TEE driver to
worry about. The eventual userspace ioctl interface will aim to be
similarly generic. This is an RFC to follow up on:

Subject: RPMB user space ABI
Date: Thu, 11 Feb 2021 14:07:00 +0000
Message-ID: <[email protected]>]

Signed-off-by: Alex Bennée <[email protected]>
Cc: Tomas Winkler <[email protected]>
Cc: Ulf Hansson <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Ilias Apalodimas <[email protected]>

---
v2
- dropped keyid stuff
- moved from char to its own subdirectory driver/rpmb
- fixed compile errors for bisectability
---
MAINTAINERS | 7 +
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/rpmb/Kconfig | 11 ++
drivers/rpmb/Makefile | 7 +
drivers/rpmb/core.c | 434 ++++++++++++++++++++++++++++++++++++++++++
include/linux/rpmb.h | 172 +++++++++++++++++
7 files changed, 634 insertions(+)
create mode 100644 drivers/rpmb/Kconfig
create mode 100644 drivers/rpmb/Makefile
create mode 100644 drivers/rpmb/core.c
create mode 100644 include/linux/rpmb.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cd0f68d4a34a..9ab02b589005 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16744,6 +16744,13 @@ T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml
F: drivers/media/platform/sunxi/sun8i-rotate/

+RPMB SUBSYSTEM
+M: ?
+L: [email protected]
+S: Supported
+F: drivers/rpmb/*
+F: include/linux/rpmb.h
+
RPMSG TTY DRIVER
M: Arnaud Pouliquen <[email protected]>
L: [email protected]
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 0d399ddaa185..90d18a8a0e72 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -236,4 +236,6 @@ source "drivers/interconnect/Kconfig"
source "drivers/counter/Kconfig"

source "drivers/most/Kconfig"
+
+source "drivers/rpmb/Kconfig"
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index a110338c860c..ade465c4589f 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -187,3 +187,4 @@ obj-$(CONFIG_GNSS) += gnss/
obj-$(CONFIG_INTERCONNECT) += interconnect/
obj-$(CONFIG_COUNTER) += counter/
obj-$(CONFIG_MOST) += most/
+obj-$(CONFIG_RPMB) += rpmb/
diff --git a/drivers/rpmb/Kconfig b/drivers/rpmb/Kconfig
new file mode 100644
index 000000000000..f2a9ebdc4435
--- /dev/null
+++ b/drivers/rpmb/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2015-2019, Intel Corporation.
+
+config RPMB
+ tristate "RPMB partition interface"
+ help
+ Unified RPMB partition interface for RPMB capable devices such as
+ eMMC and UFS. Provides an interface for in-kernel security controllers
+ to access the RPMB partition.
+
+ If unsure, select N.
diff --git a/drivers/rpmb/Makefile b/drivers/rpmb/Makefile
new file mode 100644
index 000000000000..24d4752a9a53
--- /dev/null
+++ b/drivers/rpmb/Makefile
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2015-2019, Intel Corporation.
+
+obj-$(CONFIG_RPMB) += rpmb.o
+rpmb-objs += core.o
+
+ccflags-y += -D__CHECK_ENDIAN__
diff --git a/drivers/rpmb/core.c b/drivers/rpmb/core.c
new file mode 100644
index 000000000000..50b358a14db6
--- /dev/null
+++ b/drivers/rpmb/core.c
@@ -0,0 +1,434 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2015 - 2019 Intel Corporation. All rights reserved.
+ * Copyright(c) 2021 - 2022 Linaro Ltd.
+ */
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+
+#include <linux/rpmb.h>
+
+static DEFINE_IDA(rpmb_ida);
+
+/**
+ * rpmb_dev_get() - increase rpmb device ref counter
+ * @rdev: rpmb device
+ */
+struct rpmb_dev *rpmb_dev_get(struct rpmb_dev *rdev)
+{
+ return get_device(&rdev->dev) ? rdev : NULL;
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_get);
+
+/**
+ * rpmb_dev_put() - decrease rpmb device ref counter
+ * @rdev: rpmb device
+ */
+void rpmb_dev_put(struct rpmb_dev *rdev)
+{
+ put_device(&rdev->dev);
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_put);
+
+/**
+ * rpmb_program_key() - program the RPMB access key
+ * @rdev: rpmb device
+ * @klen: length of key data
+ * @key: key data
+ *
+ * A successful programming of the key implies it has been set by the
+ * driver and can be used.
+ *
+ * Return:
+ * * 0 on success
+ * * -EINVAL on wrong parameters
+ * * -EPERM key already programmed
+ * * -EOPNOTSUPP if device doesn't support the requested operation
+ * * < 0 if the operation fails
+ */
+int rpmb_program_key(struct rpmb_dev *rdev, int klen, u8 *key, int rlen, u8 *resp)
+{
+ int err;
+
+ if (!rdev || !key)
+ return -EINVAL;
+
+ mutex_lock(&rdev->lock);
+ err = -EOPNOTSUPP;
+ if (rdev->ops && rdev->ops->program_key) {
+ err = rdev->ops->program_key(rdev->dev.parent, rdev->target,
+ klen, key, rlen, resp);
+ }
+ mutex_unlock(&rdev->lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(rpmb_program_key);
+
+/**
+ * rpmb_get_capacity() - returns the capacity of the rpmb device
+ * @rdev: rpmb device
+ *
+ * Return:
+ * * capacity of the device in units of 128K, on success
+ * * -EINVAL on wrong parameters
+ * * -EOPNOTSUPP if device doesn't support the requested operation
+ * * < 0 if the operation fails
+ */
+int rpmb_get_capacity(struct rpmb_dev *rdev)
+{
+ int err;
+
+ if (!rdev)
+ return -EINVAL;
+
+ mutex_lock(&rdev->lock);
+ err = -EOPNOTSUPP;
+ if (rdev->ops && rdev->ops->get_capacity)
+ err = rdev->ops->get_capacity(rdev->dev.parent, rdev->target);
+ mutex_unlock(&rdev->lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(rpmb_get_capacity);
+
+/**
+ * rpmb_get_write_count() - returns the write counter of the rpmb device
+ * @rdev: rpmb device
+ * @len: size of request frame
+ * @request: request frame
+ * @rlen: size of response frame
+ * @resp: response frame
+ *
+ * Return:
+ * * the write counter on success
+ * * -EINVAL on wrong parameters
+ * * -EOPNOTSUPP if device doesn't support the requested operation
+ * * < 0 if the operation fails
+ */
+int rpmb_get_write_count(struct rpmb_dev *rdev, int len, u8 *request, int rlen, u8 *resp)
+{
+ int err;
+
+ if (!rdev)
+ return -EINVAL;
+
+ mutex_lock(&rdev->lock);
+ err = -EOPNOTSUPP;
+ if (rdev->ops && rdev->ops->get_write_count)
+ err = rdev->ops->get_write_count(rdev->dev.parent, rdev->target,
+ len, request, rlen, resp);
+ mutex_unlock(&rdev->lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(rpmb_get_write_count);
+
+/**
+ * rpmb_write_blocks() - write data to RPMB device
+ * @rdev: rpmb device
+ * @len: size of the request frame(s)
+ * @request: request frames
+ * @rlen: size of the response frame
+ * @response: response frame
+ *
+ * Write a series of blocks to the RPMB device.
+ *
+ * Return:
+ * * 0 on success
+ * * -EINVAL on wrong parameters
+ * * -EACCES no key set
+ * * -EOPNOTSUPP if device doesn't support the requested operation
+ * * < 0 if the operation fails
+ */
+int rpmb_write_blocks(struct rpmb_dev *rdev, int len, u8 *request,
+ int rlen, u8 *response)
+{
+ int err;
+
+ if (!rdev || !len || !request)
+ return -EINVAL;
+
+ mutex_lock(&rdev->lock);
+ err = -EOPNOTSUPP;
+ if (rdev->ops && rdev->ops->write_blocks) {
+ err = rdev->ops->write_blocks(rdev->dev.parent, rdev->target,
+ len, request, rlen, response);
+ }
+ mutex_unlock(&rdev->lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(rpmb_write_blocks);
+
+/**
+ * rpmb_read_blocks() - read data from RPMB device
+ * @rdev: rpmb device
+ * @addr: block address (index of first block - 256B blocks)
+ * @count: number of 256B blocks
+ * @len: size of the receiving buffer
+ * @data: pointer to data to read
+ *
+ * Read a series of one or more blocks from the RPMB device.
+ *
+ * Return:
+ * * 0 on success
+ * * -EINVAL on wrong parameters
+ * * -EACCES no key set
+ * * -EOPNOTSUPP if device doesn't support the requested operation
+ * * < 0 if the operation fails
+ */
+int rpmb_read_blocks(struct rpmb_dev *rdev, int addr, int count, int len, u8 *data)
+{
+ int err;
+
+ if (!rdev || !count || !data)
+ return -EINVAL;
+
+ mutex_lock(&rdev->lock);
+ err = -EOPNOTSUPP;
+ if (rdev->ops && rdev->ops->read_blocks) {
+ err = rdev->ops->read_blocks(rdev->dev.parent, rdev->target,
+ addr, count, len, data);
+ }
+ mutex_unlock(&rdev->lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(rpmb_read_blocks);
+
+
+static void rpmb_dev_release(struct device *dev)
+{
+ struct rpmb_dev *rdev = to_rpmb_dev(dev);
+
+ ida_simple_remove(&rpmb_ida, rdev->id);
+ kfree(rdev);
+}
+
+struct class rpmb_class = {
+ .name = "rpmb",
+ .owner = THIS_MODULE,
+ .dev_release = rpmb_dev_release,
+};
+EXPORT_SYMBOL(rpmb_class);
+
+/**
+ * rpmb_dev_find_device() - return first matching rpmb device
+ * @data: data for the match function
+ * @match: the matching function
+ *
+ * Return: matching rpmb device or NULL on failure
+ */
+static
+struct rpmb_dev *rpmb_dev_find_device(const void *data,
+ int (*match)(struct device *dev,
+ const void *data))
+{
+ struct device *dev;
+
+ dev = class_find_device(&rpmb_class, NULL, data, match);
+
+ return dev ? to_rpmb_dev(dev) : NULL;
+}
+
+struct device_with_target {
+ const struct device *dev;
+ u8 target;
+};
+
+static int match_by_parent(struct device *dev, const void *data)
+{
+ const struct device_with_target *d = data;
+ struct rpmb_dev *rdev = to_rpmb_dev(dev);
+
+ return (d->dev && dev->parent == d->dev && rdev->target == d->target);
+}
+
+/**
+ * rpmb_dev_find_by_device() - retrieve rpmb device from the parent device
+ * @parent: parent device of the rpmb device
+ * @target: RPMB target/region within the physical device
+ *
+ * Return: NULL if there is no rpmb device associated with the parent device
+ */
+struct rpmb_dev *rpmb_dev_find_by_device(struct device *parent, u8 target)
+{
+ struct device_with_target t;
+
+ if (!parent)
+ return NULL;
+
+ t.dev = parent;
+ t.target = target;
+
+ return rpmb_dev_find_device(&t, match_by_parent);
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_find_by_device);
+
+/**
+ * rpmb_dev_unregister() - unregister RPMB partition from the RPMB subsystem
+ * @rdev: the rpmb device to unregister
+ * Return:
+ * * 0 on success
+ * * -EINVAL on wrong parameters
+ */
+int rpmb_dev_unregister(struct rpmb_dev *rdev)
+{
+ if (!rdev)
+ return -EINVAL;
+
+ mutex_lock(&rdev->lock);
+ device_del(&rdev->dev);
+ mutex_unlock(&rdev->lock);
+
+ rpmb_dev_put(rdev);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_unregister);
+
+/**
+ * rpmb_dev_unregister_by_device() - unregister RPMB partition
+ * from the RPMB subsystem
+ * @dev: the parent device of the rpmb device
+ * @target: RPMB target/region within the physical device
+ * Return:
+ * * 0 on success
+ * * -EINVAL on wrong parameters
+ * * -ENODEV if a device cannot be found.
+ */
+int rpmb_dev_unregister_by_device(struct device *dev, u8 target)
+{
+ struct rpmb_dev *rdev;
+
+ if (!dev)
+ return -EINVAL;
+
+ rdev = rpmb_dev_find_by_device(dev, target);
+ if (!rdev) {
+ dev_warn(dev, "no disk found %s\n", dev_name(dev->parent));
+ return -ENODEV;
+ }
+
+ rpmb_dev_put(rdev);
+
+ return rpmb_dev_unregister(rdev);
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_unregister_by_device);
+
+/**
+ * rpmb_dev_get_drvdata() - driver data getter
+ * @rdev: rpmb device
+ *
+ * Return: driver private data
+ */
+void *rpmb_dev_get_drvdata(const struct rpmb_dev *rdev)
+{
+ return dev_get_drvdata(&rdev->dev);
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_get_drvdata);
+
+/**
+ * rpmb_dev_set_drvdata() - driver data setter
+ * @rdev: rpmb device
+ * @data: data to store
+ */
+void rpmb_dev_set_drvdata(struct rpmb_dev *rdev, void *data)
+{
+ dev_set_drvdata(&rdev->dev, data);
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_set_drvdata);
+
+/**
+ * rpmb_dev_register - register RPMB partition with the RPMB subsystem
+ * @dev: storage device of the rpmb device
+ * @target: RPMB target/region within the physical device
+ * @ops: device specific operations
+ *
+ * Return: a pointer to rpmb device
+ */
+struct rpmb_dev *rpmb_dev_register(struct device *dev, u8 target,
+ const struct rpmb_ops *ops)
+{
+ struct rpmb_dev *rdev;
+ int id;
+ int ret;
+
+ if (!dev || !ops)
+ return ERR_PTR(-EINVAL);
+
+ if (!ops->program_key)
+ return ERR_PTR(-EINVAL);
+
+ if (!ops->get_capacity)
+ return ERR_PTR(-EINVAL);
+
+ if (!ops->get_write_count)
+ return ERR_PTR(-EINVAL);
+
+ if (!ops->write_blocks)
+ return ERR_PTR(-EINVAL);
+
+ if (!ops->read_blocks)
+ return ERR_PTR(-EINVAL);
+
+ rdev = kzalloc(sizeof(*rdev), GFP_KERNEL);
+ if (!rdev)
+ return ERR_PTR(-ENOMEM);
+
+ id = ida_simple_get(&rpmb_ida, 0, 0, GFP_KERNEL);
+ if (id < 0) {
+ ret = id;
+ goto exit;
+ }
+
+ mutex_init(&rdev->lock);
+ rdev->ops = ops;
+ rdev->id = id;
+ rdev->target = target;
+
+ dev_set_name(&rdev->dev, "rpmb%d", id);
+ rdev->dev.class = &rpmb_class;
+ rdev->dev.parent = dev;
+
+ rpmb_cdev_prepare(rdev);
+
+ ret = device_register(&rdev->dev);
+ if (ret)
+ goto exit;
+
+ dev_dbg(&rdev->dev, "registered device\n");
+
+ return rdev;
+
+exit:
+ if (id >= 0)
+ ida_simple_remove(&rpmb_ida, id);
+ kfree(rdev);
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(rpmb_dev_register);
+
+static int __init rpmb_init(void)
+{
+ ida_init(&rpmb_ida);
+ class_register(&rpmb_class);
+ return 0;
+}
+
+static void __exit rpmb_exit(void)
+{
+ class_unregister(&rpmb_class);
+ ida_destroy(&rpmb_ida);
+}
+
+subsys_initcall(rpmb_init);
+module_exit(rpmb_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("RPMB class");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/rpmb.h b/include/linux/rpmb.h
new file mode 100644
index 000000000000..4ed5e299623e
--- /dev/null
+++ b/include/linux/rpmb.h
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */
+/*
+ * Copyright (C) 2015-2019 Intel Corp. All rights reserved
+ * Copyright (C) 2021-2022 Linaro Ltd
+ */
+#ifndef __RPMB_H__
+#define __RPMB_H__
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/kref.h>
+
+/**
+ * struct rpmb_ops - RPMB ops to be implemented by underlying block device
+ *
+ * @program_key : program device key (once only op).
+ * @get_capacity : rpmb size in 128K units for the region/target.
+ * @get_write_count: return the device write counter
+ * @write_blocks : write blocks to RPMB device
+ * @read_blocks : read blocks from RPMB device
+ * @block_size : block size in half sectors (1 == 256B)
+ * @wr_cnt_max : maximal number of blocks that can be
+ * written in one access.
+ * @rd_cnt_max : maximal number of blocks that can be
+ * read in one access.
+ * @dev_id : unique device identifier
+ * @dev_id_len : unique device identifier length
+ */
+struct rpmb_ops {
+ int (*program_key)(struct device *dev, u8 target,
+ int keylen, u8 *key_frame,
+ int rlen, u8 *resp);
+ int (*get_capacity)(struct device *dev, u8 target);
+ int (*get_write_count)(struct device *dev, u8 target,
+ int len, u8 *requests,
+ int rlen, u8 *resp);
+ int (*write_blocks)(struct device *dev, u8 target,
+ int len, u8 *requests,
+ int rlen, u8 *resp);
+ int (*read_blocks)(struct device *dev, u8 target,
+ int addr, int count,
+ int len, u8 *data);
+ u16 block_size;
+ u16 wr_cnt_max;
+ u16 rd_cnt_max;
+ const u8 *dev_id;
+ size_t dev_id_len;
+};
+
+/**
+ * struct rpmb_dev - device which can support RPMB partition
+ *
+ * @lock : the device lock
+ * @dev : device
+ * @id : device id
+ * @target : RPMB target/region within the physical device
+ * @ops : operation exported by rpmb
+ */
+struct rpmb_dev {
+ struct mutex lock; /* device serialization lock */
+ struct device dev;
+ int id;
+ u8 target;
+ const struct rpmb_ops *ops;
+};
+
+#define to_rpmb_dev(x) container_of((x), struct rpmb_dev, dev)
+
+#if IS_ENABLED(CONFIG_RPMB)
+struct rpmb_dev *rpmb_dev_get(struct rpmb_dev *rdev);
+void rpmb_dev_put(struct rpmb_dev *rdev);
+struct rpmb_dev *rpmb_dev_find_by_device(struct device *parent, u8 target);
+struct rpmb_dev *rpmb_dev_get_by_type(u32 type);
+struct rpmb_dev *rpmb_dev_register(struct device *dev, u8 target,
+ const struct rpmb_ops *ops);
+void *rpmb_dev_get_drvdata(const struct rpmb_dev *rdev);
+void rpmb_dev_set_drvdata(struct rpmb_dev *rdev, void *data);
+int rpmb_dev_unregister(struct rpmb_dev *rdev);
+int rpmb_dev_unregister_by_device(struct device *dev, u8 target);
+
+int rpmb_program_key(struct rpmb_dev *rdev,
+ int klen, u8 *key, int rlen, u8 *resp);
+int rpmb_get_capacity(struct rpmb_dev *rdev);
+int rpmb_get_write_count(struct rpmb_dev *rdev,
+ int len, u8 *request, int rlen, u8 *resp);
+int rpmb_write_blocks(struct rpmb_dev *rdev,
+ int len, u8 *request, int rlen, u8 *resp);
+int rpmb_read_blocks(struct rpmb_dev *rdev, int addr, int count, int len, u8 *data);
+
+#else
+static inline struct rpmb_dev *rpmb_dev_get(struct rpmb_dev *rdev)
+{
+ return NULL;
+}
+
+static inline void rpmb_dev_put(struct rpmb_dev *rdev) { }
+
+static inline struct rpmb_dev *rpmb_dev_find_by_device(struct device *parent,
+ u8 target)
+{
+ return NULL;
+}
+
+static inline
+struct rpmb_dev *rpmb_dev_get_by_type(u32 type)
+{
+ return NULL;
+}
+
+static inline void *rpmb_dev_get_drvdata(const struct rpmb_dev *rdev)
+{
+ return NULL;
+}
+
+static inline void rpmb_dev_set_drvdata(struct rpmb_dev *rdev, void *data)
+{
+}
+
+static inline struct rpmb_dev *
+rpmb_dev_register(struct device *dev, u8 target, const struct rpmb_ops *ops)
+{
+ return NULL;
+}
+
+static inline int rpmb_dev_unregister(struct rpmb_dev *dev)
+{
+ return 0;
+}
+
+static inline int rpmb_dev_unregister_by_device(struct device *dev, u8 target)
+{
+ return 0;
+}
+
+static inline int rpmb_program_key(struct rpmb_dev *rdev,
+ int klen, u8 *key,
+ int rlen, u8 *resp)
+{
+ return 0;
+}
+
+static inline int rpmb_set_key(struct rpmb_dev *rdev, u8 *key, int keylen)
+{
+ return 0;
+}
+
+static inline int rpmb_get_capacity(struct rpmb_dev *rdev)
+{
+ return 0;
+}
+
+static inline int rpmb_get_write_count(struct rpmb_dev *rdev,
+ int len, u8 *request, int rlen, u8 *resp)
+{
+ return 0;
+}
+
+static inline int rpmb_write_blocks(struct rpmb_dev *rdev,
+ int len, u8 *request, int rlen, u8 *resp);
+{
+ return 0;
+}
+
+static inline int rpmb_read_blocks(struct rpmb_dev *rdev, int addr, int count,
+ int len, u8 *data)
+{
+ return 0;
+}
+
+#endif /* CONFIG_RPMB */
+
+#endif /* __RPMB_H__ */
--
2.30.2

2022-04-06 13:48:00

by Alex Bennée

Subject: [PATCH v2 3/4] rpmb: create virtio rpmb frontend driver

This implements a virtio rpmb frontend driver for the RPMB subsystem.
This driver conforms to the RPMB internal API, which attempts to stay
common for the majority of cases and only exposes the low level frame
details as required. In those cases it is up to something outside of
the driver itself to do the appropriate MAC calculations.

The driver does some basic verification of the incoming data and
will reject frames which are the wrong size or contain the wrong
commands.
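
The shape of those size/command checks can be sketched in plain C.
The constants and the cut-down frame struct below are simplified
illustrations (the 0x0003 write code is the JEDEC value, assumed
shared by virtio-rpmb); this mirrors the write-path validation, not
the driver's exact code:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define VIRTIO_RPMB_FRAME_SZ 512
#define VIRTIO_RPMB_REQ_DATA_WRITE 0x0003 /* assumed request code */

/* Only the fields the check needs; values already in host order. */
struct frame {
	uint16_t req_resp;
	uint16_t block_count;
};

/* Reject a write request whose length, leading command, or block
 * count is wrong, mirroring the driver's front-line checks. */
static int validate_write_req(const struct frame *f, int len, int max_wr)
{
	if (!f || !len || (len % VIRTIO_RPMB_FRAME_SZ) != 0)
		return -EINVAL;
	if (f->req_resp != VIRTIO_RPMB_REQ_DATA_WRITE)
		return -EINVAL;
	if (f->block_count > max_wr)
		return -EINVAL;
	return 0;
}
```

Only frames passing these checks are wrapped into scatterlists and
sent over the virtqueue.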

Signed-off-by: Alex Bennée <[email protected]>
Cc: Tomas Winkler <[email protected]>

---
ajb:
- rewrite for new API
---
drivers/rpmb/Kconfig | 10 +
drivers/rpmb/Makefile | 1 +
drivers/rpmb/virtio_rpmb.c | 518 +++++++++++++++++++++++++++++++
include/uapi/linux/virtio_rpmb.h | 54 ++++
4 files changed, 583 insertions(+)
create mode 100644 drivers/rpmb/virtio_rpmb.c
create mode 100644 include/uapi/linux/virtio_rpmb.h

diff --git a/drivers/rpmb/Kconfig b/drivers/rpmb/Kconfig
index 126f336fe9ea..cd0d1bb10910 100644
--- a/drivers/rpmb/Kconfig
+++ b/drivers/rpmb/Kconfig
@@ -16,3 +16,13 @@ config RPMB_INTF_DEV
help
Say yes here if you want to access RPMB from user space
via character device interface /dev/rpmb%d
+
+config VIRTIO_RPMB
+ tristate "Virtio RPMB driver"
+ default n
+ depends on VIRTIO
+ select RPMB
+ help
+ Say yes here if you have a Virtio aware RPMB device or want to use
+ RPMB from Virtual Machine images.
+ This device interface is only for the guest/frontend virtio driver.
diff --git a/drivers/rpmb/Makefile b/drivers/rpmb/Makefile
index f54b3f30514b..4b397b50a42c 100644
--- a/drivers/rpmb/Makefile
+++ b/drivers/rpmb/Makefile
@@ -4,5 +4,6 @@
obj-$(CONFIG_RPMB) += rpmb.o
rpmb-objs += core.o
rpmb-$(CONFIG_RPMB_INTF_DEV) += cdev.o
+obj-$(CONFIG_VIRTIO_RPMB) += virtio_rpmb.o

ccflags-y += -D__CHECK_ENDIAN__
diff --git a/drivers/rpmb/virtio_rpmb.c b/drivers/rpmb/virtio_rpmb.c
new file mode 100644
index 000000000000..db309014a157
--- /dev/null
+++ b/drivers/rpmb/virtio_rpmb.c
@@ -0,0 +1,518 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Virtio RPMB Front End Driver
+ *
+ * Copyright (c) 2018-2019 Intel Corporation.
+ * Copyright (c) 2021-2022 Linaro Ltd.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/err.h>
+#include <linux/scatterlist.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/virtio.h>
+#include <linux/module.h>
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_rpmb.h>
+#include <linux/uaccess.h>
+#include <linux/byteorder/generic.h>
+#include <linux/rpmb.h>
+
+#define RPMB_MAC_SIZE 32
+#define VIRTIO_RPMB_FRAME_SZ 512
+
+static const char id[] = "RPMB:VIRTIO";
+
+struct virtio_rpmb_info {
+ /* The virtio device we're associated with */
+ struct virtio_device *vdev;
+
+ /* The virtq we use */
+ struct virtqueue *vq;
+
+ struct mutex lock; /* info lock */
+ wait_queue_head_t have_data;
+
+ /* Underlying RPMB device */
+ struct rpmb_dev *rdev;
+
+ /* Config values */
+ u8 max_wr, max_rd, capacity;
+};
+
+/**
+ * virtio_rpmb_recv_done() - vq completion callback
+ */
+static void virtio_rpmb_recv_done(struct virtqueue *vq)
+{
+ struct virtio_rpmb_info *vi;
+ struct virtio_device *vdev = vq->vdev;
+
+ vi = vq->vdev->priv;
+ if (!vi) {
+ dev_err(&vdev->dev, "Error: no vi data found.\n");
+ return;
+ }
+
+ wake_up(&vi->have_data);
+}
+
+/**
+ * do_virtio_transaction() - send sg list and wait for result
+ * @dev: linux device structure
+ * @vi: the device info (where the lock is)
+ * @sgs: array of scatterlists
+ * @out: total outgoing scatter lists
+ * @in: total returning scatter lists
+ *
+ * This is just a simple helper for processing the sg list. It will
+ * block until the response arrives. Returns number of bytes written
+ * back or negative if it failed.
+ */
+static int do_virtio_transaction(struct device *dev,
+ struct virtio_rpmb_info *vi,
+ struct scatterlist *sgs[],
+ int out, int in)
+{
+ int ret, len = 0;
+
+ mutex_lock(&vi->lock);
+ ret = virtqueue_add_sgs(vi->vq, sgs, out, in, vi, GFP_KERNEL);
+ if (ret) {
+ dev_err(dev, "failed to send %d, recv %d sgs (%d) to vq\n",
+ out, in, ret);
+ len = ret;
+ } else {
+ virtqueue_kick(vi->vq);
+ wait_event(vi->have_data, virtqueue_get_buf(vi->vq, &len));
+ }
+ mutex_unlock(&vi->lock);
+
+ return len;
+}
+
+/**
+ * rpmb_virtio_program_key(): program key into virtio device
+ * @dev: device handle
+ * @target: target region (unused for VirtIO devices)
+ * @klen: length of key programming request
+ * @key_frame: key programming frames
+ * @rlen: length of response buffer
+ * @resp_frame: pointer to optional response frame
+ *
+ * Handle programming of the key (VIRTIO_RPMB_REQ_PROGRAM_KEY)
+ *
+ * The mandatory first frame contains the programming sequence. An
+ * optional second frame may ask for the result of the operation
+ * (VIRTIO_RPMB_REQ_RESULT_READ) which would trigger a response frame.
+ *
+ * Returns success/fail with errno and optional response frame
+ */
+static int rpmb_virtio_program_key(struct device *dev, u8 target,
+ int klen, u8 *key_frame, int rlen, u8 *resp_frame)
+{
+ struct virtio_rpmb_info *vi = dev_get_drvdata(dev);
+ struct virtio_rpmb_frame *pkey = (struct virtio_rpmb_frame *) key_frame;
+ struct virtio_rpmb_frame *resp = NULL;
+ struct scatterlist out_frame;
+ struct scatterlist in_frame;
+ struct scatterlist *sgs[2] = { };
+ int len;
+
+ if (!pkey)
+ return -EINVAL;
+
+ if (be16_to_cpu(pkey->req_resp) != VIRTIO_RPMB_REQ_PROGRAM_KEY)
+ return -EINVAL;
+
+ /* validate incoming frame */
+ switch (klen) {
+ case VIRTIO_RPMB_FRAME_SZ:
+ if (rlen || resp_frame)
+ return -EINVAL;
+ break;
+ case VIRTIO_RPMB_FRAME_SZ * 2:
+ if (!rlen || !resp_frame)
+ return -EINVAL;
+ if (be16_to_cpu(pkey[1].req_resp) != VIRTIO_RPMB_REQ_RESULT_READ)
+ return -EINVAL;
+ if (rlen < VIRTIO_RPMB_FRAME_SZ)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* setup outgoing frame(s) */
+ sg_init_one(&out_frame, pkey, klen);
+ sgs[0] = &out_frame;
+
+ /* optional incoming frame */
+ if (rlen && resp_frame) {
+ resp = (struct virtio_rpmb_frame *) resp_frame;
+ sg_init_one(&in_frame, resp, sizeof(*resp));
+ sgs[1] = &in_frame;
+ }
+
+ len = do_virtio_transaction(dev, vi, sgs, 1, resp ? 1 : 0);
+
+ if (len > 0 && resp) {
+ if (be16_to_cpu(resp->req_resp) != VIRTIO_RPMB_RESP_PROGRAM_KEY) {
+ dev_err(dev, "Bad response from device (%x/%x)",
+ be16_to_cpu(resp->req_resp), be16_to_cpu(resp->result));
+ return -EPROTO;
+ } else {
+ /* map responses to better errors? */
+ return be16_to_cpu(resp->result) == VIRTIO_RPMB_RES_OK ? 0 : -EIO;
+ }
+ }
+
+ /* Something must have failed at this point. */
+ return len < 0 ? -EIO : 0;
+}
+
+static int rpmb_virtio_get_capacity(struct device *dev, u8 target)
+{
+ struct virtio_rpmb_info *vi = dev_get_drvdata(dev);
+ struct virtio_device *vdev = vi->vdev;
+
+ u8 capacity;
+
+ virtio_cread(vdev, struct virtio_rpmb_config, capacity, &capacity);
+
+ if (capacity > 0x80) {
+ dev_err(&vdev->dev, "Error: invalid capacity reported.\n");
+ capacity = 0x80;
+ }
+
+ return capacity;
+}
+
+static int rpmb_virtio_get_write_count(struct device *dev, u8 target,
+ int len, u8 *req, int rlen, u8 *resp)
+
+{
+ struct virtio_rpmb_info *vi = dev_get_drvdata(dev);
+ struct virtio_rpmb_frame *request = (struct virtio_rpmb_frame *) req;
+ struct virtio_rpmb_frame *response = (struct virtio_rpmb_frame *) resp;
+ struct scatterlist out_frame;
+ struct scatterlist in_frame;
+ struct scatterlist *sgs[2];
+ unsigned int received;
+
+ if (!len || len != VIRTIO_RPMB_FRAME_SZ || !request)
+ return -EINVAL;
+
+ if (!rlen || rlen != VIRTIO_RPMB_FRAME_SZ || !resp)
+ return -EINVAL;
+
+ if (be16_to_cpu(request->req_resp) != VIRTIO_RPMB_REQ_GET_WRITE_COUNTER)
+ return -EINVAL;
+
+ /* Wrap into SG array */
+ sg_init_one(&out_frame, request, VIRTIO_RPMB_FRAME_SZ);
+ sg_init_one(&in_frame, response, VIRTIO_RPMB_FRAME_SZ);
+ sgs[0] = &out_frame;
+ sgs[1] = &in_frame;
+
+ /* Send it, blocks until response */
+ received = do_virtio_transaction(dev, vi, sgs, 1, 1);
+
+ if (received != VIRTIO_RPMB_FRAME_SZ)
+ return -EPROTO;
+
+ if (be16_to_cpu(response->req_resp) != VIRTIO_RPMB_RESP_GET_COUNTER) {
+ dev_err(dev, "failed to get counter (%x/%x)",
+ be16_to_cpu(response->req_resp), be16_to_cpu(response->result));
+ return -EPROTO;
+ }
+
+ return be16_to_cpu(response->result) == VIRTIO_RPMB_RES_OK ?
+ be32_to_cpu(response->write_counter) : -EIO;
+}
+
+static int rpmb_virtio_write_blocks(struct device *dev, u8 target,
+ int len, u8 *req, int rlen, u8 *resp)
+{
+ struct virtio_rpmb_info *vi = dev_get_drvdata(dev);
+ struct virtio_rpmb_frame *request = (struct virtio_rpmb_frame *) req;
+ struct virtio_rpmb_frame *response = (struct virtio_rpmb_frame *) resp;
+ struct scatterlist out_frame;
+ struct scatterlist in_frame;
+ struct scatterlist *sgs[2];
+ int blocks, data_len, received;
+
+ if (!len || (len % VIRTIO_RPMB_FRAME_SZ) != 0 || !request)
+ return -EINVAL;
+
+ /* The first frame will contain the details of the request */
+ if (be16_to_cpu(request->req_resp) != VIRTIO_RPMB_REQ_DATA_WRITE)
+ return -EINVAL;
+
+ blocks = be16_to_cpu(request->block_count);
+ if (blocks > vi->max_wr)
+ return -EINVAL;
+
+ /*
+ * We either have exactly enough frames to write all the data
+ * or we have that plus a frame looking for a response.
+ */
+ data_len = blocks * VIRTIO_RPMB_FRAME_SZ;
+
+ if (len == data_len + VIRTIO_RPMB_FRAME_SZ) {
+ struct virtio_rpmb_frame *reply = &request[blocks];
+
+ if (be16_to_cpu(reply->req_resp) != VIRTIO_RPMB_REQ_RESULT_READ)
+ return -EINVAL;
+
+ if (!rlen || rlen != VIRTIO_RPMB_FRAME_SZ || !resp)
+ return -EINVAL;
+ } else if (len > data_len) {
+ return -E2BIG;
+ } else if (len < data_len) {
+ return -ENOSPC;
+ } else if (rlen || resp) {
+ return -EINVAL;
+ }
+
+ /* time to do the transaction */
+ sg_init_one(&out_frame, request, len);
+ sgs[0] = &out_frame;
+
+ /* optional incoming frame */
+ if (rlen && resp) {
+ sg_init_one(&in_frame, resp, VIRTIO_RPMB_FRAME_SZ);
+ sgs[1] = &in_frame;
+ }
+
+ received = do_virtio_transaction(dev, vi, sgs, 1, resp ? 1 : 0);
+
+ if (response && received != VIRTIO_RPMB_FRAME_SZ)
+ return -EPROTO;
+
+ if (response && be16_to_cpu(response->req_resp) != VIRTIO_RPMB_RESP_DATA_WRITE) {
+ dev_err(dev, "didn't get a response result (%x/%x)\n",
+ be16_to_cpu(response->req_resp), be16_to_cpu(response->result));
+ return -EPROTO;
+ }
+
+ /* no response frame was requested, so nothing more to check */
+ if (!response)
+ return 0;
+
+ return be16_to_cpu(response->result) == VIRTIO_RPMB_RES_OK ? 0 : -EIO;
+}
+
+/**
+ * rpmb_virtio_read_blocks(): read blocks of data
+ * @dev: device handle
+ * @target: target region (unused for VirtIO devices)
+ * @addr: block address to start reading from
+ * @count: number of blocks to read
+ * @len: length of receiving buffer
+ * @data: receiving buffer
+ *
+ * Read a number of blocks from RPMB device. As there is no
+ * authentication required to read data we construct the outgoing
+ * frame in this driver.
+ *
+ * Returns success/fail with errno and filling in the buffer pointed
+ * to by @data.
+ */
+static int rpmb_virtio_read_blocks(struct device *dev, u8 target,
+ int addr, int count, int len, u8 *data)
+{
+ struct virtio_rpmb_info *vi = dev_get_drvdata(dev);
+ struct virtio_rpmb_frame *request;
+ struct virtio_rpmb_frame *response = (struct virtio_rpmb_frame *) data;
+ struct scatterlist out_frame;
+ struct scatterlist in_frame;
+ struct scatterlist *sgs[2];
+ int computed_len = count * VIRTIO_RPMB_FRAME_SZ;
+ int received;
+
+ if (!count || !data)
+ return -EINVAL;
+
+ if (addr + count > vi->capacity)
+ return -ESPIPE;
+
+ if (count > vi->max_rd)
+ return -EINVAL;
+
+ /* the caller's buffer must be large enough for all requested frames */
+ if (len < computed_len)
+ return -EFBIG;
+
+ /*
+ * With the basics done we can construct our request. Use a
+ * zeroed allocation so the unused frame fields don't leak
+ * kernel memory to the device.
+ */
+ request = kzalloc(VIRTIO_RPMB_FRAME_SZ, GFP_KERNEL);
+ if (!request)
+ return -ENOMEM;
+
+ request->req_resp = cpu_to_be16(VIRTIO_RPMB_REQ_DATA_READ);
+ request->block_count = cpu_to_be16(count);
+ request->address = cpu_to_be16(addr);
+
+ /* time to do the transaction */
+ sg_init_one(&out_frame, request, sizeof(*request));
+ sgs[0] = &out_frame;
+ sg_init_one(&in_frame, data, len);
+ sgs[1] = &in_frame;
+
+ received = do_virtio_transaction(dev, vi, sgs, 1, 1);
+
+ kfree(request);
+
+ if (received != computed_len)
+ return -EPROTO;
+
+ if (be16_to_cpu(response->req_resp) != VIRTIO_RPMB_RESP_DATA_READ) {
+ dev_err(dev, "didn't get a response result (%x/%x)\n",
+ be16_to_cpu(response->req_resp), be16_to_cpu(response->result));
+ return -EPROTO;
+ }
+
+ return be16_to_cpu(response->result) == VIRTIO_RPMB_RES_OK ? 0 : -EIO;
+}
+
+static struct rpmb_ops rpmb_virtio_ops = {
+ .program_key = rpmb_virtio_program_key,
+ .get_capacity = rpmb_virtio_get_capacity,
+ .get_write_count = rpmb_virtio_get_write_count,
+ .write_blocks = rpmb_virtio_write_blocks,
+ .read_blocks = rpmb_virtio_read_blocks,
+};
+
+static int rpmb_virtio_dev_init(struct virtio_rpmb_info *vi)
+{
+ struct virtio_device *vdev = vi->vdev;
+ /* XXX this seems very roundabout */
+ struct device *dev = &vi->vq->vdev->dev;
+ int ret = 0;
+
+ virtio_cread(vdev, struct virtio_rpmb_config,
+ max_wr_cnt, &vi->max_wr);
+ virtio_cread(vdev, struct virtio_rpmb_config,
+ max_rd_cnt, &vi->max_rd);
+ virtio_cread(vdev, struct virtio_rpmb_config,
+ capacity, &vi->capacity);
+
+ rpmb_virtio_ops.dev_id_len = strlen(id);
+ rpmb_virtio_ops.dev_id = id;
+ rpmb_virtio_ops.wr_cnt_max = vi->max_wr;
+ rpmb_virtio_ops.rd_cnt_max = vi->max_rd;
+ rpmb_virtio_ops.block_size = 1;
+
+ vi->rdev = rpmb_dev_register(dev, 0, &rpmb_virtio_ops);
+ if (IS_ERR(vi->rdev)) {
+ ret = PTR_ERR(vi->rdev);
+ goto err;
+ }
+
+ dev_set_drvdata(dev, vi);
+err:
+ return ret;
+}
+
+static int virtio_rpmb_init(struct virtio_device *vdev)
+{
+ int ret;
+ struct virtio_rpmb_info *vi;
+
+ vi = kzalloc(sizeof(*vi), GFP_KERNEL);
+ if (!vi)
+ return -ENOMEM;
+
+ init_waitqueue_head(&vi->have_data);
+ mutex_init(&vi->lock);
+
+ /* link virtio_rpmb_info to virtio_device */
+ vdev->priv = vi;
+ vi->vdev = vdev;
+
+ /* We expect a single virtqueue. */
+ vi->vq = virtio_find_single_vq(vdev, virtio_rpmb_recv_done, "request");
+ if (IS_ERR(vi->vq)) {
+ dev_err(&vdev->dev, "get single vq failed!\n");
+ ret = PTR_ERR(vi->vq);
+ goto err;
+ }
+
+ /* create vrpmb device. */
+ ret = rpmb_virtio_dev_init(vi);
+ if (ret) {
+ dev_err(&vdev->dev, "create vrpmb device failed.\n");
+ goto err;
+ }
+
+ dev_info(&vdev->dev, "init done!\n");
+
+ return 0;
+
+err:
+ kfree(vi);
+ return ret;
+}
+
+static void virtio_rpmb_remove(struct virtio_device *vdev)
+{
+ struct virtio_rpmb_info *vi;
+
+ vi = vdev->priv;
+ if (!vi)
+ return;
+
+ if (wq_has_sleeper(&vi->have_data))
+ wake_up(&vi->have_data);
+
+ rpmb_dev_unregister(vi->rdev);
+
+ if (vdev->config->reset)
+ vdev->config->reset(vdev);
+
+ if (vdev->config->del_vqs)
+ vdev->config->del_vqs(vdev);
+
+ kfree(vi);
+}
+
+static int virtio_rpmb_probe(struct virtio_device *vdev)
+{
+ return virtio_rpmb_init(vdev);
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int virtio_rpmb_freeze(struct virtio_device *vdev)
+{
+ virtio_rpmb_remove(vdev);
+ return 0;
+}
+
+static int virtio_rpmb_restore(struct virtio_device *vdev)
+{
+ return virtio_rpmb_init(vdev);
+}
+#endif
+
+static struct virtio_device_id id_table[] = {
+ { VIRTIO_ID_RPMB, VIRTIO_DEV_ANY_ID },
+ { 0 },
+};
+
+static struct virtio_driver virtio_rpmb_driver = {
+ .driver.name = KBUILD_MODNAME,
+ .driver.owner = THIS_MODULE,
+ .id_table = id_table,
+ .probe = virtio_rpmb_probe,
+ .remove = virtio_rpmb_remove,
+#ifdef CONFIG_PM_SLEEP
+ .freeze = virtio_rpmb_freeze,
+ .restore = virtio_rpmb_restore,
+#endif
+};
+
+module_virtio_driver(virtio_rpmb_driver);
+MODULE_DEVICE_TABLE(virtio, id_table);
+
+MODULE_DESCRIPTION("Virtio rpmb frontend driver");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/include/uapi/linux/virtio_rpmb.h b/include/uapi/linux/virtio_rpmb.h
new file mode 100644
index 000000000000..f048cd968210
--- /dev/null
+++ b/include/uapi/linux/virtio_rpmb.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+
+#ifndef _UAPI_LINUX_VIRTIO_RPMB_H
+#define _UAPI_LINUX_VIRTIO_RPMB_H
+
+#include <linux/types.h>
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_types.h>
+
+struct virtio_rpmb_config {
+ __u8 capacity;
+ __u8 max_wr_cnt;
+ __u8 max_rd_cnt;
+} __attribute__((packed));
+
+/* RPMB Request Types (in .req_resp) */
+#define VIRTIO_RPMB_REQ_PROGRAM_KEY 0x0001
+#define VIRTIO_RPMB_REQ_GET_WRITE_COUNTER 0x0002
+#define VIRTIO_RPMB_REQ_DATA_WRITE 0x0003
+#define VIRTIO_RPMB_REQ_DATA_READ 0x0004
+#define VIRTIO_RPMB_REQ_RESULT_READ 0x0005
+
+/* RPMB Response Types (in .req_resp) */
+#define VIRTIO_RPMB_RESP_PROGRAM_KEY 0x0100
+#define VIRTIO_RPMB_RESP_GET_COUNTER 0x0200
+#define VIRTIO_RPMB_RESP_DATA_WRITE 0x0300
+#define VIRTIO_RPMB_RESP_DATA_READ 0x0400
+
+struct virtio_rpmb_frame {
+ __u8 stuff[196];
+ __u8 key_mac[32];
+ __u8 data[256];
+ __u8 nonce[16];
+ __be32 write_counter;
+ __be16 address;
+ __be16 block_count;
+ __be16 result;
+ __be16 req_resp;
+} __attribute__((packed));
+
+/* RPMB Operation Results (in .result) */
+#define VIRTIO_RPMB_RES_OK 0x0000
+#define VIRTIO_RPMB_RES_GENERAL_FAILURE 0x0001
+#define VIRTIO_RPMB_RES_AUTH_FAILURE 0x0002
+#define VIRTIO_RPMB_RES_COUNT_FAILURE 0x0003
+#define VIRTIO_RPMB_RES_ADDR_FAILURE 0x0004
+#define VIRTIO_RPMB_RES_WRITE_FAILURE 0x0005
+#define VIRTIO_RPMB_RES_READ_FAILURE 0x0006
+#define VIRTIO_RPMB_RES_NO_AUTH_KEY 0x0007
+#define VIRTIO_RPMB_RES_WRITE_COUNTER_EXPIRED 0x0080
+
+
+#endif /* _UAPI_LINUX_VIRTIO_RPMB_H */
--
2.30.2

2022-04-22 18:29:04

by Alex Bennée

[permalink] [raw]
Subject: Re: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver


Alex Bennée <[email protected]> writes:

> Hi,
>
> This is another attempt to come up with an RPMB API for the kernel.
> The last discussion of this was in the thread:

Ping?

Any other comments or reviews? Is there a desire to make other devices
that provide RPMB functionality visible via a common API?

>
> Subject: [RFC PATCH 0/5] RPMB internal and user-space API + WIP virtio-rpmb frontend
> Date: Wed, 3 Mar 2021 13:54:55 +0000
> Message-Id: <[email protected]>
>
> The series provides for the RPMB sub-system, a new chardev API driven
> by ioctls and a full multi-block capable virtio-rpmb driver. You can
> find a working vhost-user backend in my QEMU branch here:
>
> https://github.com/stsquad/qemu/commits/virtio/vhost-user-rpmb-v2
>
> The branch is a little messy but I'll be posting a cleaned up version
> in the following weeks. The only real changes to the backend is the
> multi-block awareness and some tweaks to deal with QEMU internals
> handling VirtIO config space messages which weren't previously
> exercised. The test.sh script in tools/rpmb works through the various
> transactions but isn't comprehensive.
>
> Changes since the last posting:
>
> - frame construction is mostly back in userspace
>
> The previous discussion showed there wasn't any appetite for using
> the kernels keyctl() interface so userspace yet again takes
> responsibility for constructing most* frames. Currently these are
> all pure virtio-rpmb frames but the code is written so we can plug
> in additional frame types. The virtio-rpmb driver does some
> validation and in some cases (* read-blocks) constructs the request
> frame in the driver. It would take someone implementing a driver for
> another RPMB device type to see if this makes sense.
>
> - user-space interface is still split across several ioctls
>
> Although 3 of the ioctls share the common rpmb_ioc_reqresp_cmd
> structure it does mean things like capacity, write_count and
> read_blocks can have their own structure associated with the
> command.
>
> As before I shall follow up with the QEMU based vhost-user backend and
> hopefully a rust-vmm re-implementation. However I've no direct
> interest in implementing the interfaces to real hardware. I leave that
> to people who have access to such things and are willing to take up
> the maintainer burden if this is merged.
>
> Regards,
>
> Alex
>
>
> Alex Bennée (4):
> rpmb: add Replay Protected Memory Block (RPMB) subsystem
> char: rpmb: provide a user space interface
> rpmb: create virtio rpmb frontend driver
> tools rpmb: add RPBM access tool
>
> .../userspace-api/ioctl/ioctl-number.rst | 1 +
> MAINTAINERS | 9 +
> drivers/Kconfig | 2 +
> drivers/Makefile | 1 +
> drivers/rpmb/Kconfig | 28 +
> drivers/rpmb/Makefile | 9 +
> drivers/rpmb/cdev.c | 309 +++++
> drivers/rpmb/core.c | 439 +++++++
> drivers/rpmb/rpmb-cdev.h | 17 +
> drivers/rpmb/virtio_rpmb.c | 518 ++++++++
> include/linux/rpmb.h | 182 +++
> include/uapi/linux/rpmb.h | 99 ++
> include/uapi/linux/virtio_rpmb.h | 54 +
> tools/Makefile | 16 +-
> tools/rpmb/.gitignore | 2 +
> tools/rpmb/Makefile | 41 +
> tools/rpmb/key | 1 +
> tools/rpmb/rpmb.c | 1083 +++++++++++++++++
> tools/rpmb/test.sh | 22 +
> 19 files changed, 2828 insertions(+), 5 deletions(-)
> create mode 100644 drivers/rpmb/Kconfig
> create mode 100644 drivers/rpmb/Makefile
> create mode 100644 drivers/rpmb/cdev.c
> create mode 100644 drivers/rpmb/core.c
> create mode 100644 drivers/rpmb/rpmb-cdev.h
> create mode 100644 drivers/rpmb/virtio_rpmb.c
> create mode 100644 include/linux/rpmb.h
> create mode 100644 include/uapi/linux/rpmb.h
> create mode 100644 include/uapi/linux/virtio_rpmb.h
> create mode 100644 tools/rpmb/.gitignore
> create mode 100644 tools/rpmb/Makefile
> create mode 100644 tools/rpmb/key
> create mode 100644 tools/rpmb/rpmb.c
> create mode 100755 tools/rpmb/test.sh


--
Alex Bennée

2022-06-16 16:20:49

by Harald Mommer

[permalink] [raw]
Subject: Re: [PATCH v2 3/4] rpmb: create virtio rpmb frontend driver


On 05.04.22 11:37, Alex Bennée wrote:
> +++ b/drivers/rpmb/virtio_rpmb.c
> ...
> +static int rpmb_virtio_write_blocks(struct device *dev, u8 target,
> + int len, u8 *req, int rlen, u8 *resp)
> +{
> + struct virtio_rpmb_info *vi = dev_get_drvdata(dev);
> + struct virtio_rpmb_frame *request = (struct virtio_rpmb_frame *) req;
> + struct virtio_rpmb_frame *response = (struct virtio_rpmb_frame *) resp;
> + struct scatterlist out_frame;
> + struct scatterlist in_frame;
> + struct scatterlist *sgs[2];
> + int blocks, data_len, received;
> +
> + if (!len || (len % VIRTIO_RPMB_FRAME_SZ) != 0 || !request)
> + return -EINVAL;
> +
> + /* The first frame will contain the details of the request */
> + if (be16_to_cpu(request->req_resp) != VIRTIO_RPMB_REQ_DATA_WRITE)
> + return -EINVAL;
> +
> + blocks = be16_to_cpu(request->block_count);
> + if (blocks > vi->max_wr)
> + return -EINVAL;

Not exactly. The virtio specification reserves 0 for "unlimited". And I
see no special handling for 0 in your code. Damned trap in the
specification.

What's "unlimited"? As the block_count in the RPMB frame is a be16 I
guess it's 65535. But considered theoretically, that gives a possible
max array allocation of 16 MB - 512B instead of the 16 KB - 512B you
have with 255. This "unlimited" in the virtio specification gives me a
headache: I don't like even the theoretical possibility of having to
allocate 16 MB dynamically because of it.

And of course there is the same problem in virtio_rpmb_read_blocks()
with max_rd.

A fix is needed: either 0 means 65535 or 0 means 255. Choosing 255 as
an implementation decision would be limiting things down, but maybe
without any practical impact. For UFS it's only a single byte
bRPMBReadWriteSize in the JESD220C-2.2 GEOMETRY DESCRIPTOR, and 0 is
not defined there as "unlimited". Unfortunately I've no idea about the
other possible underlying technologies (eMMC, NVMe, ...).

> ...
> +}
> +...
> +static int rpmb_virtio_read_blocks(struct device *dev, u8 target,
> + int addr, int count, int len, u8 *data)
> +{
> ...
> + if (count > vi->max_rd)
> + return -EINVAL;
See above.
> ...

--
Dipl.-Ing. Harald Mommer
Senior Software Engineer

OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin

Phone: +49 (30) 60 98 540-0 <== Zentrale
Fax: +49 (30) 60 98 540-99
E-Mail: [email protected]

http://www.opensynergy.com

Handelsregister: Amtsgericht Charlottenburg, HRB 108616B
Geschäftsführer/Managing Director: Regis Adjamah

2023-05-31 19:13:24

by Shyam Saini

[permalink] [raw]
Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi Alex,

[ Resending, Sorry for the noise ]

Are you still working on it or planning to resubmit it ?

[1] The current OP-TEE kernel driver implementation doesn't work when IMA is used with an OP-TEE-implemented fTPM.

The fTPM depends on tee-supplicant, which only comes up once user space is running, while IMA attestation happens at boot time and needs to extend the fTPM PCRs.

But IMA can't use the PCRs if the fTPM uses the secure eMMC RPMB partition, as OP-TEE can only access RPMB via tee-supplicant (user space). So there should be a fast path that allows the OP-TEE OS to access the RPMB partition without waiting for the user-space tee-supplicant.

To achieve this fast path the Linux OP-TEE driver and MMC driver need some work, and finally it will need the RPMB driver which you posted.

Please let me know what's your plan on this.

[1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html

Best Regards,
Shyam

2023-06-01 01:30:05

by Zhu, Bing

[permalink] [raw]
Subject: RE: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

As an alternative, is it possible to change the fTPM design not to depend on RPMB access at the early/boot stage? To my understanding, PCRs typically don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, TEE memory could be used as temporary storage).

Bing

IPAS Security Brown Belt (https://www.credly.com/badges/69ea809f-3a96-4bc7-bb2f-442c1b17af26)
System Software Engineering
Software and Advanced Technology Group
Zizhu Science Park, Shanghai, China

-----Original Message-----
From: Shyam Saini <[email protected]>
Sent: Thursday, June 1, 2023 3:10 AM
To: [email protected]
Cc: [email protected]; [email protected]; [email protected]; Zhu, Bing <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Winkler, Tomas <[email protected]>; [email protected]; Huang, Yang <[email protected]>; [email protected]; [email protected]; [email protected]
Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi Alex,

[ Resending, Sorry for the noise ]

Are you still working on it or planning to resubmit it ?

[1] The current optee tee kernel driver implementation doesn't work when IMA is used with optee implemented ftpm.

The ftpm has dependency on tee-supplicant which comes once the user space is up and running and IMA attestation happens at boot time and it requires to extend ftpm PCRs.

But IMA can't use PCRs if ftpm use secure emmc RPMB partition. As optee can only access RPMB via tee-supplicant(user space). So, there should be a fast path to allow optee os to access the RPMB parititon without waiting for user-space tee supplicant.

To achieve this fast path linux optee driver and mmc driver needs some work and finally it will need RPMB driver which you posted.

Please let me know what's your plan on this.

[1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html

Best Regards,
Shyam

2023-06-01 05:48:37

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi Bing

On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <[email protected]> wrote:
>
> As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)

I am not entirely sure this will solve our problem here. You are
right that we shouldn't depend on the supplicant to extend PCRs. But
what happens if an object is sealed against certain PCR values? We
are back to the same problem

Thanks
/Ilias
>
> Bing
>
> IPAS Security Brown Belt (https://www.credly.com/badges/69ea809f-3a96-4bc7-bb2f-442c1b17af26)
> System Software Engineering
> Software and Advanced Technology Group
> Zizhu Science Park, Shanghai, China
>
> -----Original Message-----
> From: Shyam Saini <[email protected]>
> Sent: Thursday, June 1, 2023 3:10 AM
> To: [email protected]
> Cc: [email protected]; [email protected]; [email protected]; Zhu, Bing <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Winkler, Tomas <[email protected]>; [email protected]; Huang, Yang <[email protected]>; [email protected]; [email protected]; [email protected]
> Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver
>
> Hi Alex,
>
> [ Resending, Sorry for the noise ]
>
> Are you still working on it or planning to resubmit it ?
>
> [1] The current optee tee kernel driver implementation doesn't work when IMA is used with optee implemented ftpm.
>
> The ftpm has dependency on tee-supplicant which comes once the user space is up and running and IMA attestation happens at boot time and it requires to extend ftpm PCRs.
>
> But IMA can't use PCRs if ftpm use secure emmc RPMB partition. As optee can only access RPMB via tee-supplicant(user space). So, there should be a fast path to allow optee os to access the RPMB parititon without waiting for user-space tee supplicant.
>
> To achieve this fast path linux optee driver and mmc driver needs some work and finally it will need RPMB driver which you posted.
>
> Please let me know what's your plan on this.
>
> [1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html
>
> Best Regards,
> Shyam

2023-06-01 06:00:56

by Sumit Garg

[permalink] [raw]
Subject: Re: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
<[email protected]> wrote:
>
> Hi Bing
>
> On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <[email protected]> wrote:
> >
> > As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)
>
> I am not entirely sure this will solve our problem here. You are
> right that we shouldn't depend on the supplicant to extend PCRs. But
> what happens if an object is sealed against certain PCR values? We
> are back to the same problem

+1

Temporary storage may be a stop gap solution for some use-cases but
having a fast path access to RPMB via kernel should be our final goal.
I would suggest we start small with the MMC subsystem to expose RPMB
access APIs for OP-TEE driver rather than a complete RPMB subsystem.

-Sumit

>
> Thanks
> /Ilias
> >
> > Bing
> >
> > IPAS Security Brown Belt (https://www.credly.com/badges/69ea809f-3a96-4bc7-bb2f-442c1b17af26)
> > System Software Engineering
> > Software and Advanced Technology Group
> > Zizhu Science Park, Shanghai, China
> >
> > -----Original Message-----
> > From: Shyam Saini <[email protected]>
> > Sent: Thursday, June 1, 2023 3:10 AM
> > To: [email protected]
> > Cc: [email protected]; [email protected]; [email protected]; Zhu, Bing <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Winkler, Tomas <[email protected]>; [email protected]; Huang, Yang <[email protected]>; [email protected]; [email protected]; [email protected]
> > Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver
> >
> > Hi Alex,
> >
> > [ Resending, Sorry for the noise ]
> >
> > Are you still working on it or planning to resubmit it ?
> >
> > [1] The current optee tee kernel driver implementation doesn't work when IMA is used with optee implemented ftpm.
> >
> > The ftpm has dependency on tee-supplicant which comes once the user space is up and running and IMA attestation happens at boot time and it requires to extend ftpm PCRs.
> >
> > But IMA can't use PCRs if ftpm use secure emmc RPMB partition. As optee can only access RPMB via tee-supplicant(user space). So, there should be a fast path to allow optee os to access the RPMB parititon without waiting for user-space tee supplicant.
> >
> > To achieve this fast path linux optee driver and mmc driver needs some work and finally it will need RPMB driver which you posted.
> >
> > Please let me know what's your plan on this.
> >
> > [1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html
> >
> > Best Regards,
> > Shyam

2023-06-02 08:29:16

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

On Thu, 1 Jun 2023 at 08:49, Sumit Garg <[email protected]> wrote:
>
> On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
> <[email protected]> wrote:
> >
> > Hi Bing
> >
> > On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <[email protected]> wrote:
> > >
> > > As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)
> >
> > I am not entirely sure this will solve our problem here. You are
> > right that we shouldn't depend on the supplicant to extend PCRs. But
> > what happens if an object is sealed against certain PCR values? We
> > are back to the same problem
>
> +1
>
> Temporary storage may be a stop gap solution for some use-cases but
> having a fast path access to RPMB via kernel should be our final goal.
> I would suggest we start small with the MMC subsystem to expose RPMB
> access APIs for OP-TEE driver rather than a complete RPMB subsystem.

I discussed with the OP-TEE maintainers adding parts of the
supplicant into the kernel. The supplicant 'just' sends an ioctl to
store/read stuff anyway, so it would make sense to have a closer look
and see if that is reasonable.
Thanks

/Ilias

>
> -Sumit
>
> >
> > Thanks
> > /Ilias
> > >
> > > Bing
> > >
> > > IPAS Security Brown Belt (https://www.credly.com/badges/69ea809f-3a96-4bc7-bb2f-442c1b17af26)
> > > System Software Engineering
> > > Software and Advanced Technology Group
> > > Zizhu Science Park, Shanghai, China
> > >
> > > -----Original Message-----
> > > From: Shyam Saini <[email protected]>
> > > Sent: Thursday, June 1, 2023 3:10 AM
> > > To: [email protected]
> > > Cc: [email protected]; [email protected]; [email protected]; Zhu, Bing <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Winkler, Tomas <[email protected]>; [email protected]; Huang, Yang <[email protected]>; [email protected]; [email protected]; [email protected]
> > > Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver
> > >
> > > Hi Alex,
> > >
> > > [ Resending, Sorry for the noise ]
> > >
> > > Are you still working on it or planning to resubmit it ?
> > >
> > > [1] The current optee tee kernel driver implementation doesn't work when IMA is used with optee implemented ftpm.
> > >
> > > The ftpm has dependency on tee-supplicant which comes once the user space is up and running and IMA attestation happens at boot time and it requires to extend ftpm PCRs.
> > >
> > > But IMA can't use PCRs if ftpm use secure emmc RPMB partition. As optee can only access RPMB via tee-supplicant(user space). So, there should be a fast path to allow optee os to access the RPMB parititon without waiting for user-space tee supplicant.
> > >
> > > To achieve this fast path linux optee driver and mmc driver needs some work and finally it will need RPMB driver which you posted.
> > >
> > > Please let me know what's your plan on this.
> > >
> > > [1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html
> > >
> > > Best Regards,
> > > Shyam

2023-06-12 17:29:34

by Jens Wiklander

[permalink] [raw]
Subject: Re: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi Alex,

On Fri, Jun 2, 2023 at 10:26 AM Ilias Apalodimas
<[email protected]> wrote:
>
> On Thu, 1 Jun 2023 at 08:49, Sumit Garg <[email protected]> wrote:
> >
> > On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
> > <[email protected]> wrote:
> > >
> > > Hi Bing
> > >
> > > On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <[email protected]> wrote:
> > > >
> > > > As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)
> > >
> > > I am not entirely sure this will solve our problem here. You are
> > > right that we shouldn't depend on the supplicant to extend PCRs. But
> > > what happens if an object is sealed against certain PCR values? We
> > > are back to the same problem
> >
> > +1
> >
> > Temporary storage may be a stop gap solution for some use-cases but
> > having a fast path access to RPMB via kernel should be our final goal.
> > I would suggest we start small with the MMC subsystem to expose RPMB
> > access APIs for OP-TEE driver rather than a complete RPMB subsystem.
>
> I discussed with the OP-TEE maintainers about adding parts of the
> supplicant in the kernel. The supplicant 'just' sends an ioctl to
> store/read stuff anyway. So it would make sense to have a closer and
> see if that looks reasonable.
> Thanks

I was trying to create a setup to test this. I've added the kernel
patches on top of https://github.com/linaro-swg/linux/tree/optee. The
QEMU branch is a bit dated and I had to add
3a845a214b42 target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented
d4a7b0ef1a03 hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when
booting Linux with EL3
9745a003f878 hw/intc/arm_gicv3: fix prio masking on pmr write
beeec926d24a target/arm: mark SP_EL1 with ARM_CP_EL3_NO_EL2_KEEP
on top of that branch to be able to boot to the Linux kernel.

I have the vhost-user-rpmb process running and connected with QEMU,
but around (guessing really) when the RPMB subsystem is initializing
the process dies with:
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_ADDR (9)
Flags: 0x1
Size: 40
vhost-user-rpmb-INFO: 18:58:08.312: vrpmb_process_msg: msg
VHOST_USER_SET_VRING_ADDR(9)
vhost_vring_addr:
index: 0
flags: 0
desc_user_addr: 0x00007ff15fa91000
used_user_addr: 0x00007ff15fa91080
avail_user_addr: 0x00007ff15fa91040
log_guest_addr: 0x0000000041c91080
Setting virtq addresses:
vring_desc at (nil)
vring_used at (nil)
vring_avail at (nil)

(vhost-user-rpmb:3236474): vhost-user-rpmb-CRITICAL **: 18:58:08.312:
Invalid vring_addr message

Among other options, I'm starting QEMU with -machine
virt,secure=on,mte=off,gic-version=3,virtualization=false to enable
the secure world.

Do you have an idea of what might be wrong? Where should I start looking?

Thanks,
Jens

>
> /Ilias
>
> >
> > -Sumit
> >
> > >
> > > Thanks
> > > /Ilias
> > > >
> > > > Bing
> > > >
> > > > IPAS Security Brown Belt (https://www.credly.com/badges/69ea809f-3a96-4bc7-bb2f-442c1b17af26)
> > > > System Software Engineering
> > > > Software and Advanced Technology Group
> > > > Zizhu Science Park, Shanghai, China
> > > >
> > > > -----Original Message-----
> > > > From: Shyam Saini <[email protected]>
> > > > Sent: Thursday, June 1, 2023 3:10 AM
> > > > To: [email protected]
> > > > Cc: [email protected]; [email protected]; [email protected]; Zhu, Bing <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Winkler, Tomas <[email protected]>; [email protected]; Huang, Yang <[email protected]>; [email protected]; [email protected]; [email protected]
> > > > Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver
> > > >
> > > > Hi Alex,
> > > >
> > > > [ Resending, Sorry for the noise ]
> > > >
> > > > Are you still working on it or planning to resubmit it ?
> > > >
> > > > [1] The current OP-TEE kernel driver implementation doesn't work when IMA is used with an OP-TEE-implemented fTPM.
> > > >
> > > > The fTPM depends on tee-supplicant, which only comes up once user space is running, while IMA attestation happens at boot time and needs to extend the fTPM PCRs.
> > > >
> > > > But IMA can't use the PCRs if the fTPM uses the secure eMMC RPMB partition, as OP-TEE can only access RPMB via tee-supplicant (user space). So there should be a fast path that allows the OP-TEE OS to access the RPMB partition without waiting for the user-space tee-supplicant.
> > > >
> > > > To achieve this fast path the Linux OP-TEE driver and the MMC driver need some work, and finally it will need the RPMB driver which you posted.
> > > >
> > > > Please let me know what's your plan on this.
> > > >
> > > > [1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html
> > > >
> > > > Best Regards,
> > > > Shyam

2023-06-13 01:19:33

by Shyam Saini

[permalink] [raw]
Subject: Re: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver


Thank you everyone for your valuable feedback.

Alex, are you planning to submit this patch series?
Please let me know.

> On Thu, 1 Jun 2023 at 08:49, Sumit Garg <[email protected]> wrote:
>>
>> On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
>> <[email protected]> wrote:
>>>
>>> Hi Bing
>>>
>>> On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <[email protected]> wrote:
>>>>
>>>> As an alternative, is it possible to change the fTPM design not to depend on RPMB access at the early/boot stage? To my understanding, PCRs typically don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, TEE memory could be used as temporary storage).
>>>
>>> I am not entirely sure this will solve our problem here. You are
>>> right that we shouldn't depend on the supplicant to extend PCRs. But
>>> what happens if an object is sealed against certain PCR values? We
>>> are back to the same problem
>>
>> +1
>>
>> Temporary storage may be a stop gap solution for some use-cases but
>> having a fast path access to RPMB via kernel should be our final goal.
>> I would suggest we start small with the MMC subsystem to expose RPMB
>> access APIs for OP-TEE driver rather than a complete RPMB subsystem.
>
> I discussed with the OP-TEE maintainers about adding parts of the
> supplicant in the kernel. The supplicant 'just' sends an ioctl to
> store/read stuff anyway. So it would make sense to have a closer look
> and see if that looks reasonable.
> Thanks
>
> /Ilias
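One reason the "parts of the supplicant in the kernel" idea Ilias floats can stay thin: RPMB only defines a handful of request types, and the matching response type is simply the request value moved into the high byte, so routing read versus write flows is a small table rather than a protocol engine. A sketch (type values per the JEDEC eMMC spec; the helper name is mine):

```c
#include <stdint.h>

/* JEDEC RPMB request types (req_resp field, request direction). */
enum rpmb_req {
    RPMB_PROGRAM_KEY = 0x0001, /* authenticated key programming */
    RPMB_GET_COUNTER = 0x0002, /* read the write counter        */
    RPMB_WRITE_DATA  = 0x0003, /* authenticated data write      */
    RPMB_READ_DATA   = 0x0004, /* authenticated data read       */
    RPMB_RESULT_READ = 0x0005, /* read the result register      */
};

/* The matching response type is the request moved to the high byte,
 * e.g. RPMB_WRITE_DATA (0x0003) is answered with 0x0300. */
static inline uint16_t rpmb_resp_of(uint16_t req)
{
    return (uint16_t)(req << 8);
}
```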

2023-06-13 17:02:22

by Shyam Saini

[permalink] [raw]
Subject: RE: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi Bing,

Other than PCRs, we also want to store non-volatile fTPM data (NVData);
storing it in volatile DDR shared memory would be a spec violation.

Best Regards,
Shyam

> As an alternative, is it possible to change the fTPM design not to depend on RPMB access at the early/boot stage? To my understanding, PCRs typically don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, TEE memory could be used as temporary storage).
>
> Bing