2022-06-05 06:23:57

by Oleksandr Tyshchenko

Subject: [PATCH V4 0/8] virtio: Solution to restrict memory access under Xen using xen-grant DMA-mapping layer

From: Oleksandr Tyshchenko <[email protected]>

Hello all.

The purpose of this patch series is to add support for restricting memory access under Xen using a dedicated
grant table [1] based DMA-mapping layer. The series is based on Juergen Gross' initial work [2], which implies
using grant references instead of raw guest physical addresses (GPAs) for the virtio communication (a kind
of software IOMMU).

You can find the RFC-V3 patch series (and previous discussions) at [3].

!!! Please note: the only diff between V3 and V4 is in commit #5; I have also collected the acks (commits #4-#7).

The high-level idea is to create a new grant-table-based DMA-mapping layer for guest Linux whose main
purpose is to provide special 64-bit DMA addresses, each formed from a grant reference (for a page
to be shared with the backend) plus an offset, with the highest address bit set (this is for the backend to
be able to distinguish a grant-ref-based DMA address from a normal GPA). For this to work we need the ability
to allocate contiguous (consecutive) grant references for multi-page allocations. The backend then needs
to offer the VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support the modern
virtio-mmio transport for 64-bit addresses in the virtqueue).
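
To make the encoding concrete, the address is formed roughly as follows (an illustrative sketch matching the
helpers added in commit #3; 4 KiB pages and the numeric values are assumptions of this example):

    #define XEN_GRANT_DMA_ADDR_OFF (1ULL << 63)

    /* Top bit marks a grant-based address; the grant reference acts as the frame number */
    dma_addr_t dma_handle = (XEN_GRANT_DMA_ADDR_OFF |
                             ((dma_addr_t)grant << PAGE_SHIFT)) + offset;

    /* e.g. grant ref 0x5 with in-page offset 0x10 -> 0x8000000000005010 */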

Xen's grant mapping mechanism is a secure and safe solution for sharing pages between domains which has been
proven to work over many years (in the context of traditional Xen PV drivers, for example). So far, foreign
mapping has been used by the virtio backend to map and access guest memory. With the foreign mapping, the
backend is able to map arbitrary pages from the guest memory (or even from Dom0 memory), and as a result a
malicious backend running in a non-trusted domain can take advantage of this. With the grant mapping, by
contrast, the backend is only allowed to map pages which were explicitly granted by the guest beforehand and
nothing else. According to the discussions in various mainline threads, this solution should be welcome
because it fits perfectly into the security model Xen provides.

What is more, the grant table based solution requires zero changes to the Xen hypervisor itself, at least
with virtio-mmio and DT (in comparison with, for example, a "foreign mapping + virtio-iommu" solution, which
would require a whole new complex emulator in the hypervisor, in addition to new functionality/hypercalls to
pass the IOVA from the virtio backend running elsewhere to the hypervisor and translate it to a GPA before
mapping it into the P2M, or to deny the foreign mapping request if no corresponding IOVA-GPA mapping is
present in the IOMMU page table for that particular device). We only need to update the toolstack to insert
a "xen,grant-dma" IOMMU node (to be referred to by the virtio-mmio device via the "iommus" property) when
creating the guest device tree. This node is an indicator for the guest to use the Xen grant mapping scheme
for that device, with the endpoint ID being the ID of the Xen domain where the corresponding backend is
running; that backend domid is then used as an argument to the grant mapping APIs. A sample DT fragment is
shown below. It is worth mentioning that the toolstack patch is based on the not-yet-upstreamed "Virtio
support for toolstack on Arm" series which is currently under review [4].
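
For illustration, the resulting guest DT fragment would look roughly like this (a sketch based on the binding
added in commit #2; the node names and the MMIO address/domid values are made up for this example):

    xen_grant_dma: xen-grant-dma {
            compatible = "xen,grant-dma";
            #iommu-cells = <1>;
    };

    virtio@2000000 {
            compatible = "virtio,mmio";
            reg = <0x0 0x2000000 0x0 0x200>;
            iommus = <&xen_grant_dma 1>;    /* backend runs in Xen domain 1 */
    };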

Please note the following:
- Patch series only covers Arm and virtio-mmio (device-tree) for now. To enable the restricted memory access
feature on Arm, the following option should be set:
CONFIG_XEN_VIRTIO=y
- Patch series is based on the "kernel: add new infrastructure for platform_has() support" patch series which
is currently under review [5]
- Xen should be built with the following options:
CONFIG_IOREQ_SERVER=y
CONFIG_EXPERT=y

The patch series is rebased on the "for-linus-5.19" branch [6] with the "platform_has()" series applied and
tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a standalone userspace (non-QEMU) virtio-mmio
based virtio-disk backend running in the Driver domain and a Linux guest running on the existing virtio-blk
driver (frontend). No issues were observed. Guest domain 'reboot/destroy' use-cases work properly.
I have also tested other use-cases, such as assigning several virtio block devices, or a mix of virtio and
Xen PV block devices, to the guest. The patch series was build-tested on Arm32 and x86.

1. Xen changes are located at (last patch):
https://github.com/otyshchenko1/xen/commits/libxl_virtio_next2_1
2. Linux changes are located at (last 8 patches):
https://github.com/otyshchenko1/linux/commits/virtio_grant9
3. virtio-disk changes are located at:
https://github.com/otyshchenko1/virtio-disk/commits/virtio_grant

Any feedback/help would be highly appreciated.

[1] https://xenbits.xenproject.org/docs/4.16-testing/misc/grant-tables.txt
[2] https://www.youtube.com/watch?v=IrlEdaIUDPk
[3] https://lore.kernel.org/xen-devel/[email protected]/
https://lore.kernel.org/xen-devel/[email protected]/
https://lore.kernel.org/xen-devel/[email protected]/
https://lore.kernel.org/xen-devel/[email protected]/
[4] https://lore.kernel.org/xen-devel/[email protected]/
https://lore.kernel.org/xen-devel/[email protected]/
[5] https://lore.kernel.org/xen-devel/[email protected]/
[6] https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git/log/?h=for-linus-5.19

Juergen Gross (3):
xen/grants: support allocating consecutive grants
xen/grant-dma-ops: Add option to restrict memory access under Xen
xen/virtio: Enable restricted memory access using Xen grant mappings

Oleksandr Tyshchenko (5):
arm/xen: Introduce xen_setup_dma_ops()
dt-bindings: Add xen,grant-dma IOMMU description for xen-grant DMA ops
xen/grant-dma-iommu: Introduce stub IOMMU driver
xen/grant-dma-ops: Retrieve the ID of backend's domain for DT devices
arm/xen: Assign xen-grant DMA ops for xen-grant DMA devices

.../devicetree/bindings/iommu/xen,grant-dma.yaml | 39 +++
arch/arm/include/asm/xen/xen-ops.h | 2 +
arch/arm/mm/dma-mapping.c | 7 +-
arch/arm/xen/enlighten.c | 2 +
arch/arm64/include/asm/xen/xen-ops.h | 2 +
arch/arm64/mm/dma-mapping.c | 7 +-
arch/x86/xen/enlighten_hvm.c | 2 +
arch/x86/xen/enlighten_pv.c | 2 +
drivers/xen/Kconfig | 20 ++
drivers/xen/Makefile | 2 +
drivers/xen/grant-dma-iommu.c | 78 +++++
drivers/xen/grant-dma-ops.c | 345 +++++++++++++++++++++
drivers/xen/grant-table.c | 251 ++++++++++++---
include/xen/arm/xen-ops.h | 18 ++
include/xen/grant_table.h | 4 +
include/xen/xen-ops.h | 13 +
include/xen/xen.h | 8 +
17 files changed, 756 insertions(+), 46 deletions(-)
create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
create mode 100644 arch/arm/include/asm/xen/xen-ops.h
create mode 100644 arch/arm64/include/asm/xen/xen-ops.h
create mode 100644 drivers/xen/grant-dma-iommu.c
create mode 100644 drivers/xen/grant-dma-ops.c
create mode 100644 include/xen/arm/xen-ops.h

--
2.7.4


2022-06-05 13:27:50

by Oleksandr Tyshchenko

Subject: [PATCH V4 8/8] arm/xen: Assign xen-grant DMA ops for xen-grant DMA devices

From: Oleksandr Tyshchenko <[email protected]>

By assigning xen-grant DMA ops we restrict memory access for the
passed device using Xen grant mappings. This is needed for using any
virtualized device (e.g. virtio) in Xen guests in a safe manner.

Please note, for virtio devices the XEN_VIRTIO config option should
be enabled (it forces ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS).

Signed-off-by: Oleksandr Tyshchenko <[email protected]>
Reviewed-by: Stefano Stabellini <[email protected]>
---
Changes RFC -> V1:
- update commit subject/description
- remove #ifdef CONFIG_XEN_VIRTIO
- re-organize the check, taking into account that
swiotlb and virtio cases are mutually exclusive
- update according to the new naming scheme:
s/virtio/grant_dma

Changes V1 -> V2:
- add Stefano's R-b
- remove arch_has_restricted_virtio_memory_access() check
- update commit description
- remove the inclusion of virtio_config.h

Changes V2 -> V3:
- no changes

Changes V3 -> V4:
- no changes
---
include/xen/arm/xen-ops.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/xen/arm/xen-ops.h b/include/xen/arm/xen-ops.h
index 288deb1..b0766a6 100644
--- a/include/xen/arm/xen-ops.h
+++ b/include/xen/arm/xen-ops.h
@@ -3,11 +3,14 @@
#define _ASM_ARM_XEN_OPS_H

#include <xen/swiotlb-xen.h>
+#include <xen/xen-ops.h>

static inline void xen_setup_dma_ops(struct device *dev)
{
#ifdef CONFIG_XEN
- if (xen_swiotlb_detect())
+ if (xen_is_grant_dma_device(dev))
+ xen_grant_setup_dma_ops(dev);
+ else if (xen_swiotlb_detect())
dev->dma_ops = &xen_swiotlb_dma_ops;
#endif
}
--
2.7.4

2022-06-06 04:05:30

by Oleksandr Tyshchenko

Subject: [PATCH V4 3/8] xen/grant-dma-ops: Add option to restrict memory access under Xen

From: Juergen Gross <[email protected]>

Introduce a Xen grant DMA-mapping layer which contains special DMA-mapping
routines for providing grant references as DMA addresses, to be used by
frontends (e.g. virtio) in Xen guests.

Add the needed functionality by providing a special set of DMA ops
handling the needed grant operations for the I/O pages.

The subsequent commit will introduce the use case for the xen-grant DMA ops
layer, enabling the use of virtio devices in Xen guests in a safe manner.
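
For frontend drivers nothing changes at the API level: once these DMA ops
are assigned to a device, the regular DMA API transparently grants the pages
involved to the backend domain. A minimal illustrative sketch (not part of
this patch):

    /* Returns a grant-based DMA address with the top address bit set */
    dma_addr_t addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);

    if (dma_mapping_error(dev, addr))
            return -ENOMEM;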

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Oleksandr Tyshchenko <[email protected]>
Reviewed-by: Stefano Stabellini <[email protected]>
---
Changes RFC -> V1:
- squash with almost all changes from commit (except handling "xen,dev-domid"
property):
"[PATCH 4/6] virtio: Various updates to xen-virtio DMA ops layer"
- update commit subject/description and comments in code
- leave only single Kconfig option XEN_VIRTIO and remove architectural
dependencies
- introduce common xen_has_restricted_virtio_memory_access() in xen.h
and update arch_has_restricted_virtio_memory_access() for both
Arm and x86 to call new helper
- use (1ULL << 63) instead of 0x8000000000000000ULL for XEN_GRANT_ADDR_OFF
- implement xen_virtio_dma_map(unmap)_sg() using example in swiotlb-xen.c
- optimize padding by moving "broken" field in struct xen_virtio_data
- remove unneeded per-device spinlock
- remove the inclusion of virtio_config.h
- rename everything according to the new naming scheme:
s/virtio/grant_dma
- add new hidden config option XEN_GRANT_DMA_OPS

Changes V1 -> V2:
- fix checkpatch.pl warnings
- remove the inclusion of linux/pci.h
- rework to use xarray for data context
- remove EXPORT_SYMBOL_GPL(xen_grant_setup_dma_ops);
- remove the line of * after SPDX-License-Identifier
- split changes into grant-dma-ops.c and arch_has_restricted_virtio_memory_access()
and update commit subject/description accordingly
- remove "default n" for config XEN_VIRTIO
- implement xen_grant_dma_alloc(free)_pages()

Changes V2 -> V3:
- Stefano already gave his Reviewed-by, I dropped it due to the (minor) changes
- rename field "dev_domid" in struct xen_grant_dma_data to "backend_domid"
- remove local variable "domid" in xen_grant_setup_dma_ops()

Changes V3 -> V4:
- add Stefano's R-b
---
drivers/xen/Kconfig | 4 +
drivers/xen/Makefile | 1 +
drivers/xen/grant-dma-ops.c | 311 ++++++++++++++++++++++++++++++++++++++++++++
include/xen/xen-ops.h | 8 ++
4 files changed, 324 insertions(+)
create mode 100644 drivers/xen/grant-dma-ops.c

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 120d32f..313a9127 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -335,4 +335,8 @@ config XEN_UNPOPULATED_ALLOC
having to balloon out RAM regions in order to obtain physical memory
space to create such mappings.

+config XEN_GRANT_DMA_OPS
+ bool
+ select DMA_OPS
+
endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 5aae66e..1a23cb0 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -39,3 +39,4 @@ xen-gntalloc-y := gntalloc.o
xen-privcmd-y := privcmd.o privcmd-buf.o
obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF) += xen-front-pgdir-shbuf.o
obj-$(CONFIG_XEN_UNPOPULATED_ALLOC) += unpopulated-alloc.o
+obj-$(CONFIG_XEN_GRANT_DMA_OPS) += grant-dma-ops.o
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
new file mode 100644
index 00000000..44659f4
--- /dev/null
+++ b/drivers/xen/grant-dma-ops.c
@@ -0,0 +1,311 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Xen grant DMA-mapping layer - contains special DMA-mapping routines
+ * for providing grant references as DMA addresses to be used by frontends
+ * (e.g. virtio) in Xen guests
+ *
+ * Copyright (c) 2021, Juergen Gross <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/dma-map-ops.h>
+#include <linux/of.h>
+#include <linux/pfn.h>
+#include <linux/xarray.h>
+#include <xen/xen.h>
+#include <xen/grant_table.h>
+
+struct xen_grant_dma_data {
+ /* The ID of backend domain */
+ domid_t backend_domid;
+ /* Is device behaving sane? */
+ bool broken;
+};
+
+static DEFINE_XARRAY(xen_grant_dma_devices);
+
+#define XEN_GRANT_DMA_ADDR_OFF (1ULL << 63)
+
+static inline dma_addr_t grant_to_dma(grant_ref_t grant)
+{
+ return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
+}
+
+static inline grant_ref_t dma_to_grant(dma_addr_t dma)
+{
+ return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
+}
+
+static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
+{
+ struct xen_grant_dma_data *data;
+
+ xa_lock(&xen_grant_dma_devices);
+ data = xa_load(&xen_grant_dma_devices, (unsigned long)dev);
+ xa_unlock(&xen_grant_dma_devices);
+
+ return data;
+}
+
+/*
+ * DMA ops for Xen frontends (e.g. virtio).
+ *
+ * Used to act as a kind of software IOMMU for Xen guests by using grants as
+ * DMA addresses.
+ * Such a DMA address is formed by using the grant reference as a frame
+ * number and setting the highest address bit (this bit is for the backend
+ * to be able to distinguish it from e.g. a mmio address).
+ *
+ * Note that for now we hard wire dom0 to be the backend domain. In order
+ * to support any domain as backend we'd need to add a way to communicate
+ * the domid of this backend, e.g. via Xenstore, via the PCI-device's
+ * config space or DT/ACPI.
+ */
+static void *xen_grant_dma_alloc(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t gfp,
+ unsigned long attrs)
+{
+ struct xen_grant_dma_data *data;
+ unsigned int i, n_pages = PFN_UP(size);
+ unsigned long pfn;
+ grant_ref_t grant;
+ void *ret;
+
+ data = find_xen_grant_dma_data(dev);
+ if (!data)
+ return NULL;
+
+ if (unlikely(data->broken))
+ return NULL;
+
+ ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
+ if (!ret)
+ return NULL;
+
+ pfn = virt_to_pfn(ret);
+
+ if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
+ free_pages_exact(ret, n_pages * PAGE_SIZE);
+ return NULL;
+ }
+
+ for (i = 0; i < n_pages; i++) {
+ gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
+ pfn_to_gfn(pfn + i), 0);
+ }
+
+ *dma_handle = grant_to_dma(grant);
+
+ return ret;
+}
+
+static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
+ dma_addr_t dma_handle, unsigned long attrs)
+{
+ struct xen_grant_dma_data *data;
+ unsigned int i, n_pages = PFN_UP(size);
+ grant_ref_t grant;
+
+ data = find_xen_grant_dma_data(dev);
+ if (!data)
+ return;
+
+ if (unlikely(data->broken))
+ return;
+
+ grant = dma_to_grant(dma_handle);
+
+ for (i = 0; i < n_pages; i++) {
+ if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
+ dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
+ data->broken = true;
+ return;
+ }
+ }
+
+ gnttab_free_grant_reference_seq(grant, n_pages);
+
+ free_pages_exact(vaddr, n_pages * PAGE_SIZE);
+}
+
+static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
+ dma_addr_t *dma_handle,
+ enum dma_data_direction dir,
+ gfp_t gfp)
+{
+ void *vaddr;
+
+ vaddr = xen_grant_dma_alloc(dev, size, dma_handle, gfp, 0);
+ if (!vaddr)
+ return NULL;
+
+ return virt_to_page(vaddr);
+}
+
+static void xen_grant_dma_free_pages(struct device *dev, size_t size,
+ struct page *vaddr, dma_addr_t dma_handle,
+ enum dma_data_direction dir)
+{
+ xen_grant_dma_free(dev, size, page_to_virt(vaddr), dma_handle, 0);
+}
+
+static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ struct xen_grant_dma_data *data;
+ unsigned int i, n_pages = PFN_UP(size);
+ grant_ref_t grant;
+ dma_addr_t dma_handle;
+
+ if (WARN_ON(dir == DMA_NONE))
+ return DMA_MAPPING_ERROR;
+
+ data = find_xen_grant_dma_data(dev);
+ if (!data)
+ return DMA_MAPPING_ERROR;
+
+ if (unlikely(data->broken))
+ return DMA_MAPPING_ERROR;
+
+ if (gnttab_alloc_grant_reference_seq(n_pages, &grant))
+ return DMA_MAPPING_ERROR;
+
+ for (i = 0; i < n_pages; i++) {
+ gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
+ xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+ }
+
+ dma_handle = grant_to_dma(grant) + offset;
+
+ return dma_handle;
+}
+
+static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ struct xen_grant_dma_data *data;
+ unsigned int i, n_pages = PFN_UP(size);
+ grant_ref_t grant;
+
+ if (WARN_ON(dir == DMA_NONE))
+ return;
+
+ data = find_xen_grant_dma_data(dev);
+ if (!data)
+ return;
+
+ if (unlikely(data->broken))
+ return;
+
+ grant = dma_to_grant(dma_handle);
+
+ for (i = 0; i < n_pages; i++) {
+ if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
+ dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
+ data->broken = true;
+ return;
+ }
+ }
+
+ gnttab_free_grant_reference_seq(grant, n_pages);
+}
+
+static void xen_grant_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ struct scatterlist *s;
+ unsigned int i;
+
+ if (WARN_ON(dir == DMA_NONE))
+ return;
+
+ for_each_sg(sg, s, nents, i)
+ xen_grant_dma_unmap_page(dev, s->dma_address, sg_dma_len(s), dir,
+ attrs);
+}
+
+static int xen_grant_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ struct scatterlist *s;
+ unsigned int i;
+
+ if (WARN_ON(dir == DMA_NONE))
+ return -EINVAL;
+
+ for_each_sg(sg, s, nents, i) {
+ s->dma_address = xen_grant_dma_map_page(dev, sg_page(s), s->offset,
+ s->length, dir, attrs);
+ if (s->dma_address == DMA_MAPPING_ERROR)
+ goto out;
+
+ sg_dma_len(s) = s->length;
+ }
+
+ return nents;
+
+out:
+ xen_grant_dma_unmap_sg(dev, sg, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
+ sg_dma_len(sg) = 0;
+
+ return -EIO;
+}
+
+static int xen_grant_dma_supported(struct device *dev, u64 mask)
+{
+ return mask == DMA_BIT_MASK(64);
+}
+
+static const struct dma_map_ops xen_grant_dma_ops = {
+ .alloc = xen_grant_dma_alloc,
+ .free = xen_grant_dma_free,
+ .alloc_pages = xen_grant_dma_alloc_pages,
+ .free_pages = xen_grant_dma_free_pages,
+ .mmap = dma_common_mmap,
+ .get_sgtable = dma_common_get_sgtable,
+ .map_page = xen_grant_dma_map_page,
+ .unmap_page = xen_grant_dma_unmap_page,
+ .map_sg = xen_grant_dma_map_sg,
+ .unmap_sg = xen_grant_dma_unmap_sg,
+ .dma_supported = xen_grant_dma_supported,
+};
+
+void xen_grant_setup_dma_ops(struct device *dev)
+{
+ struct xen_grant_dma_data *data;
+
+ data = find_xen_grant_dma_data(dev);
+ if (data) {
+ dev_err(dev, "Xen grant DMA data is already created\n");
+ return;
+ }
+
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ goto err;
+
+ /* XXX The dom0 is hardcoded as the backend domain for now */
+ data->backend_domid = 0;
+
+ if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
+ GFP_KERNEL))) {
+ dev_err(dev, "Cannot store Xen grant DMA data\n");
+ goto err;
+ }
+
+ dev->dma_ops = &xen_grant_dma_ops;
+
+ return;
+
+err:
+ dev_err(dev, "Cannot set up Xen grant DMA ops, retain platform DMA ops\n");
+}
+
+MODULE_DESCRIPTION("Xen grant DMA-mapping layer");
+MODULE_AUTHOR("Juergen Gross <[email protected]>");
+MODULE_LICENSE("GPL");
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index a3584a3..4f9fad5 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -221,4 +221,12 @@ static inline void xen_preemptible_hcall_end(void) { }

#endif /* CONFIG_XEN_PV && !CONFIG_PREEMPTION */

+#ifdef CONFIG_XEN_GRANT_DMA_OPS
+void xen_grant_setup_dma_ops(struct device *dev);
+#else
+static inline void xen_grant_setup_dma_ops(struct device *dev)
+{
+}
+#endif /* CONFIG_XEN_GRANT_DMA_OPS */
+
#endif /* INCLUDE_XEN_OPS_H */
--
2.7.4

2022-06-06 05:36:16

by Oleksandr Tyshchenko

Subject: [PATCH V4 4/8] xen/virtio: Enable restricted memory access using Xen grant mappings

From: Juergen Gross <[email protected]>

In order to support virtio in Xen guests, add a config option XEN_VIRTIO
enabling the user to specify whether virtio in all Xen guests should be
able to access memory only via Xen grant mappings on the host side.

Also set the PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS feature from the guest
initialization code on Arm and x86 if CONFIG_XEN_VIRTIO is enabled.

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Oleksandr Tyshchenko <[email protected]>
Reviewed-by: Stefano Stabellini <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
---
Changes V1 -> V2:
- new patch, split required changes from commit:
"[PATCH V1 3/6] xen/virtio: Add option to restrict memory access under Xen"
- rework according to new platform_has() infrastructure

Changes V2 -> V3:
- add Stefano's R-b

Changes V3 -> V4:
- add Boris' R-b
---
arch/arm/xen/enlighten.c | 2 ++
arch/x86/xen/enlighten_hvm.c | 2 ++
arch/x86/xen/enlighten_pv.c | 2 ++
drivers/xen/Kconfig | 11 +++++++++++
include/xen/xen.h | 8 ++++++++
5 files changed, 25 insertions(+)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 07eb69f..1f9c3ba 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -443,6 +443,8 @@ static int __init xen_guest_init(void)
if (!xen_domain())
return 0;

+ xen_set_restricted_virtio_memory_access();
+
if (!acpi_disabled)
xen_acpi_guest_init();
else
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 517a9d8..8b71b1d 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -195,6 +195,8 @@ static void __init xen_hvm_guest_init(void)
if (xen_pv_domain())
return;

+ xen_set_restricted_virtio_memory_access();
+
init_hvm_pv_info();

reserve_shared_info();
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index ca85d14..30d24fe 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -108,6 +108,8 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);

static void __init xen_pv_init_platform(void)
{
+ xen_set_restricted_virtio_memory_access();
+
populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));

set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 313a9127..a7bd8ce 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -339,4 +339,15 @@ config XEN_GRANT_DMA_OPS
bool
select DMA_OPS

+config XEN_VIRTIO
+ bool "Xen virtio support"
+ depends on VIRTIO
+ select XEN_GRANT_DMA_OPS
+ help
+ Enable virtio support for running as Xen guest. Depending on the
+ guest type this will require special support on the backend side
+ (qemu or kernel, depending on the virtio device types used).
+
+ If in doubt, say n.
+
endmenu
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a99bab8..0780a81 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,6 +52,14 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
extern u64 xen_saved_max_mem_size;
#endif

+#include <linux/platform-feature.h>
+
+static inline void xen_set_restricted_virtio_memory_access(void)
+{
+ if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
+ platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+}
+
#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
--
2.7.4

2022-06-08 08:00:50

by Juergen Gross

Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under Xen using xen-grant DMA-mapping layer

On 02.06.22 21:23, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <[email protected]>
>
> Hello all.
>
> The purpose of this patch series is to add support for restricting memory access under Xen using a dedicated
> grant table [1] based DMA-mapping layer.
>
> [...]

Series pushed to xen/tip.git for-linus-5.19a


Juergen



2022-06-15 06:50:09

by Viresh Kumar

Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under Xen using xen-grant DMA-mapping layer

Hi Oleksandr,

On Mon, Jun 6, 2022 at 10:16 AM Oleksandr Tyshchenko
<[email protected]> wrote:
> The high-level idea is to create a new grant-table-based DMA-mapping layer for guest Linux whose main
> purpose is to provide special 64-bit DMA addresses, each formed from a grant reference (for a page
> to be shared with the backend) plus an offset, with the highest address bit set (this is for the backend to
> be able to distinguish a grant-ref-based DMA address from a normal GPA). For this to work we need the ability
> to allocate contiguous (consecutive) grant references for multi-page allocations. The backend then needs
> to offer the VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support the modern
> virtio-mmio transport for 64-bit addresses in the virtqueue).

I was trying your series from Linus's tree now and started seeing boot failures: the kernel failed to
mount rootfs. And the reason probably is these messages:

[ 1.222498] virtio_scsi virtio1: device must provide VIRTIO_F_ACCESS_PLATFORM
[ 1.316334] virtio_net virtio0: device must provide VIRTIO_F_ACCESS_PLATFORM

I understand from your email that the backends need to offer the VIRTIO_F_ACCESS_PLATFORM flag now, but
should this requirement be a bit softer? I mean, shouldn't we allow both types of backends to run with the
same kernel, ones that offer this feature and others that don't? The ones that don't offer the feature
should continue to work like they used to, i.e. without the restricted memory access feature.

I am currently testing Xen with the help of QEMU on my x86 desktop, and these backends (scsi and net) are
part of QEMU itself I think; I don't really want to go and make the change there.

Thanks.

--
Viresh

2022-06-15 13:17:44

by Oleksandr Tyshchenko

Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under Xen using xen-grant DMA-mapping layer


On 15.06.22 09:23, Viresh Kumar wrote:
> Hi Oleksandr,


Hello Viresh


>
> On Mon, Jun 6, 2022 at 10:16 AM Oleksandr Tyshchenko
> <[email protected]> wrote:
>> The high-level idea is to create a new grant-table-based DMA-mapping layer for guest Linux whose main
>> purpose is to provide special 64-bit DMA addresses, each formed from a grant reference (for a page
>> to be shared with the backend) plus an offset, with the highest address bit set (this is for the backend to
>> be able to distinguish a grant-ref-based DMA address from a normal GPA). For this to work we need the ability
>> to allocate contiguous (consecutive) grant references for multi-page allocations. The backend then needs
>> to offer the VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support the modern
>> virtio-mmio transport for 64-bit addresses in the virtqueue).
> I was trying your series from Linus's tree now and started seeing boot failures: the kernel failed to
> mount rootfs. And the reason probably is these messages:
>
> [ 1.222498] virtio_scsi virtio1: device must provide VIRTIO_F_ACCESS_PLATFORM
> [ 1.316334] virtio_net virtio0: device must provide VIRTIO_F_ACCESS_PLATFORM
>
> I understand from your email that the backends need to offer the VIRTIO_F_ACCESS_PLATFORM flag now, but
> should this requirement be a bit softer? I mean, shouldn't we allow both types of backends to run with the
> same kernel, ones that offer this feature and others that don't? The ones that don't offer the feature
> should continue to work like they used to, i.e. without the restricted memory access feature.
> I am currently testing Xen with the help of QEMU on my x86 desktop, and these backends (scsi and net) are
> part of QEMU itself I think; I don't really want to go and make the change there.


Thank you for testing on x86.


I assume your guest type is HVM. Within the current series,
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS is set for *all* types of Xen
guests if CONFIG_XEN_VIRTIO is enabled.

I have to admit that from the very beginning it was possible to
configure PV and HVM guests separately [1], because the usage of
grant mappings for virtio is mandatory for paravirtualized guests, but
not strictly necessary for fully virtualized guests (if the backends
are in Dom0). But it was decided to drop these extra options (including
XEN_HVM_VIRTIO_GRANT) and leave only the single option CONFIG_XEN_VIRTIO.

I see that Juergen has already pushed a fix.
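
For reference, the fix is roughly along these lines (my sketch from memory,
not necessarily the exact patch): only set the feature bit where grant
mappings are truly mandatory, e.g.

    static inline void xen_set_restricted_virtio_memory_access(void)
    {
            if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain())
                    platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
    }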

Sorry for the inconvenience.



[1]
https://lore.kernel.org/xen-devel/[email protected]/



--
Regards,

Oleksandr Tyshchenko