Hi:
Sometimes, the driver doesn't trust the device. This usually
happens for encrypted VMs or VDUSE[1]. In both cases, technology
like swiotlb is used to prevent the device from poking or mangling
guest memory. But this is not sufficient, since the current virtio
driver may trust what is stored in the descriptor table (a coherent
mapping) when performing DMA operations like unmap and bounce, so the
device may exploit the behaviour of swiotlb to perform attacks[2].
To protect against a malicious device, this series stores and uses the
descriptor metadata in an auxiliary structure which can not be accessed
via swiotlb, instead of the copies in the descriptor table. This means
the descriptor table is write-only from the view of the driver.
Actually, we've almost achieved that with the packed virtqueue; we
just need to fix a corner case in the handling of mapping errors. For
the split virtqueue we simply follow what's done for the packed one.
Note that we don't duplicate the descriptor metadata for indirect
descriptors since they use streaming mappings, which are read-only, so
it's safe as long as the metadata of the non-indirect descriptors is
correct.
For the split virtqueue, the change increases the memory footprint due
to the auxiliary metadata, but it's almost negligible in simple tests
like pktgen or netperf.
Lightly tested with packed on/off, iommu on/off and swiotlb force/off
in the guest.
Please review.
Changes from V1:
- Always use auxiliary metadata for split virtqueue
- Don't read from the descriptor when detaching an indirect descriptor
[1]
https://lore.kernel.org/netdev/[email protected]/T/
[2]
https://yhbt.net/lore/all/[email protected]/T/#mc6b6e2343cbeffca68ca7a97e0f473aaa871c95b
Jason Wang (7):
virtio-ring: maintain next in extra state for packed virtqueue
virtio_ring: rename vring_desc_extra_packed
virtio-ring: factor out desc_extra allocation
virtio_ring: secure handling of mapping errors
virtio_ring: introduce virtqueue_desc_add_split()
virtio: use err label in __vring_new_virtqueue()
virtio-ring: store DMA metadata in desc_extra for split virtqueue
drivers/virtio/virtio_ring.c | 201 +++++++++++++++++++++++++----------
1 file changed, 144 insertions(+), 57 deletions(-)
--
2.25.1
Rename vring_desc_extra_packed to vring_desc_extra since the structure
is pretty generic and can be reused by the split virtqueue as well.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index e1e9ed42e637..c25ea5776687 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -77,7 +77,7 @@ struct vring_desc_state_packed {
u16 last; /* The last desc state in a list. */
};
-struct vring_desc_extra_packed {
+struct vring_desc_extra {
dma_addr_t addr; /* Buffer DMA addr. */
u32 len; /* Buffer length. */
u16 flags; /* Descriptor flags. */
@@ -166,7 +166,7 @@ struct vring_virtqueue {
/* Per-descriptor state. */
struct vring_desc_state_packed *desc_state;
- struct vring_desc_extra_packed *desc_extra;
+ struct vring_desc_extra *desc_extra;
/* DMA address and size information */
dma_addr_t ring_dma_addr;
@@ -912,7 +912,7 @@ static struct virtqueue *vring_create_virtqueue_split(
*/
static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
- struct vring_desc_extra_packed *state)
+ struct vring_desc_extra *state)
{
u16 flags;
@@ -1651,13 +1651,13 @@ static struct virtqueue *vring_create_virtqueue_packed(
vq->free_head = 0;
vq->packed.desc_extra = kmalloc_array(num,
- sizeof(struct vring_desc_extra_packed),
+ sizeof(struct vring_desc_extra),
GFP_KERNEL);
if (!vq->packed.desc_extra)
goto err_desc_extra;
memset(vq->packed.desc_extra, 0,
- num * sizeof(struct vring_desc_extra_packed));
+ num * sizeof(struct vring_desc_extra));
for (i = 0; i < num - 1; i++)
vq->packed.desc_extra[i].next = i + 1;
--
2.25.1
A helper is introduced for the logic of allocating the descriptor
extra data. It will be reused by the split virtqueue.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 30 ++++++++++++++++++++----------
1 file changed, 20 insertions(+), 10 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c25ea5776687..0cdd965dba58 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1550,6 +1550,25 @@ static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
return NULL;
}
+static struct vring_desc_extra *vring_alloc_desc_extra(struct vring_virtqueue *vq,
+ unsigned int num)
+{
+ struct vring_desc_extra *desc_extra;
+ unsigned int i;
+
+ desc_extra = kmalloc_array(num, sizeof(struct vring_desc_extra),
+ GFP_KERNEL);
+ if (!desc_extra)
+ return NULL;
+
+ memset(desc_extra, 0, num * sizeof(struct vring_desc_extra));
+
+ for (i = 0; i < num - 1; i++)
+ desc_extra[i].next = i + 1;
+
+ return desc_extra;
+}
+
static struct virtqueue *vring_create_virtqueue_packed(
unsigned int index,
unsigned int num,
@@ -1567,7 +1586,6 @@ static struct virtqueue *vring_create_virtqueue_packed(
struct vring_packed_desc_event *driver, *device;
dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr;
size_t ring_size_in_bytes, event_size_in_bytes;
- unsigned int i;
ring_size_in_bytes = num * sizeof(struct vring_packed_desc);
@@ -1650,18 +1668,10 @@ static struct virtqueue *vring_create_virtqueue_packed(
/* Put everything in free lists. */
vq->free_head = 0;
- vq->packed.desc_extra = kmalloc_array(num,
- sizeof(struct vring_desc_extra),
- GFP_KERNEL);
+ vq->packed.desc_extra = vring_alloc_desc_extra(vq, num);
if (!vq->packed.desc_extra)
goto err_desc_extra;
- memset(vq->packed.desc_extra, 0,
- num * sizeof(struct vring_desc_extra));
-
- for (i = 0; i < num - 1; i++)
- vq->packed.desc_extra[i].next = i + 1;
-
/* No callback? Tell other side not to bother us. */
if (!callback) {
vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE;
--
2.25.1
This patch moves next from vring_desc_state_packed to
vring_desc_extra_packed. This makes it simpler to reuse the extra
state for the split virtqueue.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 71e16b53e9c1..e1e9ed42e637 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -74,7 +74,6 @@ struct vring_desc_state_packed {
void *data; /* Data for callback. */
struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
u16 num; /* Descriptor list length. */
- u16 next; /* The next desc state in a list. */
u16 last; /* The last desc state in a list. */
};
@@ -82,6 +81,7 @@ struct vring_desc_extra_packed {
dma_addr_t addr; /* Buffer DMA addr. */
u32 len; /* Buffer length. */
u16 flags; /* Descriptor flags. */
+ u16 next; /* The next desc state in a list. */
};
struct vring_virtqueue {
@@ -1061,7 +1061,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
1 << VRING_PACKED_DESC_F_USED;
}
vq->packed.next_avail_idx = n;
- vq->free_head = vq->packed.desc_state[id].next;
+ vq->free_head = vq->packed.desc_extra[id].next;
/* Store token and indirect buffer state. */
vq->packed.desc_state[id].num = 1;
@@ -1169,7 +1169,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
le16_to_cpu(flags);
}
prev = curr;
- curr = vq->packed.desc_state[curr].next;
+ curr = vq->packed.desc_extra[curr].next;
if ((unlikely(++i >= vq->packed.vring.num))) {
i = 0;
@@ -1290,7 +1290,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
/* Clear data ptr. */
state->data = NULL;
- vq->packed.desc_state[state->last].next = vq->free_head;
+ vq->packed.desc_extra[state->last].next = vq->free_head;
vq->free_head = id;
vq->vq.num_free += state->num;
@@ -1299,7 +1299,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
for (i = 0; i < state->num; i++) {
vring_unmap_state_packed(vq,
&vq->packed.desc_extra[curr]);
- curr = vq->packed.desc_state[curr].next;
+ curr = vq->packed.desc_extra[curr].next;
}
}
@@ -1649,8 +1649,6 @@ static struct virtqueue *vring_create_virtqueue_packed(
/* Put everything in free lists. */
vq->free_head = 0;
- for (i = 0; i < num-1; i++)
- vq->packed.desc_state[i].next = i + 1;
vq->packed.desc_extra = kmalloc_array(num,
sizeof(struct vring_desc_extra_packed),
@@ -1661,6 +1659,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
memset(vq->packed.desc_extra, 0,
num * sizeof(struct vring_desc_extra_packed));
+ for (i = 0; i < num - 1; i++)
+ vq->packed.desc_extra[i].next = i + 1;
+
/* No callback? Tell other side not to bother us. */
if (!callback) {
vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE;
--
2.25.1
We should not depend on the DMA address, length and flags in the
descriptor table, since they could be written with arbitrary values by
the device. So this patch switches to using the copies stored in
desc_extra.
Note that the indirect descriptors are fine since they are read-only
streaming mappings.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 0cdd965dba58..5509c2643fb1 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1213,13 +1213,16 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
unmap_release:
err_idx = i;
i = head;
+ curr = vq->free_head;
vq->packed.avail_used_flags = avail_used_flags;
for (n = 0; n < total_sg; n++) {
if (i == err_idx)
break;
- vring_unmap_desc_packed(vq, &desc[i]);
+ vring_unmap_state_packed(vq,
+ &vq->packed.desc_extra[curr]);
+ curr = vq->packed.desc_extra[curr].next;
i++;
if (i >= vq->packed.vring.num)
i = 0;
--
2.25.1
For the split virtqueue, we used to depend on the address, length and
flags stored in the descriptor ring for DMA unmapping. This is unsafe
since the device can manipulate the behavior of the virtio driver, the
IOMMU drivers and swiotlb.

For safety, maintain the DMA address, DMA length, descriptor flags and
next field of the non-indirect descriptors in vring_desc_extra when
the DMA API is used for virtio, as we did for the packed virtqueue,
and use that metadata when performing DMA operations. Indirect
descriptors should be safe since they use streaming mappings.

With this, the descriptor ring is write-only from the view of the
driver.

This slightly increases the footprint of the driver, but no difference
is noticed in pktgen (64B) and netperf tests with virtio-net.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 112 +++++++++++++++++++++++++++--------
1 file changed, 87 insertions(+), 25 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 9800f1c9ce4c..5f0076eeb39c 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -130,6 +130,7 @@ struct vring_virtqueue {
/* Per-descriptor state. */
struct vring_desc_state_split *desc_state;
+ struct vring_desc_extra *desc_extra;
/* DMA address and size information */
dma_addr_t queue_dma_addr;
@@ -364,8 +365,8 @@ static int vring_mapping_error(const struct vring_virtqueue *vq,
* Split ring specific functions - *_split().
*/
-static void vring_unmap_one_split(const struct vring_virtqueue *vq,
- struct vring_desc *desc)
+static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
+ struct vring_desc *desc)
{
u16 flags;
@@ -389,6 +390,35 @@ static void vring_unmap_one_split(const struct vring_virtqueue *vq,
}
}
+static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
+ unsigned int i)
+{
+ struct vring_desc_extra *extra = vq->split.desc_extra;
+ u16 flags;
+
+ if (!vq->use_dma_api)
+ goto out;
+
+ flags = extra[i].flags;
+
+ if (flags & VRING_DESC_F_INDIRECT) {
+ dma_unmap_single(vring_dma_dev(vq),
+ extra[i].addr,
+ extra[i].len,
+ (flags & VRING_DESC_F_WRITE) ?
+ DMA_FROM_DEVICE : DMA_TO_DEVICE);
+ } else {
+ dma_unmap_page(vring_dma_dev(vq),
+ extra[i].addr,
+ extra[i].len,
+ (flags & VRING_DESC_F_WRITE) ?
+ DMA_FROM_DEVICE : DMA_TO_DEVICE);
+ }
+
+out:
+ return extra[i].next;
+}
+
static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq,
unsigned int total_sg,
gfp_t gfp)
@@ -417,13 +447,28 @@ static inline unsigned int virtqueue_add_desc_split(struct virtqueue *vq,
unsigned int i,
dma_addr_t addr,
unsigned int len,
- u16 flags)
+ u16 flags,
+ bool indirect)
{
+ struct vring_virtqueue *vring = to_vvq(vq);
+ struct vring_desc_extra *extra = vring->split.desc_extra;
+ u16 next;
+
desc[i].flags = cpu_to_virtio16(vq->vdev, flags);
desc[i].addr = cpu_to_virtio64(vq->vdev, addr);
desc[i].len = cpu_to_virtio32(vq->vdev, len);
- return virtio16_to_cpu(vq->vdev, desc[i].next);
+ if (!indirect) {
+ next = extra[i].next;
+ desc[i].next = cpu_to_virtio16(vq->vdev, next);
+
+ extra[i].addr = addr;
+ extra[i].len = len;
+ extra[i].flags = flags;
+ } else
+ next = virtio16_to_cpu(vq->vdev, desc[i].next);
+
+ return next;
}
static inline int virtqueue_add_split(struct virtqueue *_vq,
@@ -499,8 +544,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
goto unmap_release;
prev = i;
+ /* Note that we trust indirect descriptor
+ * table since it use stream DMA mapping.
+ */
i = virtqueue_add_desc_split(_vq, desc, i, addr, sg->length,
- VRING_DESC_F_NEXT);
+ VRING_DESC_F_NEXT,
+ indirect);
}
}
for (; n < (out_sgs + in_sgs); n++) {
@@ -510,14 +559,21 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
goto unmap_release;
prev = i;
+ /* Note that we trust indirect descriptor
+ * table since it use stream DMA mapping.
+ */
i = virtqueue_add_desc_split(_vq, desc, i, addr,
sg->length,
VRING_DESC_F_NEXT |
- VRING_DESC_F_WRITE);
+ VRING_DESC_F_WRITE,
+ indirect);
}
}
/* Last one doesn't continue. */
desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
+ if (!indirect && vq->use_dma_api)
+ vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags =
+ ~VRING_DESC_F_NEXT;
if (indirect) {
/* Now that the indirect table is filled in, map it. */
@@ -530,7 +586,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
virtqueue_add_desc_split(_vq, vq->split.vring.desc,
head, addr,
total_sg * sizeof(struct vring_desc),
- VRING_DESC_F_INDIRECT);
+ VRING_DESC_F_INDIRECT,
+ false);
}
/* We're using some buffers from the free list. */
@@ -538,8 +595,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
/* Update free pointer */
if (indirect)
- vq->free_head = virtio16_to_cpu(_vq->vdev,
- vq->split.vring.desc[head].next);
+ vq->free_head = vq->split.desc_extra[head].next;
else
vq->free_head = i;
@@ -584,8 +640,11 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
for (n = 0; n < total_sg; n++) {
if (i == err_idx)
break;
- vring_unmap_one_split(vq, &desc[i]);
- i = virtio16_to_cpu(_vq->vdev, desc[i].next);
+ if (indirect) {
+ vring_unmap_one_split_indirect(vq, &desc[i]);
+ i = virtio16_to_cpu(_vq->vdev, desc[i].next);
+ } else
+ i = vring_unmap_one_split(vq, i);
}
if (indirect)
@@ -639,14 +698,13 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
i = head;
while (vq->split.vring.desc[i].flags & nextflag) {
- vring_unmap_one_split(vq, &vq->split.vring.desc[i]);
- i = virtio16_to_cpu(vq->vq.vdev, vq->split.vring.desc[i].next);
+ vring_unmap_one_split(vq, i);
+ i = vq->split.desc_extra[i].next;
vq->vq.num_free++;
}
- vring_unmap_one_split(vq, &vq->split.vring.desc[i]);
- vq->split.vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev,
- vq->free_head);
+ vring_unmap_one_split(vq, i);
+ vq->split.desc_extra[i].next = vq->free_head;
vq->free_head = head;
/* Plus final descriptor */
@@ -661,15 +719,14 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
if (!indir_desc)
return;
- len = virtio32_to_cpu(vq->vq.vdev,
- vq->split.vring.desc[head].len);
+ len = vq->split.desc_extra[head].len;
- BUG_ON(!(vq->split.vring.desc[head].flags &
- cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
+ BUG_ON(!(vq->split.desc_extra[head].flags &
+ VRING_DESC_F_INDIRECT));
BUG_ON(len == 0 || len % sizeof(struct vring_desc));
for (j = 0; j < len / sizeof(struct vring_desc); j++)
- vring_unmap_one_split(vq, &indir_desc[j]);
+ vring_unmap_one_split_indirect(vq, &indir_desc[j]);
kfree(indir_desc);
vq->split.desc_state[head].indir_desc = NULL;
@@ -2085,7 +2142,6 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
void (*callback)(struct virtqueue *),
const char *name)
{
- unsigned int i;
struct vring_virtqueue *vq;
if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
@@ -2140,16 +2196,20 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
if (!vq->split.desc_state)
goto err_state;
+ vq->split.desc_extra = vring_alloc_desc_extra(vq, vring.num);
+ if (!vq->split.desc_extra)
+ goto err_extra;
+
/* Put everything in free lists. */
vq->free_head = 0;
- for (i = 0; i < vring.num-1; i++)
- vq->split.vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
memset(vq->split.desc_state, 0, vring.num *
sizeof(struct vring_desc_state_split));
list_add_tail(&vq->vq.list, &vdev->vqs);
return &vq->vq;
+err_extra:
+ kfree(vq->split.desc_state);
err_state:
kfree(vq);
return NULL;
@@ -2233,8 +2293,10 @@ void vring_del_virtqueue(struct virtqueue *_vq)
vq->split.queue_dma_addr);
}
}
- if (!vq->packed_ring)
+ if (!vq->packed_ring) {
kfree(vq->split.desc_state);
+ kfree(vq->split.desc_extra);
+ }
list_del(&_vq->list);
kfree(vq);
}
--
2.25.1
This patch introduces a helper for storing a descriptor into the
descriptor table of the split virtqueue.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 39 ++++++++++++++++++++++--------------
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 5509c2643fb1..11dfa0dc8ec1 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -412,6 +412,20 @@ static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq,
return desc;
}
+static inline unsigned int virtqueue_add_desc_split(struct virtqueue *vq,
+ struct vring_desc *desc,
+ unsigned int i,
+ dma_addr_t addr,
+ unsigned int len,
+ u16 flags)
+{
+ desc[i].flags = cpu_to_virtio16(vq->vdev, flags);
+ desc[i].addr = cpu_to_virtio64(vq->vdev, addr);
+ desc[i].len = cpu_to_virtio32(vq->vdev, len);
+
+ return virtio16_to_cpu(vq->vdev, desc[i].next);
+}
+
static inline int virtqueue_add_split(struct virtqueue *_vq,
struct scatterlist *sgs[],
unsigned int total_sg,
@@ -484,11 +498,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
if (vring_mapping_error(vq, addr))
goto unmap_release;
- desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
- desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
- desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
prev = i;
- i = virtio16_to_cpu(_vq->vdev, desc[i].next);
+ i = virtqueue_add_desc_split(_vq, desc, i, addr, sg->length,
+ VRING_DESC_F_NEXT);
}
}
for (; n < (out_sgs + in_sgs); n++) {
@@ -497,11 +509,11 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
if (vring_mapping_error(vq, addr))
goto unmap_release;
- desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
- desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
- desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
prev = i;
- i = virtio16_to_cpu(_vq->vdev, desc[i].next);
+ i = virtqueue_add_desc_split(_vq, desc, i, addr,
+ sg->length,
+ VRING_DESC_F_NEXT |
+ VRING_DESC_F_WRITE);
}
}
/* Last one doesn't continue. */
@@ -515,13 +527,10 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
if (vring_mapping_error(vq, addr))
goto unmap_release;
- vq->split.vring.desc[head].flags = cpu_to_virtio16(_vq->vdev,
- VRING_DESC_F_INDIRECT);
- vq->split.vring.desc[head].addr = cpu_to_virtio64(_vq->vdev,
- addr);
-
- vq->split.vring.desc[head].len = cpu_to_virtio32(_vq->vdev,
- total_sg * sizeof(struct vring_desc));
+ virtqueue_add_desc_split(_vq, vq->split.vring.desc,
+ head, addr,
+ total_sg * sizeof(struct vring_desc),
+ VRING_DESC_F_INDIRECT);
}
/* We're using some buffers from the free list. */
--
2.25.1
Use an error label for unwinding in __vring_new_virtqueue(). This is
useful for future refactoring.
Signed-off-by: Jason Wang <[email protected]>
---
drivers/virtio/virtio_ring.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 11dfa0dc8ec1..9800f1c9ce4c 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2137,10 +2137,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
vq->split.desc_state = kmalloc_array(vring.num,
sizeof(struct vring_desc_state_split), GFP_KERNEL);
- if (!vq->split.desc_state) {
- kfree(vq);
- return NULL;
- }
+ if (!vq->split.desc_state)
+ goto err_state;
/* Put everything in free lists. */
vq->free_head = 0;
@@ -2151,6 +2149,10 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
list_add_tail(&vq->vq.list, &vdev->vqs);
return &vq->vq;
+
+err_state:
+ kfree(vq);
+ return NULL;
}
EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
--
2.25.1
On 2021/4/23 4:09 PM, Jason Wang wrote:
> Hi:
>
> Sometimes, the driver doesn't trust the device. This usually
> happens for encrypted VMs or VDUSE[1]. In both cases, technology
> like swiotlb is used to prevent the device from poking or mangling
> guest memory. But this is not sufficient, since the current virtio
> driver may trust what is stored in the descriptor table (a coherent
> mapping) when performing DMA operations like unmap and bounce, so the
> device may exploit the behaviour of swiotlb to perform attacks[2].
>
> To protect against a malicious device, this series stores and uses the
> descriptor metadata in an auxiliary structure which can not be accessed
> via swiotlb, instead of the copies in the descriptor table. This means
> the descriptor table is write-only from the view of the driver.
>
> Actually, we've almost achieved that with the packed virtqueue; we
> just need to fix a corner case in the handling of mapping errors. For
> the split virtqueue we simply follow what's done for the packed one.
>
> Note that we don't duplicate the descriptor metadata for indirect
> descriptors since they use streaming mappings, which are read-only, so
> it's safe as long as the metadata of the non-indirect descriptors is
> correct.
>
> For the split virtqueue, the change increases the memory footprint due
> to the auxiliary metadata, but it's almost negligible in simple tests
> like pktgen or netperf.
>
> Lightly tested with packed on/off, iommu on/off and swiotlb force/off
> in the guest.
>
> Please review.
>
> Changes from V1:
> - Always use auxiliary metadata for split virtqueue
> - Don't read from the descriptor when detaching an indirect descriptor
Hi Michael:
Our QE sees no regressions in the perf test on a 10G card but some
regressions (5%-10%) on a 40G card.

I think this is expected since we increase the footprint. Are you OK
with this so that we can try to optimize on top, or do you have other
ideas?
Thanks
>
> [1]
> https://lore.kernel.org/netdev/[email protected]/T/
> [2]
> https://yhbt.net/lore/all/[email protected]/T/#mc6b6e2343cbeffca68ca7a97e0f473aaa871c95b
>
> Jason Wang (7):
> virtio-ring: maintain next in extra state for packed virtqueue
> virtio_ring: rename vring_desc_extra_packed
> virtio-ring: factor out desc_extra allocation
> virtio_ring: secure handling of mapping errors
> virtio_ring: introduce virtqueue_desc_add_split()
> virtio: use err label in __vring_new_virtqueue()
> virtio-ring: store DMA metadata in desc_extra for split virtqueue
>
> drivers/virtio/virtio_ring.c | 201 +++++++++++++++++++++++++----------
> 1 file changed, 144 insertions(+), 57 deletions(-)
>
On Thu, May 06, 2021 at 11:20:30AM +0800, Jason Wang wrote:
>
> On 2021/4/23 4:09 PM, Jason Wang wrote:
> > Hi:
> >
> > Sometimes, the driver doesn't trust the device. This usually
> > happens for encrypted VMs or VDUSE[1]. In both cases, technology
> > like swiotlb is used to prevent the device from poking or mangling
> > guest memory. But this is not sufficient, since the current virtio
> > driver may trust what is stored in the descriptor table (a coherent
> > mapping) when performing DMA operations like unmap and bounce, so the
> > device may exploit the behaviour of swiotlb to perform attacks[2].
> >
> > To protect against a malicious device, this series stores and uses the
> > descriptor metadata in an auxiliary structure which can not be accessed
> > via swiotlb, instead of the copies in the descriptor table. This means
> > the descriptor table is write-only from the view of the driver.
> >
> > Actually, we've almost achieved that with the packed virtqueue; we
> > just need to fix a corner case in the handling of mapping errors. For
> > the split virtqueue we simply follow what's done for the packed one.
> >
> > Note that we don't duplicate the descriptor metadata for indirect
> > descriptors since they use streaming mappings, which are read-only, so
> > it's safe as long as the metadata of the non-indirect descriptors is
> > correct.
> >
> > For the split virtqueue, the change increases the memory footprint due
> > to the auxiliary metadata, but it's almost negligible in simple tests
> > like pktgen or netperf.
> >
> > Lightly tested with packed on/off, iommu on/off and swiotlb force/off
> > in the guest.
> >
> > Please review.
> >
> > Changes from V1:
> > - Always use auxiliary metadata for split virtqueue
> > - Don't read from the descriptor when detaching an indirect descriptor
>
>
> Hi Michael:
>
> Our QE sees no regressions in the perf test on a 10G card but some
> regressions (5%-10%) on a 40G card.
>
> I think this is expected since we increase the footprint. Are you OK with
> this so that we can try to optimize on top, or do you have other ideas?
>
> Thanks
Let's try for just a bit, won't make this window anyway:
I have an old idea. Add a way to find out that unmap is a nop
(or more exactly does not use the address/length).
Then in that case even with DMA API we do not need
the extra data. Hmm?
>
> >
> > [1]
> > https://lore.kernel.org/netdev/[email protected]/T/
> > [2]
> > https://yhbt.net/lore/all/[email protected]/T/#mc6b6e2343cbeffca68ca7a97e0f473aaa871c95b
> >
> > Jason Wang (7):
> > virtio-ring: maintain next in extra state for packed virtqueue
> > virtio_ring: rename vring_desc_extra_packed
> > virtio-ring: factor out desc_extra allocation
> > virtio_ring: secure handling of mapping errors
> > virtio_ring: introduce virtqueue_desc_add_split()
> > virtio: use err label in __vring_new_virtqueue()
> > virtio-ring: store DMA metadata in desc_extra for split virtqueue
> >
> > drivers/virtio/virtio_ring.c | 201 +++++++++++++++++++++++++----------
> > 1 file changed, 144 insertions(+), 57 deletions(-)
> >
On Thu, May 06, 2021 at 04:12:17AM -0400, Michael S. Tsirkin wrote:
> Let's try for just a bit, won't make this window anyway:
>
> I have an old idea. Add a way to find out that unmap is a nop
> (or more exactly does not use the address/length).
> Then in that case even with DMA API we do not need
> the extra data. Hmm?
So we actually do have a check for that from the early days of the DMA
API, but it only works at compile time: CONFIG_NEED_DMA_MAP_STATE.
But given how rare configs without an IOMMU or swiotlb are these days,
it has stopped being very useful. Unfortunately a runtime version is
not entirely trivial, but maybe if we allow for false positives we
could do something like this:
bool dma_direct_need_state(struct device *dev)
{
	/* some areas could not be covered by any map at all */
	if (dev->dma_range_map)
		return false;
	if (force_dma_unencrypted(dev))
		return false;
	if (dma_direct_need_sync(dev))
		return false;
	return *dev->dma_mask == DMA_BIT_MASK(64);
}

bool dma_need_state(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (dma_map_direct(dev, ops))
		return dma_direct_need_state(dev);
	return ops->unmap_page ||
	       ops->sync_single_for_cpu || ops->sync_single_for_device;
}
On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> Sometimes, the driver doesn't trust the device. This usually
> happens for encrypted VMs or VDUSE[1].
Thanks for doing this.
Can you describe the overall memory safety model that virtio drivers
must follow? For example:
- Driver-to-device buffers must be on dedicated pages to avoid
information leaks.
- Driver-to-device buffers must be on dedicated pages to avoid memory
corruption.
When I say "pages" I guess it's the IOMMU page size that matters?
What is the memory access granularity of VDUSE?
I'm asking these questions because there is driver code that exposes
kernel memory to the device and I'm not sure it's safe. For example:
static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
			   struct scatterlist *data_sg, bool have_data)
{
	struct scatterlist hdr, status, *sgs[3];
	unsigned int num_out = 0, num_in = 0;

	sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
			  ^^^^^^^^^^^^^
	sgs[num_out++] = &hdr;

	if (have_data) {
		if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
			sgs[num_out++] = data_sg;
		else
			sgs[num_out + num_in++] = data_sg;
	}

	sg_init_one(&status, &vbr->status, sizeof(vbr->status));
			     ^^^^^^^^^^^^
	sgs[num_out + num_in++] = &status;

	return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
}
I guess the drivers don't need to be modified as long as swiotlb is used
to bounce the buffers through "insecure" memory so that the memory
surrounding the buffers is not exposed?
Stefan
On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
>
> On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > Sometimes, the driver doesn't trust the device. This usually
> > happens for encrypted VMs or VDUSE[1].
>
> Thanks for doing this.
>
> Can you describe the overall memory safety model that virtio drivers
> must follow? For example:
>
> - Driver-to-device buffers must be on dedicated pages to avoid
> information leaks.
>
> - Driver-to-device buffers must be on dedicated pages to avoid memory
> corruption.
>
> When I say "pages" I guess it's the IOMMU page size that matters?
>
> What is the memory access granularity of VDUSE?
>
Now we use PAGE_SIZE as the access granularity. I think it should be
safe to access the Driver-to-device buffers in the VDUSE case because
we also use a bounce-buffering mechanism like swiotlb does.
Thanks,
Yongji
On Fri, May 14, 2021 at 2:07 PM Yongji Xie <[email protected]> wrote:
>
> On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
> >
> > On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > > Sometimes, the driver doesn't trust the device. This usually
> > > happens for encrypted VMs or VDUSE[1].
> >
> > Thanks for doing this.
> >
> > Can you describe the overall memory safety model that virtio drivers
> > must follow? For example:
> >
> > - Driver-to-device buffers must be on dedicated pages to avoid
> > information leaks.
> >
> > - Driver-to-device buffers must be on dedicated pages to avoid memory
> > corruption.
> >
> > When I say "pages" I guess it's the IOMMU page size that matters?
> >
> > What is the memory access granularity of VDUSE?
> >
>
> Now we use PAGE_SIZE as the access granularity. I think it should be
> safe to access the Driver-to-device buffers in the VDUSE case because
> we also use a bounce-buffering mechanism like swiotlb does.
>
> Thanks,
> Yongji
>
Yes. While at this, I wonder whether it's possible to re-use the
swiotlb code for VDUSE, or to have some common library for this.
Otherwise there would be duplicated code (and duplicated bugs).
Thanks
On Fri, May 14, 2021 at 3:31 PM Jason Wang <[email protected]> wrote:
>
> On Fri, May 14, 2021 at 2:07 PM Yongji Xie <[email protected]> wrote:
> >
> > On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
> > >
> > > On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > > > Sometimes, the driver doesn't trust the device. This usually
> > > > happens for encrypted VMs or VDUSE[1].
> > >
> > > Thanks for doing this.
> > >
> > > Can you describe the overall memory safety model that virtio drivers
> > > must follow? For example:
> > >
> > > - Driver-to-device buffers must be on dedicated pages to avoid
> > > information leaks.
> > >
> > > - Driver-to-device buffers must be on dedicated pages to avoid memory
> > > corruption.
> > >
> > > When I say "pages" I guess it's the IOMMU page size that matters?
> > >
> > > What is the memory access granularity of VDUSE?
> > >
> >
> > Now we use PAGE_SIZE as the access granularity. I think it should be
> > safe to access the Driver-to-device buffers in VDUSE case because we
> > also use bounce-buffering mechanism like swiotlb does.
> >
> > Thanks,
> > Yongji
> >
>
> Yes, while at this, I wonder it's possible the re-use the swiotlb
> codes for VDUSE, or having some common library for this. Otherwise
> there would be duplicated codes (bugs).
>
I think there are still some gaps between the VDUSE code and the
swiotlb code. For example, swiotlb allocates and uses contiguous
memory for bouncing but VDUSE doesn't. The swiotlb works in singleton
mode (designed for a platform IOMMU), but VDUSE is based on an on-chip
IOMMU (which supports multiple instances). So we will need some extra
work if we want a common library for them both.
And the only duplicated code right now is swiotlb_bounce() (swiotlb)
and do_bounce() (VDUSE), so I prefer to do this work in the future
rather than in the current series.
Thanks,
Yongji
On Thu, May 06, 2021 at 01:38:29PM +0100, Christoph Hellwig wrote:
> On Thu, May 06, 2021 at 04:12:17AM -0400, Michael S. Tsirkin wrote:
> > Let's try for just a bit, won't make this window anyway:
> >
> > I have an old idea. Add a way to find out that unmap is a nop
> > (or more exactly does not use the address/length).
> > Then in that case even with DMA API we do not need
> > the extra data. Hmm?
>
> So we actually do have a check for that from the early days of the DMA
> API, but it only works at compile time: CONFIG_NEED_DMA_MAP_STATE.
>
> But given how rare configs without an iommu or swiotlb are these days
> it has stopped to be very useful. Unfortunately a runtime-version is
> not entirely trivial, but maybe if we allow for false positives we
> could do something like this
>
> bool dma_direct_need_state(struct device *dev)
> {
> /* some areas could not be covered by any map at all */
> if (dev->dma_range_map)
> return false;
> if (force_dma_unencrypted(dev))
> return false;
> if (dma_direct_need_sync(dev))
> return false;
> return *dev->dma_mask == DMA_BIT_MASK(64);
> }
>
> bool dma_need_state(struct device *dev)
> {
> const struct dma_map_ops *ops = get_dma_ops(dev);
>
> if (dma_map_direct(dev, ops))
> return dma_direct_need_state(dev);
> return ops->unmap_page ||
> ops->sync_single_for_cpu || ops->sync_single_for_device;
> }
Yea that sounds like a good idea. We will need to document that.
Something like:
/*
 * dma_need_state - report whether unmap calls use the address and length
 * @dev: device to query
 *
 * This is a runtime version of CONFIG_NEED_DMA_MAP_STATE.
 *
 * Return the value indicating whether dma_unmap_* and dma_sync_* calls for the device
 * use the DMA state parameters passed to them.
 * The DMA state parameters are: scatter/gather list/table, address and
 * length.
 *
 * If dma_need_state returns false then the DMA state parameters are
 * ignored by all dma_unmap_* and dma_sync_* calls, so it is safe to pass 0 for
 * address and length, and DMA_UNMAP_SG_TABLE_INVALID and
 * DMA_UNMAP_SG_LIST_INVALID for the s/g table and list respectively.
 * If dma_need_state returns true then the DMA state might
 * be used and so the actual values are required.
 */
And we will need DMA_UNMAP_SG_TABLE_INVALID and
DMA_UNMAP_SG_LIST_INVALID as pointers to an empty global table and list
for calls such as dma_unmap_sgtable that dereference pointers before checking
they are used.
Does this look good?
The table/list variants are for consistency; virtio specifically does
not use s/g at the moment, but it seems nicer than leaving
users wondering what to do about these.
Thoughts? Jason want to try implementing?
--
MST
On Fri, May 14, 2021 at 7:17 PM Stefan Hajnoczi <[email protected]> wrote:
>
> On Fri, May 14, 2021 at 03:29:20PM +0800, Jason Wang wrote:
> > On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
> > >
> > > On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > > > Sometimes, the driver doesn't trust the device. This is usually
> > > > happens for the encrtpyed VM or VDUSE[1].
> > >
> > > Thanks for doing this.
> > >
> > > Can you describe the overall memory safety model that virtio drivers
> > > must follow?
> >
> > My understanding is that, basically the driver should not trust the
> > device (since the driver doesn't know what kind of device that it
> > tries to drive)
> >
> > 1) For any read only metadata (required at the spec level) which is
> > mapped as coherent, driver should not depend on the metadata that is
> > stored in a place that could be wrote by the device. This is what this
> > series tries to achieve.
> > 2) For other metadata that is produced by the device, need to make
> > sure there's no malicious device triggered behavior, this is somehow
> > similar to what vhost did. No DOS, loop, kernel bug and other stuffs.
> > 3) swiotb is a must to enforce memory access isolation. (VDUSE or encrypted VM)
> >
> > > For example:
> > >
> > > - Driver-to-device buffers must be on dedicated pages to avoid
> > > information leaks.
> >
> > It looks to me if swiotlb is used, we don't need this since the
> > bouncing is not done at byte not page.
> >
> > But if swiotlb is not used, we need to enforce this.
> >
> > >
> > > - Driver-to-device buffers must be on dedicated pages to avoid memory
> > > corruption.
> >
> > Similar to the above.
> >
> > >
> > > When I say "pages" I guess it's the IOMMU page size that matters?
> > >
> >
> > And the IOTLB page size.
> >
> > > What is the memory access granularity of VDUSE?
> >
> > It has an swiotlb, but the access and bouncing is done per byte.
> >
> > >
> > > I'm asking these questions because there is driver code that exposes
> > > kernel memory to the device and I'm not sure it's safe. For example:
> > >
> > > static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
> > > struct scatterlist *data_sg, bool have_data)
> > > {
> > > struct scatterlist hdr, status, *sgs[3];
> > > unsigned int num_out = 0, num_in = 0;
> > >
> > > sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
> > > ^^^^^^^^^^^^^
> > > sgs[num_out++] = &hdr;
> > >
> > > if (have_data) {
> > > if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
> > > sgs[num_out++] = data_sg;
> > > else
> > > sgs[num_out + num_in++] = data_sg;
> > > }
> > >
> > > sg_init_one(&status, &vbr->status, sizeof(vbr->status));
> > > ^^^^^^^^^^^^
> > > sgs[num_out + num_in++] = &status;
> > >
> > > return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
> > > }
> > >
> > > I guess the drivers don't need to be modified as long as swiotlb is used
> > > to bounce the buffers through "insecure" memory so that the memory
> > > surrounding the buffers is not exposed?
> >
> > Yes, swiotlb won't bounce the whole page. So I think it's safe.
>
> Thanks Jason and Yongji Xie for clarifying. Seems like swiotlb or a
> similar mechanism can handle byte-granularity isolation so the drivers
> not need to worry about information leaks or memory corruption outside
> the mapped byte range.
>
> We still need to audit virtio guest drivers to ensure they don't trust
> data that can be modified by the device. I will look at virtio-blk and
> virtio-fs next week.
>
Oh, that's great. Thank you!
I also did some audit work these days and will send a new version for
reviewing next Monday.
Thanks,
Yongji
On Fri, May 14, 2021 at 07:27:22PM +0800, Yongji Xie wrote:
> On Fri, May 14, 2021 at 7:17 PM Stefan Hajnoczi <[email protected]> wrote:
> >
> > On Fri, May 14, 2021 at 03:29:20PM +0800, Jason Wang wrote:
> > > On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
> > > >
> > > > On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > > > > Sometimes, the driver doesn't trust the device. This is usually
> > > > > happens for the encrtpyed VM or VDUSE[1].
> > > >
> > > > Thanks for doing this.
> > > >
> > > > Can you describe the overall memory safety model that virtio drivers
> > > > must follow?
> > >
> > > My understanding is that, basically the driver should not trust the
> > > device (since the driver doesn't know what kind of device that it
> > > tries to drive)
> > >
> > > 1) For any read only metadata (required at the spec level) which is
> > > mapped as coherent, driver should not depend on the metadata that is
> > > stored in a place that could be wrote by the device. This is what this
> > > series tries to achieve.
> > > 2) For other metadata that is produced by the device, need to make
> > > sure there's no malicious device triggered behavior, this is somehow
> > > similar to what vhost did. No DOS, loop, kernel bug and other stuffs.
> > > 3) swiotb is a must to enforce memory access isolation. (VDUSE or encrypted VM)
> > >
> > > > For example:
> > > >
> > > > - Driver-to-device buffers must be on dedicated pages to avoid
> > > > information leaks.
> > >
> > > It looks to me if swiotlb is used, we don't need this since the
> > > bouncing is not done at byte not page.
> > >
> > > But if swiotlb is not used, we need to enforce this.
> > >
> > > >
> > > > - Driver-to-device buffers must be on dedicated pages to avoid memory
> > > > corruption.
> > >
> > > Similar to the above.
> > >
> > > >
> > > > When I say "pages" I guess it's the IOMMU page size that matters?
> > > >
> > >
> > > And the IOTLB page size.
> > >
> > > > What is the memory access granularity of VDUSE?
> > >
> > > It has an swiotlb, but the access and bouncing is done per byte.
> > >
> > > >
> > > > I'm asking these questions because there is driver code that exposes
> > > > kernel memory to the device and I'm not sure it's safe. For example:
> > > >
> > > > static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
> > > > struct scatterlist *data_sg, bool have_data)
> > > > {
> > > > struct scatterlist hdr, status, *sgs[3];
> > > > unsigned int num_out = 0, num_in = 0;
> > > >
> > > > sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
> > > > ^^^^^^^^^^^^^
> > > > sgs[num_out++] = &hdr;
> > > >
> > > > if (have_data) {
> > > > if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
> > > > sgs[num_out++] = data_sg;
> > > > else
> > > > sgs[num_out + num_in++] = data_sg;
> > > > }
> > > >
> > > > sg_init_one(&status, &vbr->status, sizeof(vbr->status));
> > > > ^^^^^^^^^^^^
> > > > sgs[num_out + num_in++] = &status;
> > > >
> > > > return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
> > > > }
> > > >
> > > > I guess the drivers don't need to be modified as long as swiotlb is used
> > > > to bounce the buffers through "insecure" memory so that the memory
> > > > surrounding the buffers is not exposed?
> > >
> > > Yes, swiotlb won't bounce the whole page. So I think it's safe.
> >
> > Thanks Jason and Yongji Xie for clarifying. Seems like swiotlb or a
> > similar mechanism can handle byte-granularity isolation so the drivers
> > not need to worry about information leaks or memory corruption outside
> > the mapped byte range.
> >
> > We still need to audit virtio guest drivers to ensure they don't trust
> > data that can be modified by the device. I will look at virtio-blk and
> > virtio-fs next week.
> >
>
> Oh, that's great. Thank you!
>
> I also did some audit work these days and will send a new version for
> reviewing next Monday.
>
> Thanks,
> Yongji
Doing it in a way that won't hurt performance for simple
configs that trust the device is a challenge though.
Pls take a look at the discussion with Christoph for some ideas
on how to do this.
--
MST
On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
>
> On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > Sometimes, the driver doesn't trust the device. This is usually
> > happens for the encrtpyed VM or VDUSE[1].
>
> Thanks for doing this.
>
> Can you describe the overall memory safety model that virtio drivers
> must follow?
My understanding is that, basically, the driver should not trust the
device (since the driver doesn't know what kind of device it
tries to drive):
1) For any read-only metadata (required at the spec level) which is
mapped as coherent, the driver should not depend on metadata that is
stored in a place that could be written by the device. This is what
this series tries to achieve.
2) For other metadata that is produced by the device, the driver needs
to make sure there's no malicious device-triggered behavior; this is
somewhat similar to what vhost did. No DoS, loops, kernel bugs or
other issues.
3) swiotlb is a must to enforce memory access isolation (VDUSE or encrypted VM).
> For example:
>
> - Driver-to-device buffers must be on dedicated pages to avoid
> information leaks.
It looks to me that if swiotlb is used, we don't need this, since the
bouncing is done at byte granularity, not page granularity.
But if swiotlb is not used, we need to enforce this.
>
> - Driver-to-device buffers must be on dedicated pages to avoid memory
> corruption.
Similar to the above.
>
> When I say "pages" I guess it's the IOMMU page size that matters?
>
And the IOTLB page size.
> What is the memory access granularity of VDUSE?
It has a swiotlb, but the access and bouncing are done per byte.
>
> I'm asking these questions because there is driver code that exposes
> kernel memory to the device and I'm not sure it's safe. For example:
>
> static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
> struct scatterlist *data_sg, bool have_data)
> {
> struct scatterlist hdr, status, *sgs[3];
> unsigned int num_out = 0, num_in = 0;
>
> sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
> ^^^^^^^^^^^^^
> sgs[num_out++] = &hdr;
>
> if (have_data) {
> if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
> sgs[num_out++] = data_sg;
> else
> sgs[num_out + num_in++] = data_sg;
> }
>
> sg_init_one(&status, &vbr->status, sizeof(vbr->status));
> ^^^^^^^^^^^^
> sgs[num_out + num_in++] = &status;
>
> return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
> }
>
> I guess the drivers don't need to be modified as long as swiotlb is used
> to bounce the buffers through "insecure" memory so that the memory
> surrounding the buffers is not exposed?
Yes, swiotlb won't bounce the whole page. So I think it's safe.
Thanks
>
> Stefan
On Fri, May 14, 2021 at 03:29:20PM +0800, Jason Wang wrote:
> On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
> >
> > On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > > Sometimes, the driver doesn't trust the device. This is usually
> > > happens for the encrtpyed VM or VDUSE[1].
> >
> > Thanks for doing this.
> >
> > Can you describe the overall memory safety model that virtio drivers
> > must follow?
>
> My understanding is that, basically the driver should not trust the
> device (since the driver doesn't know what kind of device that it
> tries to drive)
>
> 1) For any read only metadata (required at the spec level) which is
> mapped as coherent, driver should not depend on the metadata that is
> stored in a place that could be wrote by the device. This is what this
> series tries to achieve.
> 2) For other metadata that is produced by the device, need to make
> sure there's no malicious device triggered behavior, this is somehow
> similar to what vhost did. No DOS, loop, kernel bug and other stuffs.
> 3) swiotb is a must to enforce memory access isolation. (VDUSE or encrypted VM)
>
> > For example:
> >
> > - Driver-to-device buffers must be on dedicated pages to avoid
> > information leaks.
>
> It looks to me if swiotlb is used, we don't need this since the
> bouncing is not done at byte not page.
>
> But if swiotlb is not used, we need to enforce this.
>
> >
> > - Driver-to-device buffers must be on dedicated pages to avoid memory
> > corruption.
>
> Similar to the above.
>
> >
> > When I say "pages" I guess it's the IOMMU page size that matters?
> >
>
> And the IOTLB page size.
>
> > What is the memory access granularity of VDUSE?
>
> It has an swiotlb, but the access and bouncing is done per byte.
>
> >
> > I'm asking these questions because there is driver code that exposes
> > kernel memory to the device and I'm not sure it's safe. For example:
> >
> > static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
> > struct scatterlist *data_sg, bool have_data)
> > {
> > struct scatterlist hdr, status, *sgs[3];
> > unsigned int num_out = 0, num_in = 0;
> >
> > sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
> > ^^^^^^^^^^^^^
> > sgs[num_out++] = &hdr;
> >
> > if (have_data) {
> > if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
> > sgs[num_out++] = data_sg;
> > else
> > sgs[num_out + num_in++] = data_sg;
> > }
> >
> > sg_init_one(&status, &vbr->status, sizeof(vbr->status));
> > ^^^^^^^^^^^^
> > sgs[num_out + num_in++] = &status;
> >
> > return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
> > }
> >
> > I guess the drivers don't need to be modified as long as swiotlb is used
> > to bounce the buffers through "insecure" memory so that the memory
> > surrounding the buffers is not exposed?
>
> Yes, swiotlb won't bounce the whole page. So I think it's safe.
Thanks Jason and Yongji Xie for clarifying. Seems like swiotlb or a
similar mechanism can handle byte-granularity isolation, so the
drivers do not need to worry about information leaks or memory
corruption outside the mapped byte range.
We still need to audit virtio guest drivers to ensure they don't trust
data that can be modified by the device. I will look at virtio-blk and
virtio-fs next week.
Stefan
On Fri, May 14, 2021 at 7:36 PM Michael S. Tsirkin <[email protected]> wrote:
>
> On Fri, May 14, 2021 at 07:27:22PM +0800, Yongji Xie wrote:
> > On Fri, May 14, 2021 at 7:17 PM Stefan Hajnoczi <[email protected]> wrote:
> > >
> > > On Fri, May 14, 2021 at 03:29:20PM +0800, Jason Wang wrote:
> > > > On Fri, May 14, 2021 at 12:27 AM Stefan Hajnoczi <[email protected]> wrote:
> > > > >
> > > > > On Fri, Apr 23, 2021 at 04:09:35PM +0800, Jason Wang wrote:
> > > > > > Sometimes, the driver doesn't trust the device. This is usually
> > > > > > happens for the encrtpyed VM or VDUSE[1].
> > > > >
> > > > > Thanks for doing this.
> > > > >
> > > > > Can you describe the overall memory safety model that virtio drivers
> > > > > must follow?
> > > >
> > > > My understanding is that, basically the driver should not trust the
> > > > device (since the driver doesn't know what kind of device that it
> > > > tries to drive)
> > > >
> > > > 1) For any read only metadata (required at the spec level) which is
> > > > mapped as coherent, driver should not depend on the metadata that is
> > > > stored in a place that could be wrote by the device. This is what this
> > > > series tries to achieve.
> > > > 2) For other metadata that is produced by the device, need to make
> > > > sure there's no malicious device triggered behavior, this is somehow
> > > > similar to what vhost did. No DOS, loop, kernel bug and other stuffs.
> > > > 3) swiotb is a must to enforce memory access isolation. (VDUSE or encrypted VM)
> > > >
> > > > > For example:
> > > > >
> > > > > - Driver-to-device buffers must be on dedicated pages to avoid
> > > > > information leaks.
> > > >
> > > > It looks to me if swiotlb is used, we don't need this since the
> > > > bouncing is not done at byte not page.
> > > >
> > > > But if swiotlb is not used, we need to enforce this.
> > > >
> > > > >
> > > > > - Driver-to-device buffers must be on dedicated pages to avoid memory
> > > > > corruption.
> > > >
> > > > Similar to the above.
> > > >
> > > > >
> > > > > When I say "pages" I guess it's the IOMMU page size that matters?
> > > > >
> > > >
> > > > And the IOTLB page size.
> > > >
> > > > > What is the memory access granularity of VDUSE?
> > > >
> > > > It has an swiotlb, but the access and bouncing is done per byte.
> > > >
> > > > >
> > > > > I'm asking these questions because there is driver code that exposes
> > > > > kernel memory to the device and I'm not sure it's safe. For example:
> > > > >
> > > > > static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
> > > > > struct scatterlist *data_sg, bool have_data)
> > > > > {
> > > > > struct scatterlist hdr, status, *sgs[3];
> > > > > unsigned int num_out = 0, num_in = 0;
> > > > >
> > > > > sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
> > > > > ^^^^^^^^^^^^^
> > > > > sgs[num_out++] = &hdr;
> > > > >
> > > > > if (have_data) {
> > > > > if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
> > > > > sgs[num_out++] = data_sg;
> > > > > else
> > > > > sgs[num_out + num_in++] = data_sg;
> > > > > }
> > > > >
> > > > > sg_init_one(&status, &vbr->status, sizeof(vbr->status));
> > > > > ^^^^^^^^^^^^
> > > > > sgs[num_out + num_in++] = &status;
> > > > >
> > > > > return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
> > > > > }
> > > > >
> > > > > I guess the drivers don't need to be modified as long as swiotlb is used
> > > > > to bounce the buffers through "insecure" memory so that the memory
> > > > > surrounding the buffers is not exposed?
> > > >
> > > > Yes, swiotlb won't bounce the whole page. So I think it's safe.
> > >
> > > Thanks Jason and Yongji Xie for clarifying. Seems like swiotlb or a
> > > similar mechanism can handle byte-granularity isolation so the drivers
> > > not need to worry about information leaks or memory corruption outside
> > > the mapped byte range.
> > >
> > > We still need to audit virtio guest drivers to ensure they don't trust
> > > data that can be modified by the device. I will look at virtio-blk and
> > > virtio-fs next week.
> > >
> >
> > Oh, that's great. Thank you!
> >
> > I also did some audit work these days and will send a new version for
> > reviewing next Monday.
> >
> > Thanks,
> > Yongji
>
> Doing it in a way that won't hurt performance for simple
> configs that trust the device is a challenge though.
> Pls take a look at the discussion with Christoph for some ideas
> on how to do this.
>
I see. Thanks for the reminder.
Thanks,
Yongji
On 2021/5/14 7:13 PM, Michael S. Tsirkin wrote:
> On Thu, May 06, 2021 at 01:38:29PM +0100, Christoph Hellwig wrote:
>> On Thu, May 06, 2021 at 04:12:17AM -0400, Michael S. Tsirkin wrote:
>>> Let's try for just a bit, won't make this window anyway:
>>>
>>> I have an old idea. Add a way to find out that unmap is a nop
>>> (or more exactly does not use the address/length).
>>> Then in that case even with DMA API we do not need
>>> the extra data. Hmm?
>> So we actually do have a check for that from the early days of the DMA
>> API, but it only works at compile time: CONFIG_NEED_DMA_MAP_STATE.
>>
>> But given how rare configs without an iommu or swiotlb are these days
>> it has stopped to be very useful. Unfortunately a runtime-version is
>> not entirely trivial, but maybe if we allow for false positives we
>> could do something like this
>>
>> bool dma_direct_need_state(struct device *dev)
>> {
>> /* some areas could not be covered by any map at all */
>> if (dev->dma_range_map)
>> return false;
>> if (force_dma_unencrypted(dev))
>> return false;
>> if (dma_direct_need_sync(dev))
>> return false;
>> return *dev->dma_mask == DMA_BIT_MASK(64);
>> }
>>
>> bool dma_need_state(struct device *dev)
>> {
>> const struct dma_map_ops *ops = get_dma_ops(dev);
>>
>> if (dma_map_direct(dev, ops))
>> return dma_direct_need_state(dev);
>> return ops->unmap_page ||
>> ops->sync_single_for_cpu || ops->sync_single_for_device;
>> }
> Yea that sounds like a good idea. We will need to document that.
>
>
> Something like:
>
> /*
> * dma_need_state - report whether unmap calls use the address and length
> * @dev: device to guery
> *
> * This is a runtime version of CONFIG_NEED_DMA_MAP_STATE.
> *
> * Return the value indicating whether dma_unmap_* and dma_sync_* calls for the device
> * use the DMA state parameters passed to them.
> * The DMA state parameters are: scatter/gather list/table, address and
> * length.
> *
> * If dma_need_state returns false then DMA state parameters are
> * ignored by all dma_unmap_* and dma_sync_* calls, so it is safe to pass 0 for
> * address and length, and DMA_UNMAP_SG_TABLE_INVALID and
> * DMA_UNMAP_SG_LIST_INVALID for s/g table and length respectively.
> * If dma_need_state returns true then DMA state might
> * be used and so the actual values are required.
> */
>
> And we will need DMA_UNMAP_SG_TABLE_INVALID and
> DMA_UNMAP_SG_LIST_INVALID as pointers to an empty global table and list
> for calls such as dma_unmap_sgtable that dereference pointers before checking
> they are used.
>
>
> Does this look good?
>
> The table/length variants are for consistency, virtio specifically does
> not use s/g at the moment, but it seems nicer than leaving
> users wonder what to do about these.
>
> Thoughts? Jason want to try implementing?
I can add it to my todo list; if other people are interested in this,
please let us know.
But this is just about saving the effort of unmap, and it doesn't
eliminate the need to keep private memory (addr, length) as the
metadata for validating device inputs.
And just to clarify, the slight regression we see is from testing
without VIRTIO_F_ACCESS_PLATFORM, which means the DMA API is not used.
So I will post a formal version of this series and we can start
from there.
Thanks
>
On Fri, Jun 04, 2021 at 01:38:01PM +0800, Jason Wang wrote:
>
> On 2021/5/14 7:13 PM, Michael S. Tsirkin wrote:
> > On Thu, May 06, 2021 at 01:38:29PM +0100, Christoph Hellwig wrote:
> > > On Thu, May 06, 2021 at 04:12:17AM -0400, Michael S. Tsirkin wrote:
> > > > Let's try for just a bit, won't make this window anyway:
> > > >
> > > > I have an old idea. Add a way to find out that unmap is a nop
> > > > (or more exactly does not use the address/length).
> > > > Then in that case even with DMA API we do not need
> > > > the extra data. Hmm?
> > > So we actually do have a check for that from the early days of the DMA
> > > API, but it only works at compile time: CONFIG_NEED_DMA_MAP_STATE.
> > >
> > > But given how rare configs without an iommu or swiotlb are these days
> > > it has stopped to be very useful. Unfortunately a runtime-version is
> > > not entirely trivial, but maybe if we allow for false positives we
> > > could do something like this
> > >
> > > bool dma_direct_need_state(struct device *dev)
> > > {
> > > /* some areas could not be covered by any map at all */
> > > if (dev->dma_range_map)
> > > return false;
> > > if (force_dma_unencrypted(dev))
> > > return false;
> > > if (dma_direct_need_sync(dev))
> > > return false;
> > > return *dev->dma_mask == DMA_BIT_MASK(64);
> > > }
> > >
> > > bool dma_need_state(struct device *dev)
> > > {
> > > const struct dma_map_ops *ops = get_dma_ops(dev);
> > >
> > > if (dma_map_direct(dev, ops))
> > > return dma_direct_need_state(dev);
> > > return ops->unmap_page ||
> > > ops->sync_single_for_cpu || ops->sync_single_for_device;
> > > }
> > Yea that sounds like a good idea. We will need to document that.
> >
> >
> > Something like:
> >
> > /*
> > * dma_need_state - report whether unmap calls use the address and length
> > * @dev: device to guery
> > *
> > * This is a runtime version of CONFIG_NEED_DMA_MAP_STATE.
> > *
> > * Return the value indicating whether dma_unmap_* and dma_sync_* calls for the device
> > * use the DMA state parameters passed to them.
> > * The DMA state parameters are: scatter/gather list/table, address and
> > * length.
> > *
> > * If dma_need_state returns false then DMA state parameters are
> > * ignored by all dma_unmap_* and dma_sync_* calls, so it is safe to pass 0 for
> > * address and length, and DMA_UNMAP_SG_TABLE_INVALID and
> > * DMA_UNMAP_SG_LIST_INVALID for s/g table and length respectively.
> > * If dma_need_state returns true then DMA state might
> > * be used and so the actual values are required.
> > */
> >
> > And we will need DMA_UNMAP_SG_TABLE_INVALID and
> > DMA_UNMAP_SG_LIST_INVALID as pointers to an empty global table and list
> > for calls such as dma_unmap_sgtable that dereference pointers before checking
> > they are used.
> >
> >
> > Does this look good?
> >
> > The table/length variants are for consistency, virtio specifically does
> > not use s/g at the moment, but it seems nicer than leaving
> > users wonder what to do about these.
> >
> > Thoughts? Jason want to try implementing?
>
>
> I can add it in my todo list other if other people are interested in this,
> please let us know.
>
> But this is just about saving the efforts of unmap and it doesn't eliminate
> the necessary of using private memory (addr, length) for the metadata for
> validating the device inputs.
Besides unmap, why do we need to validate the address? The length can
typically be validated by specific drivers - not all of them even use it ..
> And just to clarify, the slight regression we see is testing without
> VIRTIO_F_ACCESS_PLATFORM which means DMA API is not used.
I guess this is due to extra cache pressure? Maybe create yet another
array just for DMA state ...
> So I will go to post a formal version of this series and we can start from
> there.
>
> Thanks
>
>
> >
On 2021/7/12 12:08 AM, Michael S. Tsirkin wrote:
> On Fri, Jun 04, 2021 at 01:38:01PM +0800, Jason Wang wrote:
>> On 2021/5/14 7:13 PM, Michael S. Tsirkin wrote:
>>> On Thu, May 06, 2021 at 01:38:29PM +0100, Christoph Hellwig wrote:
>>>> On Thu, May 06, 2021 at 04:12:17AM -0400, Michael S. Tsirkin wrote:
On Mon, Jul 12, 2021 at 11:07:44AM +0800, Jason Wang wrote:
>
> On 2021/7/12 12:08 AM, Michael S. Tsirkin wrote:
> > On Fri, Jun 04, 2021 at 01:38:01PM +0800, Jason Wang wrote:
> > > On 2021/5/14 7:13 PM, Michael S. Tsirkin wrote:
> > > > On Thu, May 06, 2021 at 01:38:29PM +0100, Christoph Hellwig wrote:
> > > > > On Thu, May 06, 2021 at 04:12:17AM -0400, Michael S. Tsirkin wrote:
> > > > > > Let's try for just a bit, won't make this window anyway:
> > > > > >
> > > > > > I have an old idea. Add a way to find out that unmap is a nop
> > > > > > (or more exactly does not use the address/length).
> > > > > > Then in that case even with DMA API we do not need
> > > > > > the extra data. Hmm?
> > > > > So we actually do have a check for that from the early days of the DMA
> > > > > API, but it only works at compile time: CONFIG_NEED_DMA_MAP_STATE.
> > > > >
> > > > > But given how rare configs without an iommu or swiotlb are these days
> > > > > it has stopped to be very useful. Unfortunately a runtime-version is
> > > > > not entirely trivial, but maybe if we allow for false positives we
> > > > > could do something like this
> > > > >
> > > > > bool dma_direct_need_state(struct device *dev)
> > > > > {
> > > > > 	/* some areas could not be covered by any map at all */
> > > > > 	if (dev->dma_range_map)
> > > > > 		return false;
> > > > > 	if (force_dma_unencrypted(dev))
> > > > > 		return false;
> > > > > 	if (dma_direct_need_sync(dev))
> > > > > 		return false;
> > > > > 	return *dev->dma_mask == DMA_BIT_MASK(64);
> > > > > }
> > > > >
> > > > > bool dma_need_state(struct device *dev)
> > > > > {
> > > > > 	const struct dma_map_ops *ops = get_dma_ops(dev);
> > > > >
> > > > > 	if (dma_map_direct(dev, ops))
> > > > > 		return dma_direct_need_state(dev);
> > > > > 	return ops->unmap_page ||
> > > > > 		ops->sync_single_for_cpu || ops->sync_single_for_device;
> > > > > }
> > > > Yea that sounds like a good idea. We will need to document that.
> > > >
> > > >
> > > > Something like:
> > > >
> > > > /*
> > > > * dma_need_state - report whether unmap calls use the address and length
> > > > * @dev: device to query
> > > > *
> > > > * This is a runtime version of CONFIG_NEED_DMA_MAP_STATE.
> > > > *
> > > > * Return the value indicating whether dma_unmap_* and dma_sync_* calls for the device
> > > > * use the DMA state parameters passed to them.
> > > > * The DMA state parameters are: scatter/gather list/table, address and
> > > > * length.
> > > > *
> > > > * If dma_need_state returns false then DMA state parameters are
> > > > * ignored by all dma_unmap_* and dma_sync_* calls, so it is safe to pass 0 for
> > > > * address and length, and DMA_UNMAP_SG_TABLE_INVALID and
> > > > * DMA_UNMAP_SG_LIST_INVALID for s/g table and list respectively.
> > > > * If dma_need_state returns true then DMA state might
> > > > * be used and so the actual values are required.
> > > > */
> > > >
> > > > And we will need DMA_UNMAP_SG_TABLE_INVALID and
> > > > DMA_UNMAP_SG_LIST_INVALID as pointers to an empty global table and list
> > > > for calls such as dma_unmap_sgtable that dereference pointers before checking
> > > > they are used.
> > > >
> > > >
> > > > Does this look good?
> > > >
> > > > The table/list variants are for consistency; virtio specifically does
> > > > not use s/g at the moment, but it seems nicer than leaving
> > > > users wondering what to do about these.
> > > >
> > > > Thoughts? Jason want to try implementing?
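[Editor's note: the sentinel idea above — placeholder objects that are safe to dereference so helpers like dma_unmap_sgtable() can read their arguments before discovering they are unused — could be modelled in user-space C roughly as follows. All names here (`fake_sg_table`, `fake_unmap_sgtable`, the FAKE_* macros) are invented for illustration; this is a sketch of the concept, not the proposed kernel interface.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for struct scatterlist / struct sg_table. */
struct fake_scatterlist { unsigned int length; };
struct fake_sg_table { struct fake_scatterlist *sgl; unsigned int nents; };

/* Empty but real objects: dereferencing them is always safe,
 * actually unmapping through them never happens (nents == 0). */
static struct fake_scatterlist fake_invalid_sgl;
static struct fake_sg_table fake_invalid_sgt = { &fake_invalid_sgl, 0 };

#define FAKE_UNMAP_SG_LIST_INVALID  (&fake_invalid_sgl)
#define FAKE_UNMAP_SG_TABLE_INVALID (&fake_invalid_sgt)

/* Mimics a helper like dma_unmap_sgtable() that reads fields of the
 * table before the (possibly nop) unmap decides it needs them. */
static unsigned int fake_unmap_sgtable(struct fake_sg_table *sgt)
{
	/* Field access is safe because the sentinel is a real object. */
	return sgt->nents;
}
```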
> > >
> > > I can add it to my todo list. Or if other people are interested in this,
> > > please let us know.
> > >
> > > But this is just about saving the effort of unmap, and it doesn't eliminate
> > > the necessity of using private memory (addr, length) as the metadata for
> > > validating the device inputs.
> >
> > Besides unmap, why do we need to validate address?
>
>
> Sorry, it's not actually validation; the driver doesn't do any validation.
> As the subject says, the driver will just use the metadata stored in the
> desc_state instead of the one stored in the descriptor ring.
>
>
> > length can be
> > typically validated by specific drivers - not all of them even use it ..
> >
> > > And just to clarify, the slight regression we see is testing without
> > > VIRTIO_F_ACCESS_PLATFORM which means DMA API is not used.
> > I guess this is due to extra cache pressure?
>
>
> Yes.
>
>
> > Maybe create yet another
> > array just for DMA state ...
>
>
> I'm not sure I get this, we use this basically:
>
> struct vring_desc_extra {
> 	dma_addr_t addr;	/* Buffer DMA addr. */
> 	u32 len;		/* Buffer length. */
> 	u16 flags;		/* Descriptor flags. */
> 	u16 next;		/* The next desc state in a list. */
> };
>
> Except for the "next" the rest are all DMA state.
>
> Thanks
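[Editor's note: the scheme being described — the driver duplicates addr/len into this private structure at add time and, at unmap time, consults only the private copy, never the device-writable descriptor ring — can be modelled in plain user-space C. The names below (`fake_desc`, `add_desc`, `get_unmap_args`) are made up for illustration and are not the real virtio-ring code.]

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 4

typedef uint64_t fake_dma_addr_t;

/* Device-visible descriptor: an untrusted device may rewrite it. */
struct fake_desc {
	fake_dma_addr_t addr;
	uint32_t len;
};

/* Driver-private shadow, as in vring_desc_extra: unreachable by the
 * device, so it cannot be mangled via swiotlb tricks. */
struct fake_desc_extra {
	fake_dma_addr_t addr;
	uint32_t len;
};

static struct fake_desc ring[RING_SIZE];        /* shared with "device" */
static struct fake_desc_extra extra[RING_SIZE]; /* driver-only memory */

/* Add: the ring is write-only from the driver's point of view;
 * a trusted copy is kept on the side for the later unmap. */
static void add_desc(unsigned int i, fake_dma_addr_t addr, uint32_t len)
{
	ring[i].addr = addr;
	ring[i].len = len;
	extra[i].addr = addr;
	extra[i].len = len;
}

/* Unmap: read back only the private copy, never the shared ring. */
static void get_unmap_args(unsigned int i, fake_dma_addr_t *addr,
			   uint32_t *len)
{
	*addr = extra[i].addr;
	*len = extra[i].len;
}
```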
I am talking about the dma need state idea where we interrogate the DMA
API to figure out whether unmap is actually a nop.
>
> >
> > > So I will post a formal version of this series and we can start from
> > > there.
> > >
> > > Thanks
> > >
> > >