Commit 780bc7903a32 ("virtio_ring: Support DMA APIs") makes
virtqueue_add() return -EIO when we fail to map our I/O buffers. This is
a very realistic scenario for guests with encrypted memory, as swiotlb
may run out of space, depending on its size and the I/O load.

The virtio-blk driver interprets -EIO from virtqueue_add() as an I/O
error, despite the fact that a full swiotlb is, in the absence of bugs,
a recoverable condition.

Let us change the return code to -ENOMEM, and make the block layer
recover from these failures when virtio-blk encounters the condition
described above.
Fixes: 780bc7903a32 ("virtio_ring: Support DMA APIs")
Signed-off-by: Halil Pasic <[email protected]>
Tested-by: Michael Mueller <[email protected]>
---
Notes
=====
* When out of descriptors (which might be regarded as a similar
out-of-resources condition) virtio uses -ENOSPC. That however seems
wrong, as ENOSPC is defined as "No space left on device", i.e. a
device-local condition. Thus I chose -ENOMEM over -ENOSPC.

* In virtio_queue_rq() in virtio_blk.c both -ENOMEM and -ENOSPC are
handled as BLK_STS_DEV_RESOURCE. Returning BLK_STS_RESOURCE however
seems more appropriate for a failed DMA mapping, as we are talking
about a global, and not a device-local, resource. Both seem to do the
trick (see the sketch after these notes).

* Mimu tested the patch with virtio-blk and virtio-net (thanks!). We
should look into how other virtio devices behave when DMA mapping fails.
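For reference, a rough sketch of the dispatch described in the second
note, paraphrased from the error handling in virtio_queue_rq()
(virtio_blk.c) around this kernel version; this is an illustration, not
verbatim driver source:

	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
	if (err) {
		virtqueue_kick(vblk->vqs[qid].vq);
		blk_mq_stop_hw_queue(hctx);
		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
		/* With this patch a DMA mapping failure surfaces as -ENOMEM
		 * and hence becomes a retryable resource condition rather
		 * than being reported as BLK_STS_IOERR. */
		if (err == -ENOMEM || err == -ENOSPC)
			return BLK_STS_DEV_RESOURCE;
		return BLK_STS_IOERR;
	}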
---
drivers/virtio/virtio_ring.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index a8041e451e9e..867c7ebd3f10 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -583,7 +583,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
kfree(desc);
END_USE(vq);
- return -EIO;
+ return -ENOMEM;
}
static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
@@ -1085,7 +1085,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
kfree(desc);
END_USE(vq);
- return -EIO;
+ return -ENOMEM;
}
static inline int virtqueue_add_packed(struct virtqueue *_vq,
--
2.17.1
ping
On Thu, 14 Nov 2019 13:46:46 +0100
Halil Pasic <[email protected]> wrote:
> Commit 780bc7903a32 ("virtio_ring: Support DMA APIs") makes
> virtqueue_add() return -EIO when we fail to map our I/O buffers. This is
> a very realistic scenario for guests with encrypted memory, as swiotlb
> may run out of space, depending on its size and the I/O load.
>
> The virtio-blk driver interprets -EIO from virtqueue_add() as an I/O
> error, despite the fact that a full swiotlb is, in the absence of bugs,
> a recoverable condition.
>
> Let us change the return code to -ENOMEM, and make the block layer
> recover from these failures when virtio-blk encounters the condition
> described above.
>
> Fixes: 780bc7903a32 ("virtio_ring: Support DMA APIs")
> Signed-off-by: Halil Pasic <[email protected]>
> Tested-by: Michael Mueller <[email protected]>
> ---
>
> Notes
> =====
>
> * When out of descriptors (which might be regarded as a similar
> out-of-resources condition) virtio uses -ENOSPC. That however seems
> wrong, as ENOSPC is defined as "No space left on device", i.e. a
> device-local condition. Thus I chose -ENOMEM over -ENOSPC.
>
> * In virtio_queue_rq() in virtio_blk.c both -ENOMEM and -ENOSPC are
> handled as BLK_STS_DEV_RESOURCE. Returning BLK_STS_RESOURCE however
> seems more appropriate for a failed DMA mapping, as we are talking
> about a global, and not a device-local, resource. Both seem to do the
> trick.
>
> * Mimu tested the patch with virtio-blk and virtio-net (thanks!). We
> should look into how other virtio devices behave when DMA mapping fails.
> ---
> drivers/virtio/virtio_ring.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index a8041e451e9e..867c7ebd3f10 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -583,7 +583,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> kfree(desc);
>
> END_USE(vq);
> - return -EIO;
> + return -ENOMEM;
> }
>
> static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> @@ -1085,7 +1085,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
> kfree(desc);
>
> END_USE(vq);
> - return -EIO;
> + return -ENOMEM;
> }
>
> static inline int virtqueue_add_packed(struct virtqueue *_vq,
Will be in the next pull request.
On Tue, Nov 19, 2019 at 12:10:22PM +0100, Halil Pasic wrote:
> ping
>
> On Thu, 14 Nov 2019 13:46:46 +0100
> Halil Pasic <[email protected]> wrote:
>
> > Commit 780bc7903a32 ("virtio_ring: Support DMA APIs") makes
> > virtqueue_add() return -EIO when we fail to map our I/O buffers. This is
> > a very realistic scenario for guests with encrypted memory, as swiotlb
> > may run out of space, depending on its size and the I/O load.
> >
> > The virtio-blk driver interprets -EIO from virtqueue_add() as an I/O
> > error, despite the fact that a full swiotlb is, in the absence of bugs,
> > a recoverable condition.
> >
> > Let us change the return code to -ENOMEM, and make the block layer
> > recover from these failures when virtio-blk encounters the condition
> > described above.
> >
> > Fixes: 780bc7903a32 ("virtio_ring: Support DMA APIs")
> > Signed-off-by: Halil Pasic <[email protected]>
> > Tested-by: Michael Mueller <[email protected]>
> > ---
> >
> > Notes
> > =====
> >
> > * When out of descriptors (which might be regarded as a similar
> > out-of-resources condition) virtio uses -ENOSPC. That however seems
> > wrong, as ENOSPC is defined as "No space left on device", i.e. a
> > device-local condition. Thus I chose -ENOMEM over -ENOSPC.
> >
> > * In virtio_queue_rq() in virtio_blk.c both -ENOMEM and -ENOSPC are
> > handled as BLK_STS_DEV_RESOURCE. Returning BLK_STS_RESOURCE however
> > seems more appropriate for a failed DMA mapping, as we are talking
> > about a global, and not a device-local, resource. Both seem to do the
> > trick.
> >
> > * Mimu tested the patch with virtio-blk and virtio-net (thanks!). We
> > should look into how other virtio devices behave when DMA mapping fails.
> > ---
> > drivers/virtio/virtio_ring.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index a8041e451e9e..867c7ebd3f10 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -583,7 +583,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > kfree(desc);
> >
> > END_USE(vq);
> > - return -EIO;
> > + return -ENOMEM;
> > }
> >
> > static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> > @@ -1085,7 +1085,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
> > kfree(desc);
> >
> > END_USE(vq);
> > - return -EIO;
> > + return -ENOMEM;
> > }
> >
> > static inline int virtqueue_add_packed(struct virtqueue *_vq,
Thanks Michael!
Actually I also hoped to start a discussion on virtio with encrypted
memory.
I assume the AMD folks have the most experience with this, and I would
very much like to understand how they master the challenges we are all
facing.
My understanding of IO in the context of AMD SEV is that the user
is responsible for choosing the swiotlb command line parameter of the
guest kernel such that the guest never runs out of swiotlb, and that
not doing so may have fatal consequences for the guest. [1]
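For concreteness (and as far as I can tell from
Documentation/admin-guide/kernel-parameters.txt), the knob in question
is the swiotlb= parameter, which takes the number of 2 KiB I/O TLB
slabs. For example, a guest command line containing

	swiotlb=131072

would reserve roughly 256 MiB of bounce buffer space; the number here is
only an example, not a recommendation.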
The swiotlb being a guest global resource, to choose such a size, one
would first need to know the maximal swiotlb footprint of each device,
and then apply some heuristics regarding fragmentation.
Honestly, if somebody asked me how to calculate the max swiotlb
footprint of the most common virtio devices, I would feel very
uncomfortable.
But maybe I got it all wrong. @Tom can you help me understand how this
works?
In any case, we s390 protected virtualization folks are concerned about
the things laid out above. The goal of this patch is to make the swiotlb
full condition less grave, but it is by no means a full solution.
I would like to work on improving this situation. Obviously we have
done some thinking about what can be done, but I would very much like to
collect the opinions of the people in the community that AFAICT face the
same problem. One of the ideas is to try to prevent it from happening by
making swiotlb sizing dynamic. Another idea is to make the system deal
with the failures gracefully. Both ideas come with a bag of problems of
their own (AFAICT).
According to my research the people I need to talk to are Tom (AMD), and
Ram and Thiago (Power) and of course the respective maintainers. Have I
missed anybody?
Regards,
Halil
--
[1] https://github.com/AMDESE/AMDSEV#faq-4
On Tue, 19 Nov 2019 08:04:29 -0500
"Michael S. Tsirkin" <[email protected]> wrote:
> Will be in the next pull request.
>
> On Tue, Nov 19, 2019 at 12:10:22PM +0100, Halil Pasic wrote:
> > ping
> >
> > On Thu, 14 Nov 2019 13:46:46 +0100
> > Halil Pasic <[email protected]> wrote:
> >
> > > Commit 780bc7903a32 ("virtio_ring: Support DMA APIs") makes
> > > virtqueue_add() return -EIO when we fail to map our I/O buffers. This is
> > > a very realistic scenario for guests with encrypted memory, as swiotlb
> > > may run out of space, depending on its size and the I/O load.
> > >
> > > The virtio-blk driver interprets -EIO from virtqueue_add() as an I/O
> > > error, despite the fact that a full swiotlb is, in the absence of bugs,
> > > a recoverable condition.
> > >
> > > Let us change the return code to -ENOMEM, and make the block layer
> > > recover from these failures when virtio-blk encounters the condition
> > > described above.
> > >
> > > Fixes: 780bc7903a32 ("virtio_ring: Support DMA APIs")
> > > Signed-off-by: Halil Pasic <[email protected]>
> > > Tested-by: Michael Mueller <[email protected]>
> > > ---
> > >
[..]
On 11/22/19 7:08 AM, Halil Pasic wrote:
> Thanks Michael!
>
> Actually I also hoped to start a discussion on virtio with encrypted
> memory.
>
> I assume the AMD folks have the most experience with this, and I would
> very much like to understand how they master the challenges we are all
> facing.
>
> My understanding of IO in the context of AMD SEV is that the user
> is responsible for choosing the swiotlb command line parameter of the
> guest kernel such that the guest never runs out of swiotlb, and that
> not doing so may have fatal consequences for the guest. [1]
>
> The swiotlb being a guest global resource, to choose such a size, one
> would first need to know the maximal swiotlb footprint of each device,
> and then apply some heuristics regarding fragmentation.
>
> Honestly, if somebody asked me how to calculate the max swiotlb
> footprint of the most common virtio devices, I would feel very
> uncomfortable.
>
> But maybe I got it all wrong. @Tom can you help me understand how this
> works?
Yes, SWIOTLB sizing is hard. It really depends on the workload and the
associated I/O load that the guest will be performing. We've been looking
at a simple patch to increase the default SWIOTLB size if SEV is active.
But what size do you choose? Do you base it on the overall guest size? And
you're limited because it must reside low in memory.
Ideally, having a pool of shared pages for DMA, outside of standard
SWIOTLB, might be a good thing. On x86, SWIOTLB really seems geared
towards devices that don't support 64-bit DMA. If a device supports 64-bit
DMA then it can use shared pages that reside anywhere to perform the DMA
and bounce buffering. I wonder if the SWIOTLB support can be enhanced to
support something like this, using today's low SWIOTLB buffers if the DMA
mask necessitates it, otherwise using a dynamically sized pool of shared
pages that can live anywhere.
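A very rough sketch of the selection logic that would imply (the pool
names here are made up purely for illustration; this is not an existing
kernel interface):

	/* Pick a bounce pool based on the device's DMA mask: only devices
	 * that cannot address all of memory are forced into the classic
	 * low SWIOTLB; everything else could use shared (unencrypted)
	 * pages living anywhere. Both *_pool symbols are hypothetical. */
	static struct bounce_pool *choose_bounce_pool(struct device *dev)
	{
		if (dma_get_mask(dev) < DMA_BIT_MASK(64))
			return &low_swiotlb_pool;
		return &anywhere_shared_pool;
	}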
Thanks,
Tom
>
> In any case, we s390 protected virtualization folks are concerned about
> the things laid out above. The goal of this patch is to make the swiotlb
> full condition less grave, but it is by no means a full solution.
>
> I would like to work on improving this situation. Obviously we have
> done some thinking about what can be done, but I would very much like to
> collect the opinions of the people in the community that AFAICT face the
> same problem. One of the ideas is to try to prevent it from happening by
> making swiotlb sizing dynamic. Another idea is to make the system deal
> with the failures gracefully. Both ideas come with a bag of problems of
> their own (AFAICT).
>
> According to my research the people I need to talk to are Tom (AMD), and
> Ram and Thiago (Power) and of course the respective maintainers. Have I
> missed anybody?
>
> Regards,
> Halil
>
> --
>
> [1] https://github.com/AMDESE/AMDSEV#faq-4
>
> On Tue, 19 Nov 2019 08:04:29 -0500
> "Michael S. Tsirkin" <[email protected]> wrote:
>
>> Will be in the next pull request.
>>
>> On Tue, Nov 19, 2019 at 12:10:22PM +0100, Halil Pasic wrote:
>>> ping
>>>
>>> On Thu, 14 Nov 2019 13:46:46 +0100
>>> Halil Pasic <[email protected]> wrote:
>>>
>>>> Commit 780bc7903a32 ("virtio_ring: Support DMA APIs") makes
>>>> virtqueue_add() return -EIO when we fail to map our I/O buffers. This is
>>>> a very realistic scenario for guests with encrypted memory, as swiotlb
>>>> may run out of space, depending on its size and the I/O load.
>>>>
>>>> The virtio-blk driver interprets -EIO from virtqueue_add() as an I/O
>>>> error, despite the fact that a full swiotlb is, in the absence of bugs,
>>>> a recoverable condition.
>>>>
>>>> Let us change the return code to -ENOMEM, and make the block layer
>>>> recover from these failures when virtio-blk encounters the condition
>>>> described above.
>>>>
>>>> Fixes: 780bc7903a32 ("virtio_ring: Support DMA APIs")
>>>> Signed-off-by: Halil Pasic <[email protected]>
>>>> Tested-by: Michael Mueller <[email protected]>
>>>> ---
>>>>
> [..]
>
On Sat, Nov 23, 2019 at 09:39:08AM -0600, Tom Lendacky wrote:
> Ideally, having a pool of shared pages for DMA, outside of standard
> SWIOTLB, might be a good thing. On x86, SWIOTLB really seems geared
> towards devices that don't support 64-bit DMA. If a device supports 64-bit
> DMA then it can use shared pages that reside anywhere to perform the DMA
> and bounce buffering. I wonder if the SWIOTLB support can be enhanced to
> support something like this, using today's low SWIOTLB buffers if the DMA
> mask necessitates it, otherwise using a dynamically sized pool of shared
> pages that can live anywhere.
I think that can be done relatively easily. I've actually been thinking
of multiple pool support for a while to replace the bounce buffering
in the block layer for ISA devices (24-bit addressing).
I've also been looking into a dma_alloc_pages interface to help people
just allocate pages that are always dma addressable, but don't need
a coherent allocation. My last version I shared is here:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma_alloc_pages
But it turns out this still doesn't work with SEV as we'll always
bounce. And I've been kinda lost on figuring out a way to
allocate unencrypted pages that we can feed into the normal
dma_map_page & co interfaces due to the magic encryption bit in
the address. I guess we could have a fallback path in the mapping
path and just unconditionally clear that bit in the dma_to_phys
path.
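Roughly, that fallback could look something like the sketch below (using
the existing __sme_clr() and __dma_to_phys() helpers; the function name
is made up and this only illustrates the idea, not a tested patch):

	/* Sketch: unconditionally strip the SME/SEV encryption bit when
	 * translating a DMA address back to a physical address, so pages
	 * that were set up unencrypted still resolve to the right frame. */
	static inline phys_addr_t demo_dma_to_phys(struct device *dev,
						   dma_addr_t dma_addr)
	{
		return __sme_clr(__dma_to_phys(dev, dma_addr));
	}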
Why can't we leverage CMA instead of SWIOTLB for DMA when SEV is
enabled? CMA is well integrated with the DMA subsystem and handles
encrypted pages when force_dma_unencrypted() returns TRUE.
Though CMA might face the same issues as SWIOTLB bounce buffers: its
size is similarly set up statically, as SWIOTLB's is, or it can be set
as a percentage of the available system memory.
Thanks,
Ashish
On Tue, Nov 26, 2019 at 07:45:27PM +0100, Christoph Hellwig wrote:
> On Sat, Nov 23, 2019 at 09:39:08AM -0600, Tom Lendacky wrote:
> > Ideally, having a pool of shared pages for DMA, outside of standard
> > SWIOTLB, might be a good thing. On x86, SWIOTLB really seems geared
> > towards devices that don't support 64-bit DMA. If a device supports 64-bit
> > DMA then it can use shared pages that reside anywhere to perform the DMA
> > and bounce buffering. I wonder if the SWIOTLB support can be enhanced to
> > support something like this, using today's low SWIOTLB buffers if the DMA
> > mask necessitates it, otherwise using a dynamically sized pool of shared
> > pages that can live anywhere.
>
> I think that can be done relatively easily. I've actually been thinking
> of multiple pool support for a while to replace the bounce buffering
> in the block layer for ISA devices (24-bit addressing).
>
> I've also been looking into a dma_alloc_pages interface to help people
> just allocate pages that are always dma addressable, but don't need
> a coherent allocation. My last version I shared is here:
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma_alloc_pages
>
> But it turns out this still doesn't work with SEV as we'll always
> bounce. And I've been kinda lost on figuring out a way to
> allocate unencrypted pages that we can feed into the normal
> dma_map_page & co interfaces due to the magic encryption bit in
> the address. I guess we could have a fallback path in the mapping
> path and just unconditionally clear that bit in the dma_to_phys
> path.
On Thu, Nov 28, 2019 at 12:42:25AM +0000, Ashish Kalra wrote:
> Why can't we leverage CMA instead of SWIOTLB for DMA when SEV is
> enabled? CMA is well integrated with the DMA subsystem and handles
> encrypted pages when force_dma_unencrypted() returns TRUE.
>
> Though CMA might face the same issues as SWIOTLB bounce buffers: its
> size is similarly set up statically, as SWIOTLB's is, or it can be set
> as a percentage of the available system memory.
How is CMA integrated with SEV? CMA just gives a contiguous chunk
of memory, which still needs to be remapped as unencrypted before
returning it to the user.
On Thu, Nov 28, 2019 at 08:05:38AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 28, 2019 at 12:42:25AM +0000, Ashish Kalra wrote:
> > Why can't we leverage CMA instead of SWIOTLB for DMA when SEV is
> > enabled? CMA is well integrated with the DMA subsystem and handles
> > encrypted pages when force_dma_unencrypted() returns TRUE.
> >
> > Though CMA might face the same issues as SWIOTLB bounce buffers: its
> > size is similarly set up statically, as SWIOTLB's is, or it can be set
> > as a percentage of the available system memory.
>
> How is CMA integrated with SEV? CMA just gives a contiguous chunk
> of memory, which still needs to be remapped as unencrypted before
> returning it to the user.
Looking at the implementation, I see that CMA is accessible via
dma_alloc_coherent(), and the buffer allocated by dma_alloc_coherent(),
whether it comes from CMA or the page allocator, will be marked/tagged
as unencrypted and returned to the user.
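A minimal usage sketch of that case (nothing driver specific here, just
the generic API):

	dma_addr_t handle;
	void *buf;

	/* With SEV active and force_dma_unencrypted() true for the device,
	 * dma-direct marks these pages unencrypted before returning them,
	 * whether they come from CMA or the page allocator. */
	buf = dma_alloc_coherent(dev, SZ_64K, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* ... device DMAs via 'handle', CPU accesses via 'buf' ... */
	dma_free_coherent(dev, SZ_64K, buf, handle);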
But for the dma_map_xx() interfaces, the memory will need to be remapped
as unencrypted memory (as you mentioned).
Thanks,
Ashish
On Tue, 26 Nov 2019 19:45:27 +0100
Christoph Hellwig <[email protected]> wrote:
> On Sat, Nov 23, 2019 at 09:39:08AM -0600, Tom Lendacky wrote:
> > Ideally, having a pool of shared pages for DMA, outside of standard
> > SWIOTLB, might be a good thing. On x86, SWIOTLB really seems geared
> > towards devices that don't support 64-bit DMA. If a device supports 64-bit
> > DMA then it can use shared pages that reside anywhere to perform the DMA
> > and bounce buffering. I wonder if the SWIOTLB support can be enhanced to
> > support something like this, using today's low SWIOTLB buffers if the DMA
> > mask necessitates it, otherwise using a dynamically sized pool of shared
> > pages that can live anywhere.
>
> I think that can be done relatively easily. I've actually been thinking
> of multiple pool support for a whіle to replace the bounce buffering
> in the block layer for ISA devices (24-bit addressing).
>
> I've also been looking into a dma_alloc_pages interface to help people
> just allocate pages that are always dma addressable, but don't need
> a coherent allocation. My last version I shared is here:
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma_alloc_pages
>
> But it turns out this still doesn't work with SEV as we'll always
> bounce. And I've been kinda lost on figuring out a way to
> allocate unencrypted pages that we can feed into the normal
> dma_map_page & co interfaces due to the magic encryption bit in
> the address. I guess we could have a fallback path in the mapping
> path and just unconditionally clear that bit in the dma_to_phys
> path.
Thanks Christoph! Thanks Tom! I will do some looking and thinking and
report back.
Regards,
Halil