2023-10-26 09:52:18

by Petr Tesarik

Subject: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting

From: Petr Tesarik <[email protected]>

Do not decrypt a dynamically allocated TLB area until its physical address
is known to be below the requested limit. Currently, pages are allocated
and decrypted, but then they may be freed while still decrypted if
swiotlb_alloc_tlb() determines that the physical address is too high.

Let the caller differentiate between an unsuitable physical address (=> retry
from a lower zone) and an allocation failure (=> no point in retrying).
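
To illustrate the calling convention this introduces, here is a minimal,
hypothetical caller sketch (not part of this patch; the function name and
fallback order are illustrative): NULL means the allocation itself failed and
retrying is pointless, while ERR_PTR(-EAGAIN) means the pages landed above
@phys_limit and a retry from a lower zone may still succeed.

/* Hypothetical sketch only, mirroring the retry logic described above. */
static struct page *alloc_pages_below(gfp_t gfp, size_t bytes, u64 phys_limit)
{
	struct page *page;

	for (;;) {
		page = alloc_dma_pages(gfp, bytes, phys_limit);
		if (!page)
			return NULL;	/* allocation failure: give up */
		if (!IS_ERR(page))
			return page;	/* decrypted and below phys_limit */

		/* ERR_PTR(-EAGAIN): retry from a lower zone, if one is left */
		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
		    !(gfp & (__GFP_DMA32 | __GFP_DMA)))
			gfp |= __GFP_DMA32;
		else if (IS_ENABLED(CONFIG_ZONE_DMA) && !(gfp & __GFP_DMA))
			gfp = (gfp & ~__GFP_DMA32) | __GFP_DMA;
		else
			return NULL;	/* no lower zone to fall back to */
	}
}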

Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Signed-off-by: Petr Tesarik <[email protected]>
---
kernel/dma/swiotlb.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dff067bd56b1..d1118f6f61b8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -558,30 +558,36 @@ void __init swiotlb_exit(void)
  * alloc_dma_pages() - allocate pages to be used for DMA
  * @gfp: GFP flags for the allocation.
  * @bytes: Size of the buffer.
+ * @phys_limit: Maximum allowed physical address of the buffer.
  *
  * Allocate pages from the buddy allocator. If successful, make the allocated
  * pages decrypted that they can be used for DMA.
  *
- * Return: Decrypted pages, or %NULL on failure.
+ * Return: Decrypted pages, %NULL on allocation failure, or ERR_PTR(-EAGAIN)
+ * if the allocated physical address was above @phys_limit.
  */
-static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
+static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
 {
 	unsigned int order = get_order(bytes);
 	struct page *page;
+	phys_addr_t paddr;
 	void *vaddr;
 
 	page = alloc_pages(gfp, order);
 	if (!page)
 		return NULL;
 
-	vaddr = page_address(page);
+	paddr = page_to_phys(page);
+	if (paddr + bytes - 1 > phys_limit)
+		goto error;
+	vaddr = phys_to_virt(paddr);
 	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
 		goto error;
 	return page;
 
 error:
 	__free_pages(page, order);
-	return NULL;
+	return ERR_PTR(-EAGAIN);
 }
 
 /**
@@ -618,11 +624,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	else if (phys_limit <= DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA32;
 
-	while ((page = alloc_dma_pages(gfp, bytes)) &&
-	       page_to_phys(page) + bytes - 1 > phys_limit) {
-		/* allocated, but too high */
-		__free_pages(page, get_order(bytes));
-
+	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
 		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
 		    phys_limit < DMA_BIT_MASK(64) &&
 		    !(gfp & (__GFP_DMA32 | __GFP_DMA)))
--
2.42.0
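
For reference, reconstructed from the first hunk above, alloc_dma_pages()
reads as follows with the patch applied. When the address check fails, the
pages are freed while still encrypted, since set_memory_decrypted() has not
been called yet:

static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
{
	unsigned int order = get_order(bytes);
	struct page *page;
	phys_addr_t paddr;
	void *vaddr;

	page = alloc_pages(gfp, order);
	if (!page)
		return NULL;		/* allocation failure */

	paddr = page_to_phys(page);
	if (paddr + bytes - 1 > phys_limit)
		goto error;		/* too high; still encrypted, safe to free */
	vaddr = phys_to_virt(paddr);
	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
		goto error;
	return page;

error:
	__free_pages(page, order);
	return ERR_PTR(-EAGAIN);	/* tell the caller to retry from a lower zone */
}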


2023-10-28 17:24:30

by Petr Tesařík

Subject: Re: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting

Hi Christoph,

On Thu, 26 Oct 2023 11:51:23 +0200
Petr Tesarik <[email protected]> wrote:

> From: Petr Tesarik <[email protected]>
>
> Do not decrypt a dynamically allocated TLB area until its physical address
> is known to be below the requested limit. Currently, pages are allocated
> and decrypted, but then they may be freed while still decrypted if
> swiotlb_alloc_tlb() determines that the physical address is too high.
>
> Let the caller differentiate between an unsuitable physical address (=> retry
> from a lower zone) and an allocation failure (=> no point in retrying).
>
> Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
> Signed-off-by: Petr Tesarik <[email protected]>

This patch fixes a potential information leak from a CoCo VM, so I
would like to see it reviewed. Is this on your radar, or did it fall
through the cracks?

Petr T


2023-10-30 13:31:49

by Christoph Hellwig

Subject: Re: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting

I'm trying to review it properly this week. It was definitely too big
to just rush it into 6.6 in the last few days.

2023-10-30 14:09:35

by Petr Tesařík

Subject: Re: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting

Hi Christoph,

On Mon, 30 Oct 2023 14:31:12 +0100
Christoph Hellwig <[email protected]> wrote:

> I'm trying to review it properly this week. It was definitely too big
> to just rush it into 6.6 in the last few days.

Thank you for the answer; that's fine. Let me give a bit of background.

The bug was reported to me by Michael Kelley while I had temporarily lost
access to my @huaweicloud.com mailbox. I could not credit him with a
Reported-by: tag, because the report arrived in a private email, which
cannot be referenced with a Closes: tag.

Anyway, Michael explained in that private email that the threat is more
or less theoretical, because environments where set_memory_decrypted()
actually does something are unlikely to have physical address
constraints for the bounce buffer.

But maybe we should add a CC: [email protected] nevertheless.

Petr T