2020-06-11 19:26:55

by David Rientjes

Subject: [patch for-5.8 0/4] dma-direct: dma_direct_alloc_pages() fixes for AMD SEV

While debugging recently reported issues with DMA allocation when
CONFIG_AMD_MEM_ENCRYPT is enabled, a few curiosities turned up in the
behavior of dma_direct_alloc_pages().

Fix these up. These are all likely stable material, so I'm proposing them
for 5.8.
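
For context, all of these are in the path taken by an ordinary coherent
allocation in an SEV guest. A minimal, hypothetical driver snippet that
ends up in dma_direct_alloc_pages(), assuming the device maps through
dma-direct (no IOMMU):

	void *buf;
	dma_addr_t handle;

	buf = dma_alloc_coherent(dev, SZ_8K, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* device DMA via handle; for SEV the buffer was decrypted above */
	dma_free_coherent(dev, SZ_8K, buf, handle);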
---
kernel/dma/direct.c | 42 ++++++++++++++++++++++++++++++++----------
1 file changed, 32 insertions(+), 10 deletions(-)


2020-06-11 19:29:01

by David Rientjes

Subject: [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails

If arch_dma_set_uncached() fails after memory has been decrypted, it needs
to be re-encrypted before it is freed; otherwise pages whose direct mapping
is still marked unencrypted would be returned to the page allocator and
later reused as ordinary encrypted memory.

Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more general")
Cc: [email protected] # 5.7
Signed-off-by: David Rientjes <[email protected]>
---
kernel/dma/direct.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -220,7 +220,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
arch_dma_prep_coherent(page, size);
ret = arch_dma_set_uncached(ret, size);
if (IS_ERR(ret))
- goto out_free_pages;
+ goto out_encrypt_pages;
}
done:
if (force_dma_unencrypted(dev))
@@ -228,6 +228,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
else
*dma_handle = phys_to_dma(dev, page_to_phys(page));
return ret;
+out_encrypt_pages:
+ if (force_dma_unencrypted(dev))
+ set_memory_encrypted((unsigned long)page_address(page),
+ 1 << get_order(size));
out_free_pages:
dma_free_contiguous(dev, page, size);
return NULL;

2020-06-11 19:29:14

by David Rientjes

Subject: [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping

When a coherent mapping is created in dma_direct_alloc_pages(), it needs
to be decrypted before being returned if the device requires unencrypted
DMA.

Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers into dma-direct")
Cc: [email protected] # 5.5+
Signed-off-by: David Rientjes <[email protected]>
---
kernel/dma/direct.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -195,6 +195,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
__builtin_return_address(0));
if (!ret)
goto out_free_pages;
+ if (force_dma_unencrypted(dev)) {
+ err = set_memory_decrypted((unsigned long)ret,
+ 1 << get_order(size));
+ if (err)
+ goto out_free_pages;
+ }
memset(ret, 0, size);
goto done;
}

2020-06-11 19:29:36

by David Rientjes

Subject: [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages()

dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted()
works at page granularity. It's necessary to page align the allocation
size in dma_direct_alloc_pages() for consistent behavior.
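
As a concrete illustration (hypothetical numbers, assuming 4 KiB pages):

	size_t size = 100;
	size >> PAGE_SHIFT;	/* == 0: page count dma_alloc_contiguous() computes */
	1 << get_order(size);	/* == 1: page count set_memory_decrypted() is given */
	size = PAGE_ALIGN(size);	/* == 4096: both now describe one full page */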

This also fixes an issue where arch_dma_prep_coherent() is called with an
unaligned allocation size in the dma_alloc_need_uncached() case when
CONFIG_DMA_DIRECT_REMAP is disabled but CONFIG_ARCH_HAS_DMA_SET_UNCACHED
is enabled.

Cc: [email protected]
Signed-off-by: David Rientjes <[email protected]>
---
kernel/dma/direct.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -112,11 +112,12 @@ static inline bool dma_should_free_from_pool(struct device *dev,
struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
gfp_t gfp, unsigned long attrs)
{
- size_t alloc_size = PAGE_ALIGN(size);
int node = dev_to_node(dev);
struct page *page = NULL;
u64 phys_limit;

+ VM_BUG_ON(!PAGE_ALIGNED(size));
+
if (attrs & DMA_ATTR_NO_WARN)
gfp |= __GFP_NOWARN;

@@ -124,14 +125,14 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
gfp &= ~__GFP_ZERO;
gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
&phys_limit);
- page = dma_alloc_contiguous(dev, alloc_size, gfp);
+ page = dma_alloc_contiguous(dev, size, gfp);
if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
- dma_free_contiguous(dev, page, alloc_size);
+ dma_free_contiguous(dev, page, size);
page = NULL;
}
again:
if (!page)
- page = alloc_pages_node(node, gfp, get_order(alloc_size));
+ page = alloc_pages_node(node, gfp, get_order(size));
if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
dma_free_contiguous(dev, page, size);
page = NULL;
@@ -158,8 +159,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
struct page *page;
void *ret;

+ size = PAGE_ALIGN(size);
+
if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
- ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
+ ret = dma_alloc_from_pool(dev, size, &page, gfp);
if (!ret)
return NULL;
goto done;
@@ -183,10 +186,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
dma_alloc_need_uncached(dev, attrs)) ||
(IS_ENABLED(CONFIG_DMA_REMAP) && PageHighMem(page))) {
/* remove any dirty cache lines on the kernel alias */
- arch_dma_prep_coherent(page, PAGE_ALIGN(size));
+ arch_dma_prep_coherent(page, size);

/* create a coherent mapping */
- ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
+ ret = dma_common_contiguous_remap(page, size,
dma_pgprot(dev, PAGE_KERNEL, attrs),
__builtin_return_address(0));
if (!ret)

2020-06-11 19:29:41

by David Rientjes

Subject: [patch for-5.8 3/4] dma-direct: check return value when encrypting or decrypting memory

__change_page_attr() can fail, which causes set_memory_encrypted() and
set_memory_decrypted() to return non-zero.

If the device requires unencrypted DMA memory and decryption fails, simply
free the memory and fail.

If the re-encryption in that failure path itself fails, there is no
alternative but to leak the memory.
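
For reference, set_memory_encrypted()/set_memory_decrypted() take a page
count, not a byte length; with size already page-aligned by patch 1, the
conversion used below is (hypothetical numbers, 4 KiB pages):

	size_t size = 8192;		/* two pages */
	get_order(size);		/* == 1 */
	1 << get_order(size);		/* == 2: pages passed to set_memory_*() */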

Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Cc: [email protected] # 4.17+
Signed-off-by: David Rientjes <[email protected]>
---
kernel/dma/direct.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -158,6 +158,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
{
struct page *page;
void *ret;
+ int err;

size = PAGE_ALIGN(size);

@@ -210,8 +211,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
}

ret = page_address(page);
- if (force_dma_unencrypted(dev))
- set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+ if (force_dma_unencrypted(dev)) {
+ err = set_memory_decrypted((unsigned long)ret,
+ 1 << get_order(size));
+ if (err)
+ goto out_free_pages;
+ }

memset(ret, 0, size);

@@ -229,9 +234,13 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
*dma_handle = phys_to_dma(dev, page_to_phys(page));
return ret;
out_encrypt_pages:
- if (force_dma_unencrypted(dev))
- set_memory_encrypted((unsigned long)page_address(page),
- 1 << get_order(size));
+ if (force_dma_unencrypted(dev)) {
+ err = set_memory_encrypted((unsigned long)page_address(page),
+ 1 << get_order(size));
+ /* If memory cannot be re-encrypted, it must be leaked */
+ if (err)
+ return NULL;
+ }
out_free_pages:
dma_free_contiguous(dev, page, size);
return NULL;

2020-06-15 06:57:01

by Christoph Hellwig

Subject: Re: [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages()

On Thu, Jun 11, 2020 at 12:20:28PM -0700, David Rientjes wrote:
> dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted()
> works at page granularity. It's necessary to page align the allocation
> size in dma_direct_alloc_pages() for consistent behavior.
>
> This also fixes an issue where arch_dma_prep_coherent() is called with an
> unaligned allocation size in the dma_alloc_need_uncached() case when
> CONFIG_DMA_DIRECT_REMAP is disabled but CONFIG_ARCH_HAS_DMA_SET_UNCACHED
> is enabled.
>
> Cc: [email protected]
> Signed-off-by: David Rientjes <[email protected]>
> ---
> kernel/dma/direct.c | 17 ++++++++++-------
> 1 file changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -112,11 +112,12 @@ static inline bool dma_should_free_from_pool(struct device *dev,
> struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
> gfp_t gfp, unsigned long attrs)
> {
> - size_t alloc_size = PAGE_ALIGN(size);
> int node = dev_to_node(dev);
> struct page *page = NULL;
> u64 phys_limit;
>
> + VM_BUG_ON(!PAGE_ALIGNED(size));

This really should be a WARN_ON_ONCE, but I've fixed this up before
applying. I've also added a prep patch to mark __dma_direct_alloc_pages
static as part of auditing for other callers.
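
The applied form is presumably a straight substitution (a sketch, not the
final commit):

	/* warn once on a misaligned size instead of BUGing DEBUG_VM kernels */
	WARN_ON_ONCE(!PAGE_ALIGNED(size));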

2020-06-15 06:58:29

by Christoph Hellwig

Subject: Re: [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails

On Thu, Jun 11, 2020 at 12:20:29PM -0700, David Rientjes wrote:
> If arch_dma_set_uncached() fails after memory has been decrypted, it needs
> to be re-encrypted before it is freed; otherwise pages whose direct mapping
> is still marked unencrypted would be returned to the page allocator and
> later reused as ordinary encrypted memory.
>
> Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more general")
> Cc: [email protected] # 5.7
> Signed-off-by: David Rientjes <[email protected]>

Note that this can't really happen in practice, as
CONFIG_ARCH_HAS_DMA_SET_UNCACHED and memory encryption are mutually
exclusive. Still looks OK and useful otherwise.

2020-06-15 07:05:26

by Christoph Hellwig

Subject: Re: [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping

On Thu, Jun 11, 2020 at 12:20:32PM -0700, David Rientjes wrote:
> When a coherent mapping is created in dma_direct_alloc_pages(), it needs
> to be decrypted before being returned if the device requires unencrypted
> DMA.
>
> Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers into dma-direct")
> Cc: [email protected] # 5.5+
> Signed-off-by: David Rientjes <[email protected]>
> ---
> kernel/dma/direct.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -195,6 +195,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
> __builtin_return_address(0));
> if (!ret)
> goto out_free_pages;
> + if (force_dma_unencrypted(dev)) {
> + err = set_memory_decrypted((unsigned long)ret,
> + 1 << get_order(size));
> + if (err)
> + goto out_free_pages;
> + }

Note that ret is a vmalloc address here. Does set_memory_decrypted()
work for that case? Again, this should be mostly theoretical, so I'm
not too worried for now.
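
For reference, a minimal sketch (illustrative, not from the patch) of why
ret is a vmalloc address in this path:

	/*
	 * dma_common_contiguous_remap() vmap()s the backing pages, so the
	 * pointer it returns lives in the vmalloc area rather than the
	 * linear map:
	 */
	void *ret = dma_common_contiguous_remap(page, size,
			dma_pgprot(dev, PAGE_KERNEL, attrs),
			__builtin_return_address(0));
	WARN_ON(ret && !is_vmalloc_addr(ret));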