Subject: [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory

This patchset fixes dma_mmap_coherent() mapping of unencrypted memory in
otherwise encrypted environments, where it would incorrectly map that memory as
encrypted.

With SEV, and sometimes with SME, DMA API coherent memory is typically
unencrypted, meaning the linear kernel map has the encryption bit cleared.
However, the default page protection returned from vm_get_page_prot() has
the encryption bit set. So to compute the correct page protection we need
to clear the encryption bit.

Also, in order for the encryption bit setting to survive across do_mmap()
and mprotect_fixup(), we need to make pgprot_modify() aware of it and not
touch it. Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
pgprot_modify() also preserves cleared bits that are part of
_PAGE_CHG_MASK, not just set bits. The use of pgprot_modify() is currently
quite limited and easy to audit.
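
As a minimal sketch of the intended semantics (illustration only, not part
of the patches): with _PAGE_ENC in _PAGE_CHG_MASK, a cleared encryption bit
in the old protection survives pgprot_modify() even though the new
protection coming from vm_get_page_prot() has the bit set:

	/*
	 * Illustration only. Assume oldprot has _PAGE_ENC cleared and
	 * newprot == vm_get_page_prot(vm_flags) has _PAGE_ENC set.
	 */
	pgprot_t example(pgprot_t oldprot, pgprot_t newprot)
	{
		/* _PAGE_ENC is in _PAGE_CHG_MASK, so its (cleared) state is kept */
		pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
		/* the set _PAGE_ENC in newprot is masked out here */
		pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;

		/* _PAGE_ENC in the result comes solely from oldprot: stays clear */
		return __pgprot(preservebits | addbits);
	}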

(Note that the encryption status is logically encoded in the page
protection, not in the pfn, even though an address line in the physical
address is used for it.)

The patchset has seen some sanity testing by exporting dma_pgprot() and
using it in the vmwgfx mmap handler with SEV enabled.
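
As a rough illustration of that test (the driver names and fields below are
hypothetical; dma_pgprot() is not normally exported, which is why the export
was needed), the mmap handler applied dma_pgprot() to the VMA protection
before remapping the coherent pages:

	/* hypothetical driver mmap path; error handling omitted */
	static int my_coherent_mmap(struct file *filp, struct vm_area_struct *vma)
	{
		struct my_device *mdev = filp->private_data;

		/* clears the encryption bit when force_dma_unencrypted() is true */
		vma->vm_page_prot = dma_pgprot(mdev->dev,
					       vm_get_page_prot(vma->vm_flags),
					       0 /* attrs */);
		return remap_pfn_range(vma, vma->vm_start, mdev->coherent_pfn,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);
	}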

As far as I can tell there are no current users of dma_mmap_coherent() with
SEV or SME encryption, which means that there is no need to CC stable.

Changes since:
RFC:
- Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
  pgprot_modify().
v1:
- Clarify which use-cases this patchset actually fixes.
v2:
- Use _PAGE_ENC instead of sme_me_mask in the definition of _PAGE_CHG_MASK
v3:
- Added RB from Dave Hansen.

Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Christian König <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Tom Lendacky <[email protected]>


Subject: [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit

From: Thomas Hellstrom <[email protected]>

When SEV or SME is enabled and active, vm_get_page_prot() typically
returns with the encryption bit set. This means that users of
pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up
with a value of vma->vm_page_prot that is not consistent with the intended
protection of the PTEs. This is also important for fault handlers that
rely on the VMA vm_page_prot to set the page protection. Fix this by
not allowing pgprot_modify() to change the encryption bit, similar to
how it's done for PAT bits.
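
For context, the path this refers to looks roughly like the following
(simplified from mm/mmap.c; not part of this patch). Both mprotect_fixup()
and the mmap code recompute vma->vm_page_prot through pgprot_modify() here,
so an encryption bit outside _PAGE_CHG_MASK would be silently reset:

	/* simplified from mm/mmap.c */
	static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
	{
		return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));
	}

	void vma_set_page_prot(struct vm_area_struct *vma)
	{
		unsigned long vm_flags = vma->vm_flags;
		pgprot_t vm_page_prot;

		/* without _PAGE_ENC in _PAGE_CHG_MASK, a cleared encryption
		 * bit in vma->vm_page_prot would be overwritten here */
		vm_page_prot = vm_pgprot_modify(vma->vm_page_prot, vm_flags);
		if (vma_wants_writenotify(vma, vm_page_prot)) {
			vm_flags &= ~VM_SHARED;
			vm_page_prot = vm_pgprot_modify(vm_page_prot, vm_flags);
		}
		WRITE_ONCE(vma->vm_page_prot, vm_page_prot);
	}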

Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Christian König <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Tom Lendacky <[email protected]>
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
---
arch/x86/include/asm/pgtable.h | 7 +++++--
arch/x86/include/asm/pgtable_types.h | 2 +-
2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d9925b10e326..c4615032c5ef 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
return __pmd(val);
}

-/* mprotect needs to preserve PAT bits when updating vm_page_prot */
+/*
+ * mprotect needs to preserve PAT and encryption bits when updating
+ * vm_page_prot
+ */
#define pgprot_modify pgprot_modify
static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
{
pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
- pgprotval_t addbits = pgprot_val(newprot);
+ pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
return __pgprot(preservebits | addbits);
}

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0239998d8cdc..65c2ecd730c5 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -118,7 +118,7 @@
*/
#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
_PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
- _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)

/*
--
2.21.1

Subject: [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages

From: Thomas Hellstrom <[email protected]>

When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
under SEV encryption, and sometimes under SME encryption, it will actually
set up an encrypted mapping rather than an unencrypted one, causing devices
that DMA from that memory to read encrypted contents. Fix this.

When force_dma_unencrypted() returns true, the linear kernel map of the
coherent pages has had the encryption bit explicitly cleared and the
page content is unencrypted. Make sure that any additional PTEs we set
up to these pages also have the encryption bit cleared by having
dma_pgprot() return a protection with the encryption bit cleared in this
case.
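
For reference, pgprot_decrypted() on x86 clears sme_me_mask from the
protection value; roughly:

	/* arch/x86/include/asm/pgtable.h; __sme_clr() masks out sme_me_mask */
	#define pgprot_encrypted(prot)	__pgprot(__sme_set(pgprot_val(prot)))
	#define pgprot_decrypted(prot)	__pgprot(__sme_clr(pgprot_val(prot)))

Architectures without memory encryption get a no-op fallback, so the added
dma_pgprot() lines are safe in generic code.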

Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Christian König <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Tom Lendacky <[email protected]>
Signed-off-by: Thomas Hellstrom <[email protected]>
---
kernel/dma/mapping.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766ec1fa..98e3d873792e 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
*/
pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
{
+ if (force_dma_unencrypted(dev))
+ prot = pgprot_decrypted(prot);
if (dev_is_dma_coherent(dev) ||
(IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
(attrs & DMA_ATTR_NON_CONSISTENT)))
--
2.21.1

Date: 2020-03-05 15:38:43
From: Christoph Hellwig
Subject: Re: [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

x86 maintainers: feel free to pick this up through your tree.

Subject: Re: [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory

Dave, Ingo,

On 3/4/20 12:45 PM, Thomas Hellström (VMware) wrote:
> This patchset fixes dma_mmap_coherent() mapping of unencrypted memory in
> otherwise encrypted environments, where it would incorrectly map that memory as
> encrypted.
>
> With SEV, and sometimes with SME, DMA API coherent memory is typically
> unencrypted, meaning the linear kernel map has the encryption bit cleared.
> However, the default page protection returned from vm_get_page_prot() has
> the encryption bit set. So to compute the correct page protection we need
> to clear the encryption bit.
>
> Also, in order for the encryption bit setting to survive across do_mmap()
> and mprotect_fixup(), we need to make pgprot_modify() aware of it and not
> touch it. Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
> pgprot_modify() also preserves cleared bits that are part of
> _PAGE_CHG_MASK, not just set bits. The use of pgprot_modify() is currently
> quite limited and easy to audit.
>
> (Note that the encryption status is logically encoded in the page
> protection, not in the pfn, even though an address line in the physical
> address is used for it.)
>
> The patchset has seen some sanity testing by exporting dma_pgprot() and
> using it in the vmwgfx mmap handler with SEV enabled.
>
> As far as I can tell there are no current users of dma_mmap_coherent() with
> SEV or SME encryption, which means that there is no need to CC stable.
>
> Changes since:
> RFC:
> - Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
>   pgprot_modify().
> v1:
> - Clarify which use-cases this patchset actually fixes.
> v2:
> - Use _PAGE_ENC instead of sme_me_mask in the definition of _PAGE_CHG_MASK
> v3:
> - Added RB from Dave Hansen.
>
> Cc: Dave Hansen <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Christian König <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Tom Lendacky <[email protected]>
Could we merge this small series through x86?
Patch 2/2 has a

Reviewed-by: Christoph Hellwig <[email protected]>

Please let me know if you want me to resend with that RB added.

Thanks,
Thomas

Date: 2020-03-16 19:44:54
From: Tom Lendacky
Subject: Re: [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit

On 3/4/20 5:45 AM, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <[email protected]>
>
> When SEV or SME is enabled and active, vm_get_page_prot() typically
> returns with the encryption bit set. This means that users of
> pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up
> with a value of vma->vm_page_prot that is not consistent with the intended
> protection of the PTEs. This is also important for fault handlers that
> rely on the VMA vm_page_prot to set the page protection. Fix this by
> not allowing pgprot_modify() to change the encryption bit, similar to
> how it's done for PAT bits.
>
> Cc: Dave Hansen <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Christian König <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Tom Lendacky <[email protected]>
> Signed-off-by: Thomas Hellstrom <[email protected]>
> Reviewed-by: Dave Hansen <[email protected]>

Acked-by: Tom Lendacky <[email protected]>

> ---
> arch/x86/include/asm/pgtable.h | 7 +++++--
> arch/x86/include/asm/pgtable_types.h | 2 +-
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index d9925b10e326..c4615032c5ef 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
> return __pmd(val);
> }
>
> -/* mprotect needs to preserve PAT bits when updating vm_page_prot */
> +/*
> + * mprotect needs to preserve PAT and encryption bits when updating
> + * vm_page_prot
> + */
> #define pgprot_modify pgprot_modify
> static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
> {
> pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
> - pgprotval_t addbits = pgprot_val(newprot);
> + pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
> return __pgprot(preservebits | addbits);
> }
>
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 0239998d8cdc..65c2ecd730c5 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -118,7 +118,7 @@
> */
> #define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
> _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
> - _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
> + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
> #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
>
> /*
>

Date: 2020-03-16 19:46:28
From: Tom Lendacky
Subject: Re: [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages

On 3/4/20 5:45 AM, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <[email protected]>
>
> When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
> under SEV encryption, and sometimes under SME encryption, it will actually
> set up an encrypted mapping rather than an unencrypted one, causing devices
> that DMA from that memory to read encrypted contents. Fix this.
>
> When force_dma_unencrypted() returns true, the linear kernel map of the
> coherent pages has had the encryption bit explicitly cleared and the
> page content is unencrypted. Make sure that any additional PTEs we set
> up to these pages also have the encryption bit cleared by having
> dma_pgprot() return a protection with the encryption bit cleared in this
> case.
>
> Cc: Dave Hansen <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Christian König <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Tom Lendacky <[email protected]>
> Signed-off-by: Thomas Hellstrom <[email protected]>

Acked-by: Tom Lendacky <[email protected]>

> ---
> kernel/dma/mapping.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 12ff766ec1fa..98e3d873792e 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
> */
> pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
> {
> + if (force_dma_unencrypted(dev))
> + prot = pgprot_decrypted(prot);
> if (dev_is_dma_coherent(dev) ||
> (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
> (attrs & DMA_ATTR_NON_CONSISTENT)))
>

Date: 2020-03-17 14:57:59
From: tip-bot2 for Thomas Hellstrom
Subject: [tip: x86/mm] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: 17c4a2ae15a7aaefe84bdb271952678c5c9cd8e1
Gitweb: https://git.kernel.org/tip/17c4a2ae15a7aaefe84bdb271952678c5c9cd8e1
Author: Thomas Hellstrom <[email protected]>
AuthorDate: Wed, 04 Mar 2020 12:45:27 +01:00
Committer: Borislav Petkov <[email protected]>
CommitterDate: Tue, 17 Mar 2020 11:52:58 +01:00

dma-mapping: Fix dma_pgprot() for unencrypted coherent pages

When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
under SEV encryption, and sometimes under SME encryption, it will actually
set up an encrypted mapping rather than an unencrypted one, causing devices
that DMA from that memory to read encrypted contents. Fix this.

When force_dma_unencrypted() returns true, the linear kernel map of the
coherent pages has had the encryption bit explicitly cleared and the
page content is unencrypted. Make sure that any additional PTEs we set
up to these pages also have the encryption bit cleared by having
dma_pgprot() return a protection with the encryption bit cleared in this
case.

Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Tom Lendacky <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/dma/mapping.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766..98e3d87 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
*/
pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
{
+ if (force_dma_unencrypted(dev))
+ prot = pgprot_decrypted(prot);
if (dev_is_dma_coherent(dev) ||
(IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
(attrs & DMA_ATTR_NON_CONSISTENT)))

Date: 2020-03-17 14:58:12
From: tip-bot2 for Thomas Hellstrom
Subject: [tip: x86/mm] x86: Don't let pgprot_modify() change the page encryption bit

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: 6db73f17c5f155dbcfd5e48e621c706270b84df0
Gitweb: https://git.kernel.org/tip/6db73f17c5f155dbcfd5e48e621c706270b84df0
Author: Thomas Hellstrom <[email protected]>
AuthorDate: Wed, 04 Mar 2020 12:45:26 +01:00
Committer: Borislav Petkov <[email protected]>
CommitterDate: Tue, 17 Mar 2020 11:48:31 +01:00

x86: Don't let pgprot_modify() change the page encryption bit

When SEV or SME is enabled and active, vm_get_page_prot() typically
returns with the encryption bit set. This means that users of
pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up
with a value of vma->vm_page_prot that is not consistent with the intended
protection of the PTEs.

This is also important for fault handlers that rely on the VMA
vm_page_prot to set the page protection. Fix this by not allowing
pgprot_modify() to change the encryption bit, similar to how it's done
for PAT bits.

Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
Acked-by: Tom Lendacky <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/include/asm/pgtable.h | 7 +++++--
arch/x86/include/asm/pgtable_types.h | 2 +-
2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7e11866..64a03f2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
return __pmd(val);
}

-/* mprotect needs to preserve PAT bits when updating vm_page_prot */
+/*
+ * mprotect needs to preserve PAT and encryption bits when updating
+ * vm_page_prot
+ */
#define pgprot_modify pgprot_modify
static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
{
pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
- pgprotval_t addbits = pgprot_val(newprot);
+ pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
return __pgprot(preservebits | addbits);
}

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0239998..65c2ecd 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -118,7 +118,7 @@
*/
#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
_PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
- _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)

/*