Hello,
Both powerpc¹ and s390² are adding <asm/mem_encrypt.h> headers. Currently,
they have to supply definitions for functions and macros which are only
meaningful on x86: sme_me_mask, sme_active() and sev_active().
Christoph Hellwig suggested that we "clean up the Kconfig and generic
headers bits for memory encryption so that we don't need all this
boilerplate code", and that is what this series does.
After this patch set, this is powerpc's <asm/mem_encrypt.h>:
#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
#define _ASM_POWERPC_MEM_ENCRYPT_H

#include <asm/svm.h>

static inline bool mem_encrypt_active(void)
{
	return is_secure_guest();
}

static inline bool force_dma_unencrypted(struct device *dev)
{
	return is_secure_guest();
}

int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);

#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */
I don't have a way to test SME or SEV, so the patches have only been
build-tested. They assume the presence of the following two commits:
Commit 4eb5fec31e61 ("fs/proc/vmcore: Enable dumping of encrypted memory
when SEV was active"), which is now in Linus' master branch;
Commit e67a5ed1f86f ("dma-direct: Force unencrypted DMA under SME for
certain DMA masks"), which is in dma-mapping/for-next and comes from this
patch:
https://lore.kernel.org/linux-iommu/10b83d9ff31bca88e94da2ff34e30619eb396078.1562785123.git.thomas.lendacky@amd.com/
Thiago Jung Bauermann (3):
x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig
DMA mapping: Move SME handling to x86-specific files
fs/core/vmcore: Move sev_active() reference to x86 arch code
arch/Kconfig | 3 +++
arch/x86/Kconfig | 5 ++---
arch/x86/include/asm/dma-mapping.h | 7 +++++++
arch/x86/include/asm/mem_encrypt.h | 10 ++++++++++
arch/x86/kernel/crash_dump_64.c | 5 +++++
fs/proc/vmcore.c | 8 ++++----
include/linux/crash_dump.h | 14 ++++++++++++++
include/linux/mem_encrypt.h | 15 +--------------
kernel/dma/Kconfig | 3 +++
kernel/dma/mapping.c | 4 ++--
kernel/dma/swiotlb.c | 3 +--
11 files changed, 52 insertions(+), 25 deletions(-)
--
¹ https://lore.kernel.org/linuxppc-dev/[email protected]/
² https://lore.kernel.org/kvm/[email protected]/
Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
appear in generic kernel code because it forces non-x86 architectures to
define the sev_active() function, which doesn't make a lot of sense.
To solve this problem, add an x86 elfcorehdr_read() function to override
the generic weak implementation. To do that, it's necessary to make
read_from_oldmem() public so that it can be used outside of vmcore.c.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/x86/kernel/crash_dump_64.c | 5 +++++
fs/proc/vmcore.c | 8 ++++----
include/linux/crash_dump.h | 14 ++++++++++++++
include/linux/mem_encrypt.h | 1 -
4 files changed, 23 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 22369dd5de3b..045e82e8945b 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
{
return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
}
+
+ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
+{
+ return read_from_oldmem(buf, count, ppos, 0, sev_active());
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 57957c91c6df..ca1f20bedd8c 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
}
/* Reads a page from the oldmem device from given offset. */
-static ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted)
+ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted)
{
unsigned long pfn, offset;
size_t nr_bytes;
@@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
*/
ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, sev_active());
+ return read_from_oldmem(buf, count, ppos, 0, false);
}
/*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index f774c5eb9e3c..4664fc1871de 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
return -EOPNOTSUPP;
}
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+
+#ifdef CONFIG_PROC_VMCORE
+ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted);
+#else
+static inline ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted)
+{
+ return -EOPNOTSUPP;
+}
+#endif /* CONFIG_PROC_VMCORE */
+
#endif /* LINUX_CRASHDUMP_H */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index f2e399fb626b..a3747fcae466 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,7 +21,6 @@
#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-static inline bool sev_active(void) { return false; }
static inline bool mem_encrypt_active(void) { return false; }
#endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
powerpc and s390 are going to use this feature as well, so put it in a
generic location.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/Kconfig | 3 +++
arch/x86/Kconfig | 4 +---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index c47b328eada0..4ef3499d4480 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -927,6 +927,9 @@ config LOCK_EVENT_COUNTS
the chance of application behavior change because of timing
differences. The counts are reported via debugfs.
+config ARCH_HAS_MEM_ENCRYPT
+ bool
+
source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 12e02a8f9de7..7f4d28da8fe3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -67,6 +67,7 @@ config X86
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_KCOV if X86_64
+ select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_MEMBARRIER_SYNC_CORE
select ARCH_HAS_PMEM_API if X86_64
select ARCH_HAS_PTE_SPECIAL
@@ -1500,9 +1501,6 @@ config X86_CPA_STATISTICS
helps to determine the effectiveness of preserving large and huge
page mappings when mapping protections are changed.
-config ARCH_HAS_MEM_ENCRYPT
- def_bool y
-
config AMD_MEM_ENCRYPT
bool "AMD Secure Memory Encryption (SME) support"
depends on X86_64 && CPU_SUP_AMD
On Fri, 12 Jul 2019 02:36:31 -0300
Thiago Jung Bauermann <[email protected]> wrote:
> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
> appear in generic kernel code because it forces non-x86 architectures to
> define the sev_active() function, which doesn't make a lot of sense.
sev_active() might just be a bad (too specific) name for a general
concept. s390 code defines it so that it drives the right behavior in
kernel/dma/direct.c (which uses it).
>
> To solve this problem, add an x86 elfcorehdr_read() function to override
> the generic weak implementation. To do that, it's necessary to make
> read_from_oldmem() public so that it can be used outside of vmcore.c.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
> ---
> arch/x86/kernel/crash_dump_64.c | 5 +++++
> fs/proc/vmcore.c | 8 ++++----
> include/linux/crash_dump.h | 14 ++++++++++++++
> include/linux/mem_encrypt.h | 1 -
> 4 files changed, 23 insertions(+), 5 deletions(-)
Does not seem to apply to today's or yesterday's master.
>
> diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
> index 22369dd5de3b..045e82e8945b 100644
> --- a/arch/x86/kernel/crash_dump_64.c
> +++ b/arch/x86/kernel/crash_dump_64.c
> @@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> {
> return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
> }
> +
> +ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> +{
> + return read_from_oldmem(buf, count, ppos, 0, sev_active());
> +}
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 57957c91c6df..ca1f20bedd8c 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
> }
>
> /* Reads a page from the oldmem device from given offset. */
> -static ssize_t read_from_oldmem(char *buf, size_t count,
> - u64 *ppos, int userbuf,
> - bool encrypted)
> +ssize_t read_from_oldmem(char *buf, size_t count,
> + u64 *ppos, int userbuf,
> + bool encrypted)
> {
> unsigned long pfn, offset;
> size_t nr_bytes;
> @@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
> */
> ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> {
> - return read_from_oldmem(buf, count, ppos, 0, sev_active());
> + return read_from_oldmem(buf, count, ppos, 0, false);
> }
>
> /*
> diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
> index f774c5eb9e3c..4664fc1871de 100644
> --- a/include/linux/crash_dump.h
> +++ b/include/linux/crash_dump.h
> @@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
> return -EOPNOTSUPP;
> }
> #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
> +
> +#ifdef CONFIG_PROC_VMCORE
> +ssize_t read_from_oldmem(char *buf, size_t count,
> + u64 *ppos, int userbuf,
> + bool encrypted);
> +#else
> +static inline ssize_t read_from_oldmem(char *buf, size_t count,
> + u64 *ppos, int userbuf,
> + bool encrypted)
> +{
> + return -EOPNOTSUPP;
> +}
> +#endif /* CONFIG_PROC_VMCORE */
> +
> #endif /* LINUX_CRASHDUMP_H */
> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
> index f2e399fb626b..a3747fcae466 100644
> --- a/include/linux/mem_encrypt.h
> +++ b/include/linux/mem_encrypt.h
> @@ -21,7 +21,6 @@
>
> #else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>
> -static inline bool sev_active(void) { return false; }
This is the implementation for the architectures that don't
have ARCH_HAS_MEM_ENCRYPT.
That means sev_active() may not be used in such code after this
patch. What about

static inline bool force_dma_unencrypted(void)
{
	return sev_active();
}

in kernel/dma/direct.c?
Regards,
Halil
> static inline bool mem_encrypt_active(void) { return false; }
>
> #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
On Fri, Jul 12, 2019 at 03:09:12PM +0200, Halil Pasic wrote:
> This is the implementation for the architectures that don't
> have ARCH_HAS_MEM_ENCRYPT.
>
> That means sev_active() may not be used in such code after this
> patch. What about
>
> static inline bool force_dma_unencrypted(void)
> {
> return sev_active();
> }
>
> in kernel/dma/direct.c?
FYI, I have this pending in the dma-mapping tree:
http://git.infradead.org/users/hch/dma-mapping.git/commitdiff/e67a5ed1f86f4370991c601f2fcad9ebf9e1eebb
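Roughly, that commit makes force_dma_unencrypted() take the device into
account: DMA is forced unencrypted when SEV is active, or when SME is active
and the device's DMA mask cannot cover addresses that carry the encryption
bit. A simplified sketch of the idea (not necessarily the exact code in the
commit):

static inline bool force_dma_unencrypted(struct device *dev)
{
	/* Under SEV, all DMA must go through unencrypted addresses. */
	if (sev_active())
		return true;

	/*
	 * Under SME, only devices whose DMA mask cannot carry the
	 * encryption bit need unencrypted buffers.
	 */
	if (sme_active()) {
		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
						dev->bus_dma_mask);

		if (dma_dev_mask <= dma_enc_mask)
			return true;
	}

	return false;
}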
On Fri, 12 Jul 2019 16:08:12 +0200
Christoph Hellwig <[email protected]> wrote:
> On Fri, Jul 12, 2019 at 03:09:12PM +0200, Halil Pasic wrote:
> > This is the implementation for the architectures that don't
> > have ARCH_HAS_MEM_ENCRYPT.
> >
> > That means sev_active() may not be used in such code after this
> > patch. What about
> >
> > static inline bool force_dma_unencrypted(void)
> > {
> > return sev_active();
> > }
> >
> > in kernel/dma/direct.c?
>
> FYI, I have this pending in the dma-mapping tree:
>
> http://git.infradead.org/users/hch/dma-mapping.git/commitdiff/e67a5ed1f86f4370991c601f2fcad9ebf9e1eebb
Thank you very much! I will have another look, but it seems to me,
without further measures taken, this would break protected virtualization
support on s390. The effect of the change for s390 is that
force_dma_unencrypted() will always return false instead of calling into
the platform code like it did before the patch, right?
Should I send a Fixes: e67a5ed1f86f "dma-direct: Force unencrypted DMA
under SME for certain DMA masks" (Tom Lendacky, 2019-07-10) patch that
rectifies things for s390, or how do we want to handle this?
Regards,
Halil
On Fri, Jul 12, 2019 at 04:51:53PM +0200, Halil Pasic wrote:
> Thank you very much! I will have another look, but it seems to me,
> without further measures taken, this would break protected virtualization
> support on s390. The effect of the change for s390 is that
> force_dma_unencrypted() will always return false instead of calling into
> the platform code like it did before the patch, right?
>
> Should I send a Fixes: e67a5ed1f86f "dma-direct: Force unencrypted DMA
> under SME for certain DMA masks" (Tom Lendacky, 2019-07-10) patch that
> rectifies things for s390, or how do we want to handle this?
Yes, please do. I hadn't noticed the s390 support had landed in
mainline already.
On Fri, 12 Jul 2019 17:11:29 +0200
Christoph Hellwig <[email protected]> wrote:
> On Fri, Jul 12, 2019 at 04:51:53PM +0200, Halil Pasic wrote:
> > Thank you very much! I will have another look, but it seems to me,
> > without further measures taken, this would break protected virtualization
> > support on s390. The effect of the change for s390 is that
> > force_dma_unencrypted() will always return false instead of calling into
> > the platform code like it did before the patch, right?
> >
> > Should I send a Fixes: e67a5ed1f86f "dma-direct: Force unencrypted DMA
> > under SME for certain DMA masks" (Tom Lendacky, 2019-07-10) patch that
> > rectifies things for s390, or how do we want to handle this?
>
> Yes, please do. I hadn't noticed the s390 support had landed in
> mainline already.
>
Will do! I guess I should do the patch against the for-next branch of the
dma-mapping tree. But that branch does not have the s390 support patches (yet?).
To fix it I need both e67a5ed1f86f and 64e1f0c531d1 "s390/mm: force
swiotlb for protected virtualization" (Halil Pasic, 2018-09-13). Or
should I wait for e67a5ed1f86f to land in mainline?
Regards,
Halil
On Fri, 12 Jul 2019, Thiago Jung Bauermann wrote:
> powerpc and s390 are going to use this feature as well, so put it in a
> generic location.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
[ Cc'ing Tom Lendacky which I forgot to do earlier. Sorry about that. ]
Hello Halil,
Thanks for the quick review.
Halil Pasic <[email protected]> writes:
> On Fri, 12 Jul 2019 02:36:31 -0300
> Thiago Jung Bauermann <[email protected]> wrote:
>
>> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
>> appear in generic kernel code because it forces non-x86 architectures to
>> define the sev_active() function, which doesn't make a lot of sense.
>
> sev_active() might just be a bad (too specific) name for a general
> concept. s390 code defines it so that it drives the right behavior in
> kernel/dma/direct.c (which uses it).
I thought about that but couldn't put my finger on a general concept.
Is it "guest with memory inaccessible to the host"?
Since your proposed definition for force_dma_unencrypted() is simply to
make it equivalent to sev_active(), I thought it was more
straightforward to make each arch define force_dma_unencrypted()
directly.
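For illustration, the common code could then gate on a per-arch hook along
these lines (a sketch only; the Kconfig symbol name used here is made up,
not something already in this series):

#ifdef CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED
/* The architecture supplies its own force_dma_unencrypted(). */
bool force_dma_unencrypted(struct device *dev);
#else
static inline bool force_dma_unencrypted(struct device *dev)
{
	return false;
}
#endif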
Also, does sev_active() drive the right behavior for s390 in
elfcorehdr_read() as well?
>> To solve this problem, add an x86 elfcorehdr_read() function to override
>> the generic weak implementation. To do that, it's necessary to make
>> read_from_oldmem() public so that it can be used outside of vmcore.c.
>>
>> Signed-off-by: Thiago Jung Bauermann <[email protected]>
>> ---
>> arch/x86/kernel/crash_dump_64.c | 5 +++++
>> fs/proc/vmcore.c | 8 ++++----
>> include/linux/crash_dump.h | 14 ++++++++++++++
>> include/linux/mem_encrypt.h | 1 -
>> 4 files changed, 23 insertions(+), 5 deletions(-)
>
> Does not seem to apply to today's or yesterdays master.
It assumes the presence of the two patches I mentioned in the cover
letter. Only one of them is in master.
I hadn't realized the s390 virtio patches were on their way to upstream.
I was keeping an eye on the email thread but didn't see they were picked
up in the s390 pull request. I'll add a new patch to this series making
the corresponding changes to s390's <asm/mem_encrypt.h> as well.
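For illustration only, such an s390 header could mirror the powerpc one from
the cover letter along these lines (a rough sketch, not the actual patch;
whether is_prot_virt_guest() and <asm/uv.h> are the right predicate and
header for s390 is an assumption here):

#ifndef S390_MEM_ENCRYPT_H__
#define S390_MEM_ENCRYPT_H__

#include <asm/uv.h>	/* assumed home of is_prot_virt_guest() */

static inline bool mem_encrypt_active(void)
{
	return is_prot_virt_guest();
}

static inline bool force_dma_unencrypted(struct device *dev)
{
	return is_prot_virt_guest();
}

int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);

#endif /* S390_MEM_ENCRYPT_H__ */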
--
Thiago Jung Bauermann
IBM Linux Technology Center
Hello Thomas,
Thanks for quickly reviewing the patches.
Thomas Gleixner <[email protected]> writes:
> On Fri, 12 Jul 2019, Thiago Jung Bauermann wrote:
>
>> powerpc and s390 are going to use this feature as well, so put it in a
>> generic location.
>>
>> Signed-off-by: Thiago Jung Bauermann <[email protected]>
>
> Reviewed-by: Thomas Gleixner <[email protected]>
Thanks!
--
Thiago Jung Bauermann
IBM Linux Technology Center
On Fri, Jul 12, 2019 at 05:42:49PM +0200, Halil Pasic wrote:
>
> Will do! I guess I should do the patch against the for-next branch of the
> dma-mapping tree. But that branch does not have the s390 support patches (yet?).
> To fix it I need both e67a5ed1f86f and 64e1f0c531d1 "s390/mm: force
> swiotlb for protected virtualization" (Halil Pasic, 2018-09-13). Or
> should I wait for e67a5ed1f86f to land in mainline?
I've rebased the dma-mapping for-next branch onto the latest mainline as of
today, so it now has both commits.
On Fri, 12 Jul 2019 18:55:47 -0300
Thiago Jung Bauermann <[email protected]> wrote:
>
> [ Cc'ing Tom Lendacky which I forgot to do earlier. Sorry about that. ]
>
> Hello Halil,
>
> Thanks for the quick review.
>
> Halil Pasic <[email protected]> writes:
>
> > On Fri, 12 Jul 2019 02:36:31 -0300
> > Thiago Jung Bauermann <[email protected]> wrote:
> >
> >> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
> >> appear in generic kernel code because it forces non-x86 architectures to
> >> define the sev_active() function, which doesn't make a lot of sense.
> >
> > sev_active() might just be a bad (too specific) name for a general
> > concept. s390 code defines it so that it drives the right behavior in
> > kernel/dma/direct.c (which uses it).
>
> I thought about that but couldn't put my finger on a general concept.
> Is it "guest with memory inaccessible to the host"?
>
Well, force_dma_unencrypted() is a much better name than sev_active():
s390 has no AMD SEV, that is sure, but for virtio to work we do need to
make our DMA accessible to the hypervisor. Yes, your "guest with memory
inaccessible to the host" points in the right direction IMHO.
Unfortunately I don't have too many cycles to spend on this right now.
> Since your proposed definition for force_dma_unencrypted() is simply to
> make it equivalent to sev_active(), I thought it was more
> straightforward to make each arch define force_dma_unencrypted()
> directly.
I did not mean to propose equivalence. I intended to say that the name
sev_active() is not suitable for a common concept. On the other hand,
we do have a common concept, since common code needs to do or not do
things depending on whether "memory is protected/encrypted" or not. I'm
fine with the name force_dma_unencrypted(), especially because I don't
have a better name.
>
> Also, does sev_active() drive the right behavior for s390 in
> elfcorehdr_read() as well?
>
AFAIU, since s390 does not override it, it boils down to the same thing
whether sev_active() returns true or false. I'm no expert in that area,
but I strongly hope that is the right behavior. @Janosch: can you help me
out with this one?
> >> To solve this problem, add an x86 elfcorehdr_read() function to override
> >> the generic weak implementation. To do that, it's necessary to make
> >> read_from_oldmem() public so that it can be used outside of vmcore.c.
> >>
> >> Signed-off-by: Thiago Jung Bauermann <[email protected]>
> >> ---
> >> arch/x86/kernel/crash_dump_64.c | 5 +++++
> >> fs/proc/vmcore.c | 8 ++++----
> >> include/linux/crash_dump.h | 14 ++++++++++++++
> >> include/linux/mem_encrypt.h | 1 -
> >> 4 files changed, 23 insertions(+), 5 deletions(-)
> >
> > Does not seem to apply to today's or yesterdays master.
>
> It assumes the presence of the two patches I mentioned in the cover
> letter. Only one of them is in master.
>
> I hadn't realized the s390 virtio patches were on their way to upstream.
> I was keeping an eye on the email thread but didn't see they were picked
> up in the s390 pull request. I'll add a new patch to this series making
> the corresponding changes to s390's <asm/mem_encrypt.h> as well.
>
Being on cc for your patch made me realize that things got broken on
s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
to benefit from your cleanups. I think with your cleanups and that patch
of mine both sev_active() and sme_active() can be removed. Feel free to
do so. If not, I can attend to it as well.
Regards,
Halil
On Mon, Jul 15, 2019 at 04:03:17PM +0200, Halil Pasic wrote:
> > I thought about that but couldn't put my finger on a general concept.
> > Is it "guest with memory inaccessible to the host"?
> >
>
> Well, force_dma_unencrypted() is a much better name than sev_active():
> s390 has no AMD SEV, that is sure, but for virtio to work we do need to
> make our DMA accessible to the hypervisor. Yes, your "guest with memory
> inaccessible to the host" points in the right direction IMHO.
> Unfortunately I don't have too many cycles to spend on this right now.
On x86 it means that we need to remove DMA encryption using
set_memory_decrypted before using the memory for DMA purposes. In the SEV
case that seems to be so that the hypervisor can access it; in the SME
case that Tom just fixed, it is because there is an encryption bit set
in the physical address, and if the device doesn't support a large
enough DMA address the direct mapping code has to encrypt the pages
used for the contiguous allocation.
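In code terms, that boils down to a pattern like this in the direct mapping
allocation path (a sketch of the idea, not the exact kernel/dma/direct.c
code): the freshly allocated buffer is switched to unencrypted before its
address is handed to the device.

	if (force_dma_unencrypted(dev))
		set_memory_decrypted((unsigned long)page_address(page),
				     PAGE_ALIGN(size) >> PAGE_SHIFT);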
> Being on cc for your patch made me realize that things got broken on
> s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
> to benefit from your cleanups. I think with your cleanups and that patch
> of mine both sev_active() and sme_active() can be removed. Feel free to
> do so. If not, I can attend to it as well.
Yes, I think with the dma-mapping fix and this series sme_active and
sev_active should be gone from common code. We should also be able
to remove the exports x86 has for them.
I'll wait a few days and will then feed the dma-mapping fix to Linus;
it might make sense to either rebase Thiago's series on top of the
dma-mapping for-next branch or wait a few days before reposting.
On 7/15/19 9:30 AM, Christoph Hellwig wrote:
> On Mon, Jul 15, 2019 at 04:03:17PM +0200, Halil Pasic wrote:
>>> I thought about that but couldn't put my finger on a general concept.
>>> Is it "guest with memory inaccessible to the host"?
>>>
>>
>> Well, force_dma_unencrypted() is a much better name than sev_active():
>> s390 has no AMD SEV, that is sure, but for virtio to work we do need to
>> make our DMA accessible to the hypervisor. Yes, your "guest with memory
>> inaccessible to the host" points in the right direction IMHO.
>> Unfortunately I don't have too many cycles to spend on this right now.
>
> On x86 it means that we need to remove DMA encryption using
> set_memory_decrypted before using the memory for DMA purposes. In the SEV
> case that seems to be so that the hypervisor can access it; in the SME
> case that Tom just fixed, it is because there is an encryption bit set
> in the physical address, and if the device doesn't support a large
> enough DMA address the direct mapping code has to encrypt the pages
> used for the contiguous allocation.
Just a correction/clarification...
For SME, when a device doesn't support a large enough DMA address to
accommodate the encryption bit as part of the DMA address, the direct
mapping code has to provide un-encrypted pages. For un-encrypted pages,
the DMA address now does not include the encryption bit, making it
acceptable to the device. Since the device is now using a DMA address
without the encryption bit, the physical address in the CPU page table
must match (the call to set_memory_decrypted) so that both the device and
the CPU interact in the same way with the memory.
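In macro terms, the encryption bit is just a high physical address bit, and
the existing helpers in the mem_encrypt headers express the two cases (quoted
here only to illustrate the point; the real definitions live in those
headers):

#define __sme_set(x)	((x) | sme_me_mask)	/* physical/DMA address with the encryption bit */
#define __sme_clr(x)	((x) & ~sme_me_mask)	/* physical/DMA address without it */

A device with a too-narrow DMA mask can only be handed the __sme_clr() form,
which is why the buffer behind it must also be mapped decrypted in the CPU
page tables.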
Thanks,
Tom
>
>> Being on cc for your patch made me realize that things got broken on
>> s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
>> to benefit from your cleanups. I think with your cleanups and that patch
>> of mine both sev_active() and sme_active() can be removed. Feel free to
>> do so. If not, I can attend to it as well.
>
> Yes, I think with the dma-mapping fix and this series sme_active and
> sev_active should be gone from common code. We should also be able
> to remove the exports x86 has for them.
>
> I'll wait a few days and will then feed the dma-mapping fix to Linus;
> it might make sense to either rebase Thiago's series on top of the
> dma-mapping for-next branch or wait a few days before reposting.
>
Christoph Hellwig <[email protected]> writes:
> On Mon, Jul 15, 2019 at 04:03:17PM +0200, Halil Pasic wrote:
>> > I thought about that but couldn't put my finger on a general concept.
>> > Is it "guest with memory inaccessible to the host"?
>> >
>>
>> Well, force_dma_unencrypted() is a much better name than sev_active():
>> s390 has no AMD SEV, that is sure, but for virtio to work we do need to
>> make our DMA accessible to the hypervisor. Yes, your "guest with memory
>> inaccessible to the host" points in the right direction IMHO.
>> Unfortunately I don't have too many cycles to spend on this right now.
>
> On x86 it means that we need to remove DMA encryption using
> set_memory_decrypted before using the memory for DMA purposes. In the SEV
> case that seems to be so that the hypervisor can access it; in the SME
> case that Tom just fixed, it is because there is an encryption bit set
> in the physical address, and if the device doesn't support a large
> enough DMA address the direct mapping code has to encrypt the pages
> used for the contiguous allocation.
>
>> Being on cc for your patch made me realize that things got broken on
>> s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
>> to benefit from your cleanups. I think with your cleanups and that patch
>> of mine both sev_active() and sme_active() can be removed. Feel free to
>> do so. If not, I can attend to it as well.
>
> Yes, I think with the dma-mapping fix and this series sme_active and
> sev_active should be gone from common code. We should also be able
> to remove the exports x86 has for them.
>
> I'll wait a few days and will then feed the dma-mapping fix to Linus;
> it might make sense to either rebase Thiago's series on top of the
> dma-mapping for-next branch or wait a few days before reposting.
I'll rebase on top of dma-mapping/for-next and do the break-up of patch
2 that you mentioned as well.
--
Thiago Jung Bauermann
IBM Linux Technology Center