Hello,
This version is mostly about splitting up patch 2/3 into three separate
patches, as suggested by Christoph Hellwig. Two other changes are a fix in
patch 1, spotted by Janani, which wasn't selecting ARCH_HAS_MEM_ENCRYPT for
s390, and the removal of the sme_active and sev_active symbol exports, as
suggested by Christoph Hellwig.
These patches are applied on top of today's dma-mapping/for-next.
I don't have a way to test SME, SEV, or s390's PEF, so the patches have only
been build tested.
Changelog
Since v2:

- Patch "x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig"
  - Added "select ARCH_HAS_MEM_ENCRYPT" to config S390. Suggested by Janani.

- Patch "DMA mapping: Move SME handling to x86-specific files"
  - Split up into 3 new patches. Suggested by Christoph Hellwig.

- Patch "swiotlb: Remove call to sme_active()"
  - New patch.

- Patch "dma-mapping: Remove dma_check_mask()"
  - New patch.

- Patch "x86,s390/mm: Move sme_active() and sme_me_mask to x86-specific header"
  - New patch.
  - Removed export of sme_active symbol. Suggested by Christoph Hellwig.

- Patch "fs/core/vmcore: Move sev_active() reference to x86 arch code"
  - Removed export of sev_active symbol. Suggested by Christoph Hellwig.

- Patch "s390/mm: Remove sev_active() function"
  - New patch.

Since v1:

- Patch "x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig"
  - Remove definition of ARCH_HAS_MEM_ENCRYPT from s390/Kconfig as well.
  - Reworded patch title and message a little bit.

- Patch "DMA mapping: Move SME handling to x86-specific files"
  - Adapt s390's <asm/mem_encrypt.h> as well.
  - Remove dma_check_mask() from kernel/dma/mapping.c. Suggested by
    Christoph Hellwig.
Thiago Jung Bauermann (6):
  x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig
  swiotlb: Remove call to sme_active()
  dma-mapping: Remove dma_check_mask()
  x86,s390/mm: Move sme_active() and sme_me_mask to x86-specific header
  fs/core/vmcore: Move sev_active() reference to x86 arch code
  s390/mm: Remove sev_active() function
 arch/Kconfig                        |  3 +++
 arch/s390/Kconfig                   |  4 +---
 arch/s390/include/asm/mem_encrypt.h |  5 +----
 arch/s390/mm/init.c                 |  8 +-------
 arch/x86/Kconfig                    |  4 +---
 arch/x86/include/asm/mem_encrypt.h  | 10 ++++++++++
 arch/x86/kernel/crash_dump_64.c     |  5 +++++
 arch/x86/mm/mem_encrypt.c           |  2 --
 fs/proc/vmcore.c                    |  8 ++++----
 include/linux/crash_dump.h          | 14 ++++++++++++++
 include/linux/mem_encrypt.h         | 15 +--------------
 kernel/dma/mapping.c                |  8 --------
 kernel/dma/swiotlb.c                |  3 +--
 13 files changed, 42 insertions(+), 47 deletions(-)
Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
appear in generic kernel code because it forces non-x86 architectures to
define the sev_active() function, which doesn't make a lot of sense.
To solve this problem, add an x86 elfcorehdr_read() function to override
the generic weak implementation. To do that, it's necessary to make
read_from_oldmem() public so that it can be used outside of vmcore.c.
Also, remove the export for sev_active() since it's only used in files that
won't be built as modules.
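The mechanism can be sketched in standalone C (a model, not kernel code: the
link-time weak/strong symbol override is represented here by two plain
functions, and sev_active() is stubbed to pretend we are a SEV guest):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub standing in for the x86 SEV query; assume a SEV guest here. */
static bool sev_active(void) { return true; }

/* Model of read_from_oldmem(): records whether the caller asked for an
 * encrypted read and returns the number of bytes "read". */
static bool last_read_encrypted;
static long read_from_oldmem(char *buf, size_t count,
                             unsigned long long *ppos,
                             int userbuf, bool encrypted)
{
    (void)buf; (void)ppos; (void)userbuf;
    last_read_encrypted = encrypted;
    return (long)count;
}

/* Generic __weak fallback: assumes no memory encryption. */
static long elfcorehdr_read_generic(char *buf, size_t count,
                                    unsigned long long *ppos)
{
    return read_from_oldmem(buf, count, ppos, 0, false);
}

/* x86 override: the ELF core header was written by the old (possibly
 * encrypted) kernel, so read it encrypted when SEV is active. */
static long elfcorehdr_read_x86(char *buf, size_t count,
                                unsigned long long *ppos)
{
    return read_from_oldmem(buf, count, ppos, 0, sev_active());
}
```

In the real kernel the strong definition in arch/x86/kernel/crash_dump_64.c
replaces the __weak one in fs/proc/vmcore.c at link time; the two function
names above only stand in for that.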
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
 arch/x86/kernel/crash_dump_64.c |  5 +++++
 arch/x86/mm/mem_encrypt.c       |  1 -
 fs/proc/vmcore.c                |  8 ++++----
 include/linux/crash_dump.h      | 14 ++++++++++++++
 include/linux/mem_encrypt.h     |  1 -
 5 files changed, 23 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 22369dd5de3b..045e82e8945b 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
{
return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
}
+
+ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
+{
+ return read_from_oldmem(buf, count, ppos, 0, sev_active());
+}
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 7139f2f43955..b1e823441093 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -349,7 +349,6 @@ bool sev_active(void)
{
return sme_me_mask && sev_enabled;
}
-EXPORT_SYMBOL(sev_active);
/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
bool force_dma_unencrypted(struct device *dev)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 57957c91c6df..ca1f20bedd8c 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
}
/* Reads a page from the oldmem device from given offset. */
-static ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted)
+ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted)
{
unsigned long pfn, offset;
size_t nr_bytes;
@@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
*/
ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, sev_active());
+ return read_from_oldmem(buf, count, ppos, 0, false);
}
/*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index f774c5eb9e3c..4664fc1871de 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
return -EOPNOTSUPP;
}
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+
+#ifdef CONFIG_PROC_VMCORE
+ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted);
+#else
+static inline ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted)
+{
+ return -EOPNOTSUPP;
+}
+#endif /* CONFIG_PROC_VMCORE */
+
#endif /* LINUX_CRASHDUMP_H */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 0c5b0ff9eb29..5c4a18a91f89 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -19,7 +19,6 @@
#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
static inline bool mem_encrypt_active(void) { return false; }
-static inline bool sev_active(void) { return false; }
#endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
sme_active() is an x86-specific function so it's better not to call it from
generic code. Christoph Hellwig mentioned that "There is no reason why we
should have a special debug printk just for one specific reason why there
is a requirement for a large DMA mask.", so just remove dma_check_mask().
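For reference, the condition being removed can be modeled in standalone C
(a sketch, not kernel code; the C-bit position 47 is an assumption matching
current AMD parts):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed encryption mask: C-bit at physical address bit 47. */
static const uint64_t sme_me_mask = 1ULL << 47;

static bool sme_active(void) { return sme_me_mask != 0; }
static uint64_t sme_get_me_mask(void) { return sme_me_mask; }

/* The removed dma_check_mask() warned when a device's DMA mask could not
 * address up to and including the encryption bit; such a device needs
 * SWIOTLB bounce buffers under SME. */
static bool needs_bounce_buffers(uint64_t dma_mask)
{
    return sme_active() && dma_mask < ((sme_get_me_mask() << 1) - 1);
}
```

A 32-bit DMA mask trips the check while a 48-bit or larger mask does not;
the warning was informational only, which is why it can simply go away.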
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
kernel/dma/mapping.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 1f628e7ac709..61eeefbfcb36 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -291,12 +291,6 @@ void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
}
EXPORT_SYMBOL(dma_free_attrs);
-static inline void dma_check_mask(struct device *dev, u64 mask)
-{
- if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1)))
- dev_warn(dev, "SME is active, device will require DMA bounce buffers\n");
-}
-
int dma_supported(struct device *dev, u64 mask)
{
const struct dma_map_ops *ops = get_dma_ops(dev);
@@ -327,7 +321,6 @@ int dma_set_mask(struct device *dev, u64 mask)
return -EIO;
arch_dma_set_mask(dev, mask);
- dma_check_mask(dev, mask);
*dev->dma_mask = mask;
return 0;
}
@@ -345,7 +338,6 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
if (!dma_supported(dev, mask))
return -EIO;
- dma_check_mask(dev, mask);
dev->coherent_dma_mask = mask;
return 0;
}
Now that generic code doesn't reference them, move sme_active() and
sme_me_mask to x86's <asm/mem_encrypt.h>.
Also remove the export for sme_active() since it's only used in files that
won't be built as modules. sme_me_mask on the other hand is used in
arch/x86/kvm/svm.c (via __sme_set() and __psp_pa()) which can be built as a
module so its export needs to stay.
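As context for why the sme_me_mask export must stay: __sme_set() and
__sme_clr() just OR in or mask out the encryption bit. A standalone sketch
(C-bit position 47 is an assumed value, not taken from this series):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed C-bit position; the real kernel sets sme_me_mask at boot. */
static const uint64_t sme_me_mask = 1ULL << 47;

/* Mirrors the kernel's helpers in <linux/mem_encrypt.h>: mark a
 * physical address as encrypted or decrypted. */
#define __sme_set(x) ((x) | sme_me_mask)
#define __sme_clr(x) ((x) & ~sme_me_mask)
```

Because modules such as kvm-amd use these macros on physical addresses,
they pull in sme_me_mask at module link time, unlike sme_active().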
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
 arch/s390/include/asm/mem_encrypt.h |  4 +---
 arch/x86/include/asm/mem_encrypt.h  | 10 ++++++++++
 arch/x86/mm/mem_encrypt.c           |  1 -
 include/linux/mem_encrypt.h         | 14 +-------------
 4 files changed, 12 insertions(+), 17 deletions(-)
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index 3eb018508190..ff813a56bc30 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,9 +4,7 @@
#ifndef __ASSEMBLY__
-#define sme_me_mask 0ULL
-
-static inline bool sme_active(void) { return false; }
+static inline bool mem_encrypt_active(void) { return false; }
extern bool sev_active(void);
int set_memory_encrypted(unsigned long addr, int numpages);
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 0c196c47d621..848ce43b9040 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -92,6 +92,16 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
+static inline bool mem_encrypt_active(void)
+{
+ return sme_me_mask;
+}
+
+static inline u64 sme_get_me_mask(void)
+{
+ return sme_me_mask;
+}
+
#endif /* __ASSEMBLY__ */
#endif /* __X86_MEM_ENCRYPT_H__ */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c805f0a5c16e..7139f2f43955 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -344,7 +344,6 @@ bool sme_active(void)
{
return sme_me_mask && !sev_enabled;
}
-EXPORT_SYMBOL(sme_active);
bool sev_active(void)
{
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 470bd53a89df..0c5b0ff9eb29 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -18,23 +18,11 @@
#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-#define sme_me_mask 0ULL
-
-static inline bool sme_active(void) { return false; }
+static inline bool mem_encrypt_active(void) { return false; }
static inline bool sev_active(void) { return false; }
#endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
-static inline bool mem_encrypt_active(void)
-{
- return sme_me_mask;
-}
-
-static inline u64 sme_get_me_mask(void)
-{
- return sme_me_mask;
-}
-
#ifdef CONFIG_AMD_MEM_ENCRYPT
/*
* The __sme_set() and __sme_clr() macros are useful for adding or removing
powerpc is also going to use this feature, so put it in a generic location.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
 arch/Kconfig      | 3 +++
 arch/s390/Kconfig | 4 +---
 arch/x86/Kconfig  | 4 +---
 3 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index e8d19c3cb91f..8fc285180848 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -935,6 +935,9 @@ config LOCK_EVENT_COUNTS
the chance of application behavior change because of timing
differences. The counts are reported via debugfs.
+config ARCH_HAS_MEM_ENCRYPT
+ bool
+
source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a4ad2733eedf..f43319c44454 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -1,7 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
-config ARCH_HAS_MEM_ENCRYPT
- def_bool y
-
config MMU
def_bool y
@@ -68,6 +65,7 @@ config S390
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_GIGANTIC_PAGE
select ARCH_HAS_KCOV
+ select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_STRICT_KERNEL_RWX
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index c9f331bb538b..5d3295f2df94 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -68,6 +68,7 @@ config X86
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_KCOV if X86_64
+ select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_MEMBARRIER_SYNC_CORE
select ARCH_HAS_PMEM_API if X86_64
select ARCH_HAS_PTE_SPECIAL
@@ -1520,9 +1521,6 @@ config X86_CPA_STATISTICS
helps to determine the effectiveness of preserving large and huge
page mappings when mapping protections are changed.
-config ARCH_HAS_MEM_ENCRYPT
- def_bool y
-
config AMD_MEM_ENCRYPT
bool "AMD Secure Memory Encryption (SME) support"
depends on X86_64 && CPU_SUP_AMD
sme_active() is an x86-specific function so it's better not to call it from
generic code.
There's no need to mention which memory encryption feature is active, so
just use a more generic message. Besides, other architectures will have
different names for similar technology.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
kernel/dma/swiotlb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 62fa5a82a065..e52401f94e91 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -459,8 +459,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
if (mem_encrypt_active())
- pr_warn_once("%s is active and system is using DMA bounce buffers\n",
- sme_active() ? "SME" : "SEV");
+ pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
mask = dma_get_seg_boundary(hwdev);
All references to sev_active() were moved to arch/x86 so we don't need to
define it for s390 anymore.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
 arch/s390/include/asm/mem_encrypt.h | 1 -
 arch/s390/mm/init.c                 | 8 +-------
 2 files changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index ff813a56bc30..2542cbf7e2d1 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -5,7 +5,6 @@
#ifndef __ASSEMBLY__
static inline bool mem_encrypt_active(void) { return false; }
-extern bool sev_active(void);
int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 78c319c5ce48..6286eb3e815b 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -155,15 +155,9 @@ int set_memory_decrypted(unsigned long addr, int numpages)
return 0;
}
-/* are we a protected virtualization guest? */
-bool sev_active(void)
-{
- return is_prot_virt_guest();
-}
-
bool force_dma_unencrypted(struct device *dev)
{
- return sev_active();
+ return is_prot_virt_guest();
}
/* protected virtualization */
On Thu, Jul 18, 2019 at 12:28:54AM -0300, Thiago Jung Bauermann wrote:
> sme_active() is an x86-specific function so it's better not to call it from
> generic code.
>
> There's no need to mention which memory encryption feature is active, so
> just use a more generic message. Besides, other architectures will have
> different names for similar technology.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
On Thu, Jul 18, 2019 at 12:28:55AM -0300, Thiago Jung Bauermann wrote:
> sme_active() is an x86-specific function so it's better not to call it from
> generic code. Christoph Hellwig mentioned that "There is no reason why we
> should have a special debug printk just for one specific reason why there
> is a requirement for a large DMA mask.", so just remove dma_check_mask().
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
On Thu, Jul 18, 2019 at 12:28:56AM -0300, Thiago Jung Bauermann wrote:
> Now that generic code doesn't reference them, move sme_active() and
> sme_me_mask to x86's <asm/mem_encrypt.h>.
>
> Also remove the export for sme_active() since it's only used in files that
> won't be built as modules. sme_me_mask on the other hand is used in
> arch/x86/kvm/svm.c (via __sme_set() and __psp_pa()) which can be built as a
> module so its export needs to stay.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
> -/* are we a protected virtualization guest? */
> -bool sev_active(void)
> -{
> - return is_prot_virt_guest();
> -}
> -
> bool force_dma_unencrypted(struct device *dev)
> {
> - return sev_active();
> + return is_prot_virt_guest();
> }
Do we want to keep the comment for force_dma_unencrypted?
Otherwise looks good:
Reviewed-by: Christoph Hellwig <[email protected]>
On Thu, Jul 18, 2019 at 12:28:57AM -0300, Thiago Jung Bauermann wrote:
> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
> appear in generic kernel code because it forces non-x86 architectures to
> define the sev_active() function, which doesn't make a lot of sense.
>
> To solve this problem, add an x86 elfcorehdr_read() function to override
> the generic weak implementation. To do that, it's necessary to make
> read_from_oldmem() public so that it can be used outside of vmcore.c.
>
> Also, remove the export for sev_active() since it's only used in files that
> won't be built as modules.
I have to say I find the __weak overrides of the vmcore files very
confusing and wish we had a better scheme there. But as this fits
into that scheme and allows removing the AMD SME vs SEV knowledge from
the core I'm fine with it.
Reviewed-by: Christoph Hellwig <[email protected]>
On Thu, 18 Jul 2019 10:44:56 +0200
Christoph Hellwig <[email protected]> wrote:
> > -/* are we a protected virtualization guest? */
> > -bool sev_active(void)
> > -{
> > - return is_prot_virt_guest();
> > -}
> > -
> > bool force_dma_unencrypted(struct device *dev)
> > {
> > - return sev_active();
> > + return is_prot_virt_guest();
> > }
>
> Do we want to keep the comment for force_dma_unencrypted?
Yes we do. With the comment transferred:
Reviewed-by: Halil Pasic <[email protected]>
>
> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <[email protected]>
Christoph Hellwig <[email protected]> writes:
>> -/* are we a protected virtualization guest? */
>> -bool sev_active(void)
>> -{
>> - return is_prot_virt_guest();
>> -}
>> -
>> bool force_dma_unencrypted(struct device *dev)
>> {
>> - return sev_active();
>> + return is_prot_virt_guest();
>> }
>
> Do we want to keep the comment for force_dma_unencrypted?
>
> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <[email protected]>
Thank you for your review on all these patches.
--
Thiago Jung Bauermann
IBM Linux Technology Center
Halil Pasic <[email protected]> writes:
> On Thu, 18 Jul 2019 10:44:56 +0200
> Christoph Hellwig <[email protected]> wrote:
>
>> > -/* are we a protected virtualization guest? */
>> > -bool sev_active(void)
>> > -{
>> > - return is_prot_virt_guest();
>> > -}
>> > -
>> > bool force_dma_unencrypted(struct device *dev)
>> > {
>> > - return sev_active();
>> > + return is_prot_virt_guest();
>> > }
>>
>> Do we want to keep the comment for force_dma_unencrypted?
>
> Yes we do. With the comment transferred:
>
> Reviewed-by: Halil Pasic <[email protected]>
Thanks for your review.
Here is the new version. Should I send a new patch series with this
patch and the Reviewed-by on the other ones?
--
Thiago Jung Bauermann
IBM Linux Technology Center
From 1726205c73fb9e29feaa3d8909c5a1b0f2054c04 Mon Sep 17 00:00:00 2001
From: Thiago Jung Bauermann <[email protected]>
Date: Mon, 15 Jul 2019 20:50:43 -0300
Subject: [PATCH v4] s390/mm: Remove sev_active() function
All references to sev_active() were moved to arch/x86 so we don't need to
define it for s390 anymore.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Halil Pasic <[email protected]>
---
 arch/s390/include/asm/mem_encrypt.h | 1 -
 arch/s390/mm/init.c                 | 7 +------
 2 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index ff813a56bc30..2542cbf7e2d1 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -5,7 +5,6 @@
#ifndef __ASSEMBLY__
static inline bool mem_encrypt_active(void) { return false; }
-extern bool sev_active(void);
int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 78c319c5ce48..6c43a1ed1beb 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -156,14 +156,9 @@ int set_memory_decrypted(unsigned long addr, int numpages)
}
/* are we a protected virtualization guest? */
-bool sev_active(void)
-{
- return is_prot_virt_guest();
-}
-
bool force_dma_unencrypted(struct device *dev)
{
- return sev_active();
+ return is_prot_virt_guest();
}
/* protected virtualization */
On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
> sme_active() is an x86-specific function so it's better not to call it from
> generic code.
>
> There's no need to mention which memory encryption feature is active, so
> just use a more generic message. Besides, other architectures will have
> different names for similar technology.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
> ---
> kernel/dma/swiotlb.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 62fa5a82a065..e52401f94e91 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -459,8 +459,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
>
> if (mem_encrypt_active())
> - pr_warn_once("%s is active and system is using DMA bounce buffers\n",
> - sme_active() ? "SME" : "SEV");
> + pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
>
> mask = dma_get_seg_boundary(hwdev);
>
>
On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
> sme_active() is an x86-specific function so it's better not to call it from
> generic code. Christoph Hellwig mentioned that "There is no reason why we
> should have a special debug printk just for one specific reason why there
> is a requirement for a large DMA mask.", so just remove dma_check_mask().
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
> ---
> kernel/dma/mapping.c | 8 --------
> 1 file changed, 8 deletions(-)
>
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1f628e7ac709..61eeefbfcb36 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -291,12 +291,6 @@ void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
> }
> EXPORT_SYMBOL(dma_free_attrs);
>
> -static inline void dma_check_mask(struct device *dev, u64 mask)
> -{
> - if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1)))
> - dev_warn(dev, "SME is active, device will require DMA bounce buffers\n");
> -}
> -
> int dma_supported(struct device *dev, u64 mask)
> {
> const struct dma_map_ops *ops = get_dma_ops(dev);
> @@ -327,7 +321,6 @@ int dma_set_mask(struct device *dev, u64 mask)
> return -EIO;
>
> arch_dma_set_mask(dev, mask);
> - dma_check_mask(dev, mask);
> *dev->dma_mask = mask;
> return 0;
> }
> @@ -345,7 +338,6 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
> if (!dma_supported(dev, mask))
> return -EIO;
>
> - dma_check_mask(dev, mask);
> dev->coherent_dma_mask = mask;
> return 0;
> }
>
On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
> Now that generic code doesn't reference them, move sme_active() and
> sme_me_mask to x86's <asm/mem_encrypt.h>.
>
> Also remove the export for sme_active() since it's only used in files that
> won't be built as modules. sme_me_mask on the other hand is used in
> arch/x86/kvm/svm.c (via __sme_set() and __psp_pa()) which can be built as a
> module so its export needs to stay.
You may want to try and build the out-of-tree nvidia driver just to be
sure you can remove the EXPORT_SYMBOL(). But I believe that was related
to the DMA mask check, which, now that it's removed, may no longer be a
problem.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
> ---
> arch/s390/include/asm/mem_encrypt.h | 4 +---
> arch/x86/include/asm/mem_encrypt.h | 10 ++++++++++
> arch/x86/mm/mem_encrypt.c | 1 -
> include/linux/mem_encrypt.h | 14 +-------------
> 4 files changed, 12 insertions(+), 17 deletions(-)
>
> diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
> index 3eb018508190..ff813a56bc30 100644
> --- a/arch/s390/include/asm/mem_encrypt.h
> +++ b/arch/s390/include/asm/mem_encrypt.h
> @@ -4,9 +4,7 @@
>
> #ifndef __ASSEMBLY__
>
> -#define sme_me_mask 0ULL
> -
> -static inline bool sme_active(void) { return false; }
> +static inline bool mem_encrypt_active(void) { return false; }
> extern bool sev_active(void);
>
> int set_memory_encrypted(unsigned long addr, int numpages);
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 0c196c47d621..848ce43b9040 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -92,6 +92,16 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
>
> extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
>
> +static inline bool mem_encrypt_active(void)
> +{
> + return sme_me_mask;
> +}
> +
> +static inline u64 sme_get_me_mask(void)
> +{
> + return sme_me_mask;
> +}
> +
> #endif /* __ASSEMBLY__ */
>
> #endif /* __X86_MEM_ENCRYPT_H__ */
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index c805f0a5c16e..7139f2f43955 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -344,7 +344,6 @@ bool sme_active(void)
> {
> return sme_me_mask && !sev_enabled;
> }
> -EXPORT_SYMBOL(sme_active);
>
> bool sev_active(void)
> {
> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
> index 470bd53a89df..0c5b0ff9eb29 100644
> --- a/include/linux/mem_encrypt.h
> +++ b/include/linux/mem_encrypt.h
> @@ -18,23 +18,11 @@
>
> #else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>
> -#define sme_me_mask 0ULL
> -
> -static inline bool sme_active(void) { return false; }
> +static inline bool mem_encrypt_active(void) { return false; }
> static inline bool sev_active(void) { return false; }
>
> #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
>
> -static inline bool mem_encrypt_active(void)
> -{
> - return sme_me_mask;
> -}
> -
> -static inline u64 sme_get_me_mask(void)
> -{
> - return sme_me_mask;
> -}
> -
> #ifdef CONFIG_AMD_MEM_ENCRYPT
> /*
> * The __sme_set() and __sme_clr() macros are useful for adding or removing
>
On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
> appear in generic kernel code because it forces non-x86 architectures to
> define the sev_active() function, which doesn't make a lot of sense.
>
> To solve this problem, add an x86 elfcorehdr_read() function to override
> the generic weak implementation. To do that, it's necessary to make
> read_from_oldmem() public so that it can be used outside of vmcore.c.
>
> Also, remove the export for sev_active() since it's only used in files that
> won't be built as modules.
>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
Adding Lianbo and Baoquan, who recently worked on this, for their review.
Thanks,
Tom
> ---
> arch/x86/kernel/crash_dump_64.c | 5 +++++
> arch/x86/mm/mem_encrypt.c | 1 -
> fs/proc/vmcore.c | 8 ++++----
> include/linux/crash_dump.h | 14 ++++++++++++++
> include/linux/mem_encrypt.h | 1 -
> 5 files changed, 23 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
> index 22369dd5de3b..045e82e8945b 100644
> --- a/arch/x86/kernel/crash_dump_64.c
> +++ b/arch/x86/kernel/crash_dump_64.c
> @@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> {
> return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
> }
> +
> +ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> +{
> + return read_from_oldmem(buf, count, ppos, 0, sev_active());
> +}
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 7139f2f43955..b1e823441093 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -349,7 +349,6 @@ bool sev_active(void)
> {
> return sme_me_mask && sev_enabled;
> }
> -EXPORT_SYMBOL(sev_active);
>
> /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
> bool force_dma_unencrypted(struct device *dev)
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 57957c91c6df..ca1f20bedd8c 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
> }
>
> /* Reads a page from the oldmem device from given offset. */
> -static ssize_t read_from_oldmem(char *buf, size_t count,
> - u64 *ppos, int userbuf,
> - bool encrypted)
> +ssize_t read_from_oldmem(char *buf, size_t count,
> + u64 *ppos, int userbuf,
> + bool encrypted)
> {
> unsigned long pfn, offset;
> size_t nr_bytes;
> @@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
> */
> ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> {
> - return read_from_oldmem(buf, count, ppos, 0, sev_active());
> + return read_from_oldmem(buf, count, ppos, 0, false);
> }
>
> /*
> diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
> index f774c5eb9e3c..4664fc1871de 100644
> --- a/include/linux/crash_dump.h
> +++ b/include/linux/crash_dump.h
> @@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
> return -EOPNOTSUPP;
> }
> #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
> +
> +#ifdef CONFIG_PROC_VMCORE
> +ssize_t read_from_oldmem(char *buf, size_t count,
> + u64 *ppos, int userbuf,
> + bool encrypted);
> +#else
> +static inline ssize_t read_from_oldmem(char *buf, size_t count,
> + u64 *ppos, int userbuf,
> + bool encrypted)
> +{
> + return -EOPNOTSUPP;
> +}
> +#endif /* CONFIG_PROC_VMCORE */
> +
> #endif /* LINUX_CRASHDUMP_H */
> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
> index 0c5b0ff9eb29..5c4a18a91f89 100644
> --- a/include/linux/mem_encrypt.h
> +++ b/include/linux/mem_encrypt.h
> @@ -19,7 +19,6 @@
> #else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>
> static inline bool mem_encrypt_active(void) { return false; }
> -static inline bool sev_active(void) { return false; }
>
> #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
>
>
On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
> Hello,
>
> This version is mostly about splitting up patch 2/3 into three separate
> patches, as suggested by Christoph Hellwig. Two other changes are a fix in
> patch 1 which wasn't selecting ARCH_HAS_MEM_ENCRYPT for s390 spotted by
> Janani and removal of sme_active and sev_active symbol exports as suggested
> by Christoph Hellwig.
>
> These patches are applied on top of today's dma-mapping/for-next.
>
> I don't have a way to test SME, SEV, nor s390's PEF so the patches have only
> been build tested.
I'll try and get this tested quickly to be sure everything works for SME
and SEV.
Thanks,
Tom
>
> Changelog
>
> Since v2:
>
> - Patch "x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig"
> - Added "select ARCH_HAS_MEM_ENCRYPT" to config S390. Suggested by Janani.
>
> - Patch "DMA mapping: Move SME handling to x86-specific files"
> - Split up into 3 new patches. Suggested by Christoph Hellwig.
>
> - Patch "swiotlb: Remove call to sme_active()"
> - New patch.
>
> - Patch "dma-mapping: Remove dma_check_mask()"
> - New patch.
>
> - Patch "x86,s390/mm: Move sme_active() and sme_me_mask to x86-specific header"
> - New patch.
> - Removed export of sme_active symbol. Suggested by Christoph Hellwig.
>
> - Patch "fs/core/vmcore: Move sev_active() reference to x86 arch code"
> - Removed export of sev_active symbol. Suggested by Christoph Hellwig.
>
> - Patch "s390/mm: Remove sev_active() function"
> - New patch.
>
> Since v1:
>
> - Patch "x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig"
> - Remove definition of ARCH_HAS_MEM_ENCRYPT from s390/Kconfig as well.
> - Reworded patch title and message a little bit.
>
> - Patch "DMA mapping: Move SME handling to x86-specific files"
> - Adapt s390's <asm/mem_encrypt.h> as well.
> - Remove dma_check_mask() from kernel/dma/mapping.c. Suggested by
> Christoph Hellwig.
>
> Thiago Jung Bauermann (6):
> x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig
> swiotlb: Remove call to sme_active()
> dma-mapping: Remove dma_check_mask()
> x86,s390/mm: Move sme_active() and sme_me_mask to x86-specific header
> fs/core/vmcore: Move sev_active() reference to x86 arch code
> s390/mm: Remove sev_active() function
>
> arch/Kconfig | 3 +++
> arch/s390/Kconfig | 4 +---
> arch/s390/include/asm/mem_encrypt.h | 5 +----
> arch/s390/mm/init.c | 8 +-------
> arch/x86/Kconfig | 4 +---
> arch/x86/include/asm/mem_encrypt.h | 10 ++++++++++
> arch/x86/kernel/crash_dump_64.c | 5 +++++
> arch/x86/mm/mem_encrypt.c | 2 --
> fs/proc/vmcore.c | 8 ++++----
> include/linux/crash_dump.h | 14 ++++++++++++++
> include/linux/mem_encrypt.h | 15 +--------------
> kernel/dma/mapping.c | 8 --------
> kernel/dma/swiotlb.c | 3 +--
> 13 files changed, 42 insertions(+), 47 deletions(-)
>
On Thu, Jul 18, 2019 at 05:42:18PM +0000, Lendacky, Thomas wrote:
> You may want to try and build the out-of-tree nvidia driver just to be
> sure you can remove the EXPORT_SYMBOL(). But I believe that was related
> to the DMA mask check which, now removed, may no longer be a problem.
Out-of-tree drivers simply don't matter for kernel development decisions.
Lendacky, Thomas <[email protected]> writes:
> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>> Hello,
>>
>> This version is mostly about splitting up patch 2/3 into three separate
>> patches, as suggested by Christoph Hellwig. Two other changes are a fix in
>> patch 1 which wasn't selecting ARCH_HAS_MEM_ENCRYPT for s390 spotted by
>> Janani and removal of sme_active and sev_active symbol exports as suggested
>> by Christoph Hellwig.
>>
>> These patches are applied on top of today's dma-mapping/for-next.
>>
>> I don't have a way to test SME, SEV, nor s390's PEF so the patches have only
>> been build tested.
>
> I'll try and get this tested quickly to be sure everything works for SME
> and SEV.
Thanks! And thanks for reviewing the patches.
--
Thiago Jung Bauermann
IBM Linux Technology Center
On 2019-07-19 01:47, Lendacky, Thomas wrote:
> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
>> appear in generic kernel code because it forces non-x86 architectures to
>> define the sev_active() function, which doesn't make a lot of sense.
>>
>> To solve this problem, add an x86 elfcorehdr_read() function to override
>> the generic weak implementation. To do that, it's necessary to make
>> read_from_oldmem() public so that it can be used outside of vmcore.c.
>>
>> Also, remove the export for sev_active() since it's only used in files that
>> won't be built as modules.
>>
>> Signed-off-by: Thiago Jung Bauermann <[email protected]>
>
> Adding Lianbo and Baoquan, who recently worked on this, for their review.
>
This change looks good to me.
Reviewed-by: Lianbo Jiang <[email protected]>
Thanks.
Lianbo
> Thanks,
> Tom
>
>> ---
>> arch/x86/kernel/crash_dump_64.c | 5 +++++
>> arch/x86/mm/mem_encrypt.c | 1 -
>> fs/proc/vmcore.c | 8 ++++----
>> include/linux/crash_dump.h | 14 ++++++++++++++
>> include/linux/mem_encrypt.h | 1 -
>> 5 files changed, 23 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
>> index 22369dd5de3b..045e82e8945b 100644
>> --- a/arch/x86/kernel/crash_dump_64.c
>> +++ b/arch/x86/kernel/crash_dump_64.c
>> @@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> {
>> return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
>> }
>> +
>> +ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
>> +{
>> + return read_from_oldmem(buf, count, ppos, 0, sev_active());
>> +}
>> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>> index 7139f2f43955..b1e823441093 100644
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -349,7 +349,6 @@ bool sev_active(void)
>> {
>> return sme_me_mask && sev_enabled;
>> }
>> -EXPORT_SYMBOL(sev_active);
>>
>> /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
>> bool force_dma_unencrypted(struct device *dev)
>> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
>> index 57957c91c6df..ca1f20bedd8c 100644
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
>> }
>>
>> /* Reads a page from the oldmem device from given offset. */
>> -static ssize_t read_from_oldmem(char *buf, size_t count,
>> - u64 *ppos, int userbuf,
>> - bool encrypted)
>> +ssize_t read_from_oldmem(char *buf, size_t count,
>> + u64 *ppos, int userbuf,
>> + bool encrypted)
>> {
>> unsigned long pfn, offset;
>> size_t nr_bytes;
>> @@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
>> */
>> ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
>> {
>> - return read_from_oldmem(buf, count, ppos, 0, sev_active());
>> + return read_from_oldmem(buf, count, ppos, 0, false);
>> }
>>
>> /*
>> diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
>> index f774c5eb9e3c..4664fc1871de 100644
>> --- a/include/linux/crash_dump.h
>> +++ b/include/linux/crash_dump.h
>> @@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
>> return -EOPNOTSUPP;
>> }
>> #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
>> +
>> +#ifdef CONFIG_PROC_VMCORE
>> +ssize_t read_from_oldmem(char *buf, size_t count,
>> + u64 *ppos, int userbuf,
>> + bool encrypted);
>> +#else
>> +static inline ssize_t read_from_oldmem(char *buf, size_t count,
>> + u64 *ppos, int userbuf,
>> + bool encrypted)
>> +{
>> + return -EOPNOTSUPP;
>> +}
>> +#endif /* CONFIG_PROC_VMCORE */
>> +
>> #endif /* LINUX_CRASHDUMP_H */
>> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
>> index 0c5b0ff9eb29..5c4a18a91f89 100644
>> --- a/include/linux/mem_encrypt.h
>> +++ b/include/linux/mem_encrypt.h
>> @@ -19,7 +19,6 @@
>> #else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>>
>> static inline bool mem_encrypt_active(void) { return false; }
>> -static inline bool sev_active(void) { return false; }
>>
>> #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
>>
>>
On 7/18/19 2:44 PM, Thiago Jung Bauermann wrote:
>
> Lendacky, Thomas <[email protected]> writes:
>
>> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>>> Hello,
>>>
>>> This version is mostly about splitting up patch 2/3 into three separate
>>> patches, as suggested by Christoph Hellwig. Two other changes are a fix in
>>> patch 1 which wasn't selecting ARCH_HAS_MEM_ENCRYPT for s390 spotted by
>>> Janani and removal of sme_active and sev_active symbol exports as suggested
>>> by Christoph Hellwig.
>>>
>>> These patches are applied on top of today's dma-mapping/for-next.
>>>
>>> I don't have a way to test SME, SEV, nor s390's PEF so the patches have only
>>> been build tested.
>>
>> I'll try and get this tested quickly to be sure everything works for SME
>> and SEV.
Built and tested both SME and SEV and everything appears to be working
well (not extensive testing, but should be good enough).
Thanks,
Tom
>
> Thanks! And thanks for reviewing the patches.
>
Lendacky, Thomas <[email protected]> writes:
> On 7/18/19 2:44 PM, Thiago Jung Bauermann wrote:
>>
>> Lendacky, Thomas <[email protected]> writes:
>>
>>> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>>>> Hello,
>>>>
>>>> This version is mostly about splitting up patch 2/3 into three separate
>>>> patches, as suggested by Christoph Hellwig. Two other changes are a fix in
>>>> patch 1 which wasn't selecting ARCH_HAS_MEM_ENCRYPT for s390 spotted by
>>>> Janani and removal of sme_active and sev_active symbol exports as suggested
>>>> by Christoph Hellwig.
>>>>
>>>> These patches are applied on top of today's dma-mapping/for-next.
>>>>
>>>> I don't have a way to test SME, SEV, nor s390's PEF so the patches have only
>>>> been build tested.
>>>
>>> I'll try and get this tested quickly to be sure everything works for SME
>>> and SEV.
>
> Built and tested both SME and SEV and everything appears to be working
> well (not extensive testing, but should be good enough).
Great news. Thanks for testing!
--
Thiago Jung Bauermann
IBM Linux Technology Center
Hello Lianbo,
lijiang <[email protected]> writes:
> On 2019-07-19 01:47, Lendacky, Thomas wrote:
>> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>>> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
>>> appear in generic kernel code because it forces non-x86 architectures to
>>> define the sev_active() function, which doesn't make a lot of sense.
>>>
>>> To solve this problem, add an x86 elfcorehdr_read() function to override
>>> the generic weak implementation. To do that, it's necessary to make
>>> read_from_oldmem() public so that it can be used outside of vmcore.c.
>>>
>>> Also, remove the export for sev_active() since it's only used in files that
>>> won't be built as modules.
>>>
>>> Signed-off-by: Thiago Jung Bauermann <[email protected]>
>>
>> Adding Lianbo and Baoquan, who recently worked on this, for their review.
>>
>
> This change looks good to me.
>
> Reviewed-by: Lianbo Jiang <[email protected]>
Thanks for your review!
--
Thiago Jung Bauermann
IBM Linux Technology Center