2021-07-08 18:15:55

by Luck, Tony

Subject: [PATCH 0/4] Basic recovery for machine checks inside SGX

Cover the easy cases:
1) Memory errors reported by the patrol scrubber in unused SGX pages
2) Machine checks due to poison consumption from SGX_PAGE_TYPE_REG pages
3) Poison consumed in an enclave inside a guest: just kill the guest

Tony Luck (4):
x86/sgx: Track phase and type of SGX EPC pages
x86/sgx: Add basic infrastructure to recover from errors in SGX memory
x86/sgx: Hook sgx_memory_failure() into mainline code
x86/sgx: Add hook to error injection address validation

.../firmware-guide/acpi/apei/einj.rst | 19 +++
arch/x86/include/asm/sgx.h | 6 +
arch/x86/kernel/cpu/sgx/encl.c | 4 +-
arch/x86/kernel/cpu/sgx/ioctl.c | 4 +-
arch/x86/kernel/cpu/sgx/main.c | 147 +++++++++++++++++-
arch/x86/kernel/cpu/sgx/sgx.h | 17 +-
arch/x86/kernel/cpu/sgx/virt.c | 11 +-
drivers/acpi/apei/einj.c | 3 +-
include/linux/mm.h | 15 ++
mm/memory-failure.c | 4 +
10 files changed, 219 insertions(+), 11 deletions(-)


base-commit: 62fb9874f5da54fdb243003b386128037319b219
--
2.29.2


2021-07-08 18:16:19

by Luck, Tony

Subject: [PATCH 4/4] x86/sgx: Add hook to error injection address validation

SGX reserved memory does not appear in the standard address maps.

Add a hook to call into the SGX code to check whether an address is
located in SGX memory.

There are other challenges in injecting errors into SGX. Update the
documentation with the sequence of operations needed to inject an error.

Signed-off-by: Tony Luck <[email protected]>
---
.../firmware-guide/acpi/apei/einj.rst | 19 +++++++++++++++++++
drivers/acpi/apei/einj.c | 3 ++-
include/linux/mm.h | 6 ++++++
3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
index c042176e1707..55e2331a6438 100644
--- a/Documentation/firmware-guide/acpi/apei/einj.rst
+++ b/Documentation/firmware-guide/acpi/apei/einj.rst
@@ -181,5 +181,24 @@ You should see something like this in dmesg::
[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)

+Special notes for injection into SGX enclaves:
+
+There may be a separate BIOS setup option to enable SGX injection.
+
+The injection process consists of setting some special memory controller
+trigger that will inject the error on the next write to the target
+address. But the h/w prevents any software outside of an SGX enclave
+from accessing enclave pages (even BIOS SMM mode).
+
+The following sequence can be used:
+ 1) Determine physical address of enclave page
+ 2) Use "notrigger=1" mode to inject (this will set up
+ the injection address, but will not actually inject)
+ 3) Enter the enclave
+ 4) Store data to the virtual address matching physical address from step 1
+ 5) Execute CLFLUSH for that virtual address
+ 6) Spin delay for 250ms
+ 7) Read from the virtual address. This will trigger the error
+
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5 and ACPI 5.0, section 18.6.
diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
index 328e8aeece6c..fb634219e232 100644
--- a/drivers/acpi/apei/einj.c
+++ b/drivers/acpi/apei/einj.c
@@ -544,7 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
!= REGION_INTERSECTS) &&
(region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
- != REGION_INTERSECTS)))
+ != REGION_INTERSECTS) &&
+ !sgx_is_epc_page(base_addr)))
return -EINVAL;

inject:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1b9d0912942a..47eb960516cf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3253,11 +3253,17 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)

#ifdef CONFIG_X86_SGX
int sgx_memory_failure(unsigned long pfn, int flags);
+bool sgx_is_epc_page(u64 paddr);
#else
static inline int sgx_memory_failure(unsigned long pfn, int flags)
{
return -ENXIO;
}
+
+static inline bool sgx_is_epc_page(u64 paddr)
+{
+ return false;
+}
#endif

#endif /* __KERNEL__ */
--
2.29.2

2021-07-08 18:19:06

by Luck, Tony

Subject: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

Memory errors can be reported either synchronously as memory is accessed,
or asynchronously by speculative access or by a memory controller page
scrubber. The life cycle of an EPC page takes it through:
dirty -> free -> in-use -> free.

Memory errors are reported using physical addresses. It is a simple
matter to find which sgx_epc_page structure maps a given address.
But then recovery code needs to be able to determine the current use of
the page to take the appropriate recovery action. Within the "in-use"
phase different actions are needed based on how the page is used in
the enclave.

Add new flag bits to describe the phase (with an extra bit for the new
phase of "poisoned"). Drop pages marked as poisoned instead of adding
them to a free list to make sure they are not re-used.

Add a type field to struct sgx_epc_page recording how an in-use page
has been allocated. Re-use "enum sgx_page_type" for this type, with a
couple of additions for s/w types.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/include/asm/sgx.h | 6 ++++++
arch/x86/kernel/cpu/sgx/encl.c | 4 ++--
arch/x86/kernel/cpu/sgx/ioctl.c | 4 ++--
arch/x86/kernel/cpu/sgx/main.c | 21 +++++++++++++++++++--
arch/x86/kernel/cpu/sgx/sgx.h | 14 ++++++++++++--
arch/x86/kernel/cpu/sgx/virt.c | 2 +-
6 files changed, 42 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/sgx.h b/arch/x86/include/asm/sgx.h
index 9c31e0ebc55b..9619a6d77a83 100644
--- a/arch/x86/include/asm/sgx.h
+++ b/arch/x86/include/asm/sgx.h
@@ -216,6 +216,8 @@ struct sgx_pageinfo {
* %SGX_PAGE_TYPE_REG: a regular page
* %SGX_PAGE_TYPE_VA: a VA page
* %SGX_PAGE_TYPE_TRIM: a page in trimmed state
+ *
+ * Also used to track current use of &struct sgx_epc_page
*/
enum sgx_page_type {
SGX_PAGE_TYPE_SECS,
@@ -223,6 +225,10 @@ enum sgx_page_type {
SGX_PAGE_TYPE_REG,
SGX_PAGE_TYPE_VA,
SGX_PAGE_TYPE_TRIM,
+
+ /* sgx_epc_page.type */
+ SGX_PAGE_TYPE_FREE = 100,
+ SGX_PAGE_TYPE_KVM = 101,
};

#define SGX_NR_PAGE_TYPES 5
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 3be203297988..abf6e1a704c0 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -72,7 +72,7 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
struct sgx_epc_page *epc_page;
int ret;

- epc_page = sgx_alloc_epc_page(encl_page, false);
+ epc_page = sgx_alloc_epc_page(encl_page, SGX_PAGE_TYPE_REG, false);
if (IS_ERR(epc_page))
return epc_page;

@@ -679,7 +679,7 @@ struct sgx_epc_page *sgx_alloc_va_page(void)
struct sgx_epc_page *epc_page;
int ret;

- epc_page = sgx_alloc_epc_page(NULL, true);
+ epc_page = sgx_alloc_epc_page(NULL, SGX_PAGE_TYPE_VA, true);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..a74ae00194cc 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -83,7 +83,7 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)

encl->backing = backing;

- secs_epc = sgx_alloc_epc_page(&encl->secs, true);
+ secs_epc = sgx_alloc_epc_page(&encl->secs, SGX_PAGE_TYPE_SECS, true);
if (IS_ERR(secs_epc)) {
ret = PTR_ERR(secs_epc);
goto err_out_backing;
@@ -300,7 +300,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long src,
if (IS_ERR(encl_page))
return PTR_ERR(encl_page);

- epc_page = sgx_alloc_epc_page(encl_page, true);
+ epc_page = sgx_alloc_epc_page(encl_page, SGX_PAGE_TYPE_REG, true);
if (IS_ERR(epc_page)) {
kfree(encl_page);
return PTR_ERR(epc_page);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 63d3de02bbcc..643df87b3e01 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -401,7 +401,12 @@ static void sgx_reclaim_pages(void)
section = &sgx_epc_sections[epc_page->section];
node = section->node;

+ /* drop poison pages instead of adding to free list */
+ if (epc_page->flags & SGX_EPC_PAGE_POISON)
+ continue;
+
spin_lock(&node->lock);
+ epc_page->flags = SGX_EPC_PAGE_FREE;
list_add_tail(&epc_page->list, &node->free_page_list);
sgx_nr_free_pages++;
spin_unlock(&node->lock);
@@ -560,6 +565,7 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
/**
* sgx_alloc_epc_page() - Allocate an EPC page
* @owner: the owner of the EPC page
+ * @type: type of page being allocated
* @reclaim: reclaim pages if necessary
*
* Iterate through EPC sections and borrow a free EPC page to the caller. When a
@@ -574,7 +580,7 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
* an EPC page,
* -errno on error
*/
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
+struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_page_type type, bool reclaim)
{
struct sgx_epc_page *page;

@@ -582,6 +588,8 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
page = __sgx_alloc_epc_page();
if (!IS_ERR(page)) {
page->owner = owner;
+ page->type = type;
+ page->flags = 0;
break;
}

@@ -616,14 +624,22 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
* responsibility to make sure that the page is in uninitialized state. In other
* words, do EREMOVE, EWB or whatever operation is necessary before calling
* this function.
+ *
+ * Note that if the page has been tagged as poisoned, it is simply
+ * dropped on the floor instead of added to the free list to make
+ * sure we do not re-use it.
*/
void sgx_free_epc_page(struct sgx_epc_page *page)
{
struct sgx_epc_section *section = &sgx_epc_sections[page->section];
struct sgx_numa_node *node = section->node;

+ if (page->flags & SGX_EPC_PAGE_POISON)
+ return;
+
spin_lock(&node->lock);

+ page->flags = SGX_EPC_PAGE_FREE;
list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;

@@ -651,7 +667,8 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,

for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
- section->pages[i].flags = 0;
+ section->pages[i].flags = SGX_EPC_PAGE_DIRTY;
+ section->pages[i].type = SGX_PAGE_TYPE_FREE;
section->pages[i].owner = NULL;
list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
}
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..e43d3c27eb96 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -26,9 +26,19 @@
/* Pages, which are being tracked by the page reclaimer. */
#define SGX_EPC_PAGE_RECLAIMER_TRACKED BIT(0)

+/* Pages on the "sgx_dirty_page_list" */
+#define SGX_EPC_PAGE_DIRTY BIT(1)
+
+/* Pages on one of the node free lists */
+#define SGX_EPC_PAGE_FREE BIT(2)
+
+/* Pages with h/w poison errors */
+#define SGX_EPC_PAGE_POISON BIT(3)
+
struct sgx_epc_page {
unsigned int section;
- unsigned int flags;
+ u16 flags;
+ u16 type;
struct sgx_encl_page *owner;
struct list_head list;
};
@@ -82,7 +92,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page);

void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
+struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_page_type type, bool reclaim);

#ifdef CONFIG_X86_SGX_KVM
int __init sgx_vepc_init(void);
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 64511c4a5200..044dd92ebd63 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -46,7 +46,7 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
if (epc_page)
return 0;

- epc_page = sgx_alloc_epc_page(vepc, false);
+ epc_page = sgx_alloc_epc_page(vepc, SGX_PAGE_TYPE_KVM, false);
if (IS_ERR(epc_page))
return PTR_ERR(epc_page);

--
2.29.2

2021-07-09 18:09:32

by Jarkko Sakkinen

Subject: Re: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

On Thu, Jul 08, 2021 at 11:14:20AM -0700, Tony Luck wrote:
> [...]
> @@ -616,14 +624,22 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
> * responsibility to make sure that the page is in uninitialized state. In other
> * words, do EREMOVE, EWB or whatever operation is necessary before calling
> * this function.
> + *
> + * Note that if the page has been tagged as poisoned, it is simply
> + * dropped on the floor instead of added to the free list to make
> + * sure we do not re-use it.
> */
> void sgx_free_epc_page(struct sgx_epc_page *page)
> {
> struct sgx_epc_section *section = &sgx_epc_sections[page->section];
> struct sgx_numa_node *node = section->node;
>
> + if (page->flags & SGX_EPC_PAGE_POISON)
> + return;

I tend to think that it would be nice to collect them somewhere instead
of purposely leaking them. E.g. this gives the possibility to examine the
list with debugging tools.

/Jarkko

2021-07-09 18:11:23

by Jarkko Sakkinen

Subject: Re: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

On Fri, Jul 09, 2021 at 09:08:03PM +0300, Jarkko Sakkinen wrote:
> On Thu, Jul 08, 2021 at 11:14:20AM -0700, Tony Luck wrote:
> > [...]
> > @@ -616,14 +624,22 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
> > * responsibility to make sure that the page is in uninitialized state. In other
> > * words, do EREMOVE, EWB or whatever operation is necessary before calling
> > * this function.
> > + *
> > + * Note that if the page has been tagged as poisoned, it is simply
> > + * dropped on the floor instead of added to the free list to make
> > + * sure we do not re-use it.
> > */
> > void sgx_free_epc_page(struct sgx_epc_page *page)
> > {
> > struct sgx_epc_section *section = &sgx_epc_sections[page->section];
> > struct sgx_numa_node *node = section->node;
> >
> > + if (page->flags & SGX_EPC_PAGE_POISON)
> > + return;
>
> I tend to think that it would be nice to collect them somewhere instead
> of purposely leaking them. E.g. this gives the possibility to examine the
> list with debugging tools.

I'm also not sure why free and dirty pages need to be tagged. Why isn't
a poison flag alone enough? This could be better explained in the commit
message.

2021-07-14 20:44:29

by Reinette Chatre

Subject: Re: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

Hi Tony,

On 7/8/2021 11:14 AM, Tony Luck wrote:
>
> Add a type field to struct epc_page for how an in-use page has been
> allocated. Re-use "enum sgx_page_type" for this type, with a couple
> of additions for s/w types.

Tracking the enclave page type is a useful addition that will also help
the SGX2 support, where some instructions (ENCLS[EMODPR]) are only
allowed on pages with a particular type.

Could this tracking be done at the enclave page (struct sgx_encl_page)
instead? The enclave page's EPC page information is not available when
the page is in swap, and it would be useful to know the page type without
loading the page from swap. The information would still be accessible
from struct epc_page via the owner pointer, which may make some of the
changes easier since the page type would not need to be passed around so
much, and thus possibly address the SECS page issue that Sean pointed
out in
https://lore.kernel.org/lkml/[email protected]/

> diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
> index 4628acec0009..e43d3c27eb96 100644
> --- a/arch/x86/kernel/cpu/sgx/sgx.h
> +++ b/arch/x86/kernel/cpu/sgx/sgx.h
> @@ -26,9 +26,19 @@
> /* Pages, which are being tracked by the page reclaimer. */
> #define SGX_EPC_PAGE_RECLAIMER_TRACKED BIT(0)
>
> +/* Pages, on the "sgx_dirty_page_list" */
> +#define SGX_EPC_PAGE_DIRTY BIT(1)
> +
> +/* Pages, on one of the node free lists */
> +#define SGX_EPC_PAGE_FREE BIT(2)
> +
> +/* Pages, with h/w poison errors */
> +#define SGX_EPC_PAGE_POISON BIT(3)
> +
> struct sgx_epc_page {
> unsigned int section;
> - unsigned int flags;
> + u16 flags;
> + u16 type;

Could this be "enum sgx_page_type type" ?

Reinette

2021-07-14 21:02:28

by Luck, Tony

Subject: RE: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

> Could this tracking be done at the enclave page (struct sgx_encl_page)
> instead?

In principle yes. Though Sean has some issues with me tracking types
at all.

> The enclave page's EPC page information is not available when
> the page is in swap and it would be useful to know the page type without
> loading the page from swap. The information would continue to be
> accessible from struct epc_page via the owner pointer that may make some
> of the changes easier since it would not be needed to pass the page type
> around so much and thus possibly address the SECS page issue that Sean
> pointed out in
> https://lore.kernel.org/lkml/[email protected]/

I think I noticed that the "owner" pointer in sgx_encl_page doesn't point
back to the epc_page for all types of SGX pages. So some additional
changes would be needed. I'm not at all sure why this is different (or
what the non-REG pages use "owner" for).

>> struct sgx_epc_page {
>> unsigned int section;
>> - unsigned int flags;
>> + u16 flags;
>> + u16 type;
>
> Could this be "enum sgx_page_type type" ?

Maybe. I thought I needed extra types (like FREE and DIRTY). But
Sean pointed out how to avoid some of them.

-Tony

2021-07-14 22:22:24

by Reinette Chatre

Subject: Re: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

Hi Tony,

On 7/14/2021 1:59 PM, Luck, Tony wrote:
>> Could this tracking be done at the enclave page (struct sgx_encl_page)
>> instead?
>
> In principle yes. Though Sean has some issues with me tracking types
> at all.

For the SGX2 work, knowing the page types is useful. Some instructions
only work on certain page types, and knowing beforehand whether an
instruction could work helps to avoid dealing with the errors when it
does not.

>> The enclave page's EPC page information is not available when
>> the page is in swap and it would be useful to know the page type without
>> loading the page from swap. The information would continue to be
>> accessible from struct epc_page via the owner pointer that may make some
>> of the changes easier since it would not be needed to pass the page type
>> around so much and thus possibly address the SECS page issue that Sean
>> pointed out in
>> https://lore.kernel.org/lkml/[email protected]/
>
> I think I noticed that the "owner" pointer in sgx_encl_page doesn't point
> back to the epc_page for all types of SGX pages. So some additional
> changes would be needed. I'm not at all sure why this is different (or
> what use the non-REG pages use "owner" for.

Could this be the VA pages? struct sgx_va_page also contains a pointer
to an EPC page. I did not consider that case. Perhaps these could be
identified uniquely.

Reinette

2021-07-15 00:25:02

by Sean Christopherson

Subject: Re: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

On Wed, Jul 14, 2021, Reinette Chatre wrote:
> Hi Tony,
>
> On 7/14/2021 1:59 PM, Luck, Tony wrote:
> > > Could this tracking be done at the enclave page (struct sgx_encl_page)
> > > instead?
> >
> > In principle yes. Though Sean has some issues with me tracking types
> > at all.

I've no objection to tracking the type for SGX2, my argument in the context of
#MC support is that there should be no need to track the type. Either the #MC
is recoverable or it isn't, and the enclave is toast regardless of what type of
page hit the #MC.

There might be a need to track vEPC pages, e.g. to avoid the retpoline
associated with a virtual function table, but IMO that would be better done as a
new flag instead of overloading the page type. E.g. a page can be both a
vEPC page and an SECS/REG/VA page depending on its use in the guest.

> For the SGX2 work knowing the page types are useful. Some instructions only
> work on certain page types and knowing beforehand whether an instruction
> could work helps to avoid dealing with the errors when it does not work.

Yes, but the SGX2 use case is specific to "native" enclaves, i.e. it can and
should be limited to sgx_encl_page, as opposed to being shoved into sgx_epc_page.

> > > The enclave page's EPC page information is not available when
> > > the page is in swap and it would be useful to know the page type without
> > > loading the page from swap. The information would continue to be
> > > accessible from struct epc_page via the owner pointer that may make some
> > > of the changes easier since it would not be needed to pass the page type
> > > around so much and thus possibly address the SECS page issue that Sean
> > > pointed out in
> > > https://lore.kernel.org/lkml/[email protected]/
> >
> > I think I noticed that the "owner" pointer in sgx_encl_page doesn't point
> > back to the epc_page for all types of SGX pages. So some additional
> > changes would be needed. I'm not at all sure why this is different (or
what the non-REG pages use "owner" for).
>
> This may be VA pages? struct sgx_va_page also contains a pointer to an EPC
> page. I did not consider that for this case. Perhaps these could be
> identified uniquely.

The "owner" is currently only used for reclaim. IIRC, the proposed EPC cgroup
also used "owner" to enable forced "reclaim", i.e. reclaiming EPC by nuking the
owning entity, e.g. tearing down a virtual EPC section. And I believe the cgroup
also used the aforementioned vEPC flag to invoke the correct EPC OOM reaper.

2021-07-15 00:47:09

by Luck, Tony

Subject: RE: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

> I've no objection to tracking the type for SGX2, my argument in the context of
> #MC support is that there should be no need to track the type. Either the #MC
> is recoverable or it isn't, and the enclave is toast regardless of what type of
> page hit the #MC.

I'll separate the "phase" from the "type".

Here phase is used for the life-cycle of EPC pages:

DIRTY -> FREE -> IN-USE -> DIRTY

Errors can be reported by memory controller page scrubbers
for pages that are not "IN-USE" ... and the recovery action is
just to make sure that they are never allocated.

When a page is IN-USE ... it has a "type". I currently
only have a way to inject errors into SGX_PAGE_TYPE_REG
pages. That means the initial recovery code is going to focus on
those, since that is all I can test. But I'll avoid special-casing
them as far as possible.

-Tony

2021-07-15 16:50:39

by Sean Christopherson

Subject: Re: [PATCH 1/4] x86/sgx: Track phase and type of SGX EPC pages

On Wed, Jul 14, 2021, Luck, Tony wrote:
> > I've no objection to tracking the type for SGX2, my argument in the context of
> > #MC support is that there should be no need to track the type. Either the #MC
> > is recoverable or it isn't, and the enclave is toast regardless of what type of
> > page hit the #MC.
>
> I'll separate the "phase" from the "type".
>
> Here phase is used for the life-cycle of EPC pages:
>
> DIRTY -> FREE -> IN-USE -> DIRTY

Not that it affects anything, but that's not quite true. In hardware, pages are
either FREE or IN-USE, there is no concept of DIRTY. DIRTY is the kernel's
arbitrary description of a page that has not been sanitized and so is considered
to be in an unknown state, i.e. the kernel doesn't know if it's FREE or IN-USE.

Once a page is sanitized (during boot), its state is known and the page is never
put back on the so-called dirty list, i.e. the software flow is:

DIRTY -> FREE -> IN-USE -> FREE

> Errors can be reported by memory controller page scrubbers for pages that are
> not "IN-USE" ... and the recovery action is just to make sure that they are
> never allocated.
>
> When a page is IN-USE ... it has a "type". I currently only have a way to
> inject errors into SGX_PAGE_TYPE_REG pages. That means the initial recovery
> code is going to focus on those, since that is all I can test. But I'll avoid
> special-casing them as far as possible.

Inability to test expected behavior doesn't mean we shouldn't implement towards
the expected behavior, i.e. someone somewhere must know how SECS and VA pages
behave in response to a memory error.

2021-07-20 02:04:36

by Luck, Tony

Subject: [PATCH v2 0/6] Basic recovery for machine checks inside SGX

Very different from version 1 based on feedback.

Sean: Didn't like tracking types of SGX pages, so that's all gone now. I
do track the life cycle (in patch 1) using the "owner" field to
determine whether a page is in use vs. dirty/free. Currently
this series doesn't make use of that ... so patch 1 could be
dropped. But it is very small, and I think a pre-requisite for
future improvements to take pre-emptive action for asynch poison
notification (rather than just hoping that the enclave will exit
without accessing poison, or that if it does consume the poison
the error will be recoverable).

I think we should defer the whole asynch action to a subsequent
series that can build on top of this (and do it properly ...
my version 1 sent out SIGBUS signals without regard for system
(/proc/sys/vm/memory_failure_early_kill) or per-task (prctl
PR_MCE_KILL) policies).

Jarkko: Said poison pages should not just be dropped on the floor. They
should be added to a list for future tools to examine. I tried
the list approach, but safely removing pages from free/dirty
lists involved some complex locking, so I skipped ahead to the
"tools" idea and just added files in debugfs to show the count
of poison pages and a list of addresses (maybe the count is
redundant? Could just "wc -l poison_page_list"?).

Other: I got a complaint that after a poison page is handled Linux
spits out this message:
Could not invalidate pfn=0x2000c4d from 1:1 map
this is from set_mce_nospec() and happens because EPC pages
are not in the 1:1 map. Add code to check and ignore them.

Tony Luck (6):
x86/sgx: Provide indication of life-cycle of EPC pages
x86/sgx: Add infrastructure to identify SGX EPC pages
x86/sgx: Initial poison handling for dirty and free pages
x86/sgx: Add SGX infrastructure to recover from poison
x86/sgx: Hook sgx_memory_failure() into mainline code
x86/sgx: Add hook to error injection address validation

.../firmware-guide/acpi/apei/einj.rst | 19 +++
arch/x86/include/asm/set_memory.h | 4 +
arch/x86/kernel/cpu/sgx/encl.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 137 +++++++++++++++++-
arch/x86/kernel/cpu/sgx/sgx.h | 6 +-
drivers/acpi/apei/einj.c | 3 +-
include/linux/mm.h | 15 ++
mm/memory-failure.c | 19 ++-
8 files changed, 195 insertions(+), 10 deletions(-)


base-commit: 2734d6c1b1a089fb593ef6a23d4b70903526fe0c
--
2.29.2

2021-07-20 02:04:44

by Luck, Tony

Subject: [PATCH v2 2/6] x86/sgx: Add infrastructure to identify SGX EPC pages

X86 machine check architecture reports a physical address when there
is a memory error. Handling that error requires a method to determine
whether the physical address reported is in any of the areas reserved
for EPC pages by BIOS.

Add an end_phys_addr field to the sgx_epc_section structure and a
new function sgx_paddr_to_page() that searches all such structures
and returns the struct sgx_epc_page pointer if the address is an EPC
page. This function is only intended for use within SGX code.

Export a function sgx_is_epc_page() that simply reports whether an
address is an EPC page for use elsewhere in the kernel.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 24 ++++++++++++++++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 1 +
2 files changed, 25 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index d61bc1f635a1..41753f81a071 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -654,6 +654,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
}

section->phys_addr = phys_addr;
+ section->end_phys_addr = phys_addr + size - 1;

for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
@@ -665,6 +666,29 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
return true;
}

+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+ struct sgx_epc_section *section;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) {
+ section = &sgx_epc_sections[i];
+
+ if (paddr < section->phys_addr || paddr > section->end_phys_addr)
+ continue;
+
+ return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+ }
+
+ return NULL;
+}
+
+bool sgx_is_epc_page(u64 paddr)
+{
+ return !!sgx_paddr_to_page(paddr);
+}
+EXPORT_SYMBOL_GPL(sgx_is_epc_page);
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4e1a410b8a62..226b081a4d05 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -50,6 +50,7 @@ struct sgx_numa_node {
*/
struct sgx_epc_section {
unsigned long phys_addr;
+ unsigned long end_phys_addr;
void *virt_addr;
struct sgx_epc_page *pages;
struct sgx_numa_node *node;
--
2.29.2

2021-07-20 02:06:58

by Luck, Tony

Subject: [PATCH v2 6/6] x86/sgx: Add hook to error injection address validation

SGX reserved memory does not appear in the standard address maps.

Add hook to call into the SGX code to check if an address is located
in SGX memory.

There are other challenges in injecting errors into SGX. Update the
documentation with a sequence of operations to inject.

Signed-off-by: Tony Luck <[email protected]>
---
.../firmware-guide/acpi/apei/einj.rst | 19 +++++++++++++++++++
drivers/acpi/apei/einj.c | 3 ++-
2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
index c042176e1707..55e2331a6438 100644
--- a/Documentation/firmware-guide/acpi/apei/einj.rst
+++ b/Documentation/firmware-guide/acpi/apei/einj.rst
@@ -181,5 +181,24 @@ You should see something like this in dmesg::
[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)

+Special notes for injection into SGX enclaves:
+
+There may be a separate BIOS setup option to enable SGX injection.
+
+The injection process consists of setting some special memory controller
+trigger that will inject the error on the next write to the target
+address. But the h/w prevents any software outside of an SGX enclave
+from accessing enclave pages (even BIOS SMM mode).
+
+The following sequence can be used:
+ 1) Determine physical address of enclave page
+ 2) Use "notrigger=1" mode to inject (this will set up
+ the injection address, but will not actually inject)
+ 3) Enter the enclave
+ 4) Store data to the virtual address matching physical address from step 1
+ 5) Execute CLFLUSH for that virtual address
+ 6) Spin delay for 250ms
+ 7) Read from the virtual address. This will trigger the error
+
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5 and ACPI 5.0, section 18.6.
diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
index 2882450c443e..cd7cffc955bf 100644
--- a/drivers/acpi/apei/einj.c
+++ b/drivers/acpi/apei/einj.c
@@ -544,7 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
!= REGION_INTERSECTS) &&
(region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
- != REGION_INTERSECTS)))
+ != REGION_INTERSECTS) &&
+ !sgx_is_epc_page(base_addr)))
return -EINVAL;

inject:
--
2.29.2

2021-07-20 02:53:08

by Luck, Tony

Subject: [PATCH v2 3/6] x86/sgx: Initial poison handling for dirty and free pages

A memory controller patrol scrubber can report poison in a page
that isn't currently being used.

Add a new flag bit (SGX_EPC_PAGE_POISON) that can be set for an
sgx_epc_page. Check for it:
1) When sanitizing dirty pages
2) When allocating pages
3) When freeing epc pages

In all cases drop the poisoned page to make sure it will not be
reallocated.

Add debugfs files /sys/kernel/debug/sgx/poison_page_{count,list}
so that system administrators can see how many enclave pages have
been dropped and get a list of those pages.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 50 +++++++++++++++++++++++++++++++++-
arch/x86/kernel/cpu/sgx/sgx.h | 3 ++
2 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 41753f81a071..db77f62d6ef1 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -11,6 +11,7 @@
#include <linux/sched/mm.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
+#include <linux/debugfs.h>
#include <asm/sgx.h>
#include "driver.h"
#include "encl.h"
@@ -34,6 +35,9 @@ static unsigned long sgx_nr_free_pages;
/* Nodes with one or more EPC sections. */
static nodemask_t sgx_numa_mask;

+/* Maintain a count of poison pages */
+static u32 poison_page_count;
+
/*
* Array with one list_head for each possible NUMA node. Each
* list contains all the sgx_epc_section's which are on that
@@ -47,6 +51,9 @@ static LIST_HEAD(sgx_dirty_page_list);
* Reset post-kexec EPC pages to the uninitialized state. The pages are removed
* from the input list, and made available for the page allocator. SECS pages
* prepending their children in the input list are left intact.
+ *
+ * Don't try to clean a poisoned page. That might trigger a machine check.
+ * Just drop the page and move on.
*/
static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
{
@@ -61,6 +68,11 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)

page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);

+ if (page->flags & SGX_EPC_PAGE_POISON) {
+ list_del(&page->list);
+ continue;
+ }
+
ret = __eremove(sgx_get_epc_virt_addr(page));
if (!ret) {
/*
@@ -567,6 +579,9 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
* @reclaim is set to true, directly reclaim pages when we are out of pages. No
* mm's can be locked when @reclaim is set to true.
*
+ * A page on the free list might have been reported as poisoned by the patrol
+ * scrubber. If so, skip this page, and try again.
+ *
* Finally, wake up ksgxd when the number of pages goes below the watermark
* before returning back to the caller.
*
@@ -585,6 +600,10 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)

for ( ; ; ) {
page = __sgx_alloc_epc_page();
+
+ if (!IS_ERR(page) && (page->flags & SGX_EPC_PAGE_POISON))
+ continue;
+
if (!IS_ERR(page)) {
page->owner = owner;
break;
@@ -621,6 +640,8 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
* responsibility to make sure that the page is in uninitialized state. In other
* words, do EREMOVE, EWB or whatever operation is necessary before calling
* this function.
+ *
+ * Drop poison pages so they won't be reallocated.
*/
void sgx_free_epc_page(struct sgx_epc_page *page)
{
@@ -630,7 +651,8 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
spin_lock(&node->lock);

page->owner = NULL;
- list_add_tail(&page->list, &node->free_page_list);
+ if (!(page->flags & SGX_EPC_PAGE_POISON))
+ list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;

spin_unlock(&node->lock);
@@ -820,8 +842,30 @@ int sgx_set_attribute(unsigned long *allowed_attributes,
}
EXPORT_SYMBOL_GPL(sgx_set_attribute);

+static int poison_list_show(struct seq_file *m, void *private)
+{
+ struct sgx_epc_section *section;
+ struct sgx_epc_page *page;
+ unsigned long addr;
+ int i;
+
+ for (i = 0; i < SGX_MAX_EPC_SECTIONS; i++) {
+ section = &sgx_epc_sections[i];
+ page = section->pages;
+ for (addr = section->phys_addr; addr < section->end_phys_addr;
+ addr += PAGE_SIZE, page++) {
+ if (page->flags & SGX_EPC_PAGE_POISON)
+ seq_printf(m, "0x%lx\n", addr);
+ }
+ }
+ return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(poison_list);
+
static int __init sgx_init(void)
{
+ struct dentry *dir;
int ret;
int i;

@@ -853,6 +897,10 @@ static int __init sgx_init(void)
if (sgx_vepc_init() && ret)
goto err_provision;

+ dir = debugfs_create_dir("sgx", NULL);
+ debugfs_create_u32("poison_page_count", 0400, dir, &poison_page_count);
+ debugfs_create_file("poison_page_list", 0400, dir, NULL, &poison_list_fops);
+
return 0;

err_provision:
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 226b081a4d05..2c3987ecdfe4 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -26,6 +26,9 @@
/* Pages, which are being tracked by the page reclaimer. */
#define SGX_EPC_PAGE_RECLAIMER_TRACKED BIT(0)

+/* Poisoned pages */
+#define SGX_EPC_PAGE_POISON BIT(1)
+
struct sgx_epc_page {
unsigned int section;
unsigned int flags;
--
2.29.2

2021-07-27 01:55:33

by Sakkinen, Jarkko

Subject: Re: [PATCH v2 0/6] Basic recovery for machine checks inside SGX

On Mon, 2021-07-19 at 11:20 -0700, Tony Luck wrote:
> Very different from version 1 based on feedback.
>
> Sean: Didn't like tracking types of SGX pages, so that's all gone now. I
> do track the life cycle (in patch 1) using the "owner" field to
> determine whether a page is in use vs. dirty/free. Currently
> this series doesn't make use of that ... so patch 1 could be
> dropped. But it is very small, and I think a pre-requisite for
> future improvements to take pre-emptive action for asynch poison
> notification (rather than just hoping that the enclave will exit
> without accessing poison, or that if it does consume the poison
> the error will be recoverable).
>
> I think we should defer the whole asynch action to a subsequent
> series that can build on top of this (and do it properly ...
> my version 1 sent out SIGBUS signals without regard for system
> (/proc/sys/vm/memory_failure_early_kill) or per-task (prctl
> PR_MCE_KILL) policies).
>
> Jarkko: Said poison pages should not just be dropped on the floor. They
> should be added to a list for future tools to examine. I tried
> the list approach, but safely removing pages from free/dirty
> lists involved some complex locking, so I skipped ahead to the
> "tools" idea and just added files in debugfs to show the count
> of poison pages and a list of addresses (maybe the count is
> redundant? Could just "wc -l poison_page_list"?).
>
> Other: I got a complaint that after a poison page is handled Linux
> spits out this message:
> Could not invalidate pfn=0x2000c4d from 1:1 map
> this is from set_mce_nospec() and happens because EPC pages
> are not in the 1:1 map. Add code to check and ignore them.
>
> Tony Luck (6):
> x86/sgx: Provide indication of life-cycle of EPC pages
> x86/sgx: Add infrastructure to identify SGX EPC pages
> x86/sgx: Initial poison handling for dirty and free pages
> x86/sgx: Add SGX infrastructure to recover from poison
> x86/sgx: Hook sgx_memory_failure() into mainline code
> x86/sgx: Add hook to error injection address validation
>
> .../firmware-guide/acpi/apei/einj.rst | 19 +++
> arch/x86/include/asm/set_memory.h | 4 +
> arch/x86/kernel/cpu/sgx/encl.c | 2 +-
> arch/x86/kernel/cpu/sgx/main.c | 137 +++++++++++++++++-
> arch/x86/kernel/cpu/sgx/sgx.h | 6 +-
> drivers/acpi/apei/einj.c | 3 +-
> include/linux/mm.h | 15 ++
> mm/memory-failure.c | 19 ++-
> 8 files changed, 195 insertions(+), 10 deletions(-)
>
>
> base-commit: 2734d6c1b1a089fb593ef6a23d4b70903526fe0c

Use [email protected] in future versions.

/Jarkko

2021-07-27 02:09:08

by Sakkinen, Jarkko

Subject: Re: [PATCH v2 3/6] x86/sgx: Initial poison handling for dirty and free pages

On Mon, 2021-07-19 at 11:20 -0700, Tony Luck wrote:
> + dir = debugfs_create_dir("sgx", NULL);
> + debugfs_create_u32("poison_page_count", 0400, dir, &poison_page_count);
> + debugfs_create_file("poison_page_list", 0400, dir, NULL, &poison_list_fops);

I'm adding debugfs attributes in my reclaimer kselftest patch
set. The feedback that I got from Boris for that is that these
must be documented in Documentation/x86/sgx.rst.

/Jarkko

2021-07-28 20:48:19

by Luck, Tony

Subject: [PATCH v3 0/7] Basic recovery for machine checks inside SGX

Changes since v2:

Jarkko:
1) Don't provide a dummy non-NULL value for "owner" of new SGX EPC
pages at the call site. Instead change sgx_alloc_epc_page() to
provide a non-NULL value.
2) Add description of the new debugfs files to sgx.rst
[Added a whole section on uncorrected memory errors]

Tony Luck (7):
x86/sgx: Provide indication of life-cycle of EPC pages
x86/sgx: Add infrastructure to identify SGX EPC pages
x86/sgx: Initial poison handling for dirty and free pages
x86/sgx: Add SGX infrastructure to recover from poison
x86/sgx: Hook sgx_memory_failure() into mainline code
x86/sgx: Add hook to error injection address validation
x86/sgx: Add documentation for SGX memory errors

.../firmware-guide/acpi/apei/einj.rst | 19 +++
Documentation/x86/sgx.rst | 26 ++++
arch/x86/include/asm/set_memory.h | 4 +
arch/x86/kernel/cpu/sgx/main.c | 134 +++++++++++++++++-
arch/x86/kernel/cpu/sgx/sgx.h | 6 +-
drivers/acpi/apei/einj.c | 3 +-
include/linux/mm.h | 15 ++
mm/memory-failure.c | 19 ++-
8 files changed, 216 insertions(+), 10 deletions(-)


base-commit: ff1176468d368232b684f75e82563369208bc371
--
2.29.2


2021-07-28 20:48:50

by Luck, Tony

Subject: [PATCH v3 5/7] x86/sgx: Hook sgx_memory_failure() into mainline code

Add a call inside memory_failure() to check if the address is an SGX
EPC page and handle it.

Note the SGX EPC pages do not have a "struct page" entry, so the hook
goes in at the same point as the device mapping hook.

Pull the call to acquire the mutex earlier so the SGX errors are also
protected.

Make set_mce_nospec() skip SGX pages when trying to adjust
the 1:1 map.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/include/asm/set_memory.h | 4 ++++
include/linux/mm.h | 15 +++++++++++++++
mm/memory-failure.c | 19 +++++++++++++------
3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 43fa081a1adb..801af8f30c83 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -2,6 +2,7 @@
#ifndef _ASM_X86_SET_MEMORY_H
#define _ASM_X86_SET_MEMORY_H

+#include <linux/mm.h>
#include <asm/page.h>
#include <asm-generic/set_memory.h>

@@ -98,6 +99,9 @@ static inline int set_mce_nospec(unsigned long pfn, bool unmap)
unsigned long decoy_addr;
int rc;

+ /* SGX pages are not in the 1:1 map */
+ if (sgx_is_epc_page(pfn << PAGE_SHIFT))
+ return 0;
/*
* We would like to just call:
* set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7ca22e6e694a..2ff599bcf8c2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3283,5 +3283,20 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
return 0;
}

+#ifdef CONFIG_X86_SGX
+int sgx_memory_failure(unsigned long pfn, int flags);
+bool sgx_is_epc_page(u64 paddr);
+#else
+static inline int sgx_memory_failure(unsigned long pfn, int flags)
+{
+ return -ENXIO;
+}
+
+static inline bool sgx_is_epc_page(u64 paddr)
+{
+ return false;
+}
+#endif
+
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index eefd823deb67..3ce6b6aabf0f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1626,21 +1626,28 @@ int memory_failure(unsigned long pfn, int flags)
if (!sysctl_memory_failure_recovery)
panic("Memory failure on page %lx", pfn);

+ mutex_lock(&mf_mutex);
+
p = pfn_to_online_page(pfn);
if (!p) {
+ res = sgx_memory_failure(pfn, flags);
+ if (res == 0)
+ goto unlock_mutex;
+
if (pfn_valid(pfn)) {
pgmap = get_dev_pagemap(pfn, NULL);
- if (pgmap)
- return memory_failure_dev_pagemap(pfn, flags,
- pgmap);
+ if (pgmap) {
+ res = memory_failure_dev_pagemap(pfn, flags,
+ pgmap);
+ goto unlock_mutex;
+ }
}
pr_err("Memory failure: %#lx: memory outside kernel control\n",
pfn);
- return -ENXIO;
+ res = -ENXIO;
+ goto unlock_mutex;
}

- mutex_lock(&mf_mutex);
-
try_again:
if (PageHuge(p)) {
res = memory_failure_hugetlb(pfn, flags);
--
2.29.2


2021-07-28 20:48:53

by Luck, Tony

Subject: [PATCH v3 6/7] x86/sgx: Add hook to error injection address validation

SGX reserved memory does not appear in the standard address maps.

Add hook to call into the SGX code to check if an address is located
in SGX memory.

There are other challenges in injecting errors into SGX. Update the
documentation with a sequence of operations to inject.

Signed-off-by: Tony Luck <[email protected]>
---
.../firmware-guide/acpi/apei/einj.rst | 19 +++++++++++++++++++
drivers/acpi/apei/einj.c | 3 ++-
2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
index c042176e1707..55e2331a6438 100644
--- a/Documentation/firmware-guide/acpi/apei/einj.rst
+++ b/Documentation/firmware-guide/acpi/apei/einj.rst
@@ -181,5 +181,24 @@ You should see something like this in dmesg::
[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)

+Special notes for injection into SGX enclaves:
+
+There may be a separate BIOS setup option to enable SGX injection.
+
+The injection process consists of setting some special memory controller
+trigger that will inject the error on the next write to the target
+address. But the h/w prevents any software outside of an SGX enclave
+from accessing enclave pages (even BIOS SMM mode).
+
+The following sequence can be used:
+ 1) Determine physical address of enclave page
+ 2) Use "notrigger=1" mode to inject (this will set up
+ the injection address, but will not actually inject)
+ 3) Enter the enclave
+ 4) Store data to the virtual address matching physical address from step 1
+ 5) Execute CLFLUSH for that virtual address
+ 6) Spin delay for 250ms
+ 7) Read from the virtual address. This will trigger the error
+
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5 and ACPI 5.0, section 18.6.
diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
index 2882450c443e..cd7cffc955bf 100644
--- a/drivers/acpi/apei/einj.c
+++ b/drivers/acpi/apei/einj.c
@@ -544,7 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
!= REGION_INTERSECTS) &&
(region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
- != REGION_INTERSECTS)))
+ != REGION_INTERSECTS) &&
+ !sgx_is_epc_page(base_addr)))
return -EINVAL;

inject:
--
2.29.2


2021-07-28 20:48:56

by Luck, Tony

Subject: [PATCH v3 7/7] x86/sgx: Add documentation for SGX memory errors

Error handling is a bit different for SGX pages. Add a section describing
how asynchronous and consumed errors are handled and the two new
debugfs files that show the count and list of pages with uncorrected
memory errors.

Signed-off-by: Tony Luck <[email protected]>
---
Documentation/x86/sgx.rst | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/Documentation/x86/sgx.rst b/Documentation/x86/sgx.rst
index dd0ac96ff9ef..461bd1daa565 100644
--- a/Documentation/x86/sgx.rst
+++ b/Documentation/x86/sgx.rst
@@ -250,3 +250,29 @@ user wants to deploy SGX applications both on the host and in guests
on the same machine, the user should reserve enough EPC (by taking out
total virtual EPC size of all SGX VMs from the physical EPC size) for
host SGX applications so they can run with acceptable performance.
+
+Uncorrected memory errors
+=========================
+Systems that support machine check recovery and have local machine
+check delivery enabled can recover from uncorrected memory errors in
+many situations.
+
+Errors in SGX pages that are not currently in use will prevent those
+pages from being allocated.
+
+Errors asynchronously reported against active SGX pages will simply note
+that the page has an error. If the enclave terminates without accessing
+the page, Linux will not return it to the free list for reallocation.
+
+When an uncorrected memory error is consumed from within an enclave the
+h/w will mark that enclave so that it cannot be re-entered. Linux will
+send a SIGBUS to the current task.
+
+In addition to console log entries from processing the machine check or
+corrected machine check interrupt, Linux also provides debugfs files to
+indicate the number of SGX enclave pages that have reported errors and
+the physical addresses of each page:
+
+/sys/kernel/debug/sgx/poison_page_count
+
+/sys/kernel/debug/sgx/poison_page_list
--
2.29.2


2021-07-28 20:49:25

by Luck, Tony

Subject: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

X86 machine check architecture reports a physical address when there
is a memory error. Handling that error requires a method to determine
whether the physical address reported is in any of the areas reserved
for EPC pages by BIOS.

Add an end_phys_addr field to the sgx_epc_section structure and a
new function sgx_paddr_to_page() that searches all such structures
and returns the struct sgx_epc_page pointer if the address is an EPC
page. This function is only intended for use within SGX code.

Export a function sgx_is_epc_page() that simply reports whether an
address is an EPC page for use elsewhere in the kernel.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 24 ++++++++++++++++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 1 +
2 files changed, 25 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 17d09186a6c2..ce40c010c9cb 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -649,6 +649,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
}

section->phys_addr = phys_addr;
+ section->end_phys_addr = phys_addr + size - 1;

for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
@@ -660,6 +661,29 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
return true;
}

+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+ struct sgx_epc_section *section;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) {
+ section = &sgx_epc_sections[i];
+
+ if (paddr < section->phys_addr || paddr > section->end_phys_addr)
+ continue;
+
+ return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+ }
+
+ return NULL;
+}
+
+bool sgx_is_epc_page(u64 paddr)
+{
+ return !!sgx_paddr_to_page(paddr);
+}
+EXPORT_SYMBOL_GPL(sgx_is_epc_page);
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4e1a410b8a62..226b081a4d05 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -50,6 +50,7 @@ struct sgx_numa_node {
*/
struct sgx_epc_section {
unsigned long phys_addr;
+ unsigned long end_phys_addr;
void *virt_addr;
struct sgx_epc_page *pages;
struct sgx_numa_node *node;
--
2.29.2


2021-07-28 20:49:25

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v3 3/7] x86/sgx: Initial poison handling for dirty and free pages

A memory controller patrol scrubber can report poison in a page
that isn't currently being used.

Add a new flag bit (SGX_EPC_PAGE_POISON) that can be set for an
sgx_epc_page. Check for it:
1) When sanitizing dirty pages
2) When allocating pages
3) When freeing epc pages

In all cases drop the poisoned page to make sure it will not be
reallocated.

Add debugfs files /sys/kernel/debug/sgx/poison_page_{count,list}
so that system administrators can see how many enclave pages have
been dropped and get a list of those pages.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 50 +++++++++++++++++++++++++++++++++-
arch/x86/kernel/cpu/sgx/sgx.h | 3 ++
2 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index ce40c010c9cb..354f0abec12d 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -11,6 +11,7 @@
#include <linux/sched/mm.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
+#include <linux/debugfs.h>
#include <asm/sgx.h>
#include "driver.h"
#include "encl.h"
@@ -34,6 +35,9 @@ static unsigned long sgx_nr_free_pages;
/* Nodes with one or more EPC sections. */
static nodemask_t sgx_numa_mask;

+/* Maintain a count of poison pages */
+static u32 poison_page_count;
+
/*
* Array with one list_head for each possible NUMA node. Each
* list contains all the sgx_epc_section's which are on that
@@ -47,6 +51,9 @@ static LIST_HEAD(sgx_dirty_page_list);
* Reset post-kexec EPC pages to the uninitialized state. The pages are removed
* from the input list, and made available for the page allocator. SECS pages
* prepending their children in the input list are left intact.
+ *
+ * Don't try to clean a poisoned page. That might trigger a machine check.
+ * Just drop the page and move on.
*/
static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
{
@@ -61,6 +68,11 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)

page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);

+ if (page->flags & SGX_EPC_PAGE_POISON) {
+ list_del(&page->list);
+ continue;
+ }
+
ret = __eremove(sgx_get_epc_virt_addr(page));
if (!ret) {
/*
@@ -567,6 +579,9 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
* @reclaim is set to true, directly reclaim pages when we are out of pages. No
* mm's can be locked when @reclaim is set to true.
*
+ * A page on the free list might have been reported as poisoned by the patrol
+ * scrubber. If so, skip this page, and try again.
+ *
* Finally, wake up ksgxd when the number of pages goes below the watermark
* before returning back to the caller.
*
@@ -580,6 +595,10 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)

for ( ; ; ) {
page = __sgx_alloc_epc_page();
+
+ if (!IS_ERR(page) && (page->flags & SGX_EPC_PAGE_POISON))
+ continue;
+
if (!IS_ERR(page)) {
page->owner = owner ? owner : page;
break;
@@ -616,6 +635,8 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
* responsibility to make sure that the page is in uninitialized state. In other
* words, do EREMOVE, EWB or whatever operation is necessary before calling
* this function.
+ *
+ * Drop poison pages so they won't be reallocated.
*/
void sgx_free_epc_page(struct sgx_epc_page *page)
{
@@ -625,7 +646,9 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
spin_lock(&node->lock);

page->owner = NULL;
- list_add_tail(&page->list, &node->free_page_list);
- sgx_nr_free_pages++;
+ if (!(page->flags & SGX_EPC_PAGE_POISON)) {
+ list_add_tail(&page->list, &node->free_page_list);
+ sgx_nr_free_pages++;
+ }

spin_unlock(&node->lock);
@@ -815,8 +837,30 @@ int sgx_set_attribute(unsigned long *allowed_attributes,
}
EXPORT_SYMBOL_GPL(sgx_set_attribute);

+static int poison_list_show(struct seq_file *m, void *private)
+{
+ struct sgx_epc_section *section;
+ struct sgx_epc_page *page;
+ unsigned long addr;
+ int i;
+
+ for (i = 0; i < SGX_MAX_EPC_SECTIONS; i++) {
+ section = &sgx_epc_sections[i];
+ page = section->pages;
+ for (addr = section->phys_addr; addr < section->end_phys_addr;
+ addr += PAGE_SIZE, page++) {
+ if (page->flags & SGX_EPC_PAGE_POISON)
+ seq_printf(m, "0x%lx\n", addr);
+ }
+ }
+ return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(poison_list);
+
static int __init sgx_init(void)
{
+ struct dentry *dir;
int ret;
int i;

@@ -848,6 +892,10 @@ static int __init sgx_init(void)
if (sgx_vepc_init() && ret)
goto err_provision;

+ dir = debugfs_create_dir("sgx", NULL);
+ debugfs_create_u32("poison_page_count", 0400, dir, &poison_page_count);
+ debugfs_create_file("poison_page_list", 0400, dir, NULL, &poison_list_fops);
+
return 0;

err_provision:
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 226b081a4d05..2c3987ecdfe4 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -26,6 +26,9 @@
/* Pages, which are being tracked by the page reclaimer. */
#define SGX_EPC_PAGE_RECLAIMER_TRACKED BIT(0)

+/* Poisoned pages */
+#define SGX_EPC_PAGE_POISON BIT(1)
+
struct sgx_epc_page {
unsigned int section;
unsigned int flags;
--
2.29.2


2021-07-28 20:49:55

by Luck, Tony

Subject: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

SGX EPC pages go through the following life cycle:

DIRTY ---> FREE ---> IN-USE --\
           ^                  |
           \------------------/

Recovery action for poison for a DIRTY or FREE page is simple. Just
make sure never to allocate the page. IN-USE pages need some extra
handling.

It would be good to use the sgx_epc_page->owner field as an indicator
of where an EPC page is currently in that cycle (owner != NULL means
the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
that calls with NULL.

Make the following changes:

1) Change the type of "owner" to "void *" (it can have other types
besides "struct sgx_encl_page *").
2) Add a check to sgx_alloc_epc_page(). If the caller specified the
owner as NULL, then set the owner field to self-reference the
SGX EPC page itself.
3) Reset owner to NULL in sgx_free_epc_page().

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 3 ++-
arch/x86/kernel/cpu/sgx/sgx.h | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 63d3de02bbcc..17d09186a6c2 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -581,7 +581,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
for ( ; ; ) {
page = __sgx_alloc_epc_page();
if (!IS_ERR(page)) {
- page->owner = owner;
+ page->owner = owner ? owner : page;
break;
}

@@ -624,6 +624,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)

spin_lock(&node->lock);

+ page->owner = NULL;
list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..4e1a410b8a62 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -29,7 +29,7 @@
struct sgx_epc_page {
unsigned int section;
unsigned int flags;
- struct sgx_encl_page *owner;
+ void *owner;
struct list_head list;
};

--
2.29.2


2021-07-28 22:13:49

by Dave Hansen

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On 7/28/21 1:46 PM, Tony Luck wrote:
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -581,7 +581,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
> for ( ; ; ) {
> page = __sgx_alloc_epc_page();
> if (!IS_ERR(page)) {
> - page->owner = owner;
> + page->owner = owner ? owner : page;
> break;
> }

I'm a little worried about this.

Let's say we get confused about the type of the page and dereference
page->owner. If it's NULL, we get a nice oops. If it's a real, valid
pointer, we get real valid memory back that we can scribble on.

Wouldn't it be safer to do something like:

page->owner = owner ? owner : (void *)-1;

-1 is non-NULL, but also invalid, which makes it harder for us to poke
ourselves in the eye.

2021-07-28 22:21:53

by Dave Hansen

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On 7/28/21 1:46 PM, Tony Luck wrote:
> Export a function sgx_is_epc_page() that simply reports whether an
> address is an EPC page for use elsewhere in the kernel.

It would be really nice to mention why this needs to be exported to
modules. I assume it's the error injection driver or something that can
be built as a module, but this export was a surprise when I saw it.

It's probably also worth noting that this is a sloooooooow
implementation compared to the core VM code that does something
analogous: pfn_to_page(). It's fine for error handling, but we should
probably have a comment to this effect so that more liberal use doesn't
creep in anywhere.

2021-07-28 22:58:12

by Luck, Tony

Subject: RE: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

> Wouldn't it be safer to do something like:
>
> page->owner = owner ? owner : (void *)-1;
>
> -1 is non-NULL, but also invalid, which makes it harder for us to poke
> ourselves in the eye.

Does Linux have some #define INVALID_POINTER thing that
provides a guaranteed bad (e.g. non-canonical) value?

(void *)-1 seems hacky.

-Tony

2021-07-28 23:14:51

by Dave Hansen

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On 7/28/21 3:57 PM, Luck, Tony wrote:
>> Wouldn't it be safer to do something like:
>>
>> page->owner = owner ? owner : (void *)-1;
>>
>> -1 is non-NULL, but also invalid, which makes it harder for us to poke
>> ourselves in the eye.
> Does Linux have some #define INVALID_POINTER thing that
> provides a guaranteed bad (e.g. non-canonical) value?
>
> (void *)-1 seems hacky.

ERR_PTR(-SOMETHING) wouldn't be too bad. I guess it could even be:

page->owner = ERR_PTR(SGX_EPC_PAGE_VA);

and then:

#define SGX_EPC_PAGE_VA 0xffff...something...greppable

I *thought* we had a file full of these magic values, but maybe I'm
misremembering the uapi magic header.

2021-07-28 23:36:28

by Sean Christopherson

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Wed, Jul 28, 2021, Dave Hansen wrote:
> On 7/28/21 3:57 PM, Luck, Tony wrote:
> >> Wouldn't it be safer to do something like:
> >>
> >> page->owner = owner ? owner : (void *)-1;
> >>
> >> -1 is non-NULL, but also invalid, which makes it harder for us to poke
> >> ourselves in the eye.
> > Does Linux have some #define INVALID_POINTER thing that
> > provides a guaranteed bad (e.g. non-canonical) value?
> >
> > (void *)-1 seems hacky.
>
> ERR_PTR(-SOMETHING) wouldn't be too bad. I guess it could even be:
>
> page->owner = ERR_PTR(SGX_EPC_PAGE_VA);
>
> and then:
>
> #define SGX_EPC_PAGE_VA 0xffff...something...greppable
>
> I *thought* we had a file full of these magic values, but maybe I'm
> misremembering the uapi magic header.

Rather than use a magic const, just pass in the actual va_page. The only reason
NULL is passed is that prior to virtual EPC, there were only enclave pages and
VA pages, and assigning a non-NULL pointer to sgx_epc_page.owner, which is a
struct sgx_encl_page, was gross. Virtual EPC sets owner somewhat prematurely;
it's needed iff an EPC cgroup is added, to support OOM EPC killing (and a pointer
to va_page is also needed in this case).

sgx_epc_page.owner can even be converted to 'void *' without additional changes
since all consumers capture it in a local sgx_encl_page variable.


diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 001808e3901c..f9da8fe4dd6b 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -674,12 +674,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
* a VA page,
* -errno otherwise
*/
-struct sgx_epc_page *sgx_alloc_va_page(void)
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page)
{
struct sgx_epc_page *epc_page;
int ret;

- epc_page = sgx_alloc_epc_page(NULL, true);
+ epc_page = sgx_alloc_epc_page(va_page, true);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..3d12dbeae14a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -111,7 +111,7 @@ void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
int sgx_encl_test_and_clear_young(struct mm_struct *mm,
struct sgx_encl_page *page);

-struct sgx_epc_page *sgx_alloc_va_page(void);
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page);
unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
bool sgx_va_page_full(struct sgx_va_page *va_page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..655ce0bb069d 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -30,7 +30,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
if (!va_page)
return ERR_PTR(-ENOMEM);

- va_page->epc_page = sgx_alloc_va_page();
+ va_page->epc_page = sgx_alloc_va_page(va_page);
if (IS_ERR(va_page->epc_page)) {
err = ERR_CAST(va_page->epc_page);
kfree(va_page);
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..4e1a410b8a62 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -29,7 +29,7 @@
struct sgx_epc_page {
unsigned int section;
unsigned int flags;
- struct sgx_encl_page *owner;
+ void *owner;
struct list_head list;
};



2021-07-28 23:51:50

by Luck, Tony

Subject: RE: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

> - epc_page = sgx_alloc_epc_page(NULL, true);
> + epc_page = sgx_alloc_epc_page(va_page, true);

Providing a real value for the owner seems much better than all the hacks
to invent a value to use instead of NULL.

Can you add a "Signed-off-by"? Then I'll replace my part 0001 with your version.

-Tony

[Just need to coax you into re-writing all the other parts for me now :-) ]

2021-07-29 00:11:23

by Sean Christopherson

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Wed, Jul 28, 2021, Luck, Tony wrote:
> > - epc_page = sgx_alloc_epc_page(NULL, true);
> > + epc_page = sgx_alloc_epc_page(va_page, true);
>
> Providing a real value for the owner seems much better than all the hacks
> to invent a value to use instead of NULL.
>
> Can you add a "Signed-off-by"? Then I'll replace my part 0001 with your version.

Signed-off-by: Sean Christopherson <[email protected]>

> -Tony
>
> [Just need to coax you into re-writing all the other parts for me now :-) ]

LOL, it might be easier to convince folks to just kill off SGX ;-)

2021-07-29 00:44:07

by Luck, Tony

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Thu, Jul 29, 2021 at 12:07:08AM +0000, Sean Christopherson wrote:
> On Wed, Jul 28, 2021, Luck, Tony wrote:
> > > - epc_page = sgx_alloc_epc_page(NULL, true);
> > > + epc_page = sgx_alloc_epc_page(va_page, true);
> >
> > Providing a real value for the owner seems much better than all the hacks
> > to invent a value to use instead of NULL.
> >
> > Can you add a "Signed-off-by"? Then I'll replace my part 0001 with your version.

My commit comment (updated to match how the code actually changed).
Sean's code.

N.B. I added the kernel doc entry for the new argument to sgx_alloc_va_page()

+ * @va_page: struct sgx_va_page connected to this VA page

If you have something better, then I will swap that line out too.

-Tony

From: Sean Christopherson <[email protected]>
Subject: [PATCH] x86/sgx: Provide indication of life-cycle of EPC pages

SGX EPC pages go through the following life cycle:

DIRTY ---> FREE ---> IN-USE --\
           ^                  |
           \------------------/

Recovery action for poison for a DIRTY or FREE page is simple. Just
make sure never to allocate the page. IN-USE pages need some extra
handling.

It would be good to use the sgx_epc_page->owner field as an indicator
of where an EPC page is currently in that cycle (owner != NULL means
the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
that calls with NULL.

Fix up the one holdout to provide a non-NULL owner.

Also change the type of "owner" to "void *" (since it can have other
types besides "struct sgx_encl_page *").

Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/encl.c | 5 +++--
arch/x86/kernel/cpu/sgx/encl.h | 2 +-
arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
arch/x86/kernel/cpu/sgx/sgx.h | 2 +-
4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 001808e3901c..ad8c61933b0a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -667,6 +667,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,

/**
* sgx_alloc_va_page() - Allocate a Version Array (VA) page
+ * @va_page: struct sgx_va_page connected to this VA page
*
* Allocate a free EPC page and convert it to a Version Array (VA) page.
*
@@ -674,12 +675,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
* a VA page,
* -errno otherwise
*/
-struct sgx_epc_page *sgx_alloc_va_page(void)
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page)
{
struct sgx_epc_page *epc_page;
int ret;

- epc_page = sgx_alloc_epc_page(NULL, true);
+ epc_page = sgx_alloc_epc_page(va_page, true);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..3d12dbeae14a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -111,7 +111,7 @@ void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
int sgx_encl_test_and_clear_young(struct mm_struct *mm,
struct sgx_encl_page *page);

-struct sgx_epc_page *sgx_alloc_va_page(void);
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page);
unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
bool sgx_va_page_full(struct sgx_va_page *va_page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..655ce0bb069d 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -30,7 +30,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
if (!va_page)
return ERR_PTR(-ENOMEM);

- va_page->epc_page = sgx_alloc_va_page();
+ va_page->epc_page = sgx_alloc_va_page(va_page);
if (IS_ERR(va_page->epc_page)) {
err = ERR_CAST(va_page->epc_page);
kfree(va_page);
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..4e1a410b8a62 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -29,7 +29,7 @@
struct sgx_epc_page {
unsigned int section;
unsigned int flags;
- struct sgx_encl_page *owner;
+ void *owner;
struct list_head list;
};

--
2.29.2


2021-07-30 00:34:53

by Jarkko Sakkinen

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Wed, Jul 28, 2021 at 03:12:03PM -0700, Dave Hansen wrote:
> On 7/28/21 1:46 PM, Tony Luck wrote:
> > +++ b/arch/x86/kernel/cpu/sgx/main.c
> > @@ -581,7 +581,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
> > for ( ; ; ) {
> > page = __sgx_alloc_epc_page();
> > if (!IS_ERR(page)) {
> > - page->owner = owner;
> > + page->owner = owner ? owner : page;
> > break;
> > }
>
> I'm a little worried about this.
>
> Let's say we get confused about the type of the page and dereference
> page->owner. If it's NULL, we get a nice oops. If it's a real, valid
> pointer, we get real valid memory back that we can scribble on.
>
> Wouldn't it be safer to do something like:
>
> page->owner = owner ? owner : (void *)-1;
>
> -1 is non-NULL, but also invalid, which makes it harder for us to poke
> ourselves in the eye.

Works for me.

/Jarkko

2021-07-30 00:36:33

by Jarkko Sakkinen

Subject: Re: [PATCH v3 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Wed, Jul 28, 2021 at 10:57:07PM +0000, Luck, Tony wrote:
> > Wouldn't it be safer to do something like:
> >
> > page->owner = owner ? owner : (void *)-1;
> >
> > -1 is non-NULL, but also invalid, which makes it harder for us to poke
> > ourselves in the eye.
>
> Does Linux have some #define INVALID_POINTER thing that
> provides a guaranteed bad (e.g. non-canonical) value?
>
> (void *)-1 seems hacky.
>
> -Tony

MAP_FAILED?

/Jarkko

2021-07-30 00:39:15

by Jarkko Sakkinen

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Wed, Jul 28, 2021 at 03:19:46PM -0700, Dave Hansen wrote:
> On 7/28/21 1:46 PM, Tony Luck wrote:
> > Export a function sgx_is_epc_page() that simply reports whether an
> > address is an EPC page for use elsewhere in the kernel.
>
> It would be really nice to mention why this needs to be exported to
> modules. I assume it's the error injection driver or something that can
> be built as a module, but this export was a surprise when I saw it.
>
> It's probably also worth noting that this is a sloooooooow
> implementation compared to the core VM code that does something
> analogous: pfn_to_page(). It's fine for error handling, but we should
> probably have a comment to this effect so that more liberal use doesn't
> creep in anywhere.

You could also create an xarray to track physical EPC address ranges,
and make the query fast.

/Jarkko

2021-07-30 00:43:20

by Jarkko Sakkinen

Subject: Re: [PATCH v3 3/7] x86/sgx: Initial poison handling for dirty and free pages

On Wed, Jul 28, 2021 at 01:46:49PM -0700, Tony Luck wrote:
> + dir = debugfs_create_dir("sgx", NULL);

dir = debugfs_create_dir("sgx", arch_debugfs_dir);

/Jarkko

2021-07-30 16:48:11

by Sean Christopherson

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, Jul 30, 2021, Jarkko Sakkinen wrote:
> On Wed, Jul 28, 2021 at 03:19:46PM -0700, Dave Hansen wrote:
> > On 7/28/21 1:46 PM, Tony Luck wrote:
> > > Export a function sgx_is_epc_page() that simply reports whether an
> > > address is an EPC page for use elsewhere in the kernel.
> >
> > It would be really nice to mention why this needs to be exported to
> > modules. I assume it's the error injection driver or something that can
> > be built as a module, but this export was a surprise when I saw it.
> >
> > It's probably also worth noting that this is a sloooooooow
> > implementation compared to the core VM code that does something
> > analogous: pfn_to_page(). It's fine for error handling, but we should
> > probably have a comment to this effect so that more liberal use doesn't
> > creep in anywhere.
>
> You could also create an xarray to track physical EPC address ranges,
> and make the query fast.

Eh, it's not _that_ slow due to the constraints on the number of EPC sections.
The hard limit is currently '8', and practically speaking there will be one
section per socket. Turning a linear search into a binary search in this case
isn't going to buy much.

Out of curiosity, on multi-socket systems, are EPC sections clustered in a single
address range, or are they interleaved with regular RAM? If they're clustered,
you could track the min/max across all sections to optimize the common case that
an address isn't in any EPC section.

static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
{
struct sgx_epc_section *section;
int i;

if (paddr < min_epc_pa || paddr > max_epc_pa)
return NULL;

for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) {
section = &sgx_epc_sections[i];

if (paddr < section->phys_addr || paddr > section->end_phys_addr)
continue;

return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
}

return NULL;
}

2021-07-30 16:52:15

by Dave Hansen

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On 7/30/21 9:46 AM, Sean Christopherson wrote:
> Out of curiosity, on multi-socket systems, are EPC sections clustered in a single
> address range, or are they interleaved with regular RAM? If they're clustered,
> you could track the min/max across all sections to optimize the common case that
> an address isn't in any EPC section.

They're interleaved on the systems that I've seen:

Socket 0 - RAM
Socket 0 - EPC
Socket 1 - RAM
Socket 1 - EPC

It would probably be pretty expensive in terms of the physical address
remapping resources to cluster them.

2021-07-30 18:45:02

by Luck, Tony

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, Jul 30, 2021 at 09:50:59AM -0700, Dave Hansen wrote:
> On 7/30/21 9:46 AM, Sean Christopherson wrote:
> > Out of curiosity, on multi-socket systems, are EPC sections clustered in a single
> > address range, or are they interleaved with regular RAM? If they're clustered,
> > you could track the min/max across all sections to optimize the common case that
> > an address isn't in any EPC section.
>
> They're interleaved on the systems that I've seen:
>
> Socket 0 - RAM
> Socket 0 - EPC
> Socket 1 - RAM
> Socket 1 - EPC
>
> It would probably be pretty expensive in terms of the physical address
> remapping resources to cluster them.

I thought xarray was overkill ... and it is ... but it makes the code
considerably shorter/simpler!

I think I'm going to go with it. Thanks to Jarkko for the suggestion.

Also added comments based on Dave's feedback on why the function is
exported, and that sgx_is_epc_page() will be slower than people might
expect.

-Tony

From 7026de93f5bf370be9d067cdc068a4a2a54bbd3e Mon Sep 17 00:00:00 2001
From: Tony Luck <[email protected]>
Date: Fri, 30 Jul 2021 11:39:45 -0700
Subject: [PATCH] x86/sgx: Add infrastructure to identify SGX EPC pages

X86 machine check architecture reports a physical address when there
is a memory error. Handling that error requires a method to determine
whether the physical address reported is in any of the areas reserved
for EPC pages by BIOS.

SGX EPC pages do not have Linux "struct page" associated with them.

Keep track of the mapping from ranges of EPC pages to the sections
that contain them using an xarray.

Create a function sgx_is_epc_page() that simply reports whether an address
is an EPC page for use elsewhere in the kernel. The ACPI error injection
code needs this function and is typically built as a module, so export it.

Note that sgx_is_epc_page() will be slower than other similar "what type
is this page" functions that can simply check bits in the "struct page".
If there is some future performance critical user of this function it
may need to be implemented in a more efficient way.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 21 +++++++++++++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 1 +
2 files changed, 22 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 3d19bba3fa7e..d65787391b22 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -20,6 +20,7 @@ struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
static int sgx_nr_epc_sections;
static struct task_struct *ksgxd_tsk;
static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
+static DEFINE_XARRAY(epc_page_ranges);

/*
* These variables are part of the state of the reclaimer, and must be accessed
@@ -649,6 +650,9 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
}

section->phys_addr = phys_addr;
+ section->end_phys_addr = phys_addr + size - 1;
+ xa_store_range(&epc_page_ranges, section->phys_addr,
+ section->end_phys_addr, section, GFP_KERNEL);

for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
@@ -660,6 +664,23 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
return true;
}

+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+ struct sgx_epc_section *section;
+
+ section = xa_load(&epc_page_ranges, paddr);
+ if (!section)
+ return NULL;
+
+ return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+}
+
+bool sgx_is_epc_page(u64 paddr)
+{
+ return !!xa_load(&epc_page_ranges, paddr);
+}
+EXPORT_SYMBOL_GPL(sgx_is_epc_page);
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4e1a410b8a62..226b081a4d05 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -50,6 +50,7 @@ struct sgx_numa_node {
*/
struct sgx_epc_section {
unsigned long phys_addr;
+ unsigned long end_phys_addr;
void *virt_addr;
struct sgx_epc_page *pages;
struct sgx_numa_node *node;
--
2.29.2


2021-07-30 20:37:10

by Dave Hansen

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On 7/30/21 11:44 AM, Luck, Tony wrote:
> @@ -649,6 +650,9 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
> }
>
> section->phys_addr = phys_addr;
> + section->end_phys_addr = phys_addr + size - 1;
> + xa_store_range(&epc_page_ranges, section->phys_addr,
> + section->end_phys_addr, section, GFP_KERNEL);

That is compact, but how much memory does it eat? I'm a little worried
about this hunk of xa_store_range():

> do {
> xas_set_range(&xas, first, last);
> xas_store(&xas, entry);
> if (xas_error(&xas))
> goto unlock;
> first += xas_size(&xas);
> } while (first <= last);

That makes it look like it's iterating over the whole range and making
loads of individual array entries instead of doing something super clever like
keeping an extent-style structure.

Let's say we have 1TB of EPC. How big is the array to store these
indexes? Would this be more compact if instead of doing a physical
address range:

xa_store_range(&epc_page_ranges,
section->phys_addr,
section->end_phys_addr, ...);

... you did it based on PFNs:

xa_store_range(&epc_page_ranges,
section->phys_addr >> PAGE_SHIFT,
section->end_phys_addr >> PAGE_SHIFT, ...);

SGX sections are at *least* page-aligned, so this should be fine.

2021-07-30 23:37:15

by Luck, Tony

Subject: RE: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

> xa_store_range(&epc_page_ranges,
> section->phys_addr,
> section->end_phys_addr, ...);
>
> ... you did it based on PFNs:
>
> xa_store_range(&epc_page_ranges,
> section->phys_addr >> PAGE_SHIFT,
> section->end_phys_addr >> PAGE_SHIFT, ...);
>
> SGX sections are at *least* page-aligned, so this should be fine.

I found xa_dump() (hidden inside #ifdef XA_DEBUG)

Trying both with and without the >> PAGE_SHIFT made no difference
to the number of lines of console output that xa_dump() spits out.
266 either way.

There are only two ranges on this system

[ 11.937592] sgx: EPC section 0x8000c00000-0x807f7fffff
[ 11.945811] sgx: EPC section 0x10000c00000-0x1007fffffff

So I'm a little bit sad that xarray appears to have broken them up
into a bunch of pieces.

-Tony

2021-08-02 08:50:27

by Jarkko Sakkinen

Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, Jul 30, 2021 at 04:46:07PM +0000, Sean Christopherson wrote:
> On Fri, Jul 30, 2021, Jarkko Sakkinen wrote:
> > On Wed, Jul 28, 2021 at 03:19:46PM -0700, Dave Hansen wrote:
> > > On 7/28/21 1:46 PM, Tony Luck wrote:
> > > > Export a function sgx_is_epc_page() that simply reports whether an
> > > > address is an EPC page for use elsewhere in the kernel.
> > >
> > > It would be really nice to mention why this needs to be exported to
> > > modules. I assume it's the error injection driver or something that can
> > > be built as a module, but this export was a surprise when I saw it.
> > >
> > > It's probably also worth noting that this is a sloooooooow
> > > implementation compared to the core VM code that does something
> > > analogous: pfn_to_page(). It's fine for error handling, but we should
> > > probably have a comment to this effect so that more liberal use doesn't
> > > creep in anywhere.
> >
> > You could also create an xarray to track physical EPC address ranges,
> > and make the query fast.
>
> Eh, it's not _that_ slow due to the constraints on the number of EPC sections.
> The hard limit is currently '8', and practically speaking there will be one
> section per socket. Turning a linear search into a binary search in this case
> isn't going to buy much.

Also, consumes more memory.

Just pointing out that it is possible to improve without much fuss, if ever
required, for instance by using DEFINE_XARRAY() to define a file-scope
xarray.

> Out of curiosity, on multi-socket systems, are EPC sections clustered in a single
> address range, or are they interleaved with regular RAM? If they're clustered,
> you could track the min/max across all sections to optimize the common case that
> an address isn't in any EPC section.

Given that the physical address ranges of different NUMA nodes are disjoint,
and each EPC section is reserved from one such range, I would presume that
they are interleaved.

> static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
> {
> struct sgx_epc_section *section;
> int i;
>
> if (paddr < min_epc_pa || paddr > max_epc_pa)
> return NULL;
>
> for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) {
> section = &sgx_epc_sections[i];
>
> if (paddr < section->phys_addr || paddr > section->end_phys_addr)
> continue;
>
> return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
> }
>
> return NULL;
> }

/Jarkko
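
[Editor's note: the min/max fast path Sean sketches above can be modeled as a
small standalone program. The section addresses below are the two EPC ranges
reported earlier in the thread; the structure names and layout are illustrative
stand-ins, not the kernel's actual types.]

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel structures. */
#define NR_SECTIONS 2

struct epc_section {
	uint64_t phys_addr;
	uint64_t end_phys_addr;	/* inclusive */
};

/* The two EPC ranges reported on the test system in this thread. */
static struct epc_section sections[NR_SECTIONS] = {
	{ 0x8000c00000ULL,  0x807f7fffffULL },
	{ 0x10000c00000ULL, 0x1007fffffffULL },
};

/* Min/max across all sections lets the common case (plain RAM) exit early. */
static uint64_t min_epc_pa = 0x8000c00000ULL;
static uint64_t max_epc_pa = 0x1007fffffffULL;

static struct epc_section *paddr_to_section(uint64_t paddr)
{
	int i;

	if (paddr < min_epc_pa || paddr > max_epc_pa)
		return NULL;

	for (i = 0; i < NR_SECTIONS; i++) {
		if (paddr >= sections[i].phys_addr &&
		    paddr <= sections[i].end_phys_addr)
			return &sections[i];
	}
	return NULL;
}
```

With a hard limit of eight sections the linear scan is effectively constant
time, and the two range compares up front reject ordinary RAM addresses
immediately, which is the common case a machine-check handler will see.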

2021-08-02 08:52:08

by Jarkko Sakkinen

[permalink] [raw]
Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, Jul 30, 2021 at 09:50:59AM -0700, Dave Hansen wrote:
> On 7/30/21 9:46 AM, Sean Christopherson wrote:
> > Out of curiosity, on multi-socket systems, are EPC sections clustered in a single
> > address range, or are they interleaved with regular RAM? If they're clustered,
> > you could track the min/max across all sections to optimize the common case that
> > an address isn't in any EPC section.
>
> They're interleaved on the systems that I've seen:
>
> Socket 0 - RAM
> Socket 0 - EPC
> Socket 1 - RAM
> Socket 1 - EPC
>
> It would probably be pretty expensive in terms of the physical address
> remapping resources to cluster them.

If they were clustered, wouldn't that also break our initialization code
for NUMA? It is based on detecting which NUMA node's address range contains
the given EPC section.

I.e., if they were clustered, there would need to be some metadata to draw
the connection to the correct NUMA node, and no such metadata exists.

/Jarkko


2021-08-02 08:57:53

by Jarkko Sakkinen

[permalink] [raw]
Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, Jul 30, 2021 at 11:44:00AM -0700, Luck, Tony wrote:
> On Fri, Jul 30, 2021 at 09:50:59AM -0700, Dave Hansen wrote:
> > On 7/30/21 9:46 AM, Sean Christopherson wrote:
> > > Out of curiosity, on multi-socket systems, are EPC sections clustered in a single
> > > address range, or are they interleaved with regular RAM? If they're clustered,
> > > you could track the min/max across all sections to optimize the common case that
> > > an address isn't in any EPC section.
> >
> > They're interleaved on the systems that I've seen:
> >
> > Socket 0 - RAM
> > Socket 0 - EPC
> > Socket 1 - RAM
> > Socket 1 - EPC
> >
> > It would probably be pretty expensive in terms of the physical address
> > remapping resources to cluster them.
>
> I thought xarray was overkill ... and it is ... but it makes the code
> considerably shorter/simpler!
>
> I think I'm going to go with it. Thanks to Jarkko for the suggestion.

If it makes the code considerably simpler, that in my opinion justifies the
minor size increase.

/Jarkko

2021-08-03 21:47:23

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, Jul 30, 2021 at 11:35:38PM +0000, Luck, Tony wrote:
> > xa_store_range(&epc_page_ranges,
> > section->phys_addr,
> > section->end_phys_addr, ...);
> >
> > ... you did it based on PFNs:
> >
> > xa_store_range(&epc_page_ranges,
> > section->phys_addr >> PAGE_SHIFT,
> > section->end_phys_addr >> PAGE_SHIFT, ...);
> >
> > SGX sections are at *least* page-aligned, so this should be fine.
>
> I found xa_dump() (hidden inside #ifdef XA_DEBUG)
>
> Trying both with and without the >> PAGE_SHIFT made no difference
> to the number of lines of console output that xa_dump() spits out.
> 266 either way.
>
> There are only two ranges on this system
>
> [ 11.937592] sgx: EPC section 0x8000c00000-0x807f7fffff
> [ 11.945811] sgx: EPC section 0x10000c00000-0x1007fffffff
>
> So I'm a little bit sad that xarray appears to have broken them up
> into a bunch of pieces.

That's inherent in the (current) back end data structure, I'm afraid.
As a radix tree, it can only look up based on the N bits available at
each level of the tree, so if your entry is an aligned power-of-64,
everything is nice and neat, and you're a single entry at one level
of the tree. If you're an arbitrary range, things get more complicated,
and I have to do a little dance to redirect the lookup towards the
canonical entry.

Liam and I are working on a new replacement data structure called the
Maple Tree, but it's not yet ready to replace the radix tree back end.
It looks like it would be perfect for your case; there would be five
entries in it, stored in one 256-byte node:

NULL
0x8000bfffff
p1
0x807f7fffff
NULL
0x10000c00000
p2
0x1007fffffff
NULL
0xffff'ffff'ffff'ffff

It would actually turn into a linear scan, because that's just the
fastest way to find something in a list of five elements. A third
range would take us to a list of seven elements, which still fits
in a single node. Once we get to more than that, you'd have a
two-level tree, which would work until you have more than ~20 ranges.

We could do better for your case by storing 10x (start, end, p) in each
leaf node, but we're (currently) optimising for VMAs which tend to be
tightly packed, meaning that an implicit 'start' element is a better
choice as it gives us 15x (end, p) pairs.
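
[Editor's note: the splitting Matthew describes, where an arbitrary range
decomposes into multiple canonical entries, can be illustrated with a greedy
model that cuts a range into maximal naturally-aligned power-of-two blocks.
This is a simplification: the real xarray uses 64-way nodes and sibling
entries, so its internal entry counts, like the 266 lines of xa_dump() output
mentioned above, will differ from this model's.]

```c
#include <stdint.h>

/*
 * Greedy split of the inclusive range [start, end] into maximal blocks that
 * are power-of-two sized and naturally aligned -- the only shape a radix
 * tree can represent as a single canonical entry.  A simplified model, not
 * the xarray's actual algorithm.
 */
static unsigned long count_aligned_blocks(uint64_t start, uint64_t end)
{
	unsigned long blocks = 0;

	while (start <= end) {
		/* Largest block size the alignment of 'start' permits. */
		uint64_t size = start ? (start & -start) : (uint64_t)1 << 63;

		/* Shrink until the block also fits below 'end'. */
		while (size > 1 && size - 1 > end - start)
			size >>= 1;

		blocks++;
		if (end - start < size)
			break;	/* last block; avoids overflow on advance */
		start += size;
	}
	return blocks;
}
```

An aligned power-of-two range stays one block, while a misaligned range
fragments at both ends, which is why the EPC sections above end up split
into many pieces.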

2021-08-04 00:14:12

by Luck, Tony

[permalink] [raw]
Subject: Re: [PATCH v3 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages




> On Aug 3, 2021, at 14:47, Matthew Wilcox
>
> Liam and I are working on a new replacement data structure called the
> Maple Tree, but it's not yet ready to replace the radix tree back end.
> It looks like it would be perfect for your case; there would be five
> entries in it, stored in one 256-byte node:
>
> NULL
> 0x8000bfffff
> p1
> 0x807f7fffff
> NULL
> 0x10000c00000
> p2
> 0x1007fffffff
> NULL
> 0xffff'ffff'ffff'ffff
>
> It would actually turn into a linear scan, because that's just the
> fastest way to find something in a list of five elements. A third
> range would take us to a list of seven elements, which still fits
> in a single node. Once we get to more than that, you'd have a
> two-level tree, which would work until you have more than ~20 ranges.
>
> We could do better for your case by storing 10x (start, end, p) in each
> leaf node, but we're (currently) optimising for VMAs which tend to be
> tightly packed, meaning that an implicit 'start' element is a better
> choice as it gives us 15x (end, p) pairs.

That’s good to know. While the current
xarray implementation might be a bit
wasteful[1], things will get better.

I’m still going with xarray to keep the source
simple.

-Tony

[1] A few extra KBytes doesn’t sound
too terrible for managing tens of MBytes
(or more) of SGX EPC memory on a system
with half a TByte of total memory.


2021-08-27 19:58:24

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v4 0/6] Basic recovery for machine checks inside SGX

Here's version 4 (just 38 more to go if I want to meet the bar set by
the base SGX series :-) )

Changes since v3:

Dave Hansen:
1) Concerns about assigning a default value to the "owner"
pointer if the caller of sgx_alloc_epc_page() called with
a NULL value.
Resolved: Sean provided a patch to fix the only caller that
was using NULL. I merged it in here.

2) Better commit message to explain why sgx_is_epc_page() is
exported.
Done.

3) Unhappy with "void *owner" in struct sgx_epc_page. Would
be better to use an anonymous union of all the types.
Done.

Sean Christopherson:
1) Races updating bits in flags field.
Resolved: "poison" is now a separate field.

2) More races. When poison alert happens while moving
a page on/off a free/dirty list.
Resolved: Well mostly. All the run time changes are now
done while holding the node->lock. There's a gap while
moving pages from dirty list to free list. But that's
a short-ish window during boot, and the races are mostly
harmless. Worst is that we might call __eremove() for a
page that just got marked as poisoned. But then
sgx_free_epc_page() will see the poison flag and do the
right thing.

Jarkko Sakkinen:
1) Use xarray to keep track of which pages are the special
SGX EPC ones.
This spawned a short discussion on whether it was overkill. But
xarray makes the source much simpler, and there are improvements
in the pipeline for xarray that will make it handle this use
case more efficiently. So I made this change.

2) Move the sgx debugfs directory under arch_debugfs_dir.
Done.

Tony Luck (6):
x86/sgx: Provide indication of life-cycle of EPC pages
x86/sgx: Add infrastructure to identify SGX EPC pages
x86/sgx: Initial poison handling for dirty and free pages
x86/sgx: Add SGX infrastructure to recover from poison
x86/sgx: Hook sgx_memory_failure() into mainline code
x86/sgx: Add hook to error injection address validation

.../firmware-guide/acpi/apei/einj.rst | 19 +++
arch/x86/include/asm/set_memory.h | 4 +
arch/x86/kernel/cpu/sgx/encl.c | 5 +-
arch/x86/kernel/cpu/sgx/encl.h | 2 +-
arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 140 ++++++++++++++++--
arch/x86/kernel/cpu/sgx/sgx.h | 14 +-
drivers/acpi/apei/einj.c | 3 +-
include/linux/mm.h | 15 ++
mm/memory-failure.c | 19 ++-
10 files changed, 196 insertions(+), 27 deletions(-)


base-commit: e22ce8eb631bdc47a4a4ea7ecf4e4ba499db4f93
--
2.29.2

2021-08-27 19:58:24

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v4 2/6] x86/sgx: Add infrastructure to identify SGX EPC pages

X86 machine check architecture reports a physical address when there
is a memory error. Handling that error requires a method to determine
whether the physical address reported is in any of the areas reserved
for EPC pages by BIOS.

SGX EPC pages do not have Linux "struct page" associated with them.

Keep track of the mapping from ranges of EPC pages to the sections
that contain them using an xarray.

Create a function sgx_is_epc_page() that simply reports whether an address
is an EPC page for use elsewhere in the kernel. The ACPI error injection
code needs this function and is typically built as a module, so export it.

Note that sgx_is_epc_page() will be slower than other similar "what type
is this page" functions that can simply check bits in the "struct page".
If there is some future performance-critical user of this function, it may
need to be implemented in a more efficient way.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 10 ++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 1 +
2 files changed, 11 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4a5b51d16133..261f81b3f8af 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -20,6 +20,7 @@ struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
static int sgx_nr_epc_sections;
static struct task_struct *ksgxd_tsk;
static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
+static DEFINE_XARRAY(epc_page_ranges);

/*
* These variables are part of the state of the reclaimer, and must be accessed
@@ -649,6 +650,9 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
}

section->phys_addr = phys_addr;
+ section->end_phys_addr = phys_addr + size - 1;
+ xa_store_range(&epc_page_ranges, section->phys_addr,
+ section->end_phys_addr, section, GFP_KERNEL);

for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
@@ -660,6 +664,12 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
return true;
}

+bool sgx_is_epc_page(u64 paddr)
+{
+ return !!xa_load(&epc_page_ranges, paddr);
+}
+EXPORT_SYMBOL_GPL(sgx_is_epc_page);
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 8b1be10a46f6..6a55b1971956 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -54,6 +54,7 @@ struct sgx_numa_node {
*/
struct sgx_epc_section {
unsigned long phys_addr;
+ unsigned long end_phys_addr;
void *virt_addr;
struct sgx_epc_page *pages;
struct sgx_numa_node *node;
--
2.29.2
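
[Editor's note: the xa_store_range()/xa_load() pattern in this patch amounts
to an interval map: every address in [first, last] resolves to the same
section pointer. A minimal user-space sketch of those semantics follows, with
a flat list standing in for the xarray; all names are illustrative.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the xa_store_range()/xa_load() pattern in the patch. */
#define MAX_RANGES 8

struct range_entry {
	uint64_t first, last;	/* inclusive */
	void *value;
};

static struct range_entry ranges[MAX_RANGES];
static int nr_ranges;

/* Map every address in [first, last] to one pointer. */
static int store_range(uint64_t first, uint64_t last, void *value)
{
	if (nr_ranges >= MAX_RANGES || first > last)
		return -1;
	ranges[nr_ranges++] = (struct range_entry){ first, last, value };
	return 0;
}

/* Look up the value covering 'addr', or NULL if none does. */
static void *load(uint64_t addr)
{
	for (int i = 0; i < nr_ranges; i++)
		if (addr >= ranges[i].first && addr <= ranges[i].last)
			return ranges[i].value;
	return NULL;
}

/* The patch's sgx_is_epc_page() is then just a NULL check on the lookup. */
static bool is_epc_page(uint64_t paddr)
{
	return load(paddr) != NULL;
}
```

Storing one entry per section at boot and doing a single lookup per query is
exactly the shape of the code above; the xarray just provides the same
semantics with a tree instead of a list.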

2021-08-27 19:58:24

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v4 1/6] x86/sgx: Provide indication of life-cycle of EPC pages

SGX EPC pages go through the following life cycle:

DIRTY ---> FREE ---> IN-USE --\
^ |
\-----------------/

Recovery action for poison for a DIRTY or FREE page is simple. Just
make sure never to allocate the page. IN-USE pages need some extra
handling.

It would be good to use the sgx_epc_page->owner field as an indicator
of where an EPC page currently is in that cycle (owner != NULL means
the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
that passes NULL.

Since there are multiple uses of the "owner" field with different types
change the sgx_epc_page structure to define an anonymous union with
each of the uses explicitly called out.

Start epc_pages out with a non-NULL owner while they are in DIRTY state.

Fix up the one holdout to provide a non-NULL owner.

Refactor the allocation sequence so that changes to/from NULL
value happen together with adding/removing the epc_page from
a free list while the node->lock is held.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/encl.c | 5 +++--
arch/x86/kernel/cpu/sgx/encl.h | 2 +-
arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 23 ++++++++++++-----------
arch/x86/kernel/cpu/sgx/sgx.h | 12 ++++++++----
5 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 001808e3901c..ad8c61933b0a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -667,6 +667,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,

/**
* sgx_alloc_va_page() - Allocate a Version Array (VA) page
+ * @va_page: struct sgx_va_page connected to this VA page
*
* Allocate a free EPC page and convert it to a Version Array (VA) page.
*
@@ -674,12 +675,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
* a VA page,
* -errno otherwise
*/
-struct sgx_epc_page *sgx_alloc_va_page(void)
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page)
{
struct sgx_epc_page *epc_page;
int ret;

- epc_page = sgx_alloc_epc_page(NULL, true);
+ epc_page = sgx_alloc_epc_page(va_page, true);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..3d12dbeae14a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -111,7 +111,7 @@ void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
int sgx_encl_test_and_clear_young(struct mm_struct *mm,
struct sgx_encl_page *page);

-struct sgx_epc_page *sgx_alloc_va_page(void);
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page);
unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
bool sgx_va_page_full(struct sgx_va_page *va_page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..655ce0bb069d 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -30,7 +30,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
if (!va_page)
return ERR_PTR(-ENOMEM);

- va_page->epc_page = sgx_alloc_va_page();
+ va_page->epc_page = sgx_alloc_va_page(va_page);
if (IS_ERR(va_page->epc_page)) {
err = ERR_CAST(va_page->epc_page);
kfree(va_page);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 63d3de02bbcc..4a5b51d16133 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -457,7 +457,7 @@ static bool __init sgx_page_reclaimer_init(void)
return true;
}

-static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
+static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(void *private, int nid)
{
struct sgx_numa_node *node = &sgx_numa_nodes[nid];
struct sgx_epc_page *page = NULL;
@@ -471,6 +471,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)

page = list_first_entry(&node->free_page_list, struct sgx_epc_page, list);
list_del_init(&page->list);
+ page->private = private;
sgx_nr_free_pages--;

spin_unlock(&node->lock);
@@ -480,6 +481,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)

/**
* __sgx_alloc_epc_page() - Allocate an EPC page
+ * @owner: the owner of the EPC page
*
* Iterate through NUMA nodes and reserve ia free EPC page to the caller. Start
* from the NUMA node, where the caller is executing.
@@ -488,14 +490,14 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
* - an EPC page: A borrowed EPC pages were available.
* - NULL: Out of EPC pages.
*/
-struct sgx_epc_page *__sgx_alloc_epc_page(void)
+struct sgx_epc_page *__sgx_alloc_epc_page(void *private)
{
struct sgx_epc_page *page;
int nid_of_current = numa_node_id();
int nid = nid_of_current;

if (node_isset(nid_of_current, sgx_numa_mask)) {
- page = __sgx_alloc_epc_page_from_node(nid_of_current);
+ page = __sgx_alloc_epc_page_from_node(private, nid_of_current);
if (page)
return page;
}
@@ -506,7 +508,7 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
if (nid == nid_of_current)
break;

- page = __sgx_alloc_epc_page_from_node(nid);
+ page = __sgx_alloc_epc_page_from_node(private, nid);
if (page)
return page;
}
@@ -559,7 +561,7 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)

/**
* sgx_alloc_epc_page() - Allocate an EPC page
- * @owner: the owner of the EPC page
+ * @private: per-caller private data
* @reclaim: reclaim pages if necessary
*
* Iterate through EPC sections and borrow a free EPC page to the caller. When a
@@ -574,16 +576,14 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
* an EPC page,
* -errno on error
*/
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
+struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim)
{
struct sgx_epc_page *page;

for ( ; ; ) {
- page = __sgx_alloc_epc_page();
- if (!IS_ERR(page)) {
- page->owner = owner;
+ page = __sgx_alloc_epc_page(private);
+ if (!IS_ERR(page))
break;
- }

if (list_empty(&sgx_active_page_list))
return ERR_PTR(-ENOMEM);
@@ -624,6 +624,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)

spin_lock(&node->lock);

+ page->private = NULL;
list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;

@@ -652,7 +653,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
section->pages[i].flags = 0;
- section->pages[i].owner = NULL;
+ section->pages[i].private = "dirty";
list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
}

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..8b1be10a46f6 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,8 +28,12 @@

struct sgx_epc_page {
unsigned int section;
- unsigned int flags;
- struct sgx_encl_page *owner;
+ int flags;
+ union {
+ void *private;
+ struct sgx_encl_page *owner;
+ struct sgx_encl_page *vepc;
+ };
struct list_head list;
};

@@ -77,12 +81,12 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
return section->virt_addr + index * PAGE_SIZE;
}

-struct sgx_epc_page *__sgx_alloc_epc_page(void);
+struct sgx_epc_page *__sgx_alloc_epc_page(void *private);
void sgx_free_epc_page(struct sgx_epc_page *page);

void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
+struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim);

#ifdef CONFIG_X86_SGX_KVM
int __init sgx_vepc_init(void);
--
2.29.2
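
[Editor's note: the life cycle and the owner-as-state-indicator idea from the
commit message above can be sketched in a few lines of user-space C. Locking
is omitted and all names are illustrative; the point is only that a page is
IN-USE exactly when its private pointer is non-NULL, and that the pointer
changes in the same step as the free-list membership.]

```c
#include <stddef.h>

/*
 * Simplified model of the EPC page life cycle from the commit message:
 * DIRTY ---> FREE ---> IN-USE ---> FREE ---> ...
 */
struct epc_page {
	void *private;		/* NULL on the free list, non-NULL otherwise */
	struct epc_page *next;	/* singly linked free list for the sketch */
};

static struct epc_page *free_list;

/* Boot-time state: dirty pages carry a sentinel owner until sanitized. */
static void add_dirty_page(struct epc_page *page)
{
	page->private = "dirty";
}

/* DIRTY/IN-USE -> FREE: clear the owner and link onto the free list. */
static void free_page(struct epc_page *page)
{
	page->private = NULL;
	page->next = free_list;
	free_list = page;
}

/*
 * FREE -> IN-USE: the owner is set in the same step that unlinks the page,
 * mirroring how the patch does both while holding node->lock.
 */
static struct epc_page *alloc_page(void *private)
{
	struct epc_page *page = free_list;

	if (!page)
		return NULL;
	free_list = page->next;
	page->private = private;
	return page;
}
```

Keeping the pointer update and the list move atomic (under node->lock in the
real code) is what lets the poison handler use "private == NULL" as a
reliable "this page is free" test.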

2021-08-27 19:58:27

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v4 4/6] x86/sgx: Add SGX infrastructure to recover from poison

Provide a recovery function sgx_memory_failure(). If the poison was
consumed synchronously then send a SIGBUS. Note that, unlike the handling
of poison outside of SGX enclaves, the virtual address of the access is
not included with the SIGBUS. This doesn't matter, as addresses of
code/data inside an enclave are of little to no use to code executing
outside the (now dead) enclave.

Poison found in a free page results in the page being moved from the
free list to the poison page list.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 77 ++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c08df4e35ff0..d9fe08f68d13 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -682,6 +682,83 @@ bool sgx_is_epc_page(u64 paddr)
}
EXPORT_SYMBOL_GPL(sgx_is_epc_page);

+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+ struct sgx_epc_section *section;
+
+ section = xa_load(&epc_page_ranges, paddr);
+ if (!section)
+ return NULL;
+
+ return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+}
+
+/*
+ * Called in process context to handle a hardware reported
+ * error in an SGX EPC page.
+ * If the MF_ACTION_REQUIRED bit is set in flags, then the
+ * context is the task that consumed the poison data. Otherwise
+ * this is called from a kernel thread unrelated to the page.
+ */
+int sgx_memory_failure(unsigned long pfn, int flags)
+{
+ struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
+ struct sgx_epc_section *section;
+ struct sgx_numa_node *node;
+
+ /*
+ * mm/memory-failure.c calls this routine for all errors
+ * where there isn't a "struct page" for the address. But that
+ * includes other address ranges besides SGX.
+ */
+ if (!page)
+ return -ENXIO;
+
+ /*
+ * If poison was consumed synchronously. Send a SIGBUS to
+ * the task. Hardware has already exited the SGX enclave and
+ * will not allow re-entry to an enclave that has a memory
+ * error. The signal may help the task understand why the
+ * enclave is broken.
+ */
+ if (flags & MF_ACTION_REQUIRED)
+ force_sig(SIGBUS);
+
+ section = &sgx_epc_sections[page->section];
+ node = section->node;
+
+ spin_lock(&node->lock);
+
+ /* Already poisoned? Nothing more to do */
+ if (page->poison)
+ goto out;
+
+ page->poison = 1;
+
+ /*
+ * If there is no owner, then the page is on a free list.
+ * Move it to the poison page list.
+ */
+ if (!page->private) {
+ list_del(&page->list);
+ list_add(&page->list, &sgx_poison_page_list);
+ goto out;
+ }
+
+ /*
+ * TBD: Add additional plumbing to enable pre-emptive
+ * action for asynchronous poison notification. Until
+ * then just hope that the poison:
+ * a) is not accessed - sgx_free_epc_page() will deal with it
+ * when the user gives it back
+ * b) results in a recoverable machine check rather than
+ * a fatal one
+ */
+out:
+ spin_unlock(&node->lock);
+ return 0;
+}
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
--
2.29.2
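
[Editor's note: the decision tree in sgx_memory_failure() above can be
modeled in user space. The SIGBUS and the list move are replaced by flags,
the node lock is omitted, and the MF_ACTION_REQUIRED value here is a
placeholder rather than the kernel's actual definition.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Placeholder value; the kernel defines MF_ACTION_REQUIRED elsewhere. */
#define MF_ACTION_REQUIRED 0x1

struct epc_page {
	int poison;
	void *private;		/* NULL => page is on the free list */
	int on_poison_list;	/* stand-in for the list move in the patch */
};

static bool sigbus_sent;

/* Model of the decision tree in sgx_memory_failure(); locking omitted. */
static int handle_epc_poison(struct epc_page *page, int flags)
{
	if (!page)
		return -1;	/* not EPC: caller tries the other handlers */

	/* Synchronous consumption: the task gets a SIGBUS (modeled here). */
	if (flags & MF_ACTION_REQUIRED)
		sigbus_sent = true;

	if (page->poison)
		return 0;	/* already poisoned, nothing more to do */
	page->poison = 1;

	/* A free page can be quarantined immediately... */
	if (!page->private) {
		page->on_poison_list = 1;
		return 0;
	}

	/* ...an in-use page is left for sgx_free_epc_page() to quarantine. */
	return 0;
}
```

The key asymmetry: a free page is moved to the poison list right away, while
an in-use page merely gets its poison flag set and is caught later when it is
returned.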

2021-08-27 19:58:28

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

Add a call inside memory_failure() to check if the address is an SGX
EPC page and handle it.

Note that SGX EPC pages do not have a "struct page" entry, so the hook
goes in at the same point as the device mapping hook.

Pull the acquisition of the mutex earlier so that the SGX errors are also
protected by it.

Make set_mce_nospec() skip SGX pages when trying to adjust
the 1:1 map.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/include/asm/set_memory.h | 4 ++++
include/linux/mm.h | 15 +++++++++++++++
mm/memory-failure.c | 19 +++++++++++++------
3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 43fa081a1adb..801af8f30c83 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -2,6 +2,7 @@
#ifndef _ASM_X86_SET_MEMORY_H
#define _ASM_X86_SET_MEMORY_H

+#include <linux/mm.h>
#include <asm/page.h>
#include <asm-generic/set_memory.h>

@@ -98,6 +99,9 @@ static inline int set_mce_nospec(unsigned long pfn, bool unmap)
unsigned long decoy_addr;
int rc;

+ /* SGX pages are not in the 1:1 map */
+ if (sgx_is_epc_page(pfn << PAGE_SHIFT))
+ return 0;
/*
* We would like to just call:
* set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7ca22e6e694a..2ff599bcf8c2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3283,5 +3283,20 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
return 0;
}

+#ifdef CONFIG_X86_SGX
+int sgx_memory_failure(unsigned long pfn, int flags);
+bool sgx_is_epc_page(u64 paddr);
+#else
+static inline int sgx_memory_failure(unsigned long pfn, int flags)
+{
+ return -ENXIO;
+}
+
+static inline bool sgx_is_epc_page(u64 paddr)
+{
+ return false;
+}
+#endif
+
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 470400cc7513..ce04debd18f6 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1632,21 +1632,28 @@ int memory_failure(unsigned long pfn, int flags)
if (!sysctl_memory_failure_recovery)
panic("Memory failure on page %lx", pfn);

+ mutex_lock(&mf_mutex);
+
p = pfn_to_online_page(pfn);
if (!p) {
+ res = sgx_memory_failure(pfn, flags);
+ if (res == 0)
+ goto unlock_mutex;
+
if (pfn_valid(pfn)) {
pgmap = get_dev_pagemap(pfn, NULL);
- if (pgmap)
- return memory_failure_dev_pagemap(pfn, flags,
- pgmap);
+ if (pgmap) {
+ res = memory_failure_dev_pagemap(pfn, flags,
+ pgmap);
+ goto unlock_mutex;
+ }
}
pr_err("Memory failure: %#lx: memory outside kernel control\n",
pfn);
- return -ENXIO;
+ res = -ENXIO;
+ goto unlock_mutex;
}

- mutex_lock(&mf_mutex);
-
try_again:
if (PageHuge(p)) {
res = memory_failure_hugetlb(pfn, flags);
--
2.29.2

2021-08-27 19:59:01

by Luck, Tony

[permalink] [raw]
Subject: [PATCH v4 6/6] x86/sgx: Add hook to error injection address validation

SGX reserved memory does not appear in the standard address maps.

Add hook to call into the SGX code to check if an address is located
in SGX memory.

There are other challenges in injecting errors into SGX. Update the
documentation with a sequence of operations to inject.

Signed-off-by: Tony Luck <[email protected]>
---
.../firmware-guide/acpi/apei/einj.rst | 19 +++++++++++++++++++
drivers/acpi/apei/einj.c | 3 ++-
2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
index c042176e1707..55e2331a6438 100644
--- a/Documentation/firmware-guide/acpi/apei/einj.rst
+++ b/Documentation/firmware-guide/acpi/apei/einj.rst
@@ -181,5 +181,24 @@ You should see something like this in dmesg::
[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)

+Special notes for injection into SGX enclaves:
+
+There may be a separate BIOS setup option to enable SGX injection.
+
+The injection process consists of setting some special memory controller
+trigger that will inject the error on the next write to the target
+address. But the h/w prevents any software outside of an SGX enclave
+from accessing enclave pages (even BIOS SMM mode).
+
+The following sequence can be used:
+ 1) Determine physical address of enclave page
+ 2) Use "notrigger=1" mode to inject (this will setup
+ the injection address, but will not actually inject)
+ 3) Enter the enclave
+ 4) Store data to the virtual address matching physical address from step 1
+ 5) Execute CLFLUSH for that virtual address
+ 6) Spin delay for 250ms
+ 7) Read from the virtual address. This will trigger the error
+
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5 and ACPI 5.0, section 18.6.
diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
index 2882450c443e..cd7cffc955bf 100644
--- a/drivers/acpi/apei/einj.c
+++ b/drivers/acpi/apei/einj.c
@@ -544,7 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
!= REGION_INTERSECTS) &&
(region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
- != REGION_INTERSECTS)))
+ != REGION_INTERSECTS) &&
+ !sgx_is_epc_page(base_addr)))
return -EINVAL;

inject:
--
2.29.2

2021-08-27 20:30:27

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] Basic recovery for machine checks inside SGX

On Fri, Aug 27, 2021 at 12:55:37PM -0700, Tony Luck wrote:
> Here's version 4 (just 38 more to go if I want to meet the bar set by
> the base SGX series :-) )

You're off by 1:

https://lore.kernel.org/lkml/[email protected]/

you have only just 37 more.

:-P

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-27 20:44:45

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] Basic recovery for machine checks inside SGX

On Fri, Aug 27, 2021, Borislav Petkov wrote:
> On Fri, Aug 27, 2021 at 12:55:37PM -0700, Tony Luck wrote:
> > Here's version 4 (just 38 more to go if I want to meet the bar set by
> > the base SGX series :-) )
>
> You're off by 1:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> you have only just 37 more.
>
> :-P

LOL, sorry for setting such high standards.

2021-09-01 02:08:43

by Jarkko Sakkinen

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] Basic recovery for machine checks inside SGX

On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> Here's version 4 (just 38 more to go if I want to meet the bar set by
> the base SGX series :-) )
>
> Changes since v3:
>
> Dave Hansen:
> 1) Concerns about assigning a default value to the "owner"
> pointer if the caller of sgx_alloc_epc_page() called with
> a NULL value.
> Resolved: Sean provided a patch to fix the only caller that
> was using NULL. I merged it in here.
>
> 2) Better commit message to explain why sgx_is_epc_page() is
> exported.
> Done.
>
> 3) Unhappy with "void *owner" in struct sgx_epc_page. Would
> be better to use an anonymous union of all the types.
> Done.
>
> Sean Christopherson:
> 1) Races updating bits in flags field.
> Resolved: "poison" is now a separate field.
>
> 2) More races. When poison alert happens while moving
> a page on/off a free/dirty list.
> Resolved: Well mostly. All the run time changes are now
> done while holding the node->lock. There's a gap while
> moving pages from dirty list to free list. But that's
> a short-ish window during boot, and the races are mostly
> harmless. Worst is that we might call __eremove() for a
> page that just got marked as poisoned. But then
> sgx_free_epc_page() will see the poison flag and do the
> right thing.
>
> Jarkko Sakkinen:
> 1) Use xarray to keep track of which pages are the special
> SGX EPC ones.
> This spawned a short discussion on whether it was overkill. But
> xarray makes the source much simpler, and there are improvements
> in the pipeline for xarray that will make it handle this use
> case more efficiently. So I made this change.
>
> 2) Move the sgx debugfs directory under arch_debugfs_dir.
> Done.
>
> Tony Luck (6):
> x86/sgx: Provide indication of life-cycle of EPC pages
> x86/sgx: Add infrastructure to identify SGX EPC pages
> x86/sgx: Initial poison handling for dirty and free pages
> x86/sgx: Add SGX infrastructure to recover from poison
> x86/sgx: Hook sgx_memory_failure() into mainline code
> x86/sgx: Add hook to error injection address validation
>
> .../firmware-guide/acpi/apei/einj.rst | 19 +++
> arch/x86/include/asm/set_memory.h | 4 +
> arch/x86/kernel/cpu/sgx/encl.c | 5 +-
> arch/x86/kernel/cpu/sgx/encl.h | 2 +-
> arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
> arch/x86/kernel/cpu/sgx/main.c | 140 ++++++++++++++++--
> arch/x86/kernel/cpu/sgx/sgx.h | 14 +-
> drivers/acpi/apei/einj.c | 3 +-
> include/linux/mm.h | 15 ++
> mm/memory-failure.c | 19 ++-
> 10 files changed, 196 insertions(+), 27 deletions(-)
>
>
> base-commit: e22ce8eb631bdc47a4a4ea7ecf4e4ba499db4f93

Would be nice to get this also to [email protected] in
future.

/Jarkko

2021-09-01 04:00:14

by Jarkko Sakkinen

Subject: Re: [PATCH v4 1/6] x86/sgx: Provide indication of life-cycle of EPC pages

On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> SGX EPC pages go through the following life cycle:
>
> DIRTY ---> FREE ---> IN-USE --\
> ^ |
> \-----------------/
>
> Recovery action for poison for a DIRTY or FREE page is simple. Just
> make sure never to allocate the page. IN-USE pages need some extra
> handling.
>
> It would be good to use the sgx_epc_page->owner field as an indicator
> of where an EPC page is currently in that cycle (owner != NULL means
> the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
> that calls with NULL.
>
> Since there are multiple uses of the "owner" field with different types
> change the sgx_epc_page structure to define an anonymous union with
> each of the uses explicitly called out.
>
> Start epc_pages out with a non-NULL owner while they are in DIRTY state.
>
> Fix up the one holdout to provide a non-NULL owner.
>
> Refactor the allocation sequence so that changes to/from NULL
> value happen together with adding/removing the epc_page from
> a free list while the node->lock is held.
>
> Signed-off-by: Tony Luck <[email protected]>
> ---
> arch/x86/kernel/cpu/sgx/encl.c | 5 +++--
> arch/x86/kernel/cpu/sgx/encl.h | 2 +-
> arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
> arch/x86/kernel/cpu/sgx/main.c | 23 ++++++++++++-----------
> arch/x86/kernel/cpu/sgx/sgx.h | 12 ++++++++----
> 5 files changed, 25 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 001808e3901c..ad8c61933b0a 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -667,6 +667,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
>
> /**
> * sgx_alloc_va_page() - Allocate a Version Array (VA) page
> + * @va_page: struct sgx_va_page connected to this VA page
> *
> * Allocate a free EPC page and convert it to a Version Array (VA) page.
> *
> @@ -674,12 +675,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
> * a VA page,
> * -errno otherwise
> */
> -struct sgx_epc_page *sgx_alloc_va_page(void)
> +struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page)

Why not just

struct sgx_epc_page *sgx_alloc_va_page(void *owner)

> {
> struct sgx_epc_page *epc_page;
> int ret;
>
> - epc_page = sgx_alloc_epc_page(NULL, true);
> + epc_page = sgx_alloc_epc_page(va_page, true);

epc_page = sgx_alloc_epc_page(owner, true);

> if (IS_ERR(epc_page))
> return ERR_CAST(epc_page);

This function does not do anything with the internals of struct sgx_va_page.

> diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
> index fec43ca65065..3d12dbeae14a 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.h
> +++ b/arch/x86/kernel/cpu/sgx/encl.h
> @@ -111,7 +111,7 @@ void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
> int sgx_encl_test_and_clear_young(struct mm_struct *mm,
> struct sgx_encl_page *page);
>
> -struct sgx_epc_page *sgx_alloc_va_page(void);
> +struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page);
> unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
> void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
> bool sgx_va_page_full(struct sgx_va_page *va_page);
> diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> index 83df20e3e633..655ce0bb069d 100644
> --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> @@ -30,7 +30,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
> if (!va_page)
> return ERR_PTR(-ENOMEM);
>
> - va_page->epc_page = sgx_alloc_va_page();
> + va_page->epc_page = sgx_alloc_va_page(va_page);
> if (IS_ERR(va_page->epc_page)) {
> err = ERR_CAST(va_page->epc_page);
> kfree(va_page);
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 63d3de02bbcc..4a5b51d16133 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -457,7 +457,7 @@ static bool __init sgx_page_reclaimer_init(void)
> return true;
> }
>
> -static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
> +static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(void *private, int nid)
> {
> struct sgx_numa_node *node = &sgx_numa_nodes[nid];
> struct sgx_epc_page *page = NULL;
> @@ -471,6 +471,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
>
> page = list_first_entry(&node->free_page_list, struct sgx_epc_page, list);
> list_del_init(&page->list);
> + page->private = private;
> sgx_nr_free_pages--;
>
> spin_unlock(&node->lock);
> @@ -480,6 +481,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
>
> /**
> * __sgx_alloc_epc_page() - Allocate an EPC page
> + * @owner: the owner of the EPC page
> *
> * Iterate through NUMA nodes and reserve ia free EPC page to the caller. Start
> * from the NUMA node, where the caller is executing.
> @@ -488,14 +490,14 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
> * - an EPC page: A borrowed EPC pages were available.
> * - NULL: Out of EPC pages.
> */
> -struct sgx_epc_page *__sgx_alloc_epc_page(void)
> +struct sgx_epc_page *__sgx_alloc_epc_page(void *private)
> {
> struct sgx_epc_page *page;
> int nid_of_current = numa_node_id();
> int nid = nid_of_current;
>
> if (node_isset(nid_of_current, sgx_numa_mask)) {
> - page = __sgx_alloc_epc_page_from_node(nid_of_current);
> + page = __sgx_alloc_epc_page_from_node(private, nid_of_current);
> if (page)
> return page;
> }
> @@ -506,7 +508,7 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
> if (nid == nid_of_current)
> break;
>
> - page = __sgx_alloc_epc_page_from_node(nid);
> + page = __sgx_alloc_epc_page_from_node(private, nid);
> if (page)
> return page;
> }
> @@ -559,7 +561,7 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
>
> /**
> * sgx_alloc_epc_page() - Allocate an EPC page
> - * @owner: the owner of the EPC page
> + * @private: per-caller private data
> * @reclaim: reclaim pages if necessary
> *
> * Iterate through EPC sections and borrow a free EPC page to the caller. When a
> @@ -574,16 +576,14 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
> * an EPC page,
> * -errno on error
> */
> -struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
> +struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim)
> {
> struct sgx_epc_page *page;
>
> for ( ; ; ) {
> - page = __sgx_alloc_epc_page();
> - if (!IS_ERR(page)) {
> - page->owner = owner;
> + page = __sgx_alloc_epc_page(private);
> + if (!IS_ERR(page))
> break;
> - }
>
> if (list_empty(&sgx_active_page_list))
> return ERR_PTR(-ENOMEM);
> @@ -624,6 +624,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
>
> spin_lock(&node->lock);
>
> + page->private = NULL;
> list_add_tail(&page->list, &node->free_page_list);
> sgx_nr_free_pages++;
>
> @@ -652,7 +653,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
> for (i = 0; i < nr_pages; i++) {
> section->pages[i].section = index;
> section->pages[i].flags = 0;
> - section->pages[i].owner = NULL;
> + section->pages[i].private = "dirty";

#define DIRTY ((void *)-1)

section->pages[i].owner = DIRTY;


> list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
> }
>
> diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
> index 4628acec0009..8b1be10a46f6 100644
> --- a/arch/x86/kernel/cpu/sgx/sgx.h
> +++ b/arch/x86/kernel/cpu/sgx/sgx.h
> @@ -28,8 +28,12 @@
>
> struct sgx_epc_page {
> unsigned int section;
> - unsigned int flags;
> - struct sgx_encl_page *owner;
> + int flags;
> + union {
> + void *private;
> + struct sgx_encl_page *owner;
> + struct sgx_encl_page *vepc;
> + };


Why not just keep it as void *owner, and cast it as seen
appropriate?

> struct list_head list;
> };
>
> @@ -77,12 +81,12 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
> return section->virt_addr + index * PAGE_SIZE;
> }
>
> -struct sgx_epc_page *__sgx_alloc_epc_page(void);
> +struct sgx_epc_page *__sgx_alloc_epc_page(void *private);
> void sgx_free_epc_page(struct sgx_epc_page *page);
>
> void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
> int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
> -struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
> +struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim);
>
> #ifdef CONFIG_X86_SGX_KVM
> int __init sgx_vepc_init(void);


/Jarkko
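
The union-versus-void-pointer trade-off Jarkko raises can be seen in isolation with a stand-alone sketch (the struct and field names below are simplified stand-ins, not the final kernel layout):

```c
#include <assert.h>

struct encl_page { int id; };

/* Variant A: anonymous union -- each user of the field reads/writes a
 * correctly typed member, with no casts at the call sites. */
struct epc_page_a {
	unsigned int section;
	union {
		void *private;
		struct encl_page *owner;
	};
};

/* Variant B: single void pointer -- callers cast as appropriate. */
struct epc_page_b {
	unsigned int section;
	void *owner;
};
```

Variant A gives typed access at each call site; variant B keeps the struct simpler at the cost of a cast per use, which is what v5 of the series eventually went back to.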

2021-09-01 04:36:11

by Jarkko Sakkinen

Subject: Re: [PATCH v4 2/6] x86/sgx: Add infrastructure to identify SGX EPC pages

On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> X86 machine check architecture reports a physical address when there
> is a memory error. Handling that error requires a method to determine
> whether the physical address reported is in any of the areas reserved
> for EPC pages by BIOS.
>
> SGX EPC pages do not have Linux "struct page" associated with them.
>
> Keep track of the mapping from ranges of EPC pages to the sections
> that contain them using an xarray.
>
> Create a function sgx_is_epc_page() that simply reports whether an address
> is an EPC page for use elsewhere in the kernel. The ACPI error injection
> code needs this function and is typically built as a module, so export it.
>
> Note that sgx_is_epc_page() will be slower than other similar "what type
> is this page" functions that can simply check bits in the "struct page".
> If there is some future performance critical user of this function it
> may need to be implemented in a more efficient way.
>
> Signed-off-by: Tony Luck <[email protected]>
> ---
> arch/x86/kernel/cpu/sgx/main.c | 10 ++++++++++
> arch/x86/kernel/cpu/sgx/sgx.h | 1 +
> 2 files changed, 11 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 4a5b51d16133..261f81b3f8af 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -20,6 +20,7 @@ struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
> static int sgx_nr_epc_sections;
> static struct task_struct *ksgxd_tsk;
> static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
> +static DEFINE_XARRAY(epc_page_ranges);

Maybe we could just call this "sgx_epc_address_space"?

/Jarkko
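
The idea behind the xarray is to register each EPC section's physical range once at setup and answer "is this an EPC address?" with a single lookup at error time. A minimal user-space model (the addresses and helper names here are invented for illustration; the kernel code stores section pointers with xa_store_range() and probes with xa_load() rather than scanning):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the xarray contents: each EPC section is a physical
 * address range registered at boot. */
struct epc_section_range {
	unsigned long long start;	/* first byte of the section */
	unsigned long long end;		/* one past the last byte */
};

static const struct epc_section_range sections[] = {
	{ 0x80000000ULL, 0x84000000ULL },
	{ 0x90000000ULL, 0x92000000ULL },
};

/* Simplified model of sgx_is_epc_page(): report whether a physical
 * address falls inside any registered EPC section. */
static bool is_epc_addr(unsigned long long paddr)
{
	for (size_t i = 0; i < sizeof(sections) / sizeof(sections[0]); i++)
		if (paddr >= sections[i].start && paddr < sections[i].end)
			return true;
	return false;
}
```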

2021-09-01 14:54:25

by Luck, Tony

Subject: RE: [PATCH v4 0/6] Basic recovery for machine checks inside SGX

> Would be nice to get this also to [email protected] in
> future.

Will add to list for next version.

Thanks

-Tony

2021-09-03 06:15:28

by Jarkko Sakkinen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> +#ifdef CONFIG_X86_SGX
> +int sgx_memory_failure(unsigned long pfn, int flags);
> +bool sgx_is_epc_page(u64 paddr);
> +#else
> +static inline int sgx_memory_failure(unsigned long pfn, int flags)
> +{
> + return -ENXIO;
> +}
> +
> +static inline bool sgx_is_epc_page(u64 paddr)
> +{
> + return false;
> +}
> +#endif

These decl's should be in arch/x86/include/asm/sgx.h, and as part of
patch that contains the implementations.

/Jarkko

2021-09-03 07:51:08

by Jarkko Sakkinen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On Fri, 2021-09-03 at 09:12 +0300, Jarkko Sakkinen wrote:
> On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> > +#ifdef CONFIG_X86_SGX
> > +int sgx_memory_failure(unsigned long pfn, int flags);
> > +bool sgx_is_epc_page(u64 paddr);
> > +#else
> > +static inline int sgx_memory_failure(unsigned long pfn, int flags)
> > +{
> > + return -ENXIO;
> > +}
> > +
> > +static inline bool sgx_is_epc_page(u64 paddr)
> > +{
> > + return false;
> > +}
> > +#endif
>
> These decl's should be in arch/x86/include/asm/sgx.h, and as part of
> patch that contains the implementations.

To align with this, I wrote a small patch:

https://lore.kernel.org/linux-sgx/[email protected]/T/#u

/Jarkko

2021-09-06 18:53:05

by Luck, Tony

Subject: RE: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On Fri, 2021-09-03 at 09:12 +0300, Jarkko Sakkinen wrote:
> On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> > +#ifdef CONFIG_X86_SGX
> > +int sgx_memory_failure(unsigned long pfn, int flags);
> > +bool sgx_is_epc_page(u64 paddr);
> > +#else
> > +static inline int sgx_memory_failure(unsigned long pfn, int flags)
> > +{
> > + return -ENXIO;
> > +}
> > +
> > +static inline bool sgx_is_epc_page(u64 paddr)
> > +{
> > + return false;
> > +}
> > +#endif
>
> These decl's should be in arch/x86/include/asm/sgx.h, and as part of
> patch that contains the implementations.

But I need to use these functions in arch independent code. Specifically in
mm/memory-failure.c and drivers/acpi/apei/einj.c

If I just #include <asm/sgx.h> in those files I'll break the build for other
architectures.

-Tony

2021-09-07 14:08:25

by Jarkko Sakkinen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On Mon, 2021-09-06 at 18:51 +0000, Luck, Tony wrote:
> On Fri, 2021-09-03 at 09:12 +0300, Jarkko Sakkinen wrote:
> > On Fri, 2021-08-27 at 12:55 -0700, Tony Luck wrote:
> > > +#ifdef CONFIG_X86_SGX
> > > +int sgx_memory_failure(unsigned long pfn, int flags);
> > > +bool sgx_is_epc_page(u64 paddr);
> > > +#else
> > > +static inline int sgx_memory_failure(unsigned long pfn, int flags)
> > > +{
> > > + return -ENXIO;
> > > +}
> > > +
> > > +static inline bool sgx_is_epc_page(u64 paddr)
> > > +{
> > > + return false;
> > > +}
> > > +#endif
> >
> > These decl's should be in arch/x86/include/asm/sgx.h, and as part of
> > patch that contains the implementations.
>
> But I need to use these functions in arch independent code. Specifically in
> mm/memory-failure.c and drivers/acpi/apei/einj.c
>
> If I just #include <asm/sgx.h> in those files I'll break the build for other
> architectures.

What does specifically break the build?

/Jarkko

2021-09-07 14:33:02

by Dave Hansen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On 9/7/21 7:07 AM, Jarkko Sakkinen wrote:
>> If I just #include <asm/sgx.h> in those files I'll break the build for other
>> architectures.
> What does specifically break the build?

Remember, our x86 "<asm/sgx.h>" is:

arch/x86/include/asm/sgx.h

On powerpc, "<asm/sgx.h>" is:

arch/powerpc/include/asm/sgx.h

You'll get a file not found error looking for sgx.h.

That said... Tony, it's probably a bit more friendly if the mm.h code
you add:

> +#ifdef CONFIG_X86_SGX
> +int sgx_memory_failure(unsigned long pfn, int flags);
> +bool sgx_is_epc_page(u64 paddr);
> +#else
> +static inline int sgx_memory_failure(unsigned long pfn, int flags)
> +{
> + return -ENXIO;
> +}
> +
> +static inline bool sgx_is_epc_page(u64 paddr)
> +{
> + return false;
> +}
> +#endif

was a bit more generic. Maybe something like:

int arch_memory_failure(unsigned long pfn, int flags);

BTW, I don't see sgx_is_epc_page() in arch-generic code. Does it really
need to be in mm.h?

2021-09-07 16:01:17

by Luck, Tony

Subject: RE: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

>> If I just #include <asm/sgx.h> in those files I'll break the build for other
>> architectures.
>
> What does specifically break the build?

There is no file named arch/arm/include/asm/sgx.h (ditto for other architectures that build memory-failure.c and einj.c).

-Tony

2021-09-07 16:01:25

by Jarkko Sakkinen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On Tue, 2021-09-07 at 15:03 +0000, Luck, Tony wrote:
> > > If I just #include <asm/sgx.h> in those files I'll break the build for other
> > > architectures.
> >
> > What does specifically break the build?
>
> There is no file named arch/arm/include/asm/sgx.h (ditto for other architectures that build memory-failure.c and einj.c).
>
> -Tony

Would it be too obnoxious to flag that include in those files?

/Jarkko

2021-09-07 16:02:45

by Luck, Tony

Subject: RE: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

> BTW, I don't see sgx_is_epc_page() in arch-generic code. Does it really
> need to be in mm.h?

I use it in drivers/acpi/apei/einj.c

Arm is a big user of ACPI. I don't see any Kconfig exclusions for CONFIG_ACPI_APEI_EINJ.

-Tony

2021-09-07 18:51:36

by Luck, Tony

Subject: RE: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

> Would it be too obnoxious to flag that include in those files?

Jarkko,

You mean:

#ifdef CONFIG_X86_SGX
#include <asm/sgx.h>
#endif

in mm/memory-failure.h?

That wouldn't help. I need the do-nothing stub definition on other architectures.

I'm going to explore Dave's suggestion of changing the names to something less sgx specific.

-Tony

2021-09-08 01:02:29

by Luck, Tony

Subject: RE: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

> I'm going to explore Dave's suggestion of changing the names to something less sgx specific.

So now I have the two functions renamed to

arch_memory_failure() and arch_is_platform_page()

in arch/x86/kernel/cpu/sgx/main.c

In arch/x86/include/asm/processor.h

+#ifdef CONFIG_X86_SGX
+int arch_memory_failure(unsigned long pfn, int flags);
+#define arch_memory_failure arch_memory_failure
+
+bool arch_is_platform_page(u64 paddr);
+#define arch_is_platform_page arch_is_platform_page
+#endif

and in include/linux/mm.h

+#ifndef arch_memory_failure
+static inline int arch_memory_failure(unsigned long pfn, int flags)
+{
+ return -ENXIO;
+}
+#endif
+#ifndef arch_is_platform_page
+static inline bool arch_is_platform_page(u64 paddr)
+{
+ return false;
+}
+#endif

Dave: Is that what you wanted? If so, I can fold these bits back into the
appropriate parts of the series, address other comments, and post v5.

Sean: If you have stuff that needs attention in v4 please holler soon.

-Tony
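
The pattern Tony sketches here is a common kernel idiom: the arch header defines the function and a macro of the same name, and the generic header supplies a do-nothing stub only when that macro is absent. Collapsed into one file so it can be compiled stand-alone (the toy return values are illustrative; the real stub returns -ENXIO, and the kernel declarations are extern rather than static):

```c
#include <assert.h>

/* --- arch header (cf. arch/x86/include/asm/processor.h) --- */
#define HAVE_ARCH_OVERRIDE 1	/* pretend CONFIG_X86_SGX=y */

#ifdef HAVE_ARCH_OVERRIDE
static int arch_memory_failure(unsigned long pfn, int flags)
{
	(void)flags;
	return pfn == 0x1234 ? 0 : -1;	/* toy "handled it" logic */
}
/* Same-named macro signals "the arch provides this function". */
#define arch_memory_failure arch_memory_failure
#endif

/* --- generic header (cf. include/linux/mm.h) --- */
#ifndef arch_memory_failure
static int arch_memory_failure(unsigned long pfn, int flags)
{
	(void)pfn; (void)flags;
	return -6;	/* stand-in for -ENXIO */
}
#endif
```

Because the macro is defined above, the generic stub is compiled out and callers get the arch implementation; on architectures without the macro, the stub takes over.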

2021-09-08 02:31:03

by Jarkko Sakkinen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On Tue, 2021-09-07 at 17:46 +0000, Luck, Tony wrote:
> > Would it be too obnoxious to flag that include in those files?
>
> Jarkko,
>
> You mean:
>
> #ifdef CONFIG_X86_SGX
> #include <asm/sgx.h>
> #endif
>
> in mm/memory-failure.h?
>
> That wouldn't help. I need the do-nothing stub definition on other architectures.
>
> I'm going to explore Dave's suggestion of changing the names to something less sgx specific.
>
> -Tony


Ah sorry, I get it :-) Yeah, Dave's suggestion makes much more sense.

/Jarkko

2021-09-08 17:48:10

by Dave Hansen

Subject: Re: [PATCH v4 5/6] x86/sgx: Hook sgx_memory_failure() into mainline code

On 9/7/21 5:59 PM, Luck, Tony wrote:
> +#ifdef CONFIG_X86_SGX
> +int arch_memory_failure(unsigned long pfn, int flags);
> +#define arch_memory_failure arch_memory_failure
> +
> +bool arch_is_platform_page(u64 paddr);
> +#define arch_is_platform_page arch_is_platform_page
> +#endif
>
> and in include/linux/mm.h
>
> +#ifndef arch_memory_failure
> +static inline int arch_memory_failure(unsigned long pfn, int flags)
> +{
> + return -ENXIO;
> +}
> +#endif
> +#ifndef arch_is_platform_page
> +static inline bool arch_is_platform_page(u64 paddr)
> +{
> + return false;
> +}
> +#endif
>
> Dave: Is that what you wanted? If so I can fold these bits back into the
> appropriate bits of the series. Address other comments. and post v5.

Looks good to me.

These can *also* be done with a

config ARCH_HAS_SPECIAL_MEMORY_FAILURE
bool

in mm/Kconfig, and then:

select ARCH_HAS_SPECIAL_MEMORY_FAILURE

in the SGX Kconfig instead of the ifndef's. I prefer the configs
personally because they are less ambiguous and can't be screwed up by
missing #includes or weird #include ordering problems. But, some folks
prefer to avoid polluting the CONFIG_* space.

That's just pure personal preference though. Either way is fine.
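
Dave's Kconfig alternative would look roughly like this (an untested sketch using the symbol name from his mail):

```
# mm/Kconfig
config ARCH_HAS_SPECIAL_MEMORY_FAILURE
	bool

# arch/x86/Kconfig, inside the SGX entry
config X86_SGX
	...
	select ARCH_HAS_SPECIAL_MEMORY_FAILURE
```

Generic code would then key off #ifdef CONFIG_ARCH_HAS_SPECIAL_MEMORY_FAILURE instead of the #ifndef-macro dance, trading a new CONFIG_* symbol for immunity to #include ordering.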

2021-09-18 04:57:28

by Luck, Tony

Subject: [PATCH v5 0/7] Basic recovery for machine checks inside SGX

Now version 5.

Changes since v4:

Jarkko Sakkinen:
+ Add [email protected] to Cc: list
+ Remove explicit struct sgx_va_page *va_page type
from argument and use in sgx_alloc_va_page(). Just
use "void *" as this code doesn't do anything with the
internals of struct sgx_va_page.
+ Drop the union of all possible types for the "owner"
field in struct sgx_epc_page (sorry Dave Hansen, this
went in last time from your comment, but it doesn't
seem to add much value). Back to "void *owner;"
+ rename the xarray that tracks which addresses are
EPC pages from "epc_page_ranges" to "sgx_epc_address_space".

Dave Hansen:
+ Use more generic names for the globally visible
functions that are needed in generic code:
sgx_memory_failure -> arch_memory_failure
sgx_is_epc_page -> arch_is_platform_page

Tony Luck:
+ Found that ghes code spits warnings for memory addresses
that it thinks are bad. Add a check for SGX pages.

Tony Luck (7):
x86/sgx: Provide indication of life-cycle of EPC pages
x86/sgx: Add infrastructure to identify SGX EPC pages
x86/sgx: Initial poison handling for dirty and free pages
x86/sgx: Add SGX infrastructure to recover from poison
x86/sgx: Hook arch_memory_failure() into mainline code
x86/sgx: Add hook to error injection address validation
x86/sgx: Add check for SGX pages to ghes_do_memory_failure()

.../firmware-guide/acpi/apei/einj.rst | 19 +++
arch/x86/include/asm/processor.h | 8 +
arch/x86/include/asm/set_memory.h | 4 +
arch/x86/kernel/cpu/sgx/encl.c | 5 +-
arch/x86/kernel/cpu/sgx/encl.h | 2 +-
arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 140 ++++++++++++++++--
arch/x86/kernel/cpu/sgx/sgx.h | 14 +-
drivers/acpi/apei/einj.c | 3 +-
drivers/acpi/apei/ghes.c | 2 +-
include/linux/mm.h | 13 ++
mm/memory-failure.c | 19 ++-
12 files changed, 203 insertions(+), 28 deletions(-)


base-commit: 6880fa6c56601bb8ed59df6c30fd390cc5f6dd8f
--
2.31.1

2021-09-18 04:59:16

by Luck, Tony

Subject: [PATCH v5 3/7] x86/sgx: Initial poison handling for dirty and free pages

A memory controller patrol scrubber can report poison in a page
that isn't currently being used.

Add a "poison" field to struct sgx_epc_page that can be set when
poison is reported in a page. Check for it:
1) When sanitizing dirty pages
2) When freeing epc pages

"poison" is a separate field from "flags" to avoid having to make
all updates to flags atomic, or to integrate poison state changes into
some other locking scheme that protects flags.

In both cases place the poisoned page on a list of poisoned epc pages
to make sure it will not be reallocated.

Add a debugfs file /sys/kernel/debug/sgx/poison_page_list so that system
administrators get a list of those pages that have been dropped because
of poison.
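
Both check points reduce to the same move: if a page is marked poisoned, divert it onto the poison list instead of its normal destination, so it can never be reallocated. A minimal user-space model, with plain arrays standing in for the kernel's list_head lists and node->lock:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct page { int id; bool poison; };

#define NPAGES 4
static struct page pages[NPAGES] = {
	{ 0, false }, { 1, true }, { 2, false }, { 3, true },
};

static struct page *free_list[NPAGES];
static struct page *poison_list[NPAGES];
static size_t nfree, npoison;

/* Model of sgx_free_epc_page(): a poisoned page must never reach the
 * free list, or it could be handed out again. */
static void free_page(struct page *p)
{
	if (p->poison)
		poison_list[npoison++] = p;
	else
		free_list[nfree++] = p;
}
```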

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 30 +++++++++++++++++++++++++++++-
arch/x86/kernel/cpu/sgx/sgx.h | 3 ++-
2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 10892513212d..7a53ff876059 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016-20 Intel Corporation. */

+#include <linux/debugfs.h>
#include <linux/file.h>
#include <linux/freezer.h>
#include <linux/highmem.h>
@@ -43,6 +44,7 @@ static nodemask_t sgx_numa_mask;
static struct sgx_numa_node *sgx_numa_nodes;

static LIST_HEAD(sgx_dirty_page_list);
+static LIST_HEAD(sgx_poison_page_list);

/*
* Reset post-kexec EPC pages to the uninitialized state. The pages are removed
@@ -62,6 +64,12 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)

page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);

+ if (page->poison) {
+ list_del(&page->list);
+ list_add(&page->list, &sgx_poison_page_list);
+ continue;
+ }
+
ret = __eremove(sgx_get_epc_virt_addr(page));
if (!ret) {
/*
@@ -626,7 +634,10 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
spin_lock(&node->lock);

page->private = NULL;
- list_add_tail(&page->list, &node->free_page_list);
+ if (page->poison)
+ list_add(&page->list, &sgx_poison_page_list);
+ else
+ list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;

spin_unlock(&node->lock);
@@ -657,6 +668,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
section->pages[i].flags = 0;
+ section->pages[i].poison = 0;
section->pages[i].private = "dirty";
list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
}
@@ -801,8 +813,21 @@ int sgx_set_attribute(unsigned long *allowed_attributes,
}
EXPORT_SYMBOL_GPL(sgx_set_attribute);

+static int poison_list_show(struct seq_file *m, void *private)
+{
+ struct sgx_epc_page *page;
+
+ list_for_each_entry(page, &sgx_poison_page_list, list)
+ seq_printf(m, "0x%lx\n", sgx_get_epc_phys_addr(page));
+
+ return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(poison_list);
+
static int __init sgx_init(void)
{
+ struct dentry *dir;
int ret;
int i;

@@ -834,6 +859,9 @@ static int __init sgx_init(void)
if (sgx_vepc_init() && ret)
goto err_provision;

+ dir = debugfs_create_dir("sgx", arch_debugfs_dir);
+ debugfs_create_file("poison_page_list", 0400, dir, NULL, &poison_list_fops);
+
return 0;

err_provision:
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 6a55b1971956..77f3d98c9fbf 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,7 +28,8 @@

struct sgx_epc_page {
unsigned int section;
- int flags;
+ u16 flags;
+ u16 poison;
union {
void *private;
struct sgx_encl_page *owner;
--
2.31.1

2021-09-18 05:00:09

by Luck, Tony

Subject: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

SGX EPC pages go through the following life cycle:

DIRTY ---> FREE ---> IN-USE --\
^ |
\-----------------/

Recovery action for poison for a DIRTY or FREE page is simple. Just
make sure never to allocate the page. IN-USE pages need some extra
handling.
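
The life cycle above can be modeled as a tiny state machine; the recovery policy then depends only on which state a page is in (the state and helper names below are illustrative, not kernel API):

```c
#include <assert.h>

enum epc_state { EPC_DIRTY, EPC_FREE, EPC_IN_USE };

/* DIRTY -> FREE happens once, during boot-time sanitization;
 * FREE <-> IN_USE cycles for the life of the system. */
static enum epc_state sanitize(enum epc_state s)
{
	return s == EPC_DIRTY ? EPC_FREE : s;
}

static enum epc_state alloc_page(enum epc_state s)
{
	return s == EPC_FREE ? EPC_IN_USE : s;
}

static enum epc_state free_page(enum epc_state s)
{
	return s == EPC_IN_USE ? EPC_FREE : s;
}

/* Poison policy from the commit message: DIRTY and FREE pages only
 * need to be kept off the allocator; IN-USE pages need recovery. */
static int needs_active_recovery(enum epc_state s)
{
	return s == EPC_IN_USE;
}
```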

It would be good to use the sgx_epc_page->owner field as an indicator
of where an EPC page is currently in that cycle (owner != NULL means
the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
that calls with NULL.

Since there are multiple uses of the "owner" field with different types,
change the sgx_epc_page structure to define an anonymous union with
each of the uses explicitly called out.

Start epc_pages out with a non-NULL owner while they are in DIRTY state.

Fix up the one holdout to provide a non-NULL owner.

Refactor the allocation sequence so that changes to/from NULL
value happen together with adding/removing the epc_page from
a free list while the node->lock is held.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/encl.c | 5 +++--
arch/x86/kernel/cpu/sgx/encl.h | 2 +-
arch/x86/kernel/cpu/sgx/ioctl.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 23 ++++++++++++-----------
arch/x86/kernel/cpu/sgx/sgx.h | 12 ++++++++----
5 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 001808e3901c..ad8c61933b0a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -667,6 +667,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,

/**
* sgx_alloc_va_page() - Allocate a Version Array (VA) page
+ * @va_page: struct sgx_va_page connected to this VA page
*
* Allocate a free EPC page and convert it to a Version Array (VA) page.
*
@@ -674,12 +675,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
* a VA page,
* -errno otherwise
*/
-struct sgx_epc_page *sgx_alloc_va_page(void)
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page)
{
struct sgx_epc_page *epc_page;
int ret;

- epc_page = sgx_alloc_epc_page(NULL, true);
+ epc_page = sgx_alloc_epc_page(va_page, true);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..3d12dbeae14a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -111,7 +111,7 @@ void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
int sgx_encl_test_and_clear_young(struct mm_struct *mm,
struct sgx_encl_page *page);

-struct sgx_epc_page *sgx_alloc_va_page(void);
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page);
unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
bool sgx_va_page_full(struct sgx_va_page *va_page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..655ce0bb069d 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -30,7 +30,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
if (!va_page)
return ERR_PTR(-ENOMEM);

- va_page->epc_page = sgx_alloc_va_page();
+ va_page->epc_page = sgx_alloc_va_page(va_page);
if (IS_ERR(va_page->epc_page)) {
err = ERR_CAST(va_page->epc_page);
kfree(va_page);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 63d3de02bbcc..4a5b51d16133 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -457,7 +457,7 @@ static bool __init sgx_page_reclaimer_init(void)
return true;
}

-static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
+static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(void *private, int nid)
{
struct sgx_numa_node *node = &sgx_numa_nodes[nid];
struct sgx_epc_page *page = NULL;
@@ -471,6 +471,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)

page = list_first_entry(&node->free_page_list, struct sgx_epc_page, list);
list_del_init(&page->list);
+ page->private = private;
sgx_nr_free_pages--;

spin_unlock(&node->lock);
@@ -480,6 +481,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)

/**
* __sgx_alloc_epc_page() - Allocate an EPC page
+ * @owner: the owner of the EPC page
*
* Iterate through NUMA nodes and reserve ia free EPC page to the caller. Start
* from the NUMA node, where the caller is executing.
@@ -488,14 +490,14 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
* - an EPC page: A borrowed EPC pages were available.
* - NULL: Out of EPC pages.
*/
-struct sgx_epc_page *__sgx_alloc_epc_page(void)
+struct sgx_epc_page *__sgx_alloc_epc_page(void *private)
{
struct sgx_epc_page *page;
int nid_of_current = numa_node_id();
int nid = nid_of_current;

if (node_isset(nid_of_current, sgx_numa_mask)) {
- page = __sgx_alloc_epc_page_from_node(nid_of_current);
+ page = __sgx_alloc_epc_page_from_node(private, nid_of_current);
if (page)
return page;
}
@@ -506,7 +508,7 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
if (nid == nid_of_current)
break;

- page = __sgx_alloc_epc_page_from_node(nid);
+ page = __sgx_alloc_epc_page_from_node(private, nid);
if (page)
return page;
}
@@ -559,7 +561,7 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)

/**
* sgx_alloc_epc_page() - Allocate an EPC page
- * @owner: the owner of the EPC page
+ * @private: per-caller private data
* @reclaim: reclaim pages if necessary
*
* Iterate through EPC sections and borrow a free EPC page to the caller. When a
@@ -574,16 +576,14 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
* an EPC page,
* -errno on error
*/
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
+struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim)
{
struct sgx_epc_page *page;

for ( ; ; ) {
- page = __sgx_alloc_epc_page();
- if (!IS_ERR(page)) {
- page->owner = owner;
+ page = __sgx_alloc_epc_page(private);
+ if (!IS_ERR(page))
break;
- }

if (list_empty(&sgx_active_page_list))
return ERR_PTR(-ENOMEM);
@@ -624,6 +624,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)

spin_lock(&node->lock);

+ page->private = NULL;
list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;

@@ -652,7 +653,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
section->pages[i].flags = 0;
- section->pages[i].owner = NULL;
+ section->pages[i].private = "dirty";
list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
}

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..8b1be10a46f6 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,8 +28,12 @@

struct sgx_epc_page {
unsigned int section;
- unsigned int flags;
- struct sgx_encl_page *owner;
+ int flags;
+ union {
+ void *private;
+ struct sgx_encl_page *owner;
+ struct sgx_encl_page *vepc;
+ };
struct list_head list;
};

@@ -77,12 +81,12 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
return section->virt_addr + index * PAGE_SIZE;
}

-struct sgx_epc_page *__sgx_alloc_epc_page(void);
+struct sgx_epc_page *__sgx_alloc_epc_page(void *private);
void sgx_free_epc_page(struct sgx_epc_page *page);

void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
+struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim);

#ifdef CONFIG_X86_SGX_KVM
int __init sgx_vepc_init(void);
--
2.31.1

2021-09-18 05:00:16

by Luck, Tony

Subject: [PATCH v5 5/7] x86/sgx: Hook arch_memory_failure() into mainline code

Add a call inside memory_failure() to check if the address is an SGX
EPC page and handle it.

Note that SGX EPC pages do not have a "struct page" entry, so the hook
goes in at the same point as the device mapping hook.

Pull the call to acquire the mutex earlier so the SGX errors are also
protected.

Make set_mce_nospec() skip SGX pages when trying to adjust
the 1:1 map.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/include/asm/processor.h | 8 ++++++++
arch/x86/include/asm/set_memory.h | 4 ++++
include/linux/mm.h | 13 +++++++++++++
mm/memory-failure.c | 19 +++++++++++++------
4 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 9ad2acaaae9b..4865f2860a4f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -853,4 +853,12 @@ enum mds_mitigations {
MDS_MITIGATION_VMWERV,
};

+#ifdef CONFIG_X86_SGX
+int arch_memory_failure(unsigned long pfn, int flags);
+#define arch_memory_failure arch_memory_failure
+
+bool arch_is_platform_page(u64 paddr);
+#define arch_is_platform_page arch_is_platform_page
+#endif
+
#endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 43fa081a1adb..ce8dd215f5b3 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -2,6 +2,7 @@
#ifndef _ASM_X86_SET_MEMORY_H
#define _ASM_X86_SET_MEMORY_H

+#include <linux/mm.h>
#include <asm/page.h>
#include <asm-generic/set_memory.h>

@@ -98,6 +99,9 @@ static inline int set_mce_nospec(unsigned long pfn, bool unmap)
unsigned long decoy_addr;
int rc;

+ /* SGX pages are not in the 1:1 map */
+ if (arch_is_platform_page(pfn << PAGE_SHIFT))
+ return 0;
/*
* We would like to just call:
* set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..3cc63682fe47 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3284,5 +3284,18 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
return 0;
}

+#ifndef arch_memory_failure
+static inline int arch_memory_failure(unsigned long pfn, int flags)
+{
+ return -ENXIO;
+}
+#endif
+#ifndef arch_is_platform_page
+static inline bool arch_is_platform_page(u64 paddr)
+{
+ return false;
+}
+#endif
+
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 54879c339024..5693bac9509c 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1632,21 +1632,28 @@ int memory_failure(unsigned long pfn, int flags)
if (!sysctl_memory_failure_recovery)
panic("Memory failure on page %lx", pfn);

+ mutex_lock(&mf_mutex);
+
p = pfn_to_online_page(pfn);
if (!p) {
+ res = arch_memory_failure(pfn, flags);
+ if (res == 0)
+ goto unlock_mutex;
+
if (pfn_valid(pfn)) {
pgmap = get_dev_pagemap(pfn, NULL);
- if (pgmap)
- return memory_failure_dev_pagemap(pfn, flags,
- pgmap);
+ if (pgmap) {
+ res = memory_failure_dev_pagemap(pfn, flags,
+ pgmap);
+ goto unlock_mutex;
+ }
}
pr_err("Memory failure: %#lx: memory outside kernel control\n",
pfn);
- return -ENXIO;
+ res = -ENXIO;
+ goto unlock_mutex;
}

- mutex_lock(&mf_mutex);
-
try_again:
if (PageHuge(p)) {
res = memory_failure_hugetlb(pfn, flags);
--
2.31.1

2021-09-18 05:01:21

by Luck, Tony

Subject: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

X86 machine check architecture reports a physical address when there
is a memory error. Handling that error requires a method to determine
whether the physical address reported is in any of the areas reserved
for EPC pages by BIOS.

SGX EPC pages do not have Linux "struct page" associated with them.

Keep track of the mapping from ranges of EPC pages to the sections
that contain them using an xarray.

Create a function arch_is_platform_page() that simply reports whether an address
is an EPC page for use elsewhere in the kernel. The ACPI error injection
code needs this function and is typically built as a module, so export it.

Note that arch_is_platform_page() will be slower than other similar "what type
is this page" functions that can simply check bits in the "struct page".
If there is some future performance critical user of this function it
may need to be implemented in a more efficient way.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 10 ++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 1 +
2 files changed, 11 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4a5b51d16133..10892513212d 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -20,6 +20,7 @@ struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
static int sgx_nr_epc_sections;
static struct task_struct *ksgxd_tsk;
static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
+static DEFINE_XARRAY(epc_page_ranges);

/*
* These variables are part of the state of the reclaimer, and must be accessed
@@ -649,6 +650,9 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
}

section->phys_addr = phys_addr;
+ section->end_phys_addr = phys_addr + size - 1;
+ xa_store_range(&epc_page_ranges, section->phys_addr,
+ section->end_phys_addr, section, GFP_KERNEL);

for (i = 0; i < nr_pages; i++) {
section->pages[i].section = index;
@@ -660,6 +664,12 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
return true;
}

+bool arch_is_platform_page(u64 paddr)
+{
+ return !!xa_load(&epc_page_ranges, paddr);
+}
+EXPORT_SYMBOL_GPL(arch_is_platform_page);
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 8b1be10a46f6..6a55b1971956 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -54,6 +54,7 @@ struct sgx_numa_node {
*/
struct sgx_epc_section {
unsigned long phys_addr;
+ unsigned long end_phys_addr;
void *virt_addr;
struct sgx_epc_page *pages;
struct sgx_numa_node *node;
--
2.31.1

2021-09-18 05:01:57

by Luck, Tony

Subject: [PATCH v5 6/7] x86/sgx: Add hook to error injection address validation

SGX reserved memory does not appear in the standard address maps.

Add hook to call into the SGX code to check if an address is located
in SGX memory.

There are other challenges in injecting errors into SGX. Update the
documentation with a sequence of operations to inject.

Signed-off-by: Tony Luck <[email protected]>
---
.../firmware-guide/acpi/apei/einj.rst | 19 +++++++++++++++++++
drivers/acpi/apei/einj.c | 3 ++-
2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
index c042176e1707..55e2331a6438 100644
--- a/Documentation/firmware-guide/acpi/apei/einj.rst
+++ b/Documentation/firmware-guide/acpi/apei/einj.rst
@@ -181,5 +181,24 @@ You should see something like this in dmesg::
[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)

+Special notes for injection into SGX enclaves:
+
+There may be a separate BIOS setup option to enable SGX injection.
+
+The injection process consists of setting some special memory controller
+trigger that will inject the error on the next write to the target
+address. But the h/w prevents any software outside of an SGX enclave
+from accessing enclave pages (even BIOS SMM mode).
+
+The following sequence can be used:
+ 1) Determine physical address of enclave page
+ 2) Use "notrigger=1" mode to inject (this will set up
+ the injection address, but will not actually inject)
+ 3) Enter the enclave
+ 4) Store data to the virtual address matching physical address from step 1
+ 5) Execute CLFLUSH for that virtual address
+ 6) Spin delay for 250ms
+ 7) Read from the virtual address. This will trigger the error
+
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5 and ACPI 5.0, section 18.6.
diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
index 2882450c443e..67c335baad52 100644
--- a/drivers/acpi/apei/einj.c
+++ b/drivers/acpi/apei/einj.c
@@ -544,7 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
!= REGION_INTERSECTS) &&
(region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
- != REGION_INTERSECTS)))
+ != REGION_INTERSECTS) &&
+ !arch_is_platform_page(base_addr)))
return -EINVAL;

inject:
--
2.31.1

2021-09-18 05:03:02

by Luck, Tony

Subject: [PATCH v5 7/7] x86/sgx: Add check for SGX pages to ghes_do_memory_failure()

SGX EPC pages do not have a "struct page" associated with them so the
pfn_valid() sanity check fails and results in a warning message to
the console.

Add an additional check to skip the warning if the address of the error
is in an SGX EPC page.

Signed-off-by: Tony Luck <[email protected]>
---
drivers/acpi/apei/ghes.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 0c8330ed1ffd..0c5c9acc6254 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -449,7 +449,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
return false;

pfn = PHYS_PFN(physical_addr);
- if (!pfn_valid(pfn)) {
+ if (!pfn_valid(pfn) && !arch_is_platform_page(physical_addr)) {
pr_warn_ratelimited(FW_WARN GHES_PFX
"Invalid address in generic error data: %#llx\n",
physical_addr);
--
2.31.1

2021-09-18 09:43:18

by Luck, Tony

Subject: [PATCH v5 4/7] x86/sgx: Add SGX infrastructure to recover from poison

Provide a recovery function arch_memory_failure(). If the poison was
consumed synchronously then send a SIGBUS. Note that the virtual
address of the access is not included with the SIGBUS as is the case
for poison outside of SGX enclaves. This doesn't matter as addresses
of code/data inside an enclave is of little to no use to code executing
outside the (now dead) enclave.

Poison found in a free page results in the page being moved from the
free list to the poison page list.

Signed-off-by: Tony Luck <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 77 ++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 7a53ff876059..8f23c8489cec 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -682,6 +682,83 @@ bool arch_is_platform_page(u64 paddr)
}
EXPORT_SYMBOL_GPL(arch_is_platform_page);

+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+ struct sgx_epc_section *section;
+
+ section = xa_load(&epc_page_ranges, paddr);
+ if (!section)
+ return NULL;
+
+ return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+}
+
+/*
+ * Called in process context to handle a hardware reported
+ * error in an SGX EPC page.
+ * If the MF_ACTION_REQUIRED bit is set in flags, then the
+ * context is the task that consumed the poison data. Otherwise
+ * this is called from a kernel thread unrelated to the page.
+ */
+int arch_memory_failure(unsigned long pfn, int flags)
+{
+ struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
+ struct sgx_epc_section *section;
+ struct sgx_numa_node *node;
+
+ /*
+ * mm/memory-failure.c calls this routine for all errors
+ * where there isn't a "struct page" for the address. But that
+ * includes other address ranges besides SGX.
+ */
+ if (!page)
+ return -ENXIO;
+
+ /*
+ * If poison was consumed synchronously, send a SIGBUS to
+ * the task. Hardware has already exited the SGX enclave and
+ * will not allow re-entry to an enclave that has a memory
+ * error. The signal may help the task understand why the
+ * enclave is broken.
+ */
+ if (flags & MF_ACTION_REQUIRED)
+ force_sig(SIGBUS);
+
+ section = &sgx_epc_sections[page->section];
+ node = section->node;
+
+ spin_lock(&node->lock);
+
+ /* Already poisoned? Nothing more to do */
+ if (page->poison)
+ goto out;
+
+ page->poison = 1;
+
+ /*
+ * If there is no owner, then the page is on a free list.
+ * Move it to the poison page list.
+ */
+ if (!page->private) {
+ list_del(&page->list);
+ list_add(&page->list, &sgx_poison_page_list);
+ goto out;
+ }
+
+ /*
+ * TBD: Add additional plumbing to enable pre-emptive
+ * action for asynchronous poison notification. Until
+ * then just hope that the poison:
+ * a) is not accessed - sgx_free_epc_page() will deal with it
+ * when the user gives it back
+ * b) results in a recoverable machine check rather than
+ * a fatal one
+ */
+out:
+ spin_unlock(&node->lock);
+ return 0;
+}
+
/**
* A section metric is concatenated in a way that @low bits 12-31 define the
* bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
--
2.31.1

2021-09-21 21:28:20

by Luck, Tony

Subject: RE: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

>> section->phys_addr = phys_addr;
>> + section->end_phys_addr = phys_addr + size - 1;
>> + xa_store_range(&epc_page_ranges, section->phys_addr,
>> + section->end_phys_addr, section, GFP_KERNEL);
>
> Did we ever figure out how much space storing really big ranges in the
> xarray consumes?

No. Willy said the existing xarray code would be less than optimal with
this usage, but that things would be much better when he applied some
maple tree updates to the internals of xarray.

If there is some easy way to measure the memory backing an xarray I'm
happy to get the data. Or if someone else can synthesize it ... the two
ranges on my system that are added to the xarray are:

$ dmesg | grep -i sgx
[ 8.496844] sgx: EPC section 0x8000c00000-0x807f7fffff
[ 8.505118] sgx: EPC section 0x10000c00000-0x1007fffffff

I.e. two ranges of a bit under 2GB each.

But I don't think the overhead can be too hideous:

$ grep MemFree /proc/meminfo
MemFree: 1048682016 kB

I still have ~ 1TB free. Which is much greater than the 640 KB which should
be "enough for anybody" :-).

-Tony

2021-09-21 22:48:58

by Dave Hansen

Subject: Re: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On 9/17/21 2:38 PM, Tony Luck wrote:
> /*
> * These variables are part of the state of the reclaimer, and must be accessed
> @@ -649,6 +650,9 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
> }
>
> section->phys_addr = phys_addr;
> + section->end_phys_addr = phys_addr + size - 1;
> + xa_store_range(&epc_page_ranges, section->phys_addr,
> + section->end_phys_addr, section, GFP_KERNEL);

Did we ever figure out how much space storing really big ranges in the
xarray consumes?

2021-09-21 23:57:17

by Jarkko Sakkinen

Subject: Re: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Fri, 2021-09-17 at 14:38 -0700, Tony Luck wrote:
> SGX EPC pages go through the following life cycle:
>
> DIRTY ---> FREE ---> IN-USE --\
> ^ |
> \-----------------/
>
> Recovery action for poison for a DIRTY or FREE page is simple. Just
> make sure never to allocate the page. IN-USE pages need some extra
> handling.
>
> It would be good to use the sgx_epc_page->owner field as an indicator
> of where an EPC page is currently in that cycle (owner != NULL means
> the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
> that calls with NULL.
>
> Since there are multiple uses of the "owner" field with different types
> change the sgx_epc_page structure to define an anonymous union with
> each of the uses explicitly called out.

But it's still always a pointer.

And not only that, but two alternative fields in that union have *exactly* the
same type, so it's kind of artifically representing the problem more complex
than it really is.

I'm just not getting why all this complexity, and not a few casts instead?

I don't get the rename of "owner" to "private" either. It adds very little
value. I'm not saying that "owner" is the best name ever, but it's not *that*
confusing either, so I'm sure that renaming it is not very productive.

Also there was still this "dirty". We could use ((void *)-1), which was also
suggested for earlier revisions.

/Jarkko

2021-09-22 00:13:11

by Dave Hansen

Subject: Re: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On 9/21/21 2:28 PM, Jarkko Sakkinen wrote:
>> Since there are multiple uses of the "owner" field with different types
>> change the sgx_epc_page structure to define an anonymous union with
>> each of the uses explicitly called out.
> But it's still always a pointer.
>
> And not only that, but two alternative fields in that union have *exactly* the
> same type, so it's kind of artifically representing the problem more complex
> than it really is.
>
> I'm not just getting, why all this complexity, and not a few casts instead?

I suggested this. It makes the structure more self-describing because
it explicitly lists the possibles uses of the space in the structure.

Maybe I stare at 'struct page' and its 4 unions too much and I'm
enamored by their shininess. But, in the end, I prefer unions to casting.

2021-09-22 00:19:20

by Dave Hansen

Subject: Re: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On 9/21/21 1:50 PM, Luck, Tony wrote:
>> Did we ever figure out how much space storing really big ranges in the
>> xarray consumes?
> No. Willy said the existing xarray code would be less than optimal with
> this usage, but that things would be much better when he applied some
> maple tree updates to the internals of xarray.
>
> If there is some easy way to measure the memory backing an xarray I'm
> happy to get the data. Or if someone else can synthesize it ... the two
> ranges on my system that are added to the xarray are:
>
> $ dmesg | grep -i sgx
> [ 8.496844] sgx: EPC section 0x8000c00000-0x807f7fffff
> [ 8.505118] sgx: EPC section 0x10000c00000-0x1007fffffff
>
> I.e. two ranges of a bit under 2GB each.
>
> But I don't think the overhead can be too hideous:
>
> $ grep MemFree /proc/meminfo
> MemFree: 1048682016 kB
>
> I still have ~ 1TB free. Which is much greater than the 640 KB which should
> be "enough for anybody" :-).

There is a kmem_cache_create() for the xarray nodes. So, you should be
able to see the difference in /proc/meminfo's "Slab" field. Maybe boot
with init=/bin/sh to reduce the noise and look at meminfo both with and
without your SGX patch applied, or just with the xarray bits commented out.

I don't quite know how the data structures are munged, but xas_alloc()
makes it look like 'struct xa_node' is allocated from
radix_tree_node_cachep. If that's the case, you should also be able to
see this in even more detail in:

# grep radix /proc/slabinfo
radix_tree_node 432305 482412 584 28 4 : tunables 0 0
0 : slabdata 17229 17229 0

again, on a system with and without your new code enabled.

2021-09-22 01:03:06

by Dave Hansen

Subject: Re: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On 9/21/21 4:48 PM, Luck, Tony wrote:
>
> # name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
>
> So I think this means that I have (9950 - 9800) * 584 = 87600 more bytes
> allocated. Maybe that's a lot? But percentage-wise it seems in the
> noise. E.g. We allocate one "struct sgx_epc_page" for each SGX page.
> On my system I have 4GB of SGX EPC, so around 32 MB of these structures.

100k for 4GB of EPC is certainly in the noise as far as I'm concerned.

Thanks for checking this.

2021-09-22 01:51:43

by Luck, Tony

Subject: RE: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

>> Since there are multiple uses of the "owner" field with different types
>> change the sgx_epc_page structure to define an anonymous union with
>> each of the uses explicitly called out.
>
> But it's still always a pointer.
>
> And not only that, but two alternative fields in that union have *exactly* the
> same type, so it's kind of artifically representing the problem more complex
> than it really is.

Bother! I seem to have jumbled some old bits of v4 into this series.

I agree that we just want "void *owner; here. I even made the changes.
Then managed to lose them while updating.

I'll find the bits I lost and re-merge them in.

-Tony

2021-09-22 02:04:45

by Luck, Tony

Subject: Re: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages

On Tue, Sep 21, 2021 at 03:32:14PM -0700, Dave Hansen wrote:
> On 9/21/21 1:50 PM, Luck, Tony wrote:
> >> Did we ever figure out how much space storing really big ranges in the
> >> xarray consumes?
> > No. Willy said the existing xarray code would be less than optimal with
> > this usage, but that things would be much better when he applied some
> > maple tree updates to the internals of xarray.
> >
> > If there is some easy way to measure the memory backing an xarray I'm
> > happy to get the data. Or if someone else can synthesize it ... the two
> > ranges on my system that are added to the xarray are:
> >
> > $ dmesg | grep -i sgx
> > [ 8.496844] sgx: EPC section 0x8000c00000-0x807f7fffff
> > [ 8.505118] sgx: EPC section 0x10000c00000-0x1007fffffff
> >
> > I.e. two ranges of a bit under 2GB each.
> >
> > But I don't think the overhead can be too hideous:
> >
> > $ grep MemFree /proc/meminfo
> > MemFree: 1048682016 kB
> >
> > I still have ~ 1TB free. Which is much greater than the 640 KB which should
> > be "enough for anybody" :-).
>
> There is a kmem_cache_create() for the xarray nodes. So, you should be
> able to see the difference in /proc/meminfo's "Slab" field. Maybe boot
> with init=/bin/sh to reduce the noise and look at meminfo both with and
> without your SGX patch applied, or just with the xarray bits commented out.
>
> I don't quite know how the data structures are munged, but xas_alloc()
> makes it look like 'struct xa_node' is allocated from
> radix_tree_node_cachep. If that's the case, you should also be able to
> see this in even more detail in:
>
> # grep radix /proc/slabinfo
> radix_tree_node 432305 482412 584 28 4 : tunables 0 0 0 : slabdata 17229 17229 0
>
> again, on a system with and without your new code enabled.


Booting with init=/bin/sh and running that grep command right away at
the prompt:

With the xa_store_range() call commented out of my kernel:

radix_tree_node 9800 9968 584 56 8 : tunables 0 0 0 : slabdata 178 178 0


With xa_store_range() enabled:

radix_tree_node 9950 10136 584 56 8 : tunables 0 0 0 : slabdata 181 181 0



The head of the file says these are the field names:

# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>

So I think this means that I have (9950 - 9800) * 584 = 87600 more bytes
allocated. Maybe that's a lot? But percentage-wise it seems in the
noise. E.g. We allocate one "struct sgx_epc_page" for each SGX page.
On my system I have 4GB of SGX EPC, so around 32 MB of these structures.

-Tony

2021-09-22 05:21:28

by Jarkko Sakkinen

Subject: Re: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Tue, 2021-09-21 at 21:34 +0000, Luck, Tony wrote:
> > > Since there are multiple uses of the "owner" field with different types
> > > change the sgx_epc_page structure to define an anonymous union with
> > > each of the uses explicitly called out.
> >
> > But it's still always a pointer.
> >
> > And not only that, but two alternative fields in that union have *exactly* the
> > same type, so it's kind of artifically representing the problem more complex
> > than it really is.
>
> Bother! I seem to have jumbled some old bits of v4 into this series.
>
> I agree that we just want "void *owner; here. I even made the changes.
> Then managed to lose them while updating.
>
> I'll find the bits I lost and re-merge them in.
>
> -Tony

Yeah, ok, cool, thank you. Just reporting what I was observing :-)

/Jarkko

2021-09-22 05:29:09

by Jarkko Sakkinen

Subject: Re: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages

On Tue, 2021-09-21 at 15:15 -0700, Dave Hansen wrote:
> On 9/21/21 2:28 PM, Jarkko Sakkinen wrote:
> > > Since there are multiple uses of the "owner" field with different types
> > > change the sgx_epc_page structure to define an anonymous union with
> > > each of the uses explicitly called out.
> > But it's still always a pointer.
> >
> > And not only that, but two alternative fields in that union have *exactly* the
> > same type, so it's kind of artifically representing the problem more complex
> > than it really is.
> >
> > I'm not just getting, why all this complexity, and not a few casts instead?
>
> I suggested this. It makes the structure more self-describing because
> it explicitly lists the possibles uses of the space in the structure.
>
> Maybe I stare at 'struct page' and its 4 unions too much and I'm
> enamored by their shininess. But, in the end, I prefer unions to casting.

Yeah, packing data into constrained space (as in the case of struct page) is
the only application where picking a union is a quantitatively justified
decision.

/Jarkko