2018-10-23 16:34:25

by Kirill A. Shutemov

Subject: [PATCH 0/2] Fix a couple of issues with LDT remap for PTI

The patchset fixes a couple of issues with the LDT remap for PTI:

- Layout collision with KASLR on 5-level paging;

- Information leak via a Meltdown-like attack.

Please review and consider applying.

Kirill A. Shutemov (2):
x86/mm: Move LDT remap out of KASLR region on 5-level paging
x86/ldt: Unmap PTEs for the slow before freeing LDT

Documentation/x86/x86_64/mm.txt | 8 ++--
arch/x86/include/asm/page_64_types.h | 12 ++---
arch/x86/include/asm/pgtable_64_types.h | 4 +-
arch/x86/kernel/ldt.c | 59 ++++++++++++++++---------
arch/x86/xen/mmu_pv.c | 6 +--
5 files changed, 53 insertions(+), 36 deletions(-)

--
2.19.1



2018-10-23 16:33:31

by Kirill A. Shutemov

Subject: [PATCH 2/2] x86/ldt: Unmap PTEs for the slow before freeing LDT

modify_ldt(2) leaves the old LDT mapped after we switch over to the new one.
The memory for the old LDT gets freed and the pages can be re-used.

Leaving the mapping in place can have security implications. The mapping
is present in the userspace copy of the page tables, and a Meltdown-like
attack can read these freed and possibly reused pages.

It's relatively simple to fix: just unmap the old LDT and flush the TLB
before freeing the LDT memory.

We can now avoid flushing the TLB in map_ldt_struct() as the slot is
unmapped and flushed by unmap_ldt_struct() (or was never mapped in
the first place). The overhead of the change should be negligible.
It shouldn't be a particularly hot path anyway.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on")
---
arch/x86/kernel/ldt.c | 59 ++++++++++++++++++++++++++++---------------
1 file changed, 38 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index 733e6ace0fa4..8767fea41309 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -199,14 +199,6 @@ static void sanity_check_ldt_mapping(struct mm_struct *mm)
/*
* If PTI is enabled, this maps the LDT into the kernelmode and
* usermode tables for the given mm.
- *
- * There is no corresponding unmap function. Even if the LDT is freed, we
- * leave the PTEs around until the slot is reused or the mm is destroyed.
- * This is harmless: the LDT is always in ordinary memory, and no one will
- * access the freed slot.
- *
- * If we wanted to unmap freed LDTs, we'd also need to do a flush to make
- * it useful, and the flush would slow down modify_ldt().
*/
static int
map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
@@ -214,8 +206,7 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
unsigned long va;
bool is_vmalloc;
spinlock_t *ptl;
- pgd_t *pgd;
- int i;
+ int i, nr_pages;

if (!static_cpu_has(X86_FEATURE_PTI))
return 0;
@@ -229,16 +220,10 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
/* Check if the current mappings are sane */
sanity_check_ldt_mapping(mm);

- /*
- * Did we already have the top level entry allocated? We can't
- * use pgd_none() for this because it doens't do anything on
- * 4-level page table kernels.
- */
- pgd = pgd_offset(mm, LDT_BASE_ADDR);
-
is_vmalloc = is_vmalloc_addr(ldt->entries);

- for (i = 0; i * PAGE_SIZE < ldt->nr_entries * LDT_ENTRY_SIZE; i++) {
+ nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE);
+ for (i = 0; i < nr_pages; i++) {
unsigned long offset = i << PAGE_SHIFT;
const void *src = (char *)ldt->entries + offset;
unsigned long pfn;
@@ -272,13 +257,39 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
/* Propagate LDT mapping to the user page-table */
map_ldt_struct_to_user(mm);

- va = (unsigned long)ldt_slot_va(slot);
- flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, 0);
-
ldt->slot = slot;
return 0;
}

+static void
+unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)
+{
+ unsigned long va;
+ int i, nr_pages;
+
+ if (!ldt)
+ return;
+
+ /* LDT map/unmap is only required for PTI */
+ if (!static_cpu_has(X86_FEATURE_PTI))
+ return;
+
+ nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE);
+ for (i = 0; i < nr_pages; i++) {
+ unsigned long offset = i << PAGE_SHIFT;
+ pte_t *ptep;
+ spinlock_t *ptl;
+
+ va = (unsigned long)ldt_slot_va(ldt->slot) + offset;
+ ptep = get_locked_pte(mm, va, &ptl);
+ pte_clear(mm, va, ptep);
+ pte_unmap_unlock(ptep, ptl);
+ }
+
+ va = (unsigned long)ldt_slot_va(ldt->slot);
+ flush_tlb_mm_range(mm, va, va + nr_pages * PAGE_SIZE, 0);
+}
+
#else /* !CONFIG_PAGE_TABLE_ISOLATION */

static int
@@ -286,6 +297,11 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
{
return 0;
}
+
+static void
+unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)
+{
+}
#endif /* CONFIG_PAGE_TABLE_ISOLATION */

static void free_ldt_pgtables(struct mm_struct *mm)
@@ -524,6 +540,7 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
}

install_ldt(mm, new_ldt);
+ unmap_ldt_struct(mm, old_ldt);
free_ldt_struct(old_ldt);
error = 0;

--
2.19.1
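The nr_pages computation the patch introduces in both map_ldt_struct() and unmap_ldt_struct() can be modeled in a few lines. This is an editorial sketch, not kernel code, using the usual x86-64 values (8-byte LDT entries, 4 KiB pages):

```python
# Standalone model of the page-count computation used by both loops:
#   nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE)
# LDT_ENTRY_SIZE is 8 bytes and PAGE_SIZE is 4096 on x86-64.
LDT_ENTRY_SIZE = 8
PAGE_SIZE = 4096

def div_round_up(n, d):
    # Same rounding behavior as the kernel's DIV_ROUND_UP() macro.
    return (n + d - 1) // d

def ldt_nr_pages(nr_entries):
    return div_round_up(nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE)

assert ldt_nr_pages(1) == 1    # even one entry occupies a full page
assert ldt_nr_pages(512) == 1  # 512 * 8 = 4096 bytes: exactly one page
assert ldt_nr_pages(513) == 2  # one entry past a page boundary rounds up
```

Mapping and unmapping the same nr_pages is what lets unmap_ldt_struct() clear exactly the PTEs that map_ldt_struct() populated.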


2018-10-23 16:35:35

by Kirill A. Shutemov

Subject: [PATCH 1/2] x86/mm: Move LDT remap out of KASLR region on 5-level paging

On 5-level paging the LDT remap area is placed in the middle of the
KASLR randomization region, and it can overlap with the direct mapping,
vmalloc or vmap area.

Let's move the LDT remap area just before the direct mapping, which makes it
safe for KASLR. This also allows us to unify the layout between 4- and 5-level paging.

We don't touch the 4 pgd slot gap just before the direct mapping that is
reserved for a hypervisor, but move the direct mapping by one slot instead.

The LDT mapping is per-mm, so we cannot move it into the P4D page table next
to the CPU_ENTRY_AREA without complicating PGD table allocation for 5-level
paging.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on")
---
Documentation/x86/x86_64/mm.txt | 8 ++++----
arch/x86/include/asm/page_64_types.h | 12 +++++++-----
arch/x86/include/asm/pgtable_64_types.h | 4 +---
arch/x86/xen/mmu_pv.c | 6 +++---
4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 5432a96d31ff..463c48c26fb7 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -4,7 +4,8 @@ Virtual memory map with 4 level page tables:
0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
hole caused by [47:63] sign extension
ffff800000000000 - ffff87ffffffffff (=43 bits) guard hole, reserved for hypervisor
-ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
+ffff888000000000 - ffff887fffffffff (=39 bits) LDT remap for PTI
+ffff888000000000 - ffffc87fffffffff (=64 TB) direct mapping of all phys. memory
ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
@@ -14,7 +15,6 @@ ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
... unused hole ...
vaddr_end for KASLR
fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
-fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI
ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
... unused hole ...
ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
@@ -30,8 +30,8 @@ Virtual memory map with 5 level page tables:
0000000000000000 - 00ffffffffffffff (=56 bits) user space, different per mm
hole caused by [56:63] sign extension
ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
-ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
-ff90000000000000 - ff9fffffffffffff (=52 bits) LDT remap for PTI
+ff10000000000000 - ff10ffffffffffff (=48 bits) LDT remap for PTI
+ff11000000000000 - ff90ffffffffffff (=55 bits) direct mapping of all phys. memory
ffa0000000000000 - ffd1ffffffffffff (=54 bits) vmalloc/ioremap space (12800 TB)
ffd2000000000000 - ffd3ffffffffffff (=49 bits) hole
ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 6afac386a434..b99d497e342d 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -33,12 +33,14 @@

/*
* Set __PAGE_OFFSET to the most negative possible address +
- * PGDIR_SIZE*16 (pgd slot 272). The gap is to allow a space for a
- * hypervisor to fit. Choosing 16 slots here is arbitrary, but it's
- * what Xen requires.
+ * PGDIR_SIZE*17 (pgd slot 273).
+ *
+ * The gap is to allow a space for LDT remap for PTI (1 pgd slot) and space for
+ * a hypervisor (16 slots). Choosing 16 slots for a hypervisor is arbitrary,
+ * but it's what Xen requires.
*/
-#define __PAGE_OFFSET_BASE_L5 _AC(0xff10000000000000, UL)
-#define __PAGE_OFFSET_BASE_L4 _AC(0xffff880000000000, UL)
+#define __PAGE_OFFSET_BASE_L5 _AC(0xff11000000000000, UL)
+#define __PAGE_OFFSET_BASE_L4 _AC(0xffff888000000000, UL)

#ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT
#define __PAGE_OFFSET page_offset_base
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 04edd2d58211..84bd9bdc1987 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -111,9 +111,7 @@ extern unsigned int ptrs_per_p4d;
*/
#define MAXMEM (1UL << MAX_PHYSMEM_BITS)

-#define LDT_PGD_ENTRY_L4 -3UL
-#define LDT_PGD_ENTRY_L5 -112UL
-#define LDT_PGD_ENTRY (pgtable_l5_enabled() ? LDT_PGD_ENTRY_L5 : LDT_PGD_ENTRY_L4)
+#define LDT_PGD_ENTRY -240UL
#define LDT_BASE_ADDR (LDT_PGD_ENTRY << PGDIR_SHIFT)
#define LDT_END_ADDR (LDT_BASE_ADDR + PGDIR_SIZE)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index dd461c0167ef..2c84c6ad8b50 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1897,7 +1897,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
init_top_pgt[0] = __pgd(0);

/* Pre-constructed entries are in pfn, so convert to mfn */
- /* L4[272] -> level3_ident_pgt */
+ /* L4[273] -> level3_ident_pgt */
/* L4[511] -> level3_kernel_pgt */
convert_pfn_mfn(init_top_pgt);

@@ -1917,8 +1917,8 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
addr[0] = (unsigned long)pgd;
addr[1] = (unsigned long)l3;
addr[2] = (unsigned long)l2;
- /* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
- * Both L4[272][0] and L4[511][510] have entries that point to the same
+ /* Graft it onto L4[273][0]. Note that we creating an aliasing problem:
+ * Both L4[273][0] and L4[511][510] have entries that point to the same
* L2 (PMD) tables. Meaning that if you modify it in __va space
* it will be also modified in the __ka space! (But if you just
* modify the PMD table to point to other PTE's or none, then you
--
2.19.1
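The unified LDT_PGD_ENTRY of -240 can be sanity-checked with a few lines of arithmetic. This is an editorial sketch, not kernel code; with PGDIR_SHIFT of 39 on 4-level paging and 48 on 5-level paging, the single value lands the remap at the intended base in both layouts (ffff880000000000 on 4-level; the '888' in the mm.txt hunk above is a typo, as the follow-up review notes):

```python
# Check that LDT_PGD_ENTRY = -240UL yields the documented LDT remap base
# under both paging modes. PGDIR_SHIFT is 39 with 4-level paging and
# 48 with 5-level paging.
MASK64 = (1 << 64) - 1
LDT_PGD_ENTRY = -240

def ldt_base_addr(pgdir_shift):
    # Model LDT_BASE_ADDR = (LDT_PGD_ENTRY << PGDIR_SHIFT), reduced to a
    # 64-bit unsigned value the way the C expression wraps.
    return (LDT_PGD_ENTRY << pgdir_shift) & MASK64

assert ldt_base_addr(39) == 0xffff880000000000  # 4-level paging
assert ldt_base_addr(48) == 0xff10000000000000  # 5-level paging
```

One define replacing the per-mode LDT_PGD_ENTRY_L4/_L5 pair is what makes the 4- and 5-level layouts line up.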


2018-10-24 11:15:14

by Christoph Hellwig

Subject: Re: [PATCH 2/2] x86/ldt: Unmap PTEs for the slow before freeing LDT

The subject line does not parse..

2018-10-24 11:57:46

by Matthew Wilcox

Subject: Re: [PATCH 1/2] x86/mm: Move LDT remap out of KASLR region on 5-level paging

On Tue, Oct 23, 2018 at 07:31:56PM +0300, Kirill A. Shutemov wrote:
> -ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
> +ffff888000000000 - ffff887fffffffff (=39 bits) LDT remap for PTI

I'm a little bit cross-eyed at this point, but I think the above '888'
should be '880'.

> @@ -14,7 +15,6 @@ ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
> ... unused hole ...
> vaddr_end for KASLR
> fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
> -fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI

... and the line above this one should be adjusted to finish at
fffffeffffffffff (also it's now 40 bits). Or should there be something
else here?

> ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
> ... unused hole ...
> ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
> @@ -30,8 +30,8 @@ Virtual memory map with 5 level page tables:
> 0000000000000000 - 00ffffffffffffff (=56 bits) user space, different per mm
> hole caused by [56:63] sign extension
> ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
> -ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
> -ff90000000000000 - ff9fffffffffffff (=52 bits) LDT remap for PTI
> +ff10000000000000 - ff10ffffffffffff (=48 bits) LDT remap for PTI
> +ff11000000000000 - ff90ffffffffffff (=55 bits) direct mapping of all phys. memory

What's at ff910..0 to ff9f..f ?

Is there any way we can generate this part of this file to prevent human
error from creeping in over time? ;-)
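For what it's worth, such a generator could be sketched along these lines. This is a hypothetical illustration, nothing like it exists in the tree: each region is described once by its start and size, and the end address and width are derived, so the columns can never disagree:

```python
# Hypothetical sketch of generating the mm.txt layout lines from a single
# source of truth. Names are illustrative, not from the kernel tree.
def layout_lines(regions):
    """regions: iterable of (start, size_in_bytes, description),
    with size a power of two."""
    lines = []
    for start, size, desc in regions:
        end = start + size - 1           # inclusive end address
        bits = size.bit_length() - 1     # log2 of the region size
        lines.append("%016x - %016x (=%d bits) %s" % (start, end, bits, desc))
    return lines

# Two entries from the post-patch 4-level layout:
table = layout_lines([
    (0xffff880000000000, 1 << 39, "LDT remap for PTI"),
    (0xffff888000000000, 1 << 46, "direct mapping of all phys. memory"),
])
```

Run over the full region list, the output could replace the hand-maintained table, and mismatches like the '888'/'880' one above would be impossible by construction.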


2018-10-24 12:46:02

by Kirill A. Shutemov

Subject: Re: [PATCH 1/2] x86/mm: Move LDT remap out of KASLR region on 5-level paging

On Wed, Oct 24, 2018 at 04:54:47AM -0700, Matthew Wilcox wrote:
> On Tue, Oct 23, 2018 at 07:31:56PM +0300, Kirill A. Shutemov wrote:
> > -ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
> > +ffff888000000000 - ffff887fffffffff (=39 bits) LDT remap for PTI
>
> I'm a little bit cross-eyed at this point, but I think the above '888'
> should be '880'.
>
> > @@ -14,7 +15,6 @@ ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
> > ... unused hole ...
> > vaddr_end for KASLR
> > fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
> > -fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI
>
> ... and the line above this one should be adjusted to finish at
> fffffeffffffffff (also it's now 40 bits). Or should there be something
> else here?
>
> > ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
> > ... unused hole ...
> > ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
> > @@ -30,8 +30,8 @@ Virtual memory map with 5 level page tables:
> > 0000000000000000 - 00ffffffffffffff (=56 bits) user space, different per mm
> > hole caused by [56:63] sign extension
> > ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
> > -ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
> > -ff90000000000000 - ff9fffffffffffff (=52 bits) LDT remap for PTI
> > +ff10000000000000 - ff10ffffffffffff (=48 bits) LDT remap for PTI
> > +ff11000000000000 - ff90ffffffffffff (=55 bits) direct mapping of all phys. memory
>
> What's at ff910..0 to ff9f..f ?
>
> Is there any way we can generate this part of this file to prevent human
> error from creeping in over time? ;-)

In Linus' current tree this part of the documentation has been re-written.
I've rebased onto it and rewrote the documentation for the change.

I believe I've fixed all the mistakes you've noticed. Please check out v2;
I will post it soon.

--
Kirill A. Shutemov

2018-10-24 18:50:15

by Andy Lutomirski

Subject: Re: [PATCH 2/2] x86/ldt: Unmap PTEs for the slow before freeing LDT

On Tue, Oct 23, 2018 at 9:32 AM Kirill A. Shutemov
<[email protected]> wrote:
>
> modify_ldt(2) leaves the old LDT mapped after we switch over to the new one.
> The memory for the old LDT gets freed and the pages can be re-used.
>
> Leaving the mapping in place can have security implications. The mapping
> is present in the userspace copy of the page tables, and a Meltdown-like
> attack can read these freed and possibly reused pages.

Code looks okay. But:

> - /*
> - * Did we already have the top level entry allocated? We can't
> - * use pgd_none() for this because it doens't do anything on
> - * 4-level page table kernels.
> - */
> - pgd = pgd_offset(mm, LDT_BASE_ADDR);

This looks like an unrelated cleanup. Can it be its own patch?

2018-10-24 19:41:50

by H. Peter Anvin

Subject: Re: [PATCH 2/2] x86/ldt: Unmap PTEs for the slow before freeing LDT

On 10/23/18 9:31 AM, Kirill A. Shutemov wrote:
>
> It shouldn't be a particularly hot path anyway.
>

That's putting it mildly.

-hpa


2018-10-25 07:28:04

by Kirill A. Shutemov

Subject: Re: [PATCH 2/2] x86/ldt: Unmap PTEs for the slow before freeing LDT

On Wed, Oct 24, 2018 at 11:49:17AM -0700, Andy Lutomirski wrote:
> On Tue, Oct 23, 2018 at 9:32 AM Kirill A. Shutemov
> <[email protected]> wrote:
> >
> > modify_ldt(2) leaves the old LDT mapped after we switch over to the new one.
> > The memory for the old LDT gets freed and the pages can be re-used.
> >
> > Leaving the mapping in place can have security implications. The mapping
> > is present in the userspace copy of the page tables, and a Meltdown-like
> > attack can read these freed and possibly reused pages.
>
> Code looks okay. But:
>
> > - /*
> > - * Did we already have the top level entry allocated? We can't
> > - * use pgd_none() for this because it doens't do anything on
> > - * 4-level page table kernels.
> > - */
> > - pgd = pgd_offset(mm, LDT_BASE_ADDR);
>
> This looks like an unrelated cleanup. Can it be its own patch?

Okay, I'll move it into a separate patch in v3.

I'll give it some more time for comments on v2 before the respin.

--
Kirill A. Shutemov