2018-06-12 20:51:38

by Dmitry Safonov

Subject: [RFC] x86/vdso: Align vdso after searching for free area

There is an erratum for AMD family 15h CPUs [1], and since
commit dfb09f9b7ab03 ("x86, amd: Avoid cache aliasing penalties on AMD
family 15h") bits [14:12] are cleared for shared libraries.
Per-boot ASLR is also applied to the upper bits by OR'ing its value
directly into the address.

As we need a special alignment and specific lower-bit values, it makes
little sense to call get_unmapped_area() after calculating the
address. It can also lead to random crashes if get_unmapped_area()
actually changes/aligns the address, which we observed on a 15h CPU.
Usually this is not a problem, as there aren't many mappings (except
possibly ld.so, uprobes?) and the resulting address is the same before
and after get_unmapped_area().

Move align_vdso_addr() after get_unmapped_area() to make sure that the
erratum workaround for AMD 15h is always applied.

[1]: https://developer.amd.com/wordpress/media/2012/10/SharedL1InstructionCacheonAMD15hCPU.pdf
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vasiliy Khoruzhick <[email protected]>
Cc: [email protected]
Signed-off-by: Dmitry Safonov <[email protected]>
---
arch/x86/entry/vdso/vma.c | 18 ++++++++++--------
arch/x86/include/asm/elf.h | 1 +
arch/x86/kernel/sys_x86_64.c | 2 +-
3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 5b8b556dbb12..862f0cce3ee6 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -154,18 +154,26 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
unsigned long text_start;
+ unsigned long area_size = image->size - image->sym_vvar_start;
int ret = 0;

if (down_write_killable(&mm->mmap_sem))
return -EINTR;

- addr = get_unmapped_area(NULL, addr,
- image->size - image->sym_vvar_start, 0, 0);
+ /* Find a larger area for the vma than needed, to align the vdso later. */
+ area_size += get_align_mask();
+ addr = get_unmapped_area(NULL, addr, area_size, 0, 0);
if (IS_ERR_VALUE(addr)) {
ret = addr;
goto up_fail;
}

+ /*
+ * Forcibly align the final address in case we have a hardware
+ * issue that requires alignment for performance reasons.
+ */
+ addr = align_vdso_addr(addr);
+
text_start = addr - image->sym_vvar_start;

/*
@@ -239,12 +247,6 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
addr = start;
}

- /*
- * Forcibly align the final address in case we have a hardware
- * issue that requires alignment for performance reasons.
- */
- addr = align_vdso_addr(addr);
-
return addr;
}

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 0d157d2a1e2a..88aa49294c9c 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -382,4 +382,5 @@ struct va_alignment {

extern struct va_alignment va_align;
extern unsigned long align_vdso_addr(unsigned long);
+extern unsigned long get_align_mask(void);
#endif /* _ASM_X86_ELF_H */
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 6a78d4b36a79..4dd74c6f236d 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -27,7 +27,7 @@
/*
* Align a virtual address to avoid aliasing in the I$ on AMD F15h.
*/
-static unsigned long get_align_mask(void)
+unsigned long get_align_mask(void)
{
/* handle 32- and 64-bit case with a single conditional */
if (va_align.flags < 0 || !(va_align.flags & (2 - mmap_is_ia32())))
--
2.13.6



2018-06-12 21:24:54

by Dmitry Safonov

Subject: Re: [RFC] x86/vdso: Align vdso after searching for free area

On Tue, 2018-06-12 at 21:49 +0100, Dmitry Safonov wrote:
> There is an erratum for AMD family 15h CPUs [1], and since
> commit dfb09f9b7ab03 ("x86, amd: Avoid cache aliasing penalties on
> AMD family 15h") bits [14:12] are cleared for shared libraries.
> Per-boot ASLR is also applied to the upper bits by OR'ing its value
> directly into the address.
>
> As we need a special alignment and specific lower-bit values, it
> makes little sense to call get_unmapped_area() after calculating the
> address. It can also lead to random crashes if get_unmapped_area()
> actually changes/aligns the address, which we observed on a 15h CPU.
> Usually this is not a problem, as there aren't many mappings (except
> possibly ld.so, uprobes?) and the resulting address is the same
> before and after get_unmapped_area().
>
> Move align_vdso_addr() after get_unmapped_area() to make sure that
> the erratum workaround for AMD 15h is always applied.

Alternative dirty-hacky idea:
specify some (struct file *) for the vdso vma when calling
get_unmapped_area(); the mapping would then be aligned automatically.
Dirty as hell, since it relies on get_unmapped_area() implementation
details.

--
Dima

2018-06-12 21:40:59

by H. Peter Anvin

Subject: Re: [RFC] x86/vdso: Align vdso after searching for free area

On 06/12/18 14:24, Dmitry Safonov wrote:
>>
>> Move align_vdso_addr() after get_unmapped_area() to make sure that
>> the erratum workaround for AMD 15h is always applied.
>
> Alternative dirty-hacky idea:
> specify some (struct file *) for the vdso vma when calling
> get_unmapped_area(); the mapping would then be aligned automatically.
> Dirty as hell, since it relies on get_unmapped_area() implementation
> details.
>


I have mentioned several times that I would like to see the vdso
be an actual file in a filesystem, one that the kernel *or* user
space can map (it needs to be MAP_SHARED, of course.)

The vdso data page needs to be moved after the ELF object itself for
this to work. Ideally it should be given an actual ELF segment (and
ideally an ELF section as well.) The easy way to do this is to give the
linker a dummy vvar page as a properly aligned section at compile time,
into which space the kernel can map the real vvar page. The only
downside is that the linker likes to put section headers after the
actual data, so it may end up taking up an extra page over the current
arrangement. However, I think the gains outweigh the losses.
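
A hypothetical linker-script fragment for the dummy vvar page described
above might look like the following. The section name, symbol, and page
size are assumptions for illustration, not the actual vdso linker script:

```
/* Reserve one page-aligned dummy page that the kernel can later
 * replace with the real vvar mapping. */
. = ALIGN(4096);
.vvar_dummy : {
	vvar_page = .;
	. += 4096;
}
```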

-hpa

2018-06-12 21:49:28

by Dmitry Safonov

Subject: Re: [RFC] x86/vdso: Align vdso after searching for free area

On Tue, 2018-06-12 at 14:39 -0700, H. Peter Anvin wrote:
> On 06/12/18 14:24, Dmitry Safonov wrote:
> > >
> > > Move align_vdso_addr() after get_unmapped_area() to make sure
> > > that the erratum workaround for AMD 15h is always applied.
> >
> > Alternative dirty-hacky idea:
> > specify some (struct file *) for the vdso vma when calling
> > get_unmapped_area(); the mapping would then be aligned
> > automatically. Dirty as hell, since it relies on
> > get_unmapped_area() implementation details.
> >
>
>
> I have mentioned several times that I would like to see the vdso
> be an actual file in a filesystem, one that the kernel *or* user
> space can map (it needs to be MAP_SHARED, of course.)

Yeah, I remember I did that previously:
https://lwn.net/Articles/698854/

But I didn't have time to address the review comments.
It's probably worth resurrecting the patches and cleaning the dust
off them.

> The vdso data page needs to be moved after the ELF object itself for
> this to work. Ideally it should be given an actual ELF segment (and
> ideally an ELF section as well.) The easy way to do this is to give
> the linker a dummy vvar page as a properly aligned section at
> compile time, into which space the kernel can map the real vvar
> page. The only downside is that the linker likes to put section
> headers after the actual data, so it may end up taking up an extra
> page over the current arrangement. However, I think the gains
> outweigh the losses.

--
Thanks,
Dmitry