2018-03-22 16:38:21

by Ilya Smith

Subject: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

The current implementation doesn't randomize the address returned by mmap.
All the entropy is spent choosing mmap_base_addr at process creation.
After that, mmap builds a very predictable address-space layout, which
allows ASLR to be bypassed in many cases. This patch randomizes the
address on every mmap call.
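
For illustration, a minimal user-space sketch of the problem (the addresses
are illustrative, but without this patch the second mapping typically lands
right below the first, so a single leaked pointer reveals its neighbours):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *a = mmap(0, 4096, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	void *b = mmap(0, 4096, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	/* typically b == (char *)a - 4096 under the current scheme */
	printf("%p %p\n", a, b);
	return 0;
}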

---
v2: Changed the way the gap is chosen. We no longer collect all possible
gaps. Instead, a random address is generated and used as the direction for
walking the VMA tree; the tree is walked with backtracking until a suitable
gap is found. Once a gap is found, the address is randomly shifted back from
the start of the next vma.

The vm_unmapped_area_info structure was extended with a new field,
random_shift, which can be used to set an architecture-dependent limit on
the shift from the start of the next vma. On x86-64 this limit is 256 pages
(1 MiB) for 32-bit applications and 0x1000000 pages (64 GiB) for 64-bit ones.

A pseudo-random generator is used as the entropy source, because on Intel
x86-64 processors the RDRAND instruction becomes very slow once its internal
buffer is exhausted - after roughly 10000 iterations.
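
The bits are therefore pulled from the kernel PRNG, as patch 1/2 does:

	unsigned long entropy[2];
	/* cheap pseudo-random bits, no RDRAND involved */
	prandom_bytes(&entropy, sizeof(entropy));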

The feature is enabled by setting randomize_va_space to 4.

---
Performance:
After applying this patch, a single mmap takes about 7% longer according to
the following test:

unsigned long long one_iteration(void)
{
	unsigned long long before = __rdtsc();	/* from <x86intrin.h> */
	void *addr = mmap(0, SIZE, PROT_READ | PROT_WRITE,
			  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	unsigned long long after = __rdtsc();

	munmap(addr, SIZE);
	return after - before;
}
...
unsigned long long total = 0;

for (int i = 0; i < count; ++i)
	total += one_iteration();
printf("%llu\n", total);

The extra time is consumed by the div instruction used to compute the address.

make kernel:
echo 2 > /proc/sys/kernel/randomize_va_space
make mrproper && make defconfig && time make
real 11m9.925s
user 10m17.829s
sys 1m4.969s

echo 4 > /proc/sys/kernel/randomize_va_space
make mrproper && make defconfig && time make
real 11m12.806s
user 10m18.305s
sys 1m4.281s


Ilya Smith (2):
Randomization of address chosen by mmap.
Architecture defined limit on memory region random shift.

arch/alpha/kernel/osf_sys.c | 1 +
arch/arc/mm/mmap.c | 1 +
arch/arm/mm/mmap.c | 2 +
arch/frv/mm/elf-fdpic.c | 1 +
arch/ia64/kernel/sys_ia64.c | 1 +
arch/ia64/mm/hugetlbpage.c | 1 +
arch/metag/mm/hugetlbpage.c | 1 +
arch/mips/mm/mmap.c | 1 +
arch/parisc/kernel/sys_parisc.c | 2 +
arch/powerpc/mm/hugetlbpage-radix.c | 1 +
arch/powerpc/mm/mmap.c | 2 +
arch/powerpc/mm/slice.c | 2 +
arch/s390/mm/mmap.c | 2 +
arch/sh/mm/mmap.c | 2 +
arch/sparc/kernel/sys_sparc_32.c | 1 +
arch/sparc/kernel/sys_sparc_64.c | 2 +
arch/sparc/mm/hugetlbpage.c | 2 +
arch/tile/mm/hugetlbpage.c | 2 +
arch/x86/kernel/sys_x86_64.c | 4 +
arch/x86/mm/hugetlbpage.c | 4 +
fs/hugetlbfs/inode.c | 1 +
include/linux/mm.h | 17 ++--
mm/mmap.c | 165 ++++++++++++++++++++++++++++++++++++
23 files changed, 213 insertions(+), 5 deletions(-)

--
2.7.4



2018-03-22 16:38:40

by Ilya Smith

Subject: [RFC PATCH v2 2/2] Architecture defined limit on memory region random shift.

Signed-off-by: Ilya Smith <[email protected]>
---
arch/alpha/kernel/osf_sys.c | 1 +
arch/arc/mm/mmap.c | 1 +
arch/arm/mm/mmap.c | 2 ++
arch/frv/mm/elf-fdpic.c | 1 +
arch/ia64/kernel/sys_ia64.c | 1 +
arch/ia64/mm/hugetlbpage.c | 1 +
arch/metag/mm/hugetlbpage.c | 1 +
arch/mips/mm/mmap.c | 1 +
arch/parisc/kernel/sys_parisc.c | 2 ++
arch/powerpc/mm/hugetlbpage-radix.c | 1 +
arch/powerpc/mm/mmap.c | 2 ++
arch/powerpc/mm/slice.c | 2 ++
arch/s390/mm/mmap.c | 2 ++
arch/sh/mm/mmap.c | 2 ++
arch/sparc/kernel/sys_sparc_32.c | 1 +
arch/sparc/kernel/sys_sparc_64.c | 2 ++
arch/sparc/mm/hugetlbpage.c | 2 ++
arch/tile/mm/hugetlbpage.c | 2 ++
arch/x86/kernel/sys_x86_64.c | 4 ++++
arch/x86/mm/hugetlbpage.c | 4 ++++
fs/hugetlbfs/inode.c | 1 +
include/linux/mm.h | 1 +
mm/mmap.c | 3 ++-
23 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index fa1a392..0ab9f31 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1301,6 +1301,7 @@ arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
info.high_limit = limit;
info.align_mask = 0;
info.align_offset = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 2e13683..45225fc 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -75,5 +75,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = TASK_SIZE;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index eb1de66..1eb660c 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -101,6 +101,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = TASK_SIZE;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

@@ -152,6 +153,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.high_limit = mm->mmap_base;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

/*
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index 46aa289..a2ce2ce 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -86,6 +86,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
info.high_limit = (current->mm->start_stack - 0x00200000);
info.align_mask = 0;
info.align_offset = 0;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
goto success;
diff --git a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c
index 085adfc..15fa4fb 100644
--- a/arch/ia64/kernel/sys_ia64.c
+++ b/arch/ia64/kernel/sys_ia64.c
@@ -64,6 +64,7 @@ arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len
info.high_limit = TASK_SIZE;
info.align_mask = align_mask;
info.align_offset = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index d16e419..ec7822d 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -162,6 +162,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, u
info.high_limit = HPAGE_REGION_BASE + RGN_MAP_LIMIT;
info.align_mask = PAGE_MASK & (HPAGE_SIZE - 1);
info.align_offset = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

diff --git a/arch/metag/mm/hugetlbpage.c b/arch/metag/mm/hugetlbpage.c
index 012ee4c..babd325 100644
--- a/arch/metag/mm/hugetlbpage.c
+++ b/arch/metag/mm/hugetlbpage.c
@@ -191,6 +191,7 @@ hugetlb_get_unmapped_area_new_pmd(unsigned long len)
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & HUGEPT_MASK;
info.align_offset = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 33d3251..5a3d384 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -122,6 +122,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
info.flags = 0;
info.low_limit = mm->mmap_base;
info.high_limit = TASK_SIZE;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index 378a754..abf4b05 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -130,6 +130,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = mmap_upper_limit();
info.align_mask = last_mmap ? (PAGE_MASK & (SHM_COLOUR - 1)) : 0;
info.align_offset = shared_align_offset(last_mmap, pgoff);
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

found_addr:
@@ -192,6 +193,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.high_limit = mm->mmap_base;
info.align_mask = last_mmap ? (PAGE_MASK & (SHM_COLOUR - 1)) : 0;
info.align_offset = shared_align_offset(last_mmap, pgoff);
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
goto found_addr;
diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
index 2486bee..1d61a88 100644
--- a/arch/powerpc/mm/hugetlbpage-radix.c
+++ b/arch/powerpc/mm/hugetlbpage-radix.c
@@ -87,6 +87,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = 0;

return vm_unmapped_area(&info);
}
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index d503f34..7fe98c7 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -136,6 +136,7 @@ radix__arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.low_limit = mm->mmap_base;
info.high_limit = high_limit;
info.align_mask = 0;
+ info.random_shift = 0;

return vm_unmapped_area(&info);
}
@@ -180,6 +181,7 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
info.low_limit = max(PAGE_SIZE, mmap_min_addr);
info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
info.align_mask = 0;
+ info.random_shift = 0;

addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 23ec2c5..2005845 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -284,6 +284,7 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
info.length = len;
info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
info.align_offset = 0;
+ info.random_shift = 0;

addr = TASK_UNMAPPED_BASE;
/*
@@ -330,6 +331,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
info.length = len;
info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
info.align_offset = 0;
+ info.random_shift = 0;

addr = mm->mmap_base;
/*
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 831bdcf..141823f 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -95,6 +95,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.length = len;
info.low_limit = mm->mmap_base;
info.high_limit = TASK_SIZE;
+ info.random_shift = 0;
if (filp || (flags & MAP_SHARED))
info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
else
@@ -146,6 +147,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.length = len;
info.low_limit = max(PAGE_SIZE, mmap_min_addr);
info.high_limit = mm->mmap_base;
+ info.random_shift = 0;
if (filp || (flags & MAP_SHARED))
info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
else
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6a1a129..d9206c2 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -74,6 +74,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = TASK_SIZE;
info.align_mask = do_colour_align ? (PAGE_MASK & shm_align_mask) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

@@ -124,6 +125,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.high_limit = mm->mmap_base;
info.align_mask = do_colour_align ? (PAGE_MASK & shm_align_mask) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

/*
diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
index 990703b7..af664ba3 100644
--- a/arch/sparc/kernel/sys_sparc_32.c
+++ b/arch/sparc/kernel/sys_sparc_32.c
@@ -66,6 +66,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
info.align_mask = (flags & MAP_SHARED) ?
(PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 55416db..3d12e3d 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -131,6 +131,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
info.high_limit = min(task_size, VA_EXCLUDE_START);
info.align_mask = do_color_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
@@ -194,6 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.high_limit = mm->mmap_base;
info.align_mask = do_color_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

/*
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 0112d69..6d0c032 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -43,6 +43,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
info.high_limit = min(task_size, VA_EXCLUDE_START);
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
@@ -75,6 +76,7 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.high_limit = mm->mmap_base;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

/*
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index 0986d42..2b3a9b6 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -176,6 +176,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}

@@ -193,6 +194,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
info.high_limit = current->mm->mmap_base;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = 0;
addr = vm_unmapped_area(&info);

/*
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 676774b..0eda047 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -163,6 +163,8 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = end;
info.align_mask = 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = in_compat_syscall() ?
+ 256 : 0x1000000;
if (filp) {
info.align_mask = get_align_mask();
info.align_offset += get_align_bits();
@@ -224,6 +226,8 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,

info.align_mask = 0;
info.align_offset = pgoff << PAGE_SHIFT;
+ info.random_shift = in_compat_syscall() ?
+ 256 : 0x1000000;
if (filp) {
info.align_mask = get_align_mask();
info.align_offset += get_align_bits();
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 00b2966..f4f6436 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -97,6 +97,8 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,

info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = in_compat_syscall() ?
+ 256 : 0x1000000;
return vm_unmapped_area(&info);
}

@@ -121,6 +123,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,

info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = in_compat_syscall() ?
+ 256 : 0x1000000;
addr = vm_unmapped_area(&info);

/*
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 8fe1b0a..83e962e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -200,6 +200,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}
#endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c716257..f869e6d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2252,6 +2252,7 @@ struct vm_unmapped_area_info {
unsigned long high_limit;
unsigned long align_mask;
unsigned long align_offset;
+ unsigned long random_shift;
};

#ifndef CONFIG_MMU
diff --git a/mm/mmap.c b/mm/mmap.c
index ba9cebb..425fa09 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1938,7 +1938,7 @@ unsigned long unmapped_area_random(struct vm_unmapped_area_info *info)
if (gap_end == gap_start)
return gap_start;
addr = entropy[1] % (min((gap_end - gap_start) >> PAGE_SHIFT,
- 0x10000UL));
+ info->random_shift));
addr = gap_end - (addr << PAGE_SHIFT);
addr += (info->align_offset - addr) & info->align_mask;
return addr;
@@ -2186,6 +2186,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.low_limit = mm->mmap_base;
info.high_limit = TASK_SIZE;
info.align_mask = 0;
+ info.random_shift = 0;
return vm_unmapped_area(&info);
}
#endif
--
2.7.4


2018-03-22 16:39:58

by Ilya Smith

Subject: [RFC PATCH v2 1/2] Randomization of address chosen by mmap.

Signed-off-by: Ilya Smith <[email protected]>
---
include/linux/mm.h | 16 ++++--
mm/mmap.c | 164 +++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 175 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ad06d42..c716257 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -25,6 +25,7 @@
#include <linux/err.h>
#include <linux/page_ref.h>
#include <linux/memremap.h>
+#include <linux/sched.h>

struct mempolicy;
struct anon_vma;
@@ -2253,6 +2254,13 @@ struct vm_unmapped_area_info {
unsigned long align_offset;
};

+#ifndef CONFIG_MMU
+#define randomize_va_space 0
+#else
+extern int randomize_va_space;
+#endif
+
+extern unsigned long unmapped_area_random(struct vm_unmapped_area_info *info);
extern unsigned long unmapped_area(struct vm_unmapped_area_info *info);
extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);

@@ -2268,6 +2276,9 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
static inline unsigned long
vm_unmapped_area(struct vm_unmapped_area_info *info)
{
+ /* How about 32 bit process?? */
+ if ((current->flags & PF_RANDOMIZE) && randomize_va_space > 3)
+ return unmapped_area_random(info);
if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
return unmapped_area_topdown(info);
else
@@ -2529,11 +2540,6 @@ int drop_caches_sysctl_handler(struct ctl_table *, int,
void drop_slab(void);
void drop_slab_node(int nid);

-#ifndef CONFIG_MMU
-#define randomize_va_space 0
-#else
-extern int randomize_va_space;
-#endif

const char * arch_vma_name(struct vm_area_struct *vma);
void print_vma_addr(char *prefix, unsigned long rip);
diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021..ba9cebb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -45,6 +45,7 @@
#include <linux/moduleparam.h>
#include <linux/pkeys.h>
#include <linux/oom.h>
+#include <linux/random.h>

#include <linux/uaccess.h>
#include <asm/cacheflush.h>
@@ -1780,6 +1781,169 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
return error;
}

+unsigned long unmapped_area_random(struct vm_unmapped_area_info *info)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma = NULL;
+ struct vm_area_struct *visited_vma = NULL;
+ unsigned long entropy[2];
+ unsigned long length, low_limit, high_limit, gap_start, gap_end;
+ unsigned long addr = 0;
+
+ /* get entropy with prng */
+ prandom_bytes(&entropy, sizeof(entropy));
+ /* small hack to prevent EPERM result */
+ info->low_limit = max(info->low_limit, mmap_min_addr);
+
+ /* Adjust search length to account for worst case alignment overhead */
+ length = info->length + info->align_mask;
+ if (length < info->length)
+ return -ENOMEM;
+
+ /*
+ * Adjust search limits by the desired length.
+ * See implementation comment at top of unmapped_area().
+ */
+ gap_end = info->high_limit;
+ if (gap_end < length)
+ return -ENOMEM;
+ high_limit = gap_end - length;
+
+ low_limit = info->low_limit + info->align_mask;
+ if (low_limit >= high_limit)
+ return -ENOMEM;
+
+ /* Choose random addr in limit range */
+ addr = entropy[0] % ((high_limit - low_limit) >> PAGE_SHIFT);
+ addr = low_limit + (addr << PAGE_SHIFT);
+ addr += (info->align_offset - addr) & info->align_mask;
+
+ /* Check if rbtree root looks promising */
+ if (RB_EMPTY_ROOT(&mm->mm_rb))
+ return -ENOMEM;
+
+ vma = rb_entry(mm->mm_rb.rb_node, struct vm_area_struct, vm_rb);
+ if (vma->rb_subtree_gap < length)
+ return -ENOMEM;
+ /* use randomly chosen address to find closest suitable gap */
+ while (true) {
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
+ gap_end = vm_start_gap(vma);
+ if (gap_end < low_limit)
+ break;
+ if (addr < vm_start_gap(vma)) {
+ /* random said check left */
+ if (vma->vm_rb.rb_left) {
+ struct vm_area_struct *left =
+ rb_entry(vma->vm_rb.rb_left,
+ struct vm_area_struct, vm_rb);
+ if (addr <= vm_start_gap(left) &&
+ left->rb_subtree_gap >= length) {
+ vma = left;
+ continue;
+ }
+ }
+ } else if (addr >= vm_end_gap(vma)) {
+ /* random said check right */
+ if (vma->vm_rb.rb_right) {
+ struct vm_area_struct *right =
+ rb_entry(vma->vm_rb.rb_right,
+ struct vm_area_struct, vm_rb);
+ /* it wants to go to the right */
+ if (right->rb_subtree_gap >= length) {
+ vma = right;
+ continue;
+ }
+ }
+ }
+ if (gap_start < low_limit) {
+ if (gap_end <= low_limit)
+ break;
+ gap_start = low_limit;
+ } else if (gap_end > info->high_limit) {
+ if (gap_start >= info->high_limit)
+ break;
+ gap_end = info->high_limit;
+ }
+ if (gap_end > gap_start &&
+ gap_end - gap_start >= length)
+ goto found;
+ visited_vma = vma;
+ break;
+ }
+ /* not found */
+ while (true) {
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
+
+ if (gap_start <= high_limit && vma->vm_rb.rb_right) {
+ struct vm_area_struct *right =
+ rb_entry(vma->vm_rb.rb_right,
+ struct vm_area_struct, vm_rb);
+ if (right->rb_subtree_gap >= length &&
+ right != visited_vma) {
+ vma = right;
+ continue;
+ }
+ }
+
+check_current:
+ /* Check if current node has a suitable gap */
+ gap_end = vm_start_gap(vma);
+ if (gap_end <= low_limit)
+ goto go_back;
+
+ if (gap_start < low_limit)
+ gap_start = low_limit;
+
+ if (gap_start <= high_limit &&
+ gap_end > gap_start && gap_end - gap_start >= length)
+ goto found;
+
+ /* Visit left subtree if it looks promising */
+ if (vma->vm_rb.rb_left) {
+ struct vm_area_struct *left =
+ rb_entry(vma->vm_rb.rb_left,
+ struct vm_area_struct, vm_rb);
+ if (left->rb_subtree_gap >= length &&
+ vm_end_gap(left) > low_limit &&
+ left != visited_vma) {
+ vma = left;
+ continue;
+ }
+ }
+go_back:
+ /* Go back up the rbtree to find next candidate node */
+ while (true) {
+ struct rb_node *prev = &vma->vm_rb;
+
+ if (!rb_parent(prev))
+ return -ENOMEM;
+ visited_vma = vma;
+ vma = rb_entry(rb_parent(prev),
+ struct vm_area_struct, vm_rb);
+ if (prev == vma->vm_rb.rb_right) {
+ gap_start = vma->vm_prev ?
+ vm_end_gap(vma->vm_prev) : low_limit;
+ goto check_current;
+ }
+ }
+ }
+found:
+ /* We found a suitable gap. Clip it with the original high_limit. */
+ if (gap_end > info->high_limit)
+ gap_end = info->high_limit;
+ gap_end -= info->length;
+ gap_end -= (gap_end - info->align_offset) & info->align_mask;
+ /* only one suitable page */
+ if (gap_end == gap_start)
+ return gap_start;
+ addr = entropy[1] % (min((gap_end - gap_start) >> PAGE_SHIFT,
+ 0x10000UL));
+ addr = gap_end - (addr << PAGE_SHIFT);
+ addr += (info->align_offset - addr) & info->align_mask;
+ return addr;
+}
+
unsigned long unmapped_area(struct vm_unmapped_area_info *info)
{
/*
--
2.7.4


2018-03-22 20:55:42

by Andrew Morton

Subject: Re: [RFC PATCH v2 1/2] Randomization of address chosen by mmap.

On Thu, 22 Mar 2018 19:36:37 +0300 Ilya Smith <[email protected]> wrote:

> include/linux/mm.h | 16 ++++--
> mm/mmap.c | 164 +++++++++++++++++++++++++++++++++++++++++++++++++++++

You'll be wanting to update the documentation.
Documentation/sysctl/kernel.txt and
Documentation/admin-guide/kernel-parameters.txt.

> ...
>
> @@ -2268,6 +2276,9 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
> static inline unsigned long
> vm_unmapped_area(struct vm_unmapped_area_info *info)
> {
> + /* How about 32 bit process?? */
> + if ((current->flags & PF_RANDOMIZE) && randomize_va_space > 3)
> + return unmapped_area_random(info);

The handling of randomize_va_space is peculiar. Rather than being a
bitfield which independently selects different modes, it is treated as
a scalar: the larger the value, the more stuff we randomize.

I can see the sense in that (and I wonder what randomize_va_space=5
will do). But it is... odd.

Why did you select randomize_va_space=4 for this? Is there a mode 3
already and we forgot to document it? Or did you leave a gap for
something? If the former, please feel free to fix the documentation
(in a separate, preceding patch) while you're in there ;)

> if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
> return unmapped_area_topdown(info);
> else
> @@ -2529,11 +2540,6 @@ int drop_caches_sysctl_handler(struct ctl_table *, int,
> void drop_slab(void);
> void drop_slab_node(int nid);
>
>
> ...
>
> @@ -1780,6 +1781,169 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> return error;
> }
>
> +unsigned long unmapped_area_random(struct vm_unmapped_area_info *info)
> +{

This function is just dead code if CONFIG_MMU=n, yes? Let's add the
ifdefs to make it go away in that case.

> + struct mm_struct *mm = current->mm;
> + struct vm_area_struct *vma = NULL;
> + struct vm_area_struct *visited_vma = NULL;
> + unsigned long entropy[2];
> + unsigned long length, low_limit, high_limit, gap_start, gap_end;
> + unsigned long addr = 0;
> +
> + /* get entropy with prng */
> + prandom_bytes(&entropy, sizeof(entropy));
> + /* small hack to prevent EPERM result */
> + info->low_limit = max(info->low_limit, mmap_min_addr);
> +
>
> ...
>
> +found:
> + /* We found a suitable gap. Clip it with the original high_limit. */
> + if (gap_end > info->high_limit)
> + gap_end = info->high_limit;
> + gap_end -= info->length;
> + gap_end -= (gap_end - info->align_offset) & info->align_mask;
> + /* only one suitable page */
> + if (gap_end == gap_start)
> + return gap_start;
> + addr = entropy[1] % (min((gap_end - gap_start) >> PAGE_SHIFT,
> + 0x10000UL));

What does the magic 0x10000 mean? Isn't a comment needed explaining this?

> + addr = gap_end - (addr << PAGE_SHIFT);
> + addr += (info->align_offset - addr) & info->align_mask;
> + return addr;
> +}
>
> ...
>



2018-03-22 20:57:31

by Andrew Morton

Subject: Re: [RFC PATCH v2 2/2] Architecture defined limit on memory region random shift.


Please add changelogs. An explanation of what a "limit on memory
region random shift" is would be nice ;) Why does it exist, why are we
doing this, etc. Surely there's something to be said - at present this
is just a lump of random code?




2018-03-22 20:58:59

by Andrew Morton

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Thu, 22 Mar 2018 19:36:36 +0300 Ilya Smith <[email protected]> wrote:

> The current implementation doesn't randomize the address returned by mmap.
> All the entropy is spent choosing mmap_base_addr at process creation.
> After that, mmap builds a very predictable address-space layout, which
> allows ASLR to be bypassed in many cases.

Perhaps some more effort on the problem description would help. *Are*
people predicting layouts at present? What problems does this cause?
How are they doing this and are there other approaches to solving the
problem?

Mainly: what value does this patchset have to our users? This reader
is unable to determine that from the information which you have
provided. Full details, please.


2018-03-23 12:52:59

by Matthew Wilcox

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> The current implementation doesn't randomize the address returned by mmap.
> All the entropy is spent choosing mmap_base_addr at process creation.
> After that, mmap builds a very predictable address-space layout, which
> allows ASLR to be bypassed in many cases. This patch randomizes the
> address on every mmap call.

Why should this be done in the kernel rather than libc? libc is perfectly
capable of specifying random numbers in the first argument of mmap.

2018-03-23 17:26:36

by Ilya Smith

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

Hello, Andrew

Thanks for reading this patch.

> On 22 Mar 2018, at 23:57, Andrew Morton <[email protected]> wrote:
>
> On Thu, 22 Mar 2018 19:36:36 +0300 Ilya Smith <[email protected]> wrote:
>
>> The current implementation doesn't randomize the address returned by mmap.
>> All the entropy is spent choosing mmap_base_addr at process creation.
>> After that, mmap builds a very predictable address-space layout, which
>> allows ASLR to be bypassed in many cases.
>
> Perhaps some more effort on the problem description would help. *Are*
> people predicting layouts at present? What problems does this cause?
> How are they doing this and are there other approaches to solving the
> problem?
>
Sorry, I dropped that from the first version. In short: the memory layout
can easily be reconstructed from a single leak, and under the current
implementation any out-of-bounds error can easily be exploited, because mmap
chooses an address just before the previously allocated segment. You can
read more about it here:
http://www.openwall.com/lists/oss-security/2018/02/27/5
Some tests are available here: https://github.com/blackzert/aslur
To solve the problem, the kernel should randomize the address on every mmap,
so an attacker can never easily obtain the addresses they need.

> Mainly: what value does this patchset have to our users? This reader
> is unable to determine that from the information which you have
> provided. Full details, please.

The value of this patchset is that it lowers the success rate of exploiting
vulnerable applications, whether the attack vector is remote or local.


2018-03-23 17:46:26

by Ilya Smith

Subject: Re: [RFC PATCH v2 1/2] Randomization of address chosen by mmap.


> On 22 Mar 2018, at 23:53, Andrew Morton <[email protected]> wrote:
>
> On Thu, 22 Mar 2018 19:36:37 +0300 Ilya Smith <[email protected]> wrote:
>
>> include/linux/mm.h | 16 ++++--
>> mm/mmap.c | 164 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> You'll be wanting to update the documentation.
> Documentation/sysctl/kernel.txt and
> Documentation/admin-guide/kernel-parameters.txt.
>

Sure, thanks for pointing those out. I will add a few lines there once they
have been discussed here.

>> ...
>>
>> @@ -2268,6 +2276,9 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
>> static inline unsigned long
>> vm_unmapped_area(struct vm_unmapped_area_info *info)
>> {
>> + /* How about 32 bit process?? */
>> + if ((current->flags & PF_RANDOMIZE) && randomize_va_space > 3)
>> + return unmapped_area_random(info);
>
> The handling of randomize_va_space is peculiar. Rather than being a
> bitfield which independently selects different modes, it is treated as
> a scalar: the larger the value, the more stuff we randomize.
>
> I can see the sense in that (and I wonder what randomize_va_space=5
> will do). But it is... odd.
>
> Why did you select randomize_va_space=4 for this? Is there a mode 3
> already and we forgot to document it? Or did you leave a gap for
> something? If the former, please feel free to fix the documentation
> (in a separate, preceding patch) while you're in there ;)
>

Yes, I was not sure about the correct value, so I left a gap for the future.
In the current implementation the value is indeed used as a scalar, but I
agree that a bitfield looks more flexible going forward. For now I can use 3
as the value for my patch; it can be changed at any time in the future. What
do you think?

>> if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
>> return unmapped_area_topdown(info);
>> else
>> @@ -2529,11 +2540,6 @@ int drop_caches_sysctl_handler(struct ctl_table *, int,
>> void drop_slab(void);
>> void drop_slab_node(int nid);
>>
>>
>> ...
>>
>> @@ -1780,6 +1781,169 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>> return error;
>> }
>>
>> +unsigned long unmapped_area_random(struct vm_unmapped_area_info *info)
>> +{
>
> This function is just dead code if CONFIG_MMU=n, yes? Let's add the
> ifdefs to make it go away in that case.
>

Thanks, I missed that case. I will fix it.

>> + struct mm_struct *mm = current->mm;
>> + struct vm_area_struct *vma = NULL;
>> + struct vm_area_struct *visited_vma = NULL;
>> + unsigned long entropy[2];
>> + unsigned long length, low_limit, high_limit, gap_start, gap_end;
>> + unsigned long addr = 0;
>> +
>> + /* get entropy with prng */
>> + prandom_bytes(&entropy, sizeof(entropy));
>> + /* small hack to prevent EPERM result */
>> + info->low_limit = max(info->low_limit, mmap_min_addr);
>> +
>>
>> ...
>>
>> +found:
>> + /* We found a suitable gap. Clip it with the original high_limit. */
>> + if (gap_end > info->high_limit)
>> + gap_end = info->high_limit;
>> + gap_end -= info->length;
>> + gap_end -= (gap_end - info->align_offset) & info->align_mask;
>> + /* only one suitable page */
>> + if (gap_end == gap_start)
>> + return gap_start;
>> + addr = entropy[1] % (min((gap_end - gap_start) >> PAGE_SHIFT,
>> + 0x10000UL));
>
> What does the magic 0x10000 mean? Isn't a comment needed explaining this?
>
>> + addr = gap_end - (addr << PAGE_SHIFT);
>> + addr += (info->align_offset - addr) & info->align_mask;
>> + return addr;
>> +}
>>
>> ...
>>
>

This is what the next patch fixes; I split the patches to make them easier
to understand. The constant came out of the discussion of the previous
version and honestly doesn't mean much. I replaced it with an
architecture-dependent limit, which I plan to turn into a CONFIG value as
well.

This value is the maximum number of pages we may move away from the next
vma. A smaller value means less security but also less memory fragmentation.
In any case, on 64-bit systems memory fragmentation is not such a big problem.
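
In other words (a condensed sketch of the address computation from these
patches, with random_shift counted in pages):

	/* pages to step back from the end of the found gap */
	shift = entropy % min((gap_end - gap_start) >> PAGE_SHIFT,
			      info->random_shift);
	addr = gap_end - (shift << PAGE_SHIFT);
	addr += (info->align_offset - addr) & info->align_mask;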


2018-03-23 17:51:14

by Ilya Smith

Subject: Re: [RFC PATCH v2 2/2] Architecture defined limit on memory region random shift.


> On 22 Mar 2018, at 23:54, Andrew Morton <[email protected]> wrote:
>
>
> Please add changelogs. An explanation of what a "limit on memory
> region random shift" is would be nice ;) Why does it exist, why are we
> doing this, etc. Surely there's something to be said - at present this
> is just a lump of random code?
>
>
>
Sorry, my bad. The main idea of this limit is to reduce potential memory
fragmentation. Fragmentation is not a big problem for 64-bit processes, but
it is a serious one for 32-bit processes, since it may cause memory
allocation failures. This patch introduces the limit to control
fragmentation and protect 32-bit systems (or architectures). It could also
be turned into a CONFIG_ option.

2018-03-23 17:57:08

by Ilya Smith

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.


> On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
>
> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>> The current implementation doesn't randomize the address returned by mmap.
>> All the entropy is spent choosing mmap_base_addr at process creation.
>> After that, mmap builds a very predictable address-space layout, which
>> allows ASLR to be bypassed in many cases. This patch randomizes the
>> address on every mmap call.
>
> Why should this be done in the kernel rather than libc? libc is perfectly
> capable of specifying random numbers in the first argument of mmap.
Well, there are the following reasons:
1. It would have to be done in every libc implementation, which is not feasible IMO;
2. User mode is not the layer that should be responsible for choosing
a random address or handling entropy;
3. Memory fragmentation is unpredictable in this case.

Of course, user mode could pass a random 'hint' address, but the kernel may
discard that address, for example if it is occupied, and allocate just
before the closest vma. So this solution doesn't give as much security as
randomizing the address inside the kernel.
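
For example (a sketch; the hint value is made up):

	/* a random hint is only advisory without MAP_FIXED* flags */
	void *hint = (void *)0x7f3a00000000UL;	/* hypothetical random value */
	void *addr = mmap(hint, len, PROT_READ | PROT_WRITE,
			  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	/* if the range at hint is busy, the kernel silently returns a
	 * different address, placed right next to an existing vma */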

2018-03-23 18:11:32

by Rich Felker

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri, Mar 23, 2018 at 05:48:06AM -0700, Matthew Wilcox wrote:
> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> > The current implementation doesn't randomize the address returned by mmap.
> > All the entropy is spent choosing mmap_base_addr at process creation.
> > After that, mmap builds a very predictable address-space layout, which
> > allows ASLR to be bypassed in many cases. This patch randomizes the
> > address on every mmap call.
>
> Why should this be done in the kernel rather than libc? libc is perfectly
> capable of specifying random numbers in the first argument of mmap.

Generally libc does not have a view of the current vm maps, and thus
in passing "random numbers", they would have to be uniform across the
whole vm space and thus non-uniform once the kernel rounds up to avoid
existing mappings. Also this would impose requirements that libc be
aware of the kernel's use of the virtual address space and what's
available to userspace -- for example, on 32-bit archs whether 2GB,
3GB, or full 4GB (for 32-bit-user-on-64-bit-kernel) is available, and
on 64-bit archs where fewer than the full 64 bits are actually valid
in addresses, what the actual usable pointer size is. There is
currently no clean way of conveying this information to userspace.

Rich

2018-03-23 19:09:23

by Matthew Wilcox

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri, Mar 23, 2018 at 02:00:24PM -0400, Rich Felker wrote:
> On Fri, Mar 23, 2018 at 05:48:06AM -0700, Matthew Wilcox wrote:
> > On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> > > The current implementation doesn't randomize the address returned by mmap.
> > > All the entropy is spent choosing mmap_base_addr at process creation.
> > > After that, mmap builds a very predictable address-space layout, which
> > > allows ASLR to be bypassed in many cases. This patch randomizes the
> > > address on every mmap call.
> >
> > Why should this be done in the kernel rather than libc? libc is perfectly
> > capable of specifying random numbers in the first argument of mmap.
>
> Generally libc does not have a view of the current vm maps, and thus
> in passing "random numbers", they would have to be uniform across the
> whole vm space and thus non-uniform once the kernel rounds up to avoid
> existing mappings.

I'm aware that you're the musl author, but glibc somehow manages to
provide etext, edata and end, demonstrating that it does know where at
least some of the memory map lies. Virtually everything after that is
brought into the address space via mmap, which at least glibc intercepts,
so it's entirely possible for a security-conscious libc to know where
other things are in the memory map. Not to mention that what we're
primarily talking about here are libraries which are dynamically linked
and are loaded by ld.so before calling main(); not dlopen() or even
regular user mmaps.

> Also this would impose requirements that libc be
> aware of the kernel's use of the virtual address space and what's
> available to userspace -- for example, on 32-bit archs whether 2GB,
> 3GB, or full 4GB (for 32-bit-user-on-64-bit-kernel) is available, and
> on 64-bit archs where fewer than the full 64 bits are actually valid
> in addresses, what the actual usable pointer size is. There is
> currently no clean way of conveying this information to userspace.

Huh, I thought libc was aware of this. Also, I'd expect a libc-based
implementation to restrict itself to, eg, only loading libraries in
the bottom 1GB to avoid applications who want to map huge things from
running out of unfragmented address space.

2018-03-23 19:22:18

by Rich Felker

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri, Mar 23, 2018 at 12:06:18PM -0700, Matthew Wilcox wrote:
> On Fri, Mar 23, 2018 at 02:00:24PM -0400, Rich Felker wrote:
> > On Fri, Mar 23, 2018 at 05:48:06AM -0700, Matthew Wilcox wrote:
> > > On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> > > > The current implementation doesn't randomize the address returned by mmap.
> > > > All the entropy is spent choosing mmap_base_addr at process creation.
> > > > After that, mmap builds a very predictable address-space layout, which
> > > > allows ASLR to be bypassed in many cases. This patch randomizes the
> > > > address on every mmap call.
> > >
> > > Why should this be done in the kernel rather than libc? libc is perfectly
> > > capable of specifying random numbers in the first argument of mmap.
> >
> > Generally libc does not have a view of the current vm maps, and thus
> > in passing "random numbers", they would have to be uniform across the
> > whole vm space and thus non-uniform once the kernel rounds up to avoid
> > existing mappings.
>
> I'm aware that you're the musl author, but glibc somehow manages to
> provide etext, edata and end, demonstrating that it does know where at
> least some of the memory map lies.

Yes, but that's pretty minimal info.

> Virtually everything after that is
> brought into the address space via mmap, which at least glibc intercepts,

There's also vdso, the program interpreter (ldso), and theoretically
other things the kernel might add. I agree you _could_ track most of
this (and all if you want to open /proc/self/maps), but it seems
hackish and wrong (violating clean boundaries between userspace and
kernel responsibility).

> > Also this would impose requirements that libc be
> > aware of the kernel's use of the virtual address space and what's
> > available to userspace -- for example, on 32-bit archs whether 2GB,
> > 3GB, or full 4GB (for 32-bit-user-on-64-bit-kernel) is available, and
> > on 64-bit archs where fewer than the full 64 bits are actually valid
> > in addresses, what the actual usable pointer size is. There is
> > currently no clean way of conveying this information to userspace.
>
> Huh, I thought libc was aware of this. Also, I'd expect a libc-based
> implementation to restrict itself to, eg, only loading libraries in
> the bottom 1GB to avoid applications who want to map huge things from
> running out of unfragmented address space.

That seems like a rather arbitrary expectation and I'm not sure why
you'd expect it to result in less fragmentation rather than more. For
example if it started from 1GB and worked down, you'd immediately
reduce the contiguous free space from ~3GB to ~2GB, and if it started
from the bottom and worked up, brk would immediately become
unavailable, increasing mmap pressure elsewhere.

Rich

2018-03-23 19:31:52

by Matthew Wilcox

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri, Mar 23, 2018 at 03:16:21PM -0400, Rich Felker wrote:
> > Huh, I thought libc was aware of this. Also, I'd expect a libc-based
> > implementation to restrict itself to, eg, only loading libraries in
> > the bottom 1GB to avoid applications who want to map huge things from
> > running out of unfragmented address space.
>
> That seems like a rather arbitrary expectation and I'm not sure why
> you'd expect it to result in less fragmentation rather than more. For
> example if it started from 1GB and worked down, you'd immediately
> reduce the contiguous free space from ~3GB to ~2GB, and if it started
> from the bottom and worked up, brk would immediately become
> unavailable, increasing mmap pressure elsewhere.

By *not* limiting yourself to the bottom 1GB, you'll almost immediately
fragment the address space even worse. Just looking at 'ls' as a
hopefully-good example of a typical app, it maps:

linux-vdso.so.1 (0x00007ffef5eef000)
libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fb3657f5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb36543b000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fb3651c9000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb364fc5000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb365c3f000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb364da7000)

The VDSO wouldn't move, but look at the distribution of mapping 6 things
into a 3GB address space in random locations. What are the odds you have
a contiguous 1GB chunk of address space? If you restrict yourself to the
bottom 1GB before running out of room and falling back to a sequential
allocation, you'll prevent a lot of fragmentation.

2018-03-23 19:38:59

by Rich Felker

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri, Mar 23, 2018 at 12:29:52PM -0700, Matthew Wilcox wrote:
> On Fri, Mar 23, 2018 at 03:16:21PM -0400, Rich Felker wrote:
> > > Huh, I thought libc was aware of this. Also, I'd expect a libc-based
> > > implementation to restrict itself to, eg, only loading libraries in
> > > the bottom 1GB to avoid applications who want to map huge things from
> > > running out of unfragmented address space.
> >
> > That seems like a rather arbitrary expectation and I'm not sure why
> > you'd expect it to result in less fragmentation rather than more. For
> > example if it started from 1GB and worked down, you'd immediately
> > reduce the contiguous free space from ~3GB to ~2GB, and if it started
> > from the bottom and worked up, brk would immediately become
> > unavailable, increasing mmap pressure elsewhere.
>
> By *not* limiting yourself to the bottom 1GB, you'll almost immediately
> fragment the address space even worse. Just looking at 'ls' as a
> hopefully-good example of a typical app, it maps:
>
> linux-vdso.so.1 (0x00007ffef5eef000)
> libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fb3657f5000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb36543b000)
> libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fb3651c9000)
> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb364fc5000)
> /lib64/ld-linux-x86-64.so.2 (0x00007fb365c3f000)
> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb364da7000)
>
> The VDSO wouldn't move, but look at the distribution of mapping 6 things
> into a 3GB address space in random locations. What are the odds you have
> a contiguous 1GB chunk of address space? If you restrict yourself to the
> bottom 1GB before running out of room and falling back to a sequential
> allocation, you'll prevent a lot of fragmentation.

Oh, you're talking about "with random locations" case. Randomizing
each map just hopelessly fragments things no matter what you do on
32-bit. If you reduce the space over which you randomize to the point
where it's not fragmenting/killing your available vm space, there are
so few degrees of freedom left that it's trivial to brute-force. Maybe
"libs randomized in low 1GB, everything else near-sequential in high
addresses" works half decently, but I have a hard time believing you
can get any ASLR that's significantly better than snake oil in a
32-bit address space, and you certainly do pay a high price in total
available vm space.

Rich

2018-03-26 08:48:15

by Michal Hocko

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri 23-03-18 20:55:49, Ilya Smith wrote:
>
> > On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
> >
> > On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> >> The current implementation doesn't randomize the address returned by mmap.
> >> All the entropy is spent choosing mmap_base_addr at process creation.
> >> After that, mmap builds a very predictable address-space layout, which
> >> allows ASLR to be bypassed in many cases. This patch randomizes the
> >> address on every mmap call.
> >
> > Why should this be done in the kernel rather than libc? libc is perfectly
> > capable of specifying random numbers in the first argument of mmap.
> Well, there are the following reasons:
> 1. It would have to be done in every libc implementation, which is not feasible IMO;

Is this really so helpful?

> 2. User mode is not the layer that should be responsible for choosing
> a random address or handling entropy;

Why?

> 3. Memory fragmentation is unpredictable in this case.
>
> Of course, user mode could pass a random 'hint' address, but the kernel may
> discard that address, for example if it is occupied, and allocate just
> before the closest vma. So this solution doesn't give as much security as
> randomizing the address inside the kernel.

The userspace can use the new MAP_FIXED_NOREPLACE to probe for the
address range atomically and choose a different range on failure.
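
Something along these lines (a sketch; MAP_FIXED_NOREPLACE fails with
EEXIST when the requested range overlaps an existing mapping, and
pick_random_address() is a hypothetical helper):

	void *p = MAP_FAILED;
	for (int tries = 0; tries < 16 && p == MAP_FAILED; tries++) {
		void *hint = pick_random_address();
		p = mmap(hint, len, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED_NOREPLACE,
			 -1, 0);
		if (p == MAP_FAILED && errno != EEXIST)
			break;	/* a failure other than "range busy": give up */
	}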

--
Michal Hocko
SUSE Labs

2018-03-26 19:46:55

by Ilya Smith

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.


> On 26 Mar 2018, at 11:46, Michal Hocko <[email protected]> wrote:
>
> On Fri 23-03-18 20:55:49, Ilya Smith wrote:
>>
>>> On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
>>>
>>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>>>> The current implementation doesn't randomize the address returned by mmap.
>>>> All the entropy is spent choosing mmap_base_addr at process creation.
>>>> After that, mmap builds a very predictable address-space layout, which
>>>> allows ASLR to be bypassed in many cases. This patch randomizes the
>>>> address on every mmap call.
>>>
>>> Why should this be done in the kernel rather than libc? libc is perfectly
>>> capable of specifying random numbers in the first argument of mmap.
>> Well, there are the following reasons:
>> 1. It would have to be done in every libc implementation, which is not feasible IMO;
>
> Is this really so helpful?

Yes, ASLR is one of the most important mitigation techniques, and it really
is used to protect applications. Without ASLR it is very easy to exploit a
vulnerable application and compromise the system. We can't just fix all the
vulnerabilities right now; that's why we have mitigations - techniques that
make exploitation harder or impossible in some cases.

That's why it is helpful.

>
>> 2. User mode is not the layer that should be responsible for choosing
>> a random address or handling entropy;
>
> Why?

For the following reasons:
1. To get a random address you need entropy. That entropy must not be
exposed to an attacker in any way; the best source is the kernel. So this is
a syscall.
2. You need the memory map of your process to prevent remapping and heavy
fragmentation. The kernel already has this map. You would get a second one in libc.
And any non-libc user of mmap (via raw syscall, etc.) would punch holes in it.
This also hurts performance, because you still call the mmap syscall,
which in the worst case searches for an address anyway, after you have already
done that computation yourself.
3. The more memory you use in userland for this purpose, the easier it is
for an attacker to leak it or use it in exploitation techniques.
4. It is easy to fix a kernel function, and hard to support memory
management from userspace.

>
>> 3. Memory fragmentation is unpredictable in this case.
>>
>> Of course, user mode could pass a random 'hint' address, but the kernel may
>> discard that address, for example if it is occupied, and allocate just
>> before the closest vma. So this solution doesn't give as much security as
>> randomizing the address inside the kernel.
>
> The userspace can use the new MAP_FIXED_NOREPLACE to probe for the
> address range atomically and choose a different range on failure.
>

Such an algorithm would still have to track current memory: if it doesn't, it
may loop forever trying to choose an address, and each iteration increases
the time needed to allocate new memory, which no libc developer wants.

That's why I wrote this patch.

Thanks,
Ilya



2018-03-27 07:27:09

by Michal Hocko

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Mon 26-03-18 22:45:31, Ilya Smith wrote:
>
> > On 26 Mar 2018, at 11:46, Michal Hocko <[email protected]> wrote:
> >
> > On Fri 23-03-18 20:55:49, Ilya Smith wrote:
> >>
> >>> On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
> >>>
> >>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> >>>> The current implementation doesn't randomize the address returned by mmap.
> >>>> All the entropy is spent choosing mmap_base_addr at process creation.
> >>>> After that, mmap builds a very predictable address-space layout, which
> >>>> allows ASLR to be bypassed in many cases. This patch randomizes the
> >>>> address on every mmap call.
> >>>
> >>> Why should this be done in the kernel rather than libc? libc is perfectly
> >>> capable of specifying random numbers in the first argument of mmap.
> >> Well, there are the following reasons:
> >> 1. It would have to be done in every libc implementation, which is not feasible IMO;
> >
> > Is this really so helpful?
>
> Yes, ASLR is one of the most important mitigation techniques, and it really
> is used to protect applications. Without ASLR it is very easy to exploit a
> vulnerable application and compromise the system. We can't just fix all the
> vulnerabilities right now; that's why we have mitigations - techniques that
> make exploitation harder or impossible in some cases.
>
> That's why it is helpful.

I am not questioning ASLR in general. I am asking whether we really need
per-mmap ASLR in general. I can imagine that some environments want to
pay the additional price and accept the side effects, but considering this
can be achieved by libc, why add more code to the kernel?

> >
> >> 2. User mode is not the layer that should be responsible for choosing
> >> a random address or handling entropy;
> >
> > Why?
>
> For the following reasons:
> 1. To get a random address you need entropy. That entropy must not be
> exposed to an attacker in any way; the best source is the kernel. So this is
> a syscall.

/dev/[u]random is not sufficient?

> 2. You need the memory map of your process to prevent remapping and heavy
> fragmentation. The kernel already has this map.

/proc/self/maps?

> You would get a second one in libc.
> And any non-libc user of mmap (via raw syscall, etc.) would punch holes in it.
> This also hurts performance, because you still call the mmap syscall,
> which in the worst case searches for an address anyway, after you have already
> done that computation yourself.

I do not understand. a) you should be prepared to pay an additional
price for additional security measures and b) how would anybody punch
a hole into your mapping?

> 3. The more memory you use in userland for this purpose, the easier it is
> for an attacker to leak it or use it in exploitation techniques.

This is true in general, isn't it? I fail to see how kernel chosen and
user chosen ranges would make any difference.

> 4. It is easy to fix a kernel function, and hard to support memory
> management from userspace.

Well, on the other hand the new layout mode will add a maintenance
burden on the kernel and will have to be maintained forever, because it
is a user-visible ABI.

> >> 3. Memory fragmentation is unpredictable in this case.
> >>
> >> Of course, user mode could pass a random 'hint' address, but the kernel may
> >> discard that address, for example if it is occupied, and allocate just
> >> before the closest vma. So this solution doesn't give as much security as
> >> randomizing the address inside the kernel.
> >
> The userspace can use the new MAP_FIXED_NOREPLACE to probe for the
> address range atomically and choose a different range on failure.
> >
>
> Such an algorithm would still have to track current memory: if it doesn't, it
> may loop forever trying to choose an address, and each iteration increases
> the time needed to allocate new memory, which no libc developer wants.

Well, I am pretty sure userspace can implement proper free ranges
tracking...

--
Michal Hocko
SUSE Labs

2018-03-27 13:52:34

by Ilya Smith

Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.


> On 27 Mar 2018, at 10:24, Michal Hocko <[email protected]> wrote:
>
> On Mon 26-03-18 22:45:31, Ilya Smith wrote:
>>
>>> On 26 Mar 2018, at 11:46, Michal Hocko <[email protected]> wrote:
>>>
>>> On Fri 23-03-18 20:55:49, Ilya Smith wrote:
>>>>
>>>>> On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
>>>>>
>>>>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>>>>>> Current implementation doesn't randomize address returned by mmap.
>>>>>> All the entropy ends with choosing mmap_base_addr at the process
>>>>>> creation. After that mmap build very predictable layout of address
>>>>>> space. It allows to bypass ASLR in many cases. This patch make
>>>>>> randomization of address on any mmap call.
>>>>>
>>>>> Why should this be done in the kernel rather than libc? libc is perfectly
>>>>> capable of specifying random numbers in the first argument of mmap.
>>>> Well, there are the following reasons:
>>>> 1. It should be done in every libc implementation, which is not possible IMO;
>>>
>>> Is this really so helpful?
>>
>> Yes, ASLR is one of the most important mitigation techniques, and it is really
>> used to protect applications. If there is no ASLR, it is very easy to exploit a
>> vulnerable application and compromise the system. We can’t just fix all the
>> vulnerabilities right now; that’s why we have mitigations - techniques which
>> make exploitation harder or impossible in some cases.
>>
>> That’s why it is helpful.
>
> I am not questioning ASLR in general. I am asking whether we really need
> per-mmap ASLR in general. I can imagine that some environments want to
> pay the additional price and accept the side effects, but considering this
> can be achieved by libc, why add more code to the kernel?

I believe this is the only right place for it. By adding these 200+ lines of
code we give this feature to any user - on desktops, on servers, on IoT
devices, on SCADA systems, etc. But if only glibc implements ‘user-mode ASLR’,
IoT and SCADA devices will never get it.

>>>
>>>> 2. User mode is not the layer that should be responsible for choosing a
>>>> random address or handling entropy;
>>>
>>> Why?
>>
>> Because of the following reasons:
>> 1. To get a random address you need entropy. This entropy shouldn’t be
>> exposed to an attacker in any way; the best option is to get it from the
>> kernel. So this is a syscall.
>
> /dev/[u]random is not sufficient?

Using /dev/[u]random makes 3 syscalls - open, read, close. This is a performance
issue.
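
A minimal sketch of that per-address cost, for illustration (the helper name
urandom_u64 is made up here):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Three syscalls just to obtain one random value. */
static uint64_t urandom_u64(void)
{
	uint64_t r = 0;
	int fd = open("/dev/urandom", O_RDONLY);	/* syscall 1 */
	if (fd >= 0) {
		read(fd, &r, sizeof(r));		/* syscall 2 */
		close(fd);				/* syscall 3 */
	}
	return r;
}

(A libc could keep the fd open instead, but then every process permanently
consumes a descriptor, an objection raised later in this thread.)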

>
>> 2. You need the memory map of your process to prevent remapping or heavy
>> fragmentation. The kernel already has this map.
>
> /proc/self/maps?

Not every system has /proc, and parsing /proc/self/maps is costly, so it is a
performance issue. libc would have to do it on every mmap. And there is a
possible race here - an application may mmap/unmap memory with a raw syscall
while another thread is reading maps.
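
For illustration, the parsing a libc would have to repeat before each mmap
might look roughly like this sketch (the helper name dump_maps is made up
here):

#include <stdio.h>

/* Enumerate current mappings; a libc would need this before every mmap to
 * know which gaps are free, and the result can race with other threads. */
static void dump_maps(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	unsigned long start, end;

	if (!f)
		return;	/* no procfs mounted here */
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "%lx-%lx", &start, &end) == 2)
			printf("mapping: %lx-%lx\n", start, end);
	fclose(f);
}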

>> You would get another one in libc.
>> And any non-libc user of mmap (via raw syscall, etc.) would punch a hole in
>> your map. This also decreases performance, because you call the mmap syscall
>> anyway, which in the worst case will try to find some address for you, after
>> you have already done some computing on it.
>
> I do not understand. a) you should be prepared to pay an additional
> price for an additional security measure and b) how would anybody punch
> a hole into your mapping?
>

I was talking about any code that calls mmap directly, without the libc wrapper.

>> 3. The more memory you use in userland for this proposal, the easier it is
>> for an attacker to leak it or use it in exploitation techniques.
>
> This is true in general, isn't it? I fail to see how kernel-chosen and
> user-chosen ranges would make any difference.

My point here was that libc would have to keep a representation of memory as a
tree, and this tree increases the attack surface. It can stay hidden in the
kernel, as it is right now.

>
>> 4. It is easy to fix a kernel function and hard to support memory
>> management from userspace.
>
> Well, on the other hand the new layout mode will add a maintenance
> burden on the kernel and will have to be maintained forever because it
> is a user-visible ABI.

That’s why I made this patch an RFC and would like to discuss this ABI here. I
used the randomize_va_space parameter to allow disabling randomization for the
whole system. The PF_RANDOMIZE flag can disable randomization for a specific
process (or process groups?). Per architecture, info.random_shift can be set
to 0, so if your arch has a small address space you may disable shifting. I
would also like to add some sysctl to allow processes/groups to change this
value and allow some processes to have bigger shifts than others. Let’s
discuss it, please.
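
For comparison, a process can already opt out of ASLR for itself and its
children through personality(2), which is what keeps the kernel from setting
PF_RANDOMIZE on exec today; a minimal sketch of such a launcher (the program
itself is illustrative):

#include <sys/personality.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	/* 0xffffffff queries the current persona without changing it;
	 * add ADDR_NO_RANDOMIZE so exec'd images run without ASLR. */
	if (personality(personality(0xffffffff) | ADDR_NO_RANDOMIZE) == -1)
		return 1;
	if (argc > 1)
		execv(argv[1], &argv[1]);
	return 1;
}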

>
>>>> 3. Memory fragmentation is unpredictable in this case
>>>>
>>>> Of course user mode could use a random ‘hint’ address, but the kernel may
>>>> discard this address, for example if it is occupied, and allocate just before
>>>> the closest vma. So this solution doesn’t give as much security as
>>>> randomizing the address inside the kernel.
>>>
>>> The userspace can use the new MAP_FIXED_NOREPLACE to probe for the
>>> address range atomically and choose a different range on failure.
>>>
>>
>> This algorithm would have to track current memory. If it doesn’t, it may loop
>> forever while trying to choose an address. And each iteration increases the
>> time needed to allocate new memory, which is not something any libc
>> developer prefers.
>
> Well, I am pretty sure userspace can implement proper free ranges
> tracking…

I think we need to know what libc developers will say about implementing ASLR
in user mode. I am pretty sure they will say ‘never’ or ‘some day’. And the
problem of ASLR will stay forever.

Thanks,
Ilya




2018-03-27 14:39:48

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Tue 27-03-18 16:51:08, Ilya Smith wrote:
>
> > On 27 Mar 2018, at 10:24, Michal Hocko <[email protected]> wrote:
> >
> > On Mon 26-03-18 22:45:31, Ilya Smith wrote:
> >>
> >>> On 26 Mar 2018, at 11:46, Michal Hocko <[email protected]> wrote:
> >>>
> >>> On Fri 23-03-18 20:55:49, Ilya Smith wrote:
> >>>>
> >>>>> On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
> >>>>>
> >>>>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
> >>>>>> Current implementation doesn't randomize address returned by mmap.
> >>>>>> All the entropy ends with choosing mmap_base_addr at the process
> >>>>>> creation. After that mmap build very predictable layout of address
> >>>>>> space. It allows to bypass ASLR in many cases. This patch make
> >>>>>> randomization of address on any mmap call.
> >>>>>
> >>>>> Why should this be done in the kernel rather than libc? libc is perfectly
> >>>>> capable of specifying random numbers in the first argument of mmap.
> >>>> Well, there are the following reasons:
> >>>> 1. It should be done in every libc implementation, which is not possible IMO;
> >>>
> >>> Is this really so helpful?
> >>
> >> Yes, ASLR is one of the most important mitigation techniques, and it is really
> >> used to protect applications. If there is no ASLR, it is very easy to exploit a
> >> vulnerable application and compromise the system. We can’t just fix all the
> >> vulnerabilities right now; that’s why we have mitigations - techniques which
> >> make exploitation harder or impossible in some cases.
> >>
> >> That’s why it is helpful.
> >
> > I am not questioning ASLR in general. I am asking whether we really need
> > per-mmap ASLR in general. I can imagine that some environments want to
> > pay the additional price and accept the side effects, but considering this
> > can be achieved by libc, why add more code to the kernel?
>
> I believe this is the only right place for it. By adding these 200+ lines of
> code we give this feature to any user - on desktops, on servers, on IoT
> devices, on SCADA systems, etc. But if only glibc implements ‘user-mode ASLR’,
> IoT and SCADA devices will never get it.

I guess it would really help if you could be more specific about the
class of security issues this would help to mitigate. My first
understanding was that we need some randomization between program
executable segments to reduce the attack space when a single address
leaks and you know the segments layout (ordering). But why do we need
_all_ mmaps to be randomized? Because that complicates the
implementation considerably, for the different reasons you have mentioned
earlier.

Do you have any specific CVE that would be mitigated by this
randomization approach?

I am sorry, I am not enough of a security expert to see all the consequences,
but a vague "the more randomization the better" sounds rather weak to me.
--
Michal Hocko
SUSE Labs

2018-03-27 22:18:09

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Tue, Mar 27, 2018 at 04:51:08PM +0300, Ilya Smith wrote:
> > /dev/[u]random is not sufficient?
>
> Using /dev/[u]random makes 3 syscalls - open, read, close. This is a performance
> issue.

You may want to take a look at the getrandom(2) system call, which is
the recommended way of getting secure random numbers from the kernel.
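
For example, a minimal sketch assuming the glibc wrapper in <sys/random.h>
(older libcs would call syscall(SYS_getrandom, ...) instead; the helper name
random_u64 is made up here):

#include <stdint.h>
#include <sys/random.h>

/* One syscall, no file descriptor needed. */
static int random_u64(uint64_t *out)
{
	return getrandom(out, sizeof(*out), 0) == sizeof(*out) ? 0 : -1;
}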

> > Well, I am pretty sure userspace can implement proper free ranges
> > tracking…
>
> I think we need to know what libc developers will say about implementing ASLR
> in user mode. I am pretty sure they will say ‘never’ or ‘some day’. And the
> problem of ASLR will stay forever.

Why can't you send patches to the libc developers?

Regards,

- Ted

2018-03-28 00:03:00

by Rich Felker

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Tue, Mar 27, 2018 at 06:16:35PM -0400, Theodore Y. Ts'o wrote:
> On Tue, Mar 27, 2018 at 04:51:08PM +0300, Ilya Smith wrote:
> > > /dev/[u]random is not sufficient?
> >
> > Using /dev/[u]random makes 3 syscalls - open, read, close. This is a performance
> > issue.
>
> You may want to take a look at the getrandom(2) system call, which is
> the recommended way of getting secure random numbers from the kernel.

Yes, while opening /dev/urandom is not acceptable due to needing an
fd, getrandom and existing fallbacks for it have this covered if
needed.

> > > Well, I am pretty sure userspace can implement proper free ranges
> > > tracking…
> >
> > I think we need to know what libc developers will say about implementing ASLR
> > in user mode. I am pretty sure they will say ‘never’ or ‘some day’. And the
> > problem of ASLR will stay forever.
>
> Why can't you send patches to the libc developers?

I can tell you right now that any patch submitted for musl that
depended on trying to duplicate knowledge of the entire virtual
address space layout in userspace as part of mmap would be rejected,
and I would recommend glibc do the same.

Not only does it vastly increase complexity; it also has all sorts of
failure modes (fd exhaustion, etc.) which would either introduce new
and unwanted ways for mmap to fail, or would force fallback to the
normal (no extra randomization) strategy under conditions an attacker
could potentially control, defeating the whole purpose. It would also
potentially make it easier for an attacker to examine the vm layout
for attacks, since it would be recorded in userspace.

There's also the issue of preserving AS-safety of mmap. POSIX does not
actually require mmap to be AS-safe, and on musl munmap is not fully
AS-safe anyway because of some obscure issues it compensates for, but
we may be able to make it AS-safe (this is a low-priority open issue).
If mmap were manipulating data structures representing the vm space in
userspace, though, the only way to make it anywhere near AS-safe would
be to block all signals and take a lock every time mmap or munmap is
called. This would significantly increase the cost of each call,
especially now that meltdown/spectre mitigations have greatly
increased the overhead of each syscall.
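
A sketch of what such a wrapper would have to do on every call, purely for
illustration (the userspace bookkeeping itself is elided, and the wrapper name
as_safe_mmap is made up here):

#include <pthread.h>
#include <signal.h>
#include <sys/mman.h>

static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mask all signals and serialize before touching any userspace vm map. */
static void *as_safe_mmap(void *hint, size_t len, int prot, int flags,
			  int fd, off_t off)
{
	sigset_t all, old;
	void *p;

	sigfillset(&all);
	pthread_sigmask(SIG_BLOCK, &all, &old);
	pthread_mutex_lock(&vm_lock);
	/* ...consult and update the userspace address-space map here... */
	p = mmap(hint, len, prot, flags, fd, off);
	pthread_mutex_unlock(&vm_lock);
	pthread_sigmask(SIG_SETMASK, &old, NULL);
	return p;
}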

Overall, asking userspace to take a lead role in management of process
vm space is a radical change in the split of what user and kernel are
responsible for, and it really does not make sense as part of a
dubious hardening measure. Something this big would need to be really
well-motivated.

Rich

2018-03-28 04:51:46

by Rob Landley

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On 03/23/2018 02:06 PM, Matthew Wilcox wrote:
> On Fri, Mar 23, 2018 at 02:00:24PM -0400, Rich Felker wrote:
>> On Fri, Mar 23, 2018 at 05:48:06AM -0700, Matthew Wilcox wrote:
>>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>>>> Current implementation doesn't randomize address returned by mmap.
>>>> All the entropy ends with choosing mmap_base_addr at the process
>>>> creation. After that mmap build very predictable layout of address
>>>> space. It allows to bypass ASLR in many cases. This patch make
>>>> randomization of address on any mmap call.
>>>
>>> Why should this be done in the kernel rather than libc? libc is perfectly
>>> capable of specifying random numbers in the first argument of mmap.
>>
>> Generally libc does not have a view of the current vm maps, and thus
>> in passing "random numbers", they would have to be uniform across the
>> whole vm space and thus non-uniform once the kernel rounds up to avoid
>> existing mappings.
>
> I'm aware that you're the musl author, but glibc somehow manages to
> provide etext, edata and end, demonstrating that it does know where at
> least some of the memory map lies.

You can parse /proc/self/maps, but it's really expensive and disgusting.

Rob
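
For reference, the libc-side "random hint" idea amounts to something like the
following sketch; without MAP_FIXED* semantics the kernel is free to ignore
the hint on collision, which is exactly the non-uniformity Rich describes
above (the hint mask and helper name mmap_hinted are illustrative, and
getrandom(2) is assumed for entropy):

#include <stdint.h>
#include <sys/mman.h>
#include <sys/random.h>

/* Pass a random page-aligned hint; on collision the kernel silently
 * places the mapping elsewhere, skewing the distribution. */
static void *mmap_hinted(size_t len)
{
	uint64_t r;

	if (getrandom(&r, sizeof(r), 0) != sizeof(r))
		return MAP_FAILED;
	return mmap((void *)(r & 0x3ffffffff000ULL), len,
		    PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}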

2018-03-28 18:48:48

by Ilya Smith

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.


> On 27 Mar 2018, at 17:38, Michal Hocko <[email protected]> wrote:
>
> On Tue 27-03-18 16:51:08, Ilya Smith wrote:
>>
>>> On 27 Mar 2018, at 10:24, Michal Hocko <[email protected]> wrote:
>>>
>>> On Mon 26-03-18 22:45:31, Ilya Smith wrote:
>>>>
>>>>> On 26 Mar 2018, at 11:46, Michal Hocko <[email protected]> wrote:
>>>>>
>>>>> On Fri 23-03-18 20:55:49, Ilya Smith wrote:
>>>>>>
>>>>>>> On 23 Mar 2018, at 15:48, Matthew Wilcox <[email protected]> wrote:
>>>>>>>
>>>>>>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>>>>>>>> Current implementation doesn't randomize address returned by mmap.
>>>>>>>> All the entropy ends with choosing mmap_base_addr at the process
>>>>>>>> creation. After that mmap build very predictable layout of address
>>>>>>>> space. It allows to bypass ASLR in many cases. This patch make
>>>>>>>> randomization of address on any mmap call.
>>>>>>>
>>>>>>> Why should this be done in the kernel rather than libc? libc is perfectly
>>>>>>> capable of specifying random numbers in the first argument of mmap.
>>>>>> Well, there are the following reasons:
>>>>>> 1. It should be done in every libc implementation, which is not possible IMO;
>>>>>
>>>>> Is this really so helpful?
>>>>
>>>> Yes, ASLR is one of the most important mitigation techniques, and it is really
>>>> used to protect applications. If there is no ASLR, it is very easy to exploit a
>>>> vulnerable application and compromise the system. We can’t just fix all the
>>>> vulnerabilities right now; that’s why we have mitigations - techniques which
>>>> make exploitation harder or impossible in some cases.
>>>>
>>>> That’s why it is helpful.
>>>
>>> I am not questioning ASLR in general. I am asking whether we really need
>>> per-mmap ASLR in general. I can imagine that some environments want to
>>> pay the additional price and accept the side effects, but considering this
>>> can be achieved by libc, why add more code to the kernel?
>>
>> I believe this is the only right place for it. By adding these 200+ lines of
>> code we give this feature to any user - on desktops, on servers, on IoT
>> devices, on SCADA systems, etc. But if only glibc implements ‘user-mode ASLR’,
>> IoT and SCADA devices will never get it.
>
> I guess it would really help if you could be more specific about the
> class of security issues this would help to mitigate. My first
> understanding was that we need some randomization between program
> executable segments to reduce the attack space when a single address
> leaks and you know the segments layout (ordering). But why do we need
> _all_ mmaps to be randomized? Because that complicates the
> implementation considerably, for the different reasons you have mentioned
> earlier.
>

There are the following reasons:
1) To protect the layout if one region was leaked (as you said).
2) To protect against exploitation of out-of-bounds vulnerabilities in some
cases (CWE-125, CWE-787).
3) To protect against exploitation of buffer overflows in some cases (CWE-120).
4) To protect the application in cases when an attacker needs to guess an
address (see the ASLR-NG paper by Hector Marco-Gisbert and Ismael
Ripoll-Ripoll).
And there may be more cases.

> Do you have any specific CVE that would be mitigated by this
> randomization approach?
> I am sorry, I am not enough of a security expert to see all the consequences,
> but a vague "the more randomization the better" sounds rather weak to me.

It is hard to name a concrete CVE number, sorry. Mitigations are made to
prevent exploitation, not to fix vulnerabilities. A good mitigation will make
a vulnerable application crash rather than be compromised in most cases. This
means the better the randomization, the lower the successful exploitation rate.


Thanks,
Ilya


2018-03-28 18:49:53

by Ilya Smith

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

> On 28 Mar 2018, at 01:16, Theodore Y. Ts'o <[email protected]> wrote:
>
> On Tue, Mar 27, 2018 at 04:51:08PM +0300, Ilya Smith wrote:
>>> /dev/[u]random is not sufficient?
>>
>> Using /dev/[u]random makes 3 syscalls - open, read, close. This is a performance
>> issue.
>
> You may want to take a look at the getrandom(2) system call, which is
> the recommended way of getting secure random numbers from the kernel.
>
>>> Well, I am pretty sure userspace can implement proper free ranges
>>> tracking…
>>
>> I think we need to know what libc developers will say about implementing ASLR
>> in user mode. I am pretty sure they will say ‘never’ or ‘some day’. And the
>> problem of ASLR will stay forever.
>
> Why can't you send patches to the libc developers?
>
> Regards,
>
> - Ted

I still believe the issue is on the kernel side, not in the library.

Best regards,
Ilya


2018-03-30 07:56:35

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

Hi!

> Current implementation doesn't randomize address returned by mmap.
> All the entropy ends with choosing mmap_base_addr at the process
> creation. After that mmap build very predictable layout of address
> space. It allows to bypass ASLR in many cases. This patch make
> randomization of address on any mmap call.

How will this interact with people debugging their application, and
getting different behaviours based on memory layout?

strace, strace again, get different results?

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-03-30 09:10:49

by Ilya Smith

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

Hi

> On 30 Mar 2018, at 10:55, Pavel Machek <[email protected]> wrote:
>
> Hi!
>
>> Current implementation doesn't randomize address returned by mmap.
>> All the entropy ends with choosing mmap_base_addr at the process
>> creation. After that mmap build very predictable layout of address
>> space. It allows to bypass ASLR in many cases. This patch make
>> randomization of address on any mmap call.
>
> How will this interact with people debugging their application, and
> getting different behaviours based on memory layout?
>
> strace, strace again, get different results?
>

Honestly, I’m confused by your question. If the only way to debug an
application is to rely on predictable mmap behaviour, then something has gone
wrong in this life and we should stop using computers at all.

Thanks,
Ilya

2018-03-30 09:59:04

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri 2018-03-30 12:07:58, Ilya Smith wrote:
> Hi
>
> > On 30 Mar 2018, at 10:55, Pavel Machek <[email protected]> wrote:
> >
> > Hi!
> >
> >> Current implementation doesn't randomize address returned by mmap.
> >> All the entropy ends with choosing mmap_base_addr at the process
> >> creation. After that mmap build very predictable layout of address
> >> space. It allows to bypass ASLR in many cases. This patch make
> >> randomization of address on any mmap call.
> >
> > How will this interact with people debugging their application, and
> > getting different behaviours based on memory layout?
> >
> > strace, strace again, get different results?
> >
>
> Honestly, I’m confused by your question. If the only way to debug an
> application is to rely on predictable mmap behaviour, then something has gone
> wrong in this life and we should stop using computers at all.

I'm not saying "only way". I'm saying one way, and you are breaking
that. There's advanced stuff like debuggers going "back in time".

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-03-30 11:12:02

by Ilya Smith

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.


> On 30 Mar 2018, at 12:57, Pavel Machek <[email protected]> wrote:
>
> On Fri 2018-03-30 12:07:58, Ilya Smith wrote:
>> Hi
>>
>>> On 30 Mar 2018, at 10:55, Pavel Machek <[email protected]> wrote:
>>>
>>> Hi!
>>>
>>>> Current implementation doesn't randomize address returned by mmap.
>>>> All the entropy ends with choosing mmap_base_addr at the process
>>>> creation. After that mmap build very predictable layout of address
>>>> space. It allows to bypass ASLR in many cases. This patch make
>>>> randomization of address on any mmap call.
>>>
>>> How will this interact with people debugging their application, and
>>> getting different behaviours based on memory layout?
>>>
>>> strace, strace again, get different results?
>>>
>>
>> Honestly, I’m confused by your question. If the only way to debug an
>> application is to rely on predictable mmap behaviour, then something has gone
>> wrong in this life and we should stop using computers at all.
>
> I'm not saying "only way". I'm saying one way, and you are breaking
> that. There's advanced stuff like debuggers going "back in time".
>

Correct me if I’m wrong: when you run gdb, for instance, and try to debug some
application, gdb will disable randomization. This behaviour is controlled by
the gdb command ‘set disable-randomization on’. As far as I know, gdb sets
the ADDR_NO_RANDOMIZE personality flag, so the kernel does not set
PF_RANDOMIZE; that’s how it disables ASLR for the debugged process. In my
patch, the PF_RANDOMIZE flag is checked before calling unmapped_area_random,
so I am not breaking debugging. If you are talking about the case when your
application crashes in a customer environment and you want to debug it, then
the memory layout is something you don’t control at all, and you have to work
out what is where. So for that kind of debugging the process memory layout is
not what you should care about.
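
For reference, the gdb knob in question, per the gdb documentation (‘on’ is
the default):

(gdb) show disable-randomization
(gdb) set disable-randomization off	# keep ASLR enabled while debugging
(gdb) run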

Thanks,
Ilya

2018-03-30 13:40:40

by Rich Felker

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

On Fri, Mar 30, 2018 at 09:55:08AM +0200, Pavel Machek wrote:
> Hi!
>
> > Current implementation doesn't randomize address returned by mmap.
> > All the entropy ends with choosing mmap_base_addr at the process
> > creation. After that mmap build very predictable layout of address
> > space. It allows to bypass ASLR in many cases. This patch make
> > randomization of address on any mmap call.
>
> How will this interact with people debugging their application, and
> getting different behaviours based on memory layout?
>
> strace, strace again, get different results?

Normally gdb disables ASLR for the process when invoking a program to
debug. I don't see why that would be terribly useful with strace but
you can do the same if you want.

Rich