2024-03-12 22:39:22

by Edgecombe, Rick P

Subject: [PATCH v3 03/12] mm: Use get_unmapped_area_vmflags()

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:
1. That the new mapping isn't placed in any existing mapping's guard
gaps.
2. That the new mapping isn't placed such that any existing mappings
end up inside *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any care
around 2. So, for example, if there is a PAGE_SIZE free area and a
PAGE_SIZE mmap() of a type that has a guard gap is being placed, mmap()
may place the shadow stack in that PAGE_SIZE free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the
adjacent VMA.
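
To make the two rules concrete, here is a minimal sketch (illustrative
only, not part of this patch): candidate_fits() and its new_gap argument
are hypothetical, vm_start_gap()/vm_end_gap() are the existing helpers,
and the new mapping's own gap is assumed to sit below its start, as it
does for a grows-down stack:

/* Illustrative sketch only: candidate_fits() is not a real kernel function */
static bool candidate_fits(struct vm_area_struct *prev,
                           struct vm_area_struct *next,
                           unsigned long addr, unsigned long len,
                           unsigned long new_gap)
{
        /* 1: don't land inside a neighbor's guard gap */
        if (prev && addr < vm_end_gap(prev))
                return false;
        if (next && addr + len > vm_start_gap(next))
                return false;

        /* 2: keep the new mapping's own (below-start) gap clear */
        if (prev && addr < vm_end_gap(prev) + new_gap)
                return false;

        return true;
}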

Use mm_get_unmapped_area_vmflags() in do_mmap() so that future changes
can cause shadow stack mappings to be placed with a guard gap. Also use
the THP variant that takes vm_flags, so that THP shadow stack can get the
same treatment. Adjust the vm_flags calculation to happen earlier so that
the vm_flags can be passed into __get_unmapped_area().
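
Callers that do not know the final VMA flags are unaffected; sketching
the intended call pattern with only the signatures introduced here:

        /* do_mmap(), once vm_flags has been computed: */
        addr = __get_unmapped_area(file, addr, len, pgoff, flags, vm_flags);

        /* everyone else keeps the old signature and implicitly passes 0: */
        addr = get_unmapped_area(file, addr, len, pgoff, flags);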

Signed-off-by: Rick Edgecombe <[email protected]>
---
v2:
- Make get_unmapped_area() a static inline (Kirill)
---
include/linux/mm.h | 11 ++++++++++-
mm/mmap.c | 34 ++++++++++++++++------------------
2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..d91cde79aaee 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3363,7 +3363,16 @@ extern int install_special_mapping(struct mm_struct *mm,
unsigned long randomize_stack_top(unsigned long stack_top);
unsigned long randomize_page(unsigned long start, unsigned long range);

-extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+unsigned long
+__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags);
+
+static inline unsigned long
+get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags)
+{
+ return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
+}

extern unsigned long mmap_region(struct file *file, unsigned long addr,
unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index e23ce8ca24c9..a3128ed26676 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1257,18 +1257,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
if (mm->map_count > sysctl_max_map_count)
return -ENOMEM;

- /* Obtain the address to map to. we verify (or select) it and ensure
- * that it represents a valid section of the address space.
- */
- addr = get_unmapped_area(file, addr, len, pgoff, flags);
- if (IS_ERR_VALUE(addr))
- return addr;
-
- if (flags & MAP_FIXED_NOREPLACE) {
- if (find_vma_intersection(mm, addr, addr + len))
- return -EEXIST;
- }
-
if (prot == PROT_EXEC) {
pkey = execute_only_pkey(mm);
if (pkey < 0)
@@ -1282,6 +1270,18 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;

+ /* Obtain the address to map to. we verify (or select) it and ensure
+ * that it represents a valid section of the address space.
+ */
+ addr = __get_unmapped_area(file, addr, len, pgoff, flags, vm_flags);
+ if (IS_ERR_VALUE(addr))
+ return addr;
+
+ if (flags & MAP_FIXED_NOREPLACE) {
+ if (find_vma_intersection(mm, addr, addr + len))
+ return -EEXIST;
+ }
+
if (flags & MAP_LOCKED)
if (!can_do_mlock())
return -EPERM;
@@ -1839,8 +1839,8 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *fi
}

unsigned long
-get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
- unsigned long pgoff, unsigned long flags)
+__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
{
unsigned long (*get_area)(struct file *, unsigned long,
unsigned long, unsigned long, unsigned long)
@@ -1875,8 +1875,8 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
if (get_area)
addr = get_area(file, addr, len, pgoff, flags);
else
- addr = mm_get_unmapped_area(current->mm, file, addr, len,
- pgoff, flags);
+ addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
+ pgoff, flags, vm_flags);
if (IS_ERR_VALUE(addr))
return addr;

@@ -1889,8 +1889,6 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
return error ? error : addr;
}

-EXPORT_SYMBOL(get_unmapped_area);
-
unsigned long
mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
unsigned long addr, unsigned long len,
--
2.34.1



2024-03-13 09:07:35

by Christophe Leroy

Subject: Re: [PATCH v3 03/12] mm: Use get_unmapped_area_vmflags()



On 12/03/2024 at 23:28, Rick Edgecombe wrote:
> When memory is being placed, mmap() will take care to respect the guard
> gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
> VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
> needs to consider two things:
> 1. That the new mapping isn't placed in any existing mapping's guard
> gaps.
> 2. That the new mapping isn't placed such that any existing mappings
> end up inside *its* guard gaps.
>
> The long-standing behavior of mmap() is to ensure 1, but not take any care
> around 2. So, for example, if there is a PAGE_SIZE free area and a
> PAGE_SIZE mmap() of a type that has a guard gap is being placed, mmap()
> may place the shadow stack in that PAGE_SIZE free area. Then the mapping
> that is supposed to have a guard gap will not have a gap to the
> adjacent VMA.
>
> Use mm_get_unmapped_area_vmflags() in do_mmap() so that future changes
> can cause shadow stack mappings to be placed with a guard gap. Also use
> the THP variant that takes vm_flags, so that THP shadow stack can get the
> same treatment. Adjust the vm_flags calculation to happen earlier so that
> the vm_flags can be passed into __get_unmapped_area().
>
> Signed-off-by: Rick Edgecombe <[email protected]>
> ---
> v2:
> - Make get_unmapped_area() a static inline (Kirill)
> ---
> include/linux/mm.h | 11 ++++++++++-
> mm/mmap.c | 34 ++++++++++++++++------------------
> 2 files changed, 26 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f5a97dec5169..d91cde79aaee 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3363,7 +3363,16 @@ extern int install_special_mapping(struct mm_struct *mm,
> unsigned long randomize_stack_top(unsigned long stack_top);
> unsigned long randomize_page(unsigned long start, unsigned long range);
>
> -extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
> +unsigned long
> +__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> + unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags);
> +
> +static inline unsigned long
> +get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> + unsigned long pgoff, unsigned long flags)
> +{
> + return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
> +}
>
> extern unsigned long mmap_region(struct file *file, unsigned long addr,
> unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
> diff --git a/mm/mmap.c b/mm/mmap.c
> index e23ce8ca24c9..a3128ed26676 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1257,18 +1257,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> if (mm->map_count > sysctl_max_map_count)
> return -ENOMEM;
>
> - /* Obtain the address to map to. we verify (or select) it and ensure
> - * that it represents a valid section of the address space.
> - */
> - addr = get_unmapped_area(file, addr, len, pgoff, flags);
> - if (IS_ERR_VALUE(addr))
> - return addr;
> -
> - if (flags & MAP_FIXED_NOREPLACE) {
> - if (find_vma_intersection(mm, addr, addr + len))
> - return -EEXIST;
> - }
> -
> if (prot == PROT_EXEC) {
> pkey = execute_only_pkey(mm);
> if (pkey < 0)
> @@ -1282,6 +1270,18 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
> mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
>
> + /* Obtain the address to map to. we verify (or select) it and ensure
> + * that it represents a valid section of the address space.
> + */
> + addr = __get_unmapped_area(file, addr, len, pgoff, flags, vm_flags);
> + if (IS_ERR_VALUE(addr))
> + return addr;
> +
> + if (flags & MAP_FIXED_NOREPLACE) {
> + if (find_vma_intersection(mm, addr, addr + len))
> + return -EEXIST;
> + }
> +
> if (flags & MAP_LOCKED)
> if (!can_do_mlock())
> return -EPERM;
> @@ -1839,8 +1839,8 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *fi
> }
>
> unsigned long
> -get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> - unsigned long pgoff, unsigned long flags)
> +__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> + unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
> {
> unsigned long (*get_area)(struct file *, unsigned long,
> unsigned long, unsigned long, unsigned long)
> @@ -1875,8 +1875,8 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> if (get_area)
> addr = get_area(file, addr, len, pgoff, flags);

What about get_area()? What happens to vm_flags when that function is
used?

> else
> - addr = mm_get_unmapped_area(current->mm, file, addr, len,
> - pgoff, flags);
> + addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
> + pgoff, flags, vm_flags);
> if (IS_ERR_VALUE(addr))
> return addr;
>
> @@ -1889,8 +1889,6 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> return error ? error : addr;
> }
>
> -EXPORT_SYMBOL(get_unmapped_area);
> -

Don't you have to export __get_unmapped_area() so that the static inline
get_unmapped_area() used in a module has access to __get_unmapped_area()?
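
For a module, the inline wrapper would compile down to something like
this hypothetical snippet, hence the question about the underlying
symbol:

        /* hypothetical module code */
        unsigned long addr = get_unmapped_area(file, 0, len, 0, 0);
        /* after inlining, this is a direct call to __get_unmapped_area(),
         * which would then need its own EXPORT_SYMBOL() */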

If it was pointless to export get_unmapped_area(), its removal should be
a separate patch.

> unsigned long
> mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
> unsigned long addr, unsigned long len,

2024-03-13 14:55:40

by Edgecombe, Rick P

Subject: Re: [PATCH v3 03/12] mm: Use get_unmapped_area_vmflags()

On Wed, 2024-03-13 at 09:05 +0000, Christophe Leroy wrote:
>
> What about get_area()? What happens to vm_flags when that function is
> used?

Shadow stack can only be private and anonymous memory, so this is a way
to avoid unnecessary plumbing. And that plumbing becomes quite
extensive when you start intersecting with f_ops->get_unmapped_area(),
so it's pretty fortunate.
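
For illustration, the dispatch in __get_unmapped_area() ends up as below
(taken from the patch); the get_area() path keeps the old vm_flags-less
hook, which is fine because guard gap mappings are always private
anonymous memory and never take that branch:

        if (get_area)
                addr = get_area(file, addr, len, pgoff, flags);
        else
                addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
                                                    pgoff, flags, vm_flags);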

2024-03-13 16:00:49

by Edgecombe, Rick P

Subject: Re: [PATCH v3 03/12] mm: Use get_unmapped_area_vmflags()

On Wed, 2024-03-13 at 09:05 +0000, Christophe Leroy wrote:
> > -EXPORT_SYMBOL(get_unmapped_area);
> > -
>
> Don't you have to export __get_unmapped_area() so that the static inline
> get_unmapped_area() used in a module has access to __get_unmapped_area()?
>
> If it was pointless to export get_unmapped_area(), its removal should be
> a separate patch.

Yes, it seems to have been pointless. I can split it out. Thanks.

2024-03-13 17:58:06

by Christophe Leroy

Subject: Re: [PATCH v3 03/12] mm: Use get_unmapped_area_vmflags()



On 13/03/2024 at 15:55, Edgecombe, Rick P wrote:
> On Wed, 2024-03-13 at 09:05 +0000, Christophe Leroy wrote:
>>
>> What about get_area()? What happens to vm_flags when that function is
>> used?
>
> Shadow stack can only be private and anonymous memory, so this is a way
> to avoid unnecessary plumbing. And that plumbing becomes quite
> extensive when you start intersecting with f_ops->get_unmapped_area(),
> so it's pretty fortunate.

Ok,


Reviewed-by: Christophe Leroy <[email protected]>