2021-05-10 19:55:04

by Liam R. Howlett

Subject: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup

Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
identical code blocks. Replace both blocks by a static inline function.

Signed-off-by: Liam R. Howlett <[email protected]>
---
mm/mmap.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 81f5595a8490..ea556fc795d2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2801,6 +2801,21 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
return __split_vma(mm, vma, addr, new_below);
}

+static inline void unlock_range(struct vm_area_struct *start, unsigned long limit)
+{
+ struct mm_struct *mm = start->vm_mm;
+ struct vm_area_struct *tmp = start;
+
+ while (tmp && tmp->vm_start < limit) {
+ if (tmp->vm_flags & VM_LOCKED) {
+ mm->locked_vm -= vma_pages(tmp);
+ munlock_vma_pages_all(tmp);
+ }
+
+ tmp = tmp->vm_next;
+ }
+}
+
/* Munmap is split into 2 main parts -- this part which finds
* what needs doing, and the areas themselves, which do the
* work. This now handles partial unmappings.
@@ -2889,17 +2904,8 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
/*
* unlock any mlock()ed ranges before detaching vmas
*/
- if (mm->locked_vm) {
- struct vm_area_struct *tmp = vma;
- while (tmp && tmp->vm_start < end) {
- if (tmp->vm_flags & VM_LOCKED) {
- mm->locked_vm -= vma_pages(tmp);
- munlock_vma_pages_all(tmp);
- }
-
- tmp = tmp->vm_next;
- }
- }
+ if (mm->locked_vm)
+ unlock_range(vma, end);

/* Detach vmas from rbtree */
if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
@@ -3184,14 +3190,8 @@ void exit_mmap(struct mm_struct *mm)
mmap_write_unlock(mm);
}

- if (mm->locked_vm) {
- vma = mm->mmap;
- while (vma) {
- if (vma->vm_flags & VM_LOCKED)
- munlock_vma_pages_all(vma);
- vma = vma->vm_next;
- }
- }
+ if (mm->locked_vm)
+ unlock_range(mm->mmap, ULONG_MAX);

arch_exit_mmap(mm);

--
2.30.2


2021-05-10 20:00:46

by Matthew Wilcox

Subject: Re: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup

On Mon, May 10, 2021 at 07:50:22PM +0000, Liam Howlett wrote:
> Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
> identical code blocks. Replace both blocks by a static inline function.
>
> Signed-off-by: Liam R. Howlett <[email protected]>

Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>

> +static inline void unlock_range(struct vm_area_struct *start, unsigned long limit)

Seems like an unnecessary >80 column line ...

static inline
void unlock_range(struct vm_area_struct *start, unsigned long limit)

2021-05-10 21:03:23

by Liam R. Howlett

Subject: Re: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup

* Matthew Wilcox <[email protected]> [210510 15:57]:
> On Mon, May 10, 2021 at 07:50:22PM +0000, Liam Howlett wrote:
> > Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
> > identical code blocks. Replace both blocks by a static inline function.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
>
> Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
>
> > +static inline void unlock_range(struct vm_area_struct *start, unsigned long limit)
>
> Seems like an unnecessary >80 column line ...
>
> static inline
> void unlock_range(struct vm_area_struct *start, unsigned long limit)
>

Sorry about that; checkpatch did not catch this either. I will send a v2.
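For reference, the v2 helper would presumably keep the body from the patch
above unchanged and only wrap the declaration as Matthew suggested, roughly:

static inline
void unlock_range(struct vm_area_struct *start, unsigned long limit)
{
	struct mm_struct *mm = start->vm_mm;
	struct vm_area_struct *tmp = start;

	/* Walk forward from start until a VMA begins at or past limit. */
	while (tmp && tmp->vm_start < limit) {
		if (tmp->vm_flags & VM_LOCKED) {
			/* Undo the mlock accounting and unlock the VMA's pages. */
			mm->locked_vm -= vma_pages(tmp);
			munlock_vma_pages_all(tmp);
		}

		tmp = tmp->vm_next;
	}
}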

2021-05-11 21:15:34

by Davidlohr Bueso

Subject: Re: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup

On Mon, 10 May 2021, Liam Howlett wrote:

>Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
>identical code blocks. Replace both blocks by a static inline function.
>
>Signed-off-by: Liam R. Howlett <[email protected]>

Reviewed-by: Davidlohr Bueso <[email protected]>