2023-10-01 00:58:00

by Rik van Riel

Subject: [PATCH v5 0/3] hugetlbfs: close race between MADV_DONTNEED and page fault

v5: somehow a __vma_private_lock(vma) test failed to make it from my tree into the v4 series, fix that
v4: fix unmap_vmas locking issue pointed out by Mike Kravetz, and resulting lockdep fallout
v3: fix compile error w/ lockdep and test case errors with patch 3
v2: fix the locking bug found with the libhugetlbfs tests.

Malloc libraries, like jemalloc and tcmalloc, take decisions on when
to call madvise independently from the code in the main application.

This sometimes results in the application page faulting on an address,
right after the malloc library has shot down the backing memory with
MADV_DONTNEED.

Usually this is harmless, because we always have some 4kB pages
sitting around to satisfy a page fault. However, with hugetlbfs,
systems often allocate only the exact number of huge pages that
the application wants.

Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of
any lock taken on the page fault path, which can open up the following
race condition:

    CPU 1                               CPU 2

    MADV_DONTNEED
    unmap page
    shoot down TLB entry
                                        page fault
                                        fail to allocate a huge page
                                        killed with SIGBUS
    free page
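
As a purely hypothetical illustration (not part of the series), a minimal
userspace reproducer for this window could look roughly like the sketch
below. It assumes an anonymous MAP_HUGETLB mapping and a pool of exactly
one free 2MB huge page (nr_hugepages=1), rather than any particular
malloc library:

/* Hypothetical reproducer sketch, assumes nr_hugepages=1. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN	(2UL * 1024 * 1024)	/* one 2MB huge page */

static void *dontneed_loop(void *mem)
{
	/* Keep zapping the backing huge page, like the malloc library would. */
	for (;;)
		madvise(mem, LEN, MADV_DONTNEED);
	return NULL;
}

int main(void)
{
	pthread_t thread;
	void *mem = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_create(&thread, NULL, dontneed_loop, mem);

	/*
	 * Each write may fault right after the other thread zapped the
	 * page; if the freeing has not completed yet, the fault cannot
	 * allocate a huge page and the process is killed with SIGBUS.
	 */
	for (;;)
		memset(mem, 0, LEN);
}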

Fix that race by extending the hugetlb_vma_lock locking scheme to also
cover private hugetlb mappings (with resv_map), and pulling the locking
from __unmap_hugepage_final_range into helper functions called from
zap_page_range_single. This ensures page faults stay locked out of
the MADV_DONTNEED VMA until the huge pages have actually been freed.

The third patch in the series is more of an RFC. Using the
invalidate_lock instead of the hugetlb_vma_lock greatly simplifies
the code, but at the cost of turning a per-VMA lock into a lock
per backing hugetlbfs file, which could slow things down when
multiple processes are mapping the same hugetlbfs file.



2023-10-01 00:58:11

by Rik van Riel

Subject: [PATCH 1/3] hugetlbfs: extend hugetlb_vma_lock to private VMAs

From: Rik van Riel <[email protected]>

Extend the locking scheme used to protect shared hugetlb mappings
from truncate vs page fault races, in order to protect private
hugetlb mappings (with resv_map) against MADV_DONTNEED.

Add a read-write semaphore to the resv_map data structure, and
use that from the hugetlb_vma_(un)lock_* functions, in preparation
for closing the race between MADV_DONTNEED and page faults.

Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
---
include/linux/hugetlb.h | 6 ++++++
mm/hugetlb.c | 41 +++++++++++++++++++++++++++++++++++++----
2 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5b2626063f4f..694928fa06a3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -60,6 +60,7 @@ struct resv_map {
long adds_in_progress;
struct list_head region_cache;
long region_cache_count;
+ struct rw_semaphore rw_sema;
#ifdef CONFIG_CGROUP_HUGETLB
/*
* On private mappings, the counter to uncharge reservations is stored
@@ -1231,6 +1232,11 @@ static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
}

+static inline bool __vma_private_lock(struct vm_area_struct *vma)
+{
+ return (!(vma->vm_flags & VM_MAYSHARE)) && vma->vm_private_data;
+}
+
/*
* Safe version of huge_pte_offset() to check the locks. See comments
* above huge_pte_offset().
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba6d39b71cb1..ee7497f37098 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -97,6 +97,7 @@ static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
+static struct resv_map *vma_resv_map(struct vm_area_struct *vma);

static inline bool subpool_is_free(struct hugepage_subpool *spool)
{
@@ -267,6 +268,10 @@ void hugetlb_vma_lock_read(struct vm_area_struct *vma)
struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

down_read(&vma_lock->rw_sema);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ down_read(&resv_map->rw_sema);
}
}

@@ -276,6 +281,10 @@ void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

up_read(&vma_lock->rw_sema);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ up_read(&resv_map->rw_sema);
}
}

@@ -285,6 +294,10 @@ void hugetlb_vma_lock_write(struct vm_area_struct *vma)
struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

down_write(&vma_lock->rw_sema);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ down_write(&resv_map->rw_sema);
}
}

@@ -294,17 +307,27 @@ void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

up_write(&vma_lock->rw_sema);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ up_write(&resv_map->rw_sema);
}
}

int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
{
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

- if (!__vma_shareable_lock(vma))
- return 1;
+ if (__vma_shareable_lock(vma)) {
+ struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

- return down_write_trylock(&vma_lock->rw_sema);
+ return down_write_trylock(&vma_lock->rw_sema);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ return down_write_trylock(&resv_map->rw_sema);
+ }
+
+ return 1;
}

void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
@@ -313,6 +336,10 @@ void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

lockdep_assert_held(&vma_lock->rw_sema);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ lockdep_assert_held(&resv_map->rw_sema);
}
}

@@ -345,6 +372,11 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

__hugetlb_vma_unlock_write_put(vma_lock);
+ } else if (__vma_private_lock(vma)) {
+ struct resv_map *resv_map = vma_resv_map(vma);
+
+ /* no free for anon vmas, but still need to unlock */
+ up_write(&resv_map->rw_sema);
}
}

@@ -1068,6 +1100,7 @@ struct resv_map *resv_map_alloc(void)
kref_init(&resv_map->refs);
spin_lock_init(&resv_map->lock);
INIT_LIST_HEAD(&resv_map->regions);
+ init_rwsem(&resv_map->rw_sema);

resv_map->adds_in_progress = 0;
/*
--
2.41.0

2023-10-01 00:58:16

by Rik van Riel

Subject: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and page fault

From: Rik van Riel <[email protected]>

Malloc libraries, like jemalloc and tcmalloc, take decisions on when
to call madvise independently from the code in the main application.

This sometimes results in the application page faulting on an address,
right after the malloc library has shot down the backing memory with
MADV_DONTNEED.

Usually this is harmless, because we always have some 4kB pages
sitting around to satisfy a page fault. However, with hugetlbfs,
systems often allocate only the exact number of huge pages that
the application wants.

Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of
any lock taken on the page fault path, which can open up the following
race condition:

    CPU 1                               CPU 2

    MADV_DONTNEED
    unmap page
    shoot down TLB entry
                                        page fault
                                        fail to allocate a huge page
                                        killed with SIGBUS
    free page

Fix that race by pulling the locking from __unmap_hugepage_final_range
into helper functions called from zap_page_range_single. This ensures
page faults stay locked out of the MADV_DONTNEED VMA until the
huge pages have actually been freed.

Signed-off-by: Rik van Riel <[email protected]>
---
include/linux/hugetlb.h | 35 +++++++++++++++++++++++++++++++++--
mm/hugetlb.c | 20 +++++++++++---------
mm/memory.c | 13 ++++++++-----
3 files changed, 52 insertions(+), 16 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 694928fa06a3..d9ec500cfef9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -139,7 +139,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
void unmap_hugepage_range(struct vm_area_struct *,
unsigned long, unsigned long, struct page *,
zap_flags_t);
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
struct page *ref_page, zap_flags_t zap_flags);
@@ -246,6 +246,25 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
unsigned long *start, unsigned long *end);

+extern void __hugetlb_zap_begin(struct vm_area_struct *vma,
+ unsigned long *begin, unsigned long *end);
+extern void __hugetlb_zap_end(struct vm_area_struct *vma,
+ struct zap_details *details);
+
+static inline void hugetlb_zap_begin(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+ if (is_vm_hugetlb_page(vma))
+ __hugetlb_zap_begin(vma, start, end);
+}
+
+static inline void hugetlb_zap_end(struct vm_area_struct *vma,
+ struct zap_details *details)
+{
+ if (is_vm_hugetlb_page(vma))
+ __hugetlb_zap_end(vma, details);
+}
+
void hugetlb_vma_lock_read(struct vm_area_struct *vma);
void hugetlb_vma_unlock_read(struct vm_area_struct *vma);
void hugetlb_vma_lock_write(struct vm_area_struct *vma);
@@ -297,6 +316,18 @@ static inline void adjust_range_if_pmd_sharing_possible(
{
}

+static inline void hugetlb_zap_begin(
+ struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+}
+
+static inline void hugetlb_zap_end(
+ struct vm_area_struct *vma,
+ struct zap_details *details)
+{
+}
+
static inline struct page *hugetlb_follow_page_mask(
struct vm_area_struct *vma, unsigned long address, unsigned int flags,
unsigned int *page_mask)
@@ -442,7 +473,7 @@ static inline long hugetlb_change_protection(
return 0;
}

-static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, unsigned long start,
unsigned long end, struct page *ref_page,
zap_flags_t zap_flags)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ee7497f37098..397a26f70deb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5306,9 +5306,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
return len + old_addr - old_end;
}

-static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
- unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags)
+void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ struct page *ref_page, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
@@ -5435,16 +5435,18 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
tlb_flush_mmu_tlbonly(tlb);
}

-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
- struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
- zap_flags_t zap_flags)
+void __hugetlb_zap_begin(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
{
+ adjust_range_if_pmd_sharing_possible(vma, start, end);
hugetlb_vma_lock_write(vma);
i_mmap_lock_write(vma->vm_file->f_mapping);
+}

- /* mmu notification performed in caller */
- __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
+void __hugetlb_zap_end(struct vm_area_struct *vma,
+ struct zap_details *details)
+{
+ zap_flags_t zap_flags = details ? details->zap_flags : 0;

if (zap_flags & ZAP_FLAG_UNMAP) { /* final unmap */
/*
diff --git a/mm/memory.c b/mm/memory.c
index 6c264d2f969c..517221f01303 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1683,7 +1683,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
if (vma->vm_file) {
zap_flags_t zap_flags = details ?
details->zap_flags : 0;
- __unmap_hugepage_range_final(tlb, vma, start, end,
+ __unmap_hugepage_range(tlb, vma, start, end,
NULL, zap_flags);
}
} else
@@ -1728,8 +1728,12 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
start_addr, end_addr);
mmu_notifier_invalidate_range_start(&range);
do {
- unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
+ unsigned long start = start_addr;
+ unsigned long end = end_addr;
+ hugetlb_zap_begin(vma, &start, &end);
+ unmap_single_vma(tlb, vma, start, end, &details,
mm_wr_locked);
+ hugetlb_zap_end(vma, &details);
} while ((vma = mas_find(mas, tree_end - 1)) != NULL);
mmu_notifier_invalidate_range_end(&range);
}
@@ -1753,9 +1757,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
lru_add_drain();
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
address, end);
- if (is_vm_hugetlb_page(vma))
- adjust_range_if_pmd_sharing_possible(vma, &range.start,
- &range.end);
+ hugetlb_zap_begin(vma, &range.start, &range.end);
tlb_gather_mmu(&tlb, vma->vm_mm);
update_hiwater_rss(vma->vm_mm);
mmu_notifier_invalidate_range_start(&range);
@@ -1766,6 +1768,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
unmap_single_vma(&tlb, vma, address, end, details, false);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
+ hugetlb_zap_end(vma, details);
}

/**
--
2.41.0

2023-10-01 00:58:24

by Rik van Riel

Subject: [PATCH 3/3] hugetlbfs: replace hugetlb_vma_lock with invalidate_lock

From: Rik van Riel <[email protected]>

Replace the custom hugetlbfs VMA locking code with the recently
introduced invalidate_lock. This greatly simplifies things.

However, this is a large enough change that it should probably go in
separately from the other changes.

Suggested-by: Matthew Wilcox <[email protected]>
Signed-off-by: Rik van Riel <[email protected]>
---
fs/hugetlbfs/inode.c | 68 +-----------
include/linux/fs.h | 6 ++
include/linux/hugetlb.h | 21 +---
mm/hugetlb.c | 227 ++++------------------------------------
4 files changed, 32 insertions(+), 290 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316c4cebd3f3..711fd3f5d86f 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -485,7 +485,6 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
struct folio *folio, pgoff_t index)
{
struct rb_root_cached *root = &mapping->i_mmap;
- struct hugetlb_vma_lock *vma_lock;
struct page *page = &folio->page;
struct vm_area_struct *vma;
unsigned long v_start;
@@ -495,9 +494,9 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
start = index * pages_per_huge_page(h);
end = (index + 1) * pages_per_huge_page(h);

+ filemap_invalidate_lock(mapping);
i_mmap_lock_write(mapping);
-retry:
- vma_lock = NULL;
+
vma_interval_tree_foreach(vma, root, start, end - 1) {
v_start = vma_offset_start(vma, start);
v_end = vma_offset_end(vma, end);
@@ -505,62 +504,13 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
if (!hugetlb_vma_maps_page(vma, v_start, page))
continue;

- if (!hugetlb_vma_trylock_write(vma)) {
- vma_lock = vma->vm_private_data;
- /*
- * If we can not get vma lock, we need to drop
- * immap_sema and take locks in order. First,
- * take a ref on the vma_lock structure so that
- * we can be guaranteed it will not go away when
- * dropping immap_sema.
- */
- kref_get(&vma_lock->refs);
- break;
- }
-
unmap_hugepage_range(vma, v_start, v_end, NULL,
ZAP_FLAG_DROP_MARKER);
hugetlb_vma_unlock_write(vma);
}

+ filemap_invalidate_unlock(mapping);
i_mmap_unlock_write(mapping);
-
- if (vma_lock) {
- /*
- * Wait on vma_lock. We know it is still valid as we have
- * a reference. We must 'open code' vma locking as we do
- * not know if vma_lock is still attached to vma.
- */
- down_write(&vma_lock->rw_sema);
- i_mmap_lock_write(mapping);
-
- vma = vma_lock->vma;
- if (!vma) {
- /*
- * If lock is no longer attached to vma, then just
- * unlock, drop our reference and retry looking for
- * other vmas.
- */
- up_write(&vma_lock->rw_sema);
- kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
- goto retry;
- }
-
- /*
- * vma_lock is still attached to vma. Check to see if vma
- * still maps page and if so, unmap.
- */
- v_start = vma_offset_start(vma, start);
- v_end = vma_offset_end(vma, end);
- if (hugetlb_vma_maps_page(vma, v_start, page))
- unmap_hugepage_range(vma, v_start, v_end, NULL,
- ZAP_FLAG_DROP_MARKER);
-
- kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
- hugetlb_vma_unlock_write(vma);
-
- goto retry;
- }
}

static void
@@ -578,20 +528,10 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
unsigned long v_start;
unsigned long v_end;

- if (!hugetlb_vma_trylock_write(vma))
- continue;
-
v_start = vma_offset_start(vma, start);
v_end = vma_offset_end(vma, end);

unmap_hugepage_range(vma, v_start, v_end, NULL, zap_flags);
-
- /*
- * Note that vma lock only exists for shared/non-private
- * vmas. Therefore, lock is not held when calling
- * unmap_hugepage_range for private vmas.
- */
- hugetlb_vma_unlock_write(vma);
}
}

@@ -725,10 +665,12 @@ static void hugetlb_vmtruncate(struct inode *inode, loff_t offset)
pgoff = offset >> PAGE_SHIFT;

i_size_write(inode, offset);
+ filemap_invalidate_lock(mapping);
i_mmap_lock_write(mapping);
if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))
hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0,
ZAP_FLAG_DROP_MARKER);
+ filemap_invalidate_unlock(mapping);
i_mmap_unlock_write(mapping);
remove_inode_hugepages(inode, offset, LLONG_MAX);
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 4aeb3fa11927..b455a8913db4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -847,6 +847,12 @@ static inline void filemap_invalidate_lock(struct address_space *mapping)
down_write(&mapping->invalidate_lock);
}

+static inline int filemap_invalidate_trylock(
+ struct address_space *mapping)
+{
+ return down_write_trylock(&mapping->invalidate_lock);
+}
+
static inline void filemap_invalidate_unlock(struct address_space *mapping)
{
up_write(&mapping->invalidate_lock);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d9ec500cfef9..2908c47e7bf2 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -60,7 +60,6 @@ struct resv_map {
long adds_in_progress;
struct list_head region_cache;
long region_cache_count;
- struct rw_semaphore rw_sema;
#ifdef CONFIG_CGROUP_HUGETLB
/*
* On private mappings, the counter to uncharge reservations is stored
@@ -107,12 +106,6 @@ struct file_region {
#endif
};

-struct hugetlb_vma_lock {
- struct kref refs;
- struct rw_semaphore rw_sema;
- struct vm_area_struct *vma;
-};
-
extern struct resv_map *resv_map_alloc(void);
void resv_map_release(struct kref *ref);

@@ -1277,17 +1270,9 @@ hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
{
#if defined(CONFIG_HUGETLB_PAGE) && \
defined(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) && defined(CONFIG_LOCKDEP)
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- /*
- * If pmd sharing possible, locking needed to safely walk the
- * hugetlb pgtables. More information can be found at the comment
- * above huge_pte_offset() in the same file.
- *
- * NOTE: lockdep_is_held() is only defined with CONFIG_LOCKDEP.
- */
- if (__vma_shareable_lock(vma))
- WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
+ if (vma->vm_file)
+ WARN_ON_ONCE(!lockdep_is_held(
+ &vma->vm_file->f_mapping->invalidate_lock) &&
!lockdep_is_held(
&vma->vm_file->f_mapping->i_mmap_rwsem));
#endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 397a26f70deb..749f38537e4d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -92,9 +92,6 @@ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;

/* Forward declaration */
static int hugetlb_acct_memory(struct hstate *h, long delta);
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
-static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
-static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
@@ -264,170 +261,41 @@ static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
*/
void hugetlb_vma_lock_read(struct vm_area_struct *vma)
{
- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- down_read(&vma_lock->rw_sema);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- down_read(&resv_map->rw_sema);
- }
+ if (vma->vm_file)
+ filemap_invalidate_lock_shared(vma->vm_file->f_mapping);
}

void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
{
- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- up_read(&vma_lock->rw_sema);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- up_read(&resv_map->rw_sema);
- }
+ if (vma->vm_file)
+ filemap_invalidate_unlock_shared(vma->vm_file->f_mapping);
}

void hugetlb_vma_lock_write(struct vm_area_struct *vma)
{
- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- down_write(&vma_lock->rw_sema);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- down_write(&resv_map->rw_sema);
- }
+ if (vma->vm_file)
+ filemap_invalidate_lock(vma->vm_file->f_mapping);
}

void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
{
- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- up_write(&vma_lock->rw_sema);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- up_write(&resv_map->rw_sema);
- }
+ if (vma->vm_file)
+ filemap_invalidate_unlock(vma->vm_file->f_mapping);
}

int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
{

- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- return down_write_trylock(&vma_lock->rw_sema);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- return down_write_trylock(&resv_map->rw_sema);
- }
+ if (vma->vm_file)
+ return filemap_invalidate_trylock(vma->vm_file->f_mapping);

return 1;
}

void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
{
- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- lockdep_assert_held(&vma_lock->rw_sema);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- lockdep_assert_held(&resv_map->rw_sema);
- }
-}
-
-void hugetlb_vma_lock_release(struct kref *kref)
-{
- struct hugetlb_vma_lock *vma_lock = container_of(kref,
- struct hugetlb_vma_lock, refs);
-
- kfree(vma_lock);
-}
-
-static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
-{
- struct vm_area_struct *vma = vma_lock->vma;
-
- /*
- * vma_lock structure may or not be released as a result of put,
- * it certainly will no longer be attached to vma so clear pointer.
- * Semaphore synchronizes access to vma_lock->vma field.
- */
- vma_lock->vma = NULL;
- vma->vm_private_data = NULL;
- up_write(&vma_lock->rw_sema);
- kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
-}
-
-static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
-{
- if (__vma_shareable_lock(vma)) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- __hugetlb_vma_unlock_write_put(vma_lock);
- } else if (__vma_private_lock(vma)) {
- struct resv_map *resv_map = vma_resv_map(vma);
-
- /* no free for anon vmas, but still need to unlock */
- up_write(&resv_map->rw_sema);
- }
-}
-
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
-{
- /*
- * Only present in sharable vmas.
- */
- if (!vma || !__vma_shareable_lock(vma))
- return;
-
- if (vma->vm_private_data) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- down_write(&vma_lock->rw_sema);
- __hugetlb_vma_unlock_write_put(vma_lock);
- }
-}
-
-static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
-{
- struct hugetlb_vma_lock *vma_lock;
-
- /* Only establish in (flags) sharable vmas */
- if (!vma || !(vma->vm_flags & VM_MAYSHARE))
- return;
-
- /* Should never get here with non-NULL vm_private_data */
- if (vma->vm_private_data)
- return;
-
- vma_lock = kmalloc(sizeof(*vma_lock), GFP_KERNEL);
- if (!vma_lock) {
- /*
- * If we can not allocate structure, then vma can not
- * participate in pmd sharing. This is only a possible
- * performance enhancement and memory saving issue.
- * However, the lock is also used to synchronize page
- * faults with truncation. If the lock is not present,
- * unlikely races could leave pages in a file past i_size
- * until the file is removed. Warn in the unlikely case of
- * allocation failure.
- */
- pr_warn_once("HugeTLB: unable to allocate vma specific lock\n");
- return;
- }
-
- kref_init(&vma_lock->refs);
- init_rwsem(&vma_lock->rw_sema);
- vma_lock->vma = vma;
- vma->vm_private_data = vma_lock;
+ if (vma->vm_file)
+ lockdep_assert_held(&vma->vm_file->f_mapping->invalidate_lock);
}

/* Helper that removes a struct file_region from the resv_map cache and returns
@@ -1100,7 +968,6 @@ struct resv_map *resv_map_alloc(void)
kref_init(&resv_map->refs);
spin_lock_init(&resv_map->lock);
INIT_LIST_HEAD(&resv_map->regions);
- init_rwsem(&resv_map->rw_sema);

resv_map->adds_in_progress = 0;
/*
@@ -1195,22 +1062,11 @@ void hugetlb_dup_vma_private(struct vm_area_struct *vma)
VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
/*
* Clear vm_private_data
- * - For shared mappings this is a per-vma semaphore that may be
- * allocated in a subsequent call to hugetlb_vm_op_open.
- * Before clearing, make sure pointer is not associated with vma
- * as this will leak the structure. This is the case when called
- * via clear_vma_resv_huge_pages() and hugetlb_vm_op_open has already
- * been called to allocate a new structure.
* - For MAP_PRIVATE mappings, this is the reserve map which does
* not apply to children. Faults generated by the children are
* not guaranteed to succeed, even if read-only.
*/
- if (vma->vm_flags & VM_MAYSHARE) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- if (vma_lock && vma_lock->vma != vma)
- vma->vm_private_data = NULL;
- } else
+ if (!(vma->vm_flags & VM_MAYSHARE))
vma->vm_private_data = NULL;
}

@@ -4846,25 +4702,6 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
resv_map_dup_hugetlb_cgroup_uncharge_info(resv);
kref_get(&resv->refs);
}
-
- /*
- * vma_lock structure for sharable mappings is vma specific.
- * Clear old pointer (if copied via vm_area_dup) and allocate
- * new structure. Before clearing, make sure vma_lock is not
- * for this vma.
- */
- if (vma->vm_flags & VM_MAYSHARE) {
- struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
- if (vma_lock) {
- if (vma_lock->vma != vma) {
- vma->vm_private_data = NULL;
- hugetlb_vma_lock_alloc(vma);
- } else
- pr_warn("HugeTLB: vma_lock already exists in %s.\n", __func__);
- } else
- hugetlb_vma_lock_alloc(vma);
- }
}

static void hugetlb_vm_op_close(struct vm_area_struct *vma)
@@ -4875,8 +4712,6 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
unsigned long reserve, start, end;
long gbl_reserve;

- hugetlb_vma_lock_free(vma);
-
resv = vma_resv_map(vma);
if (!resv || !is_vma_resv_set(vma, HPAGE_RESV_OWNER))
return;
@@ -5440,30 +5275,16 @@ void __hugetlb_zap_begin(struct vm_area_struct *vma,
{
adjust_range_if_pmd_sharing_possible(vma, start, end);
hugetlb_vma_lock_write(vma);
- i_mmap_lock_write(vma->vm_file->f_mapping);
+ if (vma->vm_file)
+ i_mmap_lock_write(vma->vm_file->f_mapping);
}

void __hugetlb_zap_end(struct vm_area_struct *vma,
struct zap_details *details)
{
- zap_flags_t zap_flags = details ? details->zap_flags : 0;
-
- if (zap_flags & ZAP_FLAG_UNMAP) { /* final unmap */
- /*
- * Unlock and free the vma lock before releasing i_mmap_rwsem.
- * When the vma_lock is freed, this makes the vma ineligible
- * for pmd sharing. And, i_mmap_rwsem is required to set up
- * pmd sharing. This is important as page tables for this
- * unmapped range will be asynchrously deleted. If the page
- * tables are shared, there will be issues when accessed by
- * someone else.
- */
- __hugetlb_vma_unlock_write_free(vma);
- i_mmap_unlock_write(vma->vm_file->f_mapping);
- } else {
+ if (vma->vm_file)
i_mmap_unlock_write(vma->vm_file->f_mapping);
- hugetlb_vma_unlock_write(vma);
- }
+ hugetlb_vma_unlock_write(vma);
}

void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
@@ -6706,12 +6527,6 @@ bool hugetlb_reserve_pages(struct inode *inode,
return false;
}

- /*
- * vma specific semaphore used for pmd sharing and fault/truncation
- * synchronization
- */
- hugetlb_vma_lock_alloc(vma);
-
/*
* Only apply hugepage reservation if asked. At fault time, an
* attempt will be made for VM_NORESERVE to allocate a page
@@ -6834,7 +6649,6 @@ bool hugetlb_reserve_pages(struct inode *inode,
hugetlb_cgroup_uncharge_cgroup_rsvd(hstate_index(h),
chg * pages_per_huge_page(h), h_cg);
out_err:
- hugetlb_vma_lock_free(vma);
if (!vma || vma->vm_flags & VM_MAYSHARE)
/* Only call region_abort if the region_chg succeeded but the
* region_add failed or didn't run.
@@ -6904,13 +6718,10 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
/*
* match the virtual addresses, permission and the alignment of the
* page table page.
- *
- * Also, vma_lock (vm_private_data) is required for sharing.
*/
if (pmd_index(addr) != pmd_index(saddr) ||
vm_flags != svm_flags ||
- !range_in_vma(svma, sbase, s_end) ||
- !svma->vm_private_data)
+ !range_in_vma(svma, sbase, s_end))
return 0;

return saddr;
@@ -6930,8 +6741,6 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
*/
if (!(vma->vm_flags & VM_MAYSHARE))
return false;
- if (!vma->vm_private_data) /* vma lock required for sharing */
- return false;
if (!range_in_vma(vma, start, end))
return false;
return true;
--
2.41.0

2023-10-01 04:58:34

by Andrew Morton

Subject: Re: [PATCH v5 0/3] hugetlbfs: close race between MADV_DONTNEED and page fault

On Sat, 30 Sep 2023 20:55:47 -0400 [email protected] wrote:

> v5: somehow a __vma_private_lock(vma) test failed to make it from my tree into the v4 series, fix that
> v4: fix unmap_vmas locking issue pointed out by Mike Kravetz, and resulting lockdep fallout
> v3: fix compile error w/ lockdep and test case errors with patch 3
> v2: fix the locking bug found with the libhugetlbfs tests.
>
> Malloc libraries, like jemalloc and tcalloc, take decisions on when
> to call madvise independently from the code in the main application.
>
> This sometimes results in the application page faulting on an address,
> right after the malloc library has shot down the backing memory with
> MADV_DONTNEED.
>
> Usually this is harmless, because we always have some 4kB pages
> sitting around to satisfy a page fault. However, with hugetlbfs
> systems often allocate only the exact number of huge pages that
> the application wants.
>
> Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of
> any lock taken on the page fault path, which can open up the following
> race condition:
>
>     CPU 1                               CPU 2
>
>     MADV_DONTNEED
>     unmap page
>     shoot down TLB entry
>                                         page fault
>                                         fail to allocate a huge page
>                                         killed with SIGBUS
>     free page
>
> Fix that race by extending the hugetlb_vma_lock locking scheme to also
> cover private hugetlb mappings (with resv_map), and pulling the locking
> from __unmap_hugepage_final_range into helper functions called from
> zap_page_range_single. This ensures page faults stay locked out of
> the MADV_DONTNEED VMA until the huge pages have actually been freed.

Didn't we decide that [1/3] and [2/3] should be cc:stable?

> The third patch in the series is more of an RFC. Using the
> invalidate_lock instead of the hugetlb_vma_lock greatly simplifies
> the code, but at the cost of turning a per-VMA lock into a lock
> per backing hugetlbfs file, which could slow things down when
> multiple processes are mapping the same hugetlbfs file.

"could slow things down" is testable-for?

This third one I'd queue up for testing for a 6.7-rc1 merge, so I'll split
the series apart. Not a problem, but it would be a little better if
things were originally packaged that way.

2023-10-02 07:24:37

by Mike Kravetz

Subject: Re: [PATCH 3/3] hugetlbfs: replace hugetlb_vma_lock with invalidate_lock

On 09/30/23 20:55, [email protected] wrote:
> From: Rik van Riel <[email protected]>
>
> Replace the custom hugetlbfs VMA locking code with the recently
> introduced invalidate_lock. This greatly simplifies things.
>
> However, this is a large enough change that it should probably go in
> separately from the other changes.
>
> Suggested-by: Matthew Wilcox <[email protected]>
> Signed-off-by: Rik van Riel <[email protected]>
> ---
> fs/hugetlbfs/inode.c | 68 +-----------
> include/linux/fs.h | 6 ++
> include/linux/hugetlb.h | 21 +---
> mm/hugetlb.c | 227 ++++------------------------------------
> 4 files changed, 32 insertions(+), 290 deletions(-)

As noted elsewhere, there are issues with patch 2 of this series, and the
complete series does not pass libhugetlbfs tests. However, there were
questions about the performance characteristics of replacing hugetlb vma
lock with the invalidate_lock.

This is from commit 188a39725ad7 describing the performance gains from
the hugetlb vma lock.

The recent regression report [1] notes page fault and fork latency of
shared hugetlb mappings. To measure this, I created two simple programs:
1) map a shared hugetlb area, write fault all pages, unmap area
Do this in a continuous loop to measure faults per second
2) map a shared hugetlb area, write fault a few pages, fork and exit
Do this in a continuous loop to measure forks per second
These programs were run on a 48 CPU VM with 320GB memory. The shared
mapping size was 250GB. For comparison, a single instance of the program
was run. Then, multiple instances were run in parallel to introduce
lock contention. Changing the locking scheme results in a significant
performance benefit.

test             instances   unmodified   revert     vma
--------------------------------------------------------------------------
faults per sec   1           393043       395680     389932
faults per sec   24           71405        81191      79048
forks per sec    1             2802         2747       2725
forks per sec    24             439          536        500
Combined faults  24            1621        68070      53662
Combined forks   24             358           67        142

Combined test is when running both faulting program and forking program
simultaneously.
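
For reference, a rough sketch of what test program 1 above could look
like (a hypothetical reconstruction; the 1GB size, 10 second reporting
interval, and shared anonymous MAP_HUGETLB mapping are assumptions, and
the fork test is the same idea with a fork()/exit() pair instead of the
unmap):

/* Hypothetical sketch of test program 1; sizes and flags are assumptions. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)
#define MAP_SIZE	(1UL * 1024 * 1024 * 1024)	/* needs >= 512 huge pages */

int main(void)
{
	unsigned long faults = 0;
	time_t start = time(NULL);

	for (;;) {
		char *area = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
				  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB,
				  -1, 0);
		unsigned long off;

		if (area == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Write-fault every huge page in the shared area. */
		for (off = 0; off < MAP_SIZE; off += HPAGE_SIZE) {
			area[off] = 1;
			faults++;
		}

		munmap(area, MAP_SIZE);

		/* Report a rough faults-per-second figure every 10 seconds. */
		if (time(NULL) - start >= 10) {
			printf("%lu faults/sec\n", faults / 10);
			faults = 0;
			start = time(NULL);
		}
	}
}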

This series was 'stable enough' to run the test, although I did see some
bad PMD state warnings and threw out those runs. Here are the results:

test             instances   next-20230925   next-20230925+series
--------------------------------------------------------------------------
faults per sec   1           382994          386884
faults per sec   24           97959           75427
forks per sec    1             3105            3148
forks per sec    24             693             715
Combined faults  24           74506           31648
Combined forks   24             233             282

The significant measurement is 'Combined faults 24'. There is a 50+% drop,
which is better than I expected. It might be interesting to fix up all
issues in the series and rerun these tests?

Do note that the performance issue was originally reported as an issue
with a database using hugetlbfs (not my employer). I did not have access
to the actual DB to recreate the issue. However, the user verified that changes
in the 'Combined faults 24' measurement reflected changes in their DB
performance.
--
Mike Kravetz

2023-10-02 09:12:36

by Mike Kravetz

Subject: Re: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and page fault

On 09/30/23 20:55, [email protected] wrote:
> From: Rik van Riel <[email protected]>
>
> Malloc libraries, like jemalloc and tcalloc, take decisions on when
> to call madvise independently from the code in the main application.
>
> This sometimes results in the application page faulting on an address,
> right after the malloc library has shot down the backing memory with
> MADV_DONTNEED.
>
> Usually this is harmless, because we always have some 4kB pages
> sitting around to satisfy a page fault. However, with hugetlbfs
> systems often allocate only the exact number of huge pages that
> the application wants.
>
> Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of
> any lock taken on the page fault path, which can open up the following
> race condition:
>
>     CPU 1                               CPU 2
>
>     MADV_DONTNEED
>     unmap page
>     shoot down TLB entry
>                                         page fault
>                                         fail to allocate a huge page
>                                         killed with SIGBUS
>     free page
>
> Fix that race by pulling the locking from __unmap_hugepage_final_range
> into helper functions called from zap_page_range_single. This ensures
> page faults stay locked out of the MADV_DONTNEED VMA until the
> huge pages have actually been freed.
>
> Signed-off-by: Rik van Riel <[email protected]>
> ---
> include/linux/hugetlb.h | 35 +++++++++++++++++++++++++++++++++--
> mm/hugetlb.c | 20 +++++++++++---------
> mm/memory.c | 13 ++++++++-----

Hi Rik,

Something is not right here. I have not looked closely at the patch,
but running the libhugetlbfs test suite hits this NULL deref in misalign (2M: 32).

[ 51.891236] BUG: kernel NULL pointer dereference, address: 00000000000001c0
[ 51.892420] #PF: supervisor read access in kernel mode
[ 51.893353] #PF: error_code(0x0000) - not-present page
[ 51.894207] PGD 80000001eeac0067 P4D 80000001eeac0067 PUD 1fa577067 PMD 0
[ 51.895299] Oops: 0000 [#1] PREEMPT SMP PTI
[ 51.896010] CPU: 0 PID: 1004 Comm: misalign Not tainted 6.6.0-rc3-next-20230925+ #13
[ 51.897285] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-1.fc37 04/01/2014
[ 51.898674] RIP: 0010:__hugetlb_zap_begin+0x76/0x90
[ 51.899488] Code: 06 48 8b 3a 48 39 cf 73 11 48 81 c7 ff ff ff 3f 48 81 e7 00 00 00 c0 48 89 3a 48 89 df e8 42 cd ff ff 48 8b 83 88 00 00 00 5b <48> 8b b8 c0 01 00 00 48 81 c7 28 01 00 00 e9 87 3b 91 00 0f 1f 80
[ 51.902194] RSP: 0018:ffffc9000487bbf0 EFLAGS: 00010246
[ 51.903019] RAX: 0000000000000000 RBX: 00000000f7a00000 RCX: 00000000c0000000
[ 51.904088] RDX: 0000000000440073 RSI: ffffc9000487bc00 RDI: ffff8881fa71dcb8
[ 51.905207] RBP: 00000000f7800000 R08: 00000000f7a00000 R09: 00000000f7a00000
[ 51.906284] R10: ffff8881fb5b8040 R11: ffff8881fb5b89b0 R12: ffff8881fa71dcb8
[ 51.907351] R13: ffffc9000487bd80 R14: ffffc9000487bc78 R15: 0000000000000001
[ 51.908648] FS: 0000000000000000(0000) GS:ffff888277c00000(0063) knlGS:00000000f7c99700
[ 51.910613] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
[ 51.911983] CR2: 00000000000001c0 CR3: 00000001fa412005 CR4: 0000000000370ef0
[ 51.913417] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 51.914535] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 51.915602] Call Trace:
[ 51.916069] <TASK>
[ 51.916480] ? __die+0x1f/0x70
[ 51.917057] ? page_fault_oops+0x159/0x450
[ 51.917737] ? do_user_addr_fault+0x65/0x850
[ 51.919360] ? exc_page_fault+0x6d/0x1c0
[ 51.920021] ? asm_exc_page_fault+0x22/0x30
[ 51.920712] ? __hugetlb_zap_begin+0x76/0x90
[ 51.921440] unmap_vmas+0xb3/0x100
[ 51.922057] unmap_region.constprop.0+0xcc/0x140
[ 51.922837] ? lock_release+0x142/0x290
[ 51.923474] ? preempt_count_add+0x47/0xa0
[ 51.924150] mmap_region+0x565/0xab0
[ 51.924809] do_mmap+0x35a/0x520
[ 51.925384] vm_mmap_pgoff+0xdf/0x200
[ 51.926008] ksys_mmap_pgoff+0x18f/0x200
[ 51.926834] ? syscall_enter_from_user_mode_prepare+0x19/0x60
[ 51.928006] __do_fast_syscall_32+0x68/0x100
[ 51.928962] do_fast_syscall_32+0x2f/0x70
[ 51.929896] entry_SYSENTER_compat_after_hwframe+0x7b/0x8d

I think you previously built libhugetlbfs, so hopefully you can recreate.

The stack trace (and test) suggest hugetlbfs_file_mmap returns an error
due to misalignment, and then we unmap the vma just previously created.
Looks like the code is now calling hugetlb_zap_begin before unmap_single_vma.
The code/comment in unmap_single_vma mentions this special cleanup
case. Looks like vma->vm_file is NULL and __hugetlb_zap_begin is doing
an i_mmap_lock_write(vma->vm_file->f_mapping).

	if (start != end) {
		if (unlikely(is_vm_hugetlb_page(vma))) {
			/*
			 * It is undesirable to test vma->vm_file as it
			 * should be non-null for valid hugetlb area.
			 * However, vm_file will be NULL in the error
			 * cleanup path of mmap_region. When
			 * hugetlbfs ->mmap method fails,
			 * mmap_region() nullifies vma->vm_file
			 * before calling this function to clean up.
			 * Since no pte has actually been setup, it is
			 * safe to do nothing in this case.
			 */
			if (vma->vm_file) {
				zap_flags_t zap_flags = details ?
						details->zap_flags : 0;
				__unmap_hugepage_range_final(tlb, vma, start, end,
						NULL, zap_flags);
			}

Looks like vma->vm_file is NULL and __hugetlb_zap_begin is trying to do
i_mmap_lock_write(vma->vm_file->f_mapping).

Guess I did look closely. :)
--
Mike Kravetz

2023-10-02 13:25:45

by Rik van Riel

Subject: Re: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and page fault

On Sun, 2023-10-01 at 21:39 -0700, Mike Kravetz wrote:
>
> Looks like vma->vm_file is NULL and __hugetlb_zap_begin is trying to
> do
> i_mmap_lock_write(vma->vm_file->f_mapping).
>
> Guess I did look closely. :)

Ugh. It looks like the fix for this bug ended up getting pulled
into patch 3, instead of patch 2. I've had it in my code for a
while now :/

Let me move the fix for this thing into patch 2.

void __hugetlb_zap_begin(struct vm_area_struct *vma,
			 unsigned long *start, unsigned long *end)
{
	adjust_range_if_pmd_sharing_possible(vma, start, end);
	hugetlb_vma_lock_write(vma);
	if (vma->vm_file)
		i_mmap_lock_write(vma->vm_file->f_mapping);
}


--
All Rights Reversed.

2023-10-03 19:36:07

by Rik van Riel

Subject: Re: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and page fault

On Sun, 2023-10-01 at 21:39 -0700, Mike Kravetz wrote:
>
> Something is not right here.  I have not looked closely at the patch,
> but running libhugetlbfs test suite hits this NULL deref in misalign
> (2M: 32).

Hi Mike,

fixing the null dereference was easy, but I continued running
into a test case failure with linkhuge_rw. After tweaking the
code in my patches quite a few times, I finally ran out of
ideas and tried it on a tree without my patches.

I still see the test failure on upstream
2cf0f7156238 ("Merge tag 'nfs-for-6.6-2' of git://git.linux-nfs.org/projects/anna/linux-nfs")

This is with a modern glibc, and the __morecore assignments
in libhugetlbfs/morecore.c commented out.


HUGETLB_ELFMAP=R HUGETLB_SHARE=1 linkhuge_rw (2M: 32): Pool state:
(('hugepages-2048kB', (('free_hugepages', 1), ('resv_hugepages', 0),
('surplus_hugepages', 0), ('nr_hugepages_mempolicy', 1),
('nr_hugepages', 1), ('nr_overcommit_hugepages', 0))),)
Hugepage pool state not preserved!
BEFORE: (('hugepages-2048kB', (('free_hugepages', 1),
('resv_hugepages', 0), ('surplus_hugepages', 0),
('nr_hugepages_mempolicy', 1), ('nr_hugepages', 1),
('nr_overcommit_hugepages', 0))),)
AFTER: (('hugepages-2048kB', (('free_hugepages', 0), ('resv_hugepages',
0), ('surplus_hugepages', 0), ('nr_hugepages_mempolicy', 1),
('nr_hugepages', 1), ('nr_overcommit_hugepages', 0))),)


It may take a little while to figure this one out. I did some
bpftracing, but don't have a real smoking gun yet. The trace
certainly shows the last user of the leaked huge page going
into __unmap_hugepage_range.

--
All Rights Reversed.

2023-10-03 20:20:18

by Mike Kravetz

Subject: Re: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and page fault

On 10/03/23 15:35, Rik van Riel wrote:
> On Sun, 2023-10-01 at 21:39 -0700, Mike Kravetz wrote:
> >
> > Something is not right here.  I have not looked closely at the patch,
> > but running libhugetlbfs test suite hits this NULL deref in misalign
> > (2M: 32).
>
> Hi Mike,
>
> fixing the null dereference was easy, but I continued running
> into a test case failure with linkhuge_rw. After tweaking the
> code in my patches quite a few times, I finally ran out of
> ideas and tried it on a tree without my patches.
>
> I still see the test failure on upstream
> 2cf0f7156238 ("Merge tag 'nfs-for-6.6-2' of git://git.linux-
> nfs.org/projects/anna/linux-nfs")
>
> This is with a modern glibc, and the __morecore assignments
> in libhugetlbfs/morecore.c commented out.
>
>
> HUGETLB_ELFMAP=R HUGETLB_SHARE=1 linkhuge_rw (2M: 32): Pool state:
> (('hugepages-2048kB', (('free_hugepages', 1), ('resv_hugepages', 0),
> ('surplus_hugepages', 0), ('nr_hugepages_mempolicy', 1),
> ('nr_hugepages', 1), ('nr_overcommit_hugepages', 0))),)
> Hugepage pool state not preserved!
> BEFORE: (('hugepages-2048kB', (('free_hugepages', 1),
> ('resv_hugepages', 0), ('surplus_hugepages', 0),
> ('nr_hugepages_mempolicy', 1), ('nr_hugepages', 1),
> ('nr_overcommit_hugepages', 0))),)
> AFTER: (('hugepages-2048kB', (('free_hugepages', 0), ('resv_hugepages',
> 0), ('surplus_hugepages', 0), ('nr_hugepages_mempolicy', 1),
> ('nr_hugepages', 1), ('nr_overcommit_hugepages', 0))),)
>

Hi Rik,

When I started working on hugetlb several years ago, the following libhugetlbfs
tests failed. This was/is with a version of glibc that supports __morecore.

noresv-preserve-resv-page (2M: 32): FAIL mmap() 1: Invalid argument
HUGETLB_ELFMAP=RW linkhuge_rw (2M: 32): FAIL small_data is not hugepage
HUGETLB_ELFMAP=RW linkhuge_rw (2M: 64): FAIL small_data is not hugepage
HUGETLB_MINIMAL_COPY=no HUGETLB_ELFMAP=RW linkhuge_rw (2M: 32): FAIL small_data is not hugepage
HUGETLB_MINIMAL_COPY=no HUGETLB_ELFMAP=RW linkhuge_rw (2M: 64): FAIL small_data is not hugepage
HUGETLB_ELFMAP=RW HUGETLB_SHARE=0 linkhuge_rw (2M: 32): FAIL small_data is not hugepage
HUGETLB_ELFMAP=RW HUGETLB_SHARE=0 linkhuge_rw (2M: 64): FAIL small_data is not hugepage
HUGETLB_ELFMAP=RW HUGETLB_SHARE=1 linkhuge_rw (2M: 32): FAIL small_data is not hugepage
HUGETLB_ELFMAP=RW HUGETLB_SHARE=1 linkhuge_rw (2M: 64): FAIL small_data is not hugepage
alloc-instantiate-race shared (2M: 32): FAIL mmap() 1: Cannot allocate memory
alloc-instantiate-race private (2M: 32): FAIL mmap() 1: Cannot allocate memory
truncate_sigbus_versus_oom (2M: 32): FAIL mmap() reserving all pages: Invalid argument
mmap-gettest 10 2048 (2M: 32): FAIL Failed to mmap the hugetlb file: Invalid argument
shm-fork 10 2048 (2M: 32): FAIL shmget(): Invalid argument
shm-getraw 2048 /dev/full (2M: 32): FAIL shmget(): Invalid argument

I spent some time looking into the issues, but most were issues with the
tests themselves. I did not attempt to modify the tests, nor do I
remember all the issues.

Please consider the above failures normal and expected. They have been
this way for many years. Sorry for any waste of your time.

Of course, if you would like to look into these you are welcome.
--
Mike Kravetz

2023-10-04 00:20:43

by Rik van Riel

Subject: Re: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and page fault

On Tue, 2023-10-03 at 13:19 -0700, Mike Kravetz wrote:
> On 10/03/23 15:35, Rik van Riel wrote:
> > On Sun, 2023-10-01 at 21:39 -0700, Mike Kravetz wrote:
> > >
> > > Something is not right here.  I have not looked closely at the
> > > patch,
> > > but running libhugetlbfs test suite hits this NULL deref in
> > > misalign
> > > (2M: 32).
> >
> > Hi Mike,
> >
> > fixing the null dereference was easy, but I continued running
> > into a test case failure with linkhuge_rw. After tweaking the
> > code in my patches quite a few times, I finally ran out of
> > ideas and tried it on a tree without my patches.
> >
> > I still see the test failure on upstream
> > 2cf0f7156238 ("Merge tag 'nfs-for-6.6-2' of git://git.linux-
> > nfs.org/projects/anna/linux-nfs")
> >
> > This is with a modern glibc, and the __morecore assignments
> > in libhugetlbfs/morecore.c commented out.
> >
> >
> > HUGETLB_ELFMAP=R HUGETLB_SHARE=1 linkhuge_rw (2M: 32):  Pool state:
> > (('hugepages-2048kB', (('free_hugepages', 1), ('resv_hugepages',
> > 0),
> > ('surplus_hugepages', 0), ('nr_hugepages_mempolicy', 1),
> > ('nr_hugepages', 1), ('nr_overcommit_hugepages', 0))),)
> > Hugepage pool state not preserved!
> > BEFORE: (('hugepages-2048kB', (('free_hugepages', 1),
> > ('resv_hugepages', 0), ('surplus_hugepages', 0),
> > ('nr_hugepages_mempolicy', 1), ('nr_hugepages', 1),
> > ('nr_overcommit_hugepages', 0))),)
> > AFTER: (('hugepages-2048kB', (('free_hugepages', 0),
> > ('resv_hugepages',
> > 0), ('surplus_hugepages', 0), ('nr_hugepages_mempolicy', 1),
> > ('nr_hugepages', 1), ('nr_overcommit_hugepages', 0))),)
> >
>
> Please consider the above failures normal and expected.  They have
> been
> this way for many years.  Sorry for any waste of your time.
>
> Of course, if you would like to look into these you are welcome.

I'm not too worried about the test cases returning failure,
but having free_hugepages not go back to 1 after linkhuge_rw
exits looks bad.

In this case it appears that linkhuge_rw simply left behind
a file in /dev/hugepages when it died, and removing that file
returns free_hugepages back to what it should be.

I guess I'll go run the test cases without -c 1 :)

--
All Rights Reversed.