The foliation of THP collapse_file()'s call to try_to_unmap() is
currently wrong, crashing on a test in rmap_walk() when xas_next()
delivered a value entry (after which "page" has been loaded independently
of "folio", so the two no longer refer to the same object).
Fixes: c3b522d9a698 ("mm/rmap: Convert try_to_unmap() to take a folio")
Signed-off-by: Hugh Dickins <[email protected]>
---
Please just fold in if you agree.
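To spell out the hazard being fixed (a simplified sketch, not the
verbatim collapse_file() code; the shmem_getpage() reload is shown
schematically):

	struct folio *folio = xas_next(&xas);	/* may be a value entry */
	struct page *page = &folio->page;

	if (xa_is_value(folio)) {
		/*
		 * Swapped out: read the page back in.  This reloads
		 * "page", but "folio" still holds the stale value entry.
		 */
		if (shmem_getpage(mapping->host, index, &page, SGP_NOALLOC))
			goto out;
	}

	if (page_mapped(page))
		/* "folio" may still be the value entry: rmap_walk() crashes */
		try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);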
mm/khugepaged.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- mmotm/mm/khugepaged.c
+++ linux/mm/khugepaged.c
@@ -1824,7 +1824,8 @@ static void collapse_file(struct mm_stru
 		}
 
 		if (page_mapped(page))
-			try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
+			try_to_unmap(page_folio(page),
+				     TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
 
 		xas_lock_irq(&xas);
 		xas_set(&xas, index);
On Sat, Feb 26, 2022 at 06:22:47PM -0800, Hugh Dickins wrote:
> The foliation of THP collapse_file()'s call to try_to_unmap() is
> currently wrong, crashing on a test in rmap_walk() when xas_next()
> delivered a value entry (after which "page" has been loaded independently
> of "folio", so the two no longer refer to the same object).
Argh. I have a fear of this exact bug, and I must have missed checking
for it this time. I hate trying to keep two variables in sync, so my
preferred fix for this is to remove it for this merge window:
+++ b/mm/khugepaged.c
@@ -1699,8 +1699,7 @@ static void collapse_file(struct mm_struct *mm,
 
 	xas_set(&xas, start);
 	for (index = start; index < end; index++) {
-		struct folio *folio = xas_next(&xas);
-		struct page *page = &folio->page;
+		struct page *page = xas_next(&xas);
 
 		VM_BUG_ON(index != xas.xa_index);
 		if (is_shmem) {
@@ -1835,7 +1834,8 @@ static void collapse_file(struct mm_struct *mm,
 		}
 
 		if (page_mapped(page))
-			try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
+			try_to_unmap(page_folio(page),
+				     TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
 
 		xas_lock_irq(&xas);
 		xas_set(&xas, index);
(ie revert the first hunk). I'll come back to khugepaged in the next
merge window and convert this function properly. It's going to take
some surgery to shmem in order to use folios there first ...
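The single-cursor form above can't drift: "folio" no longer exists as
a separate variable, and the folio is derived from "page" only at the
point of use, e.g. (again just a sketch of the shape, not the full loop):

	struct page *page = xas_next(&xas);	/* the only cursor */

	/* ... "page" may legitimately be reloaded in between ... */

	if (page_mapped(page))
		/* derive the folio from whatever "page" points at *now* */
		try_to_unmap(page_folio(page),
			     TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);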