The foliation of remove_migration_pte() is currently wrong on hugetlb
anon entries, causing LTP move_pages12 to crash on BUG_ON(!PageLocked)
in hugepage_add_anon_rmap().
Fixes: b4010e88f071 ("mm/migrate: Convert remove_migration_ptes() to folios")
Signed-off-by: Hugh Dickins <[email protected]>
---
Please just fold in if you agree.
mm/migrate.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- mmotm/mm/migrate.c
+++ linux/mm/migrate.c
@@ -182,7 +182,8 @@ static bool remove_migration_pte(struct
 		struct page *new;
 		unsigned long idx = 0;
 
-		if (!folio_test_ksm(folio))
+		/* Skip call in common case, plus .pgoff is invalid for KSM */
+		if (pvmw.nr_pages != 1 && !folio_test_hugetlb(folio))
 			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
 		new = folio_page(folio, idx);
On Sat, Feb 26, 2022 at 06:25:15PM -0800, Hugh Dickins wrote:
> -		if (!folio_test_ksm(folio))
> +		/* Skip call in common case, plus .pgoff is invalid for KSM */
> +		if (pvmw.nr_pages != 1 && !folio_test_hugetlb(folio))
>  			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
How do you feel about this instead?
-		if (!folio_test_ksm(folio))
+		/* pgoff is invalid for ksm pages, but they are never large */
+		if (folio_test_large(folio) && !folio_test_hugetlb(folio))
 			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
On Mon, 28 Feb 2022, Matthew Wilcox wrote:
> On Sat, Feb 26, 2022 at 06:25:15PM -0800, Hugh Dickins wrote:
> > -		if (!folio_test_ksm(folio))
> > +		/* Skip call in common case, plus .pgoff is invalid for KSM */
> > +		if (pvmw.nr_pages != 1 && !folio_test_hugetlb(folio))
> >  			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
>
> How do you feel about this instead?
>
> -		if (!folio_test_ksm(folio))
> +		/* pgoff is invalid for ksm pages, but they are never large */
> +		if (folio_test_large(folio) && !folio_test_hugetlb(folio))
>  			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
>
That looks nicer to me too. I'll assume that's what you will add
or squash in your tree, and no need for me to resend - thanks.
Hugh