2022-05-09 02:38:00

by Baolin Wang

Subject: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb, which
means they can support not only PMD/PUD size hugetlb (2M and 1G), but also
CONT-PTE/PMD size hugetlb (64K and 32M) when a 4K base page size is used.
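
For reference, here is a small sketch (illustration only, not part of the
patch) of where those sizes come from with a 4K granule; the 512-entry
page tables and the 16-entry contiguous ranges below are the ARM64
4K-granule values:

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4UL << 10;		/* 4K base page */
	unsigned long ptrs_per_table = 512;		/* entries per page table */
	unsigned long cont_ptes = 16, cont_pmds = 16;	/* contiguous ranges */
	unsigned long pmd_size = ptrs_per_table * page_size;	/* 2M */
	unsigned long pud_size = ptrs_per_table * pmd_size;	/* 1G */

	printf("PMD      hugetlb: %luM\n", pmd_size >> 20);
	printf("PUD      hugetlb: %luG\n", pud_size >> 30);
	printf("CONT-PTE hugetlb: %luK\n", (cont_ptes * page_size) >> 10);
	printf("CONT-PMD hugetlb: %luM\n", (cont_pmds * pmd_size) >> 20);
	return 0;
}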

When migrating a hugetlb page, we look up the relevant page table entry
with huge_pte_offset() only once to nuke it and remap it with a migration
pte entry. This is correct for PMD or PUD size hugetlb, since they always
contain only one pmd entry or pud entry in the page table.

However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb, since
they can contain several contiguous pte or pmd entries with the same page
table attributes. So we end up nuking or remapping only one pte or pmd
entry of a CONT-PTE/PMD size hugetlb page, which is not what hugetlb
migration expects. The problem is that the subpages' data of a hugetlb
page can still be modified while the page is being migrated, which can
cause serious data consistency issues, since we did not nuke the page
table entries and set migration ptes for all of the hugetlb page's
subpages.

To fix this issue, switch to huge_ptep_clear_flush() to nuke a hugetlb
page table entry, and remap it with set_huge_pte_at() and
set_huge_swap_pte_at() when migrating a hugetlb page; these helpers
already handle CONT-PTE and CONT-PMD size hugetlb.
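
Condensed, the resulting logic in try_to_migrate_one() looks roughly like
this (a sketch distilled from the diff below, not a standalone compilable
excerpt):

	if (folio_test_hugetlb(folio)) {
		/* clears and flushes every entry of a CONT-PTE/PMD hugetlb page */
		pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
	} else {
		flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}
	...
	if (folio_test_hugetlb(folio))
		/* writes the migration entry into all contiguous entries */
		set_huge_swap_pte_at(mm, address, pvmw.pte, swp_pte,
				     vma_mmu_pagesize(vma));
	else
		set_pte_at(mm, address, pvmw.pte, swp_pte);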

Signed-off-by: Baolin Wang <[email protected]>
---
mm/rmap.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 6fdd198..7cf2408 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1924,13 +1924,15 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
break;
}
}
+
+ /* Nuke the hugetlb page table entry */
+ pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
} else {
flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
+ /* Nuke the page table entry. */
+ pteval = ptep_clear_flush(vma, address, pvmw.pte);
}

- /* Nuke the page table entry. */
- pteval = ptep_clear_flush(vma, address, pvmw.pte);
-
/* Set the dirty flag on the folio now the pte is gone. */
if (pte_dirty(pteval))
folio_mark_dirty(folio);
@@ -2015,7 +2017,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
pte_t swp_pte;

if (arch_unmap_one(mm, vma, address, pteval) < 0) {
- set_pte_at(mm, address, pvmw.pte, pteval);
+ if (folio_test_hugetlb(folio))
+ set_huge_pte_at(mm, address, pvmw.pte, pteval);
+ else
+ set_pte_at(mm, address, pvmw.pte, pteval);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
@@ -2024,7 +2029,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
!anon_exclusive, subpage);
if (anon_exclusive &&
page_try_share_anon_rmap(subpage)) {
- set_pte_at(mm, address, pvmw.pte, pteval);
+ if (folio_test_hugetlb(folio))
+ set_huge_pte_at(mm, address, pvmw.pte, pteval);
+ else
+ set_pte_at(mm, address, pvmw.pte, pteval);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
@@ -2050,7 +2058,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
swp_pte = pte_swp_mksoft_dirty(swp_pte);
if (pte_uffd_wp(pteval))
swp_pte = pte_swp_mkuffd_wp(swp_pte);
- set_pte_at(mm, address, pvmw.pte, swp_pte);
+ if (folio_test_hugetlb(folio))
+ set_huge_swap_pte_at(mm, address, pvmw.pte,
+ swp_pte, vma_mmu_pagesize(vma));
+ else
+ set_pte_at(mm, address, pvmw.pte, swp_pte);
trace_set_migration_pte(address, pte_val(swp_pte),
compound_order(&folio->page));
/*
--
1.8.3.1



2022-05-09 04:48:58

by Baolin Wang

Subject: Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

Hi,

On 5/8/2022 8:01 PM, kernel test robot wrote:
> Hi Baolin,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on akpm-mm/mm-everything]
> [also build test ERROR on next-20220506]
> [cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036
> base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> config: x86_64-randconfig-a013 (https://download.01.org/0day-ci/archive/20220508/[email protected]/config)
> compiler: gcc-11 (Debian 11.2.0-20) 11.2.0
> reproduce (this is a W=1 build):
> # https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036
> git checkout 907981b27213707fdb2f8a24c107d6752a09a773
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <[email protected]>
>
> All errors (new ones prefixed by >>):
>
> mm/rmap.c: In function 'try_to_migrate_one':
>>> mm/rmap.c:1931:34: error: implicit declaration of function 'huge_ptep_clear_flush'; did you mean 'ptep_clear_flush'? [-Werror=implicit-function-declaration]
> 1931 | pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> | ^~~~~~~~~~~~~~~~~~~~~
> | ptep_clear_flush
>>> mm/rmap.c:1931:34: error: incompatible types when assigning to type 'pte_t' from type 'int'
>>> mm/rmap.c:2023:41: error: implicit declaration of function 'set_huge_pte_at'; did you mean 'set_huge_swap_pte_at'? [-Werror=implicit-function-declaration]
> 2023 | set_huge_pte_at(mm, address, pvmw.pte, pteval);
> | ^~~~~~~~~~~~~~~
> | set_huge_swap_pte_at
> cc1: some warnings being treated as errors

Thanks for reporting. I think I should add some dummy functions to
hugetlb.h for the case where CONFIG_HUGETLB_PAGE is not enabled. In that
configuration folio_test_hugetlb() always returns false, so the hugetlb
branches in try_to_migrate_one() are never taken and the stubs only need
to satisfy the compiler. The build passes with the changes below and your
config file.

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 306d6ef..9f71043 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -1093,6 +1093,17 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
					pte_t *ptep, pte_t pte,
					unsigned long sz)
{
}
+
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
+{
+	return ptep_get(ptep);
+}
+
+static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+				   pte_t *ptep, pte_t pte)
+{
+}
#endif /* CONFIG_HUGETLB_PAGE */

2022-05-09 08:00:46

by kernel test robot

Subject: Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

Hi Baolin,

I love your patch! Yet something to improve:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on next-20220506]
[cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
config: x86_64-randconfig-a013 (https://download.01.org/0day-ci/archive/20220508/[email protected]/config)
compiler: gcc-11 (Debian 11.2.0-20) 11.2.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036
git checkout 907981b27213707fdb2f8a24c107d6752a09a773
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

mm/rmap.c: In function 'try_to_migrate_one':
>> mm/rmap.c:1931:34: error: implicit declaration of function 'huge_ptep_clear_flush'; did you mean 'ptep_clear_flush'? [-Werror=implicit-function-declaration]
1931 | pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
| ^~~~~~~~~~~~~~~~~~~~~
| ptep_clear_flush
>> mm/rmap.c:1931:34: error: incompatible types when assigning to type 'pte_t' from type 'int'
>> mm/rmap.c:2023:41: error: implicit declaration of function 'set_huge_pte_at'; did you mean 'set_huge_swap_pte_at'? [-Werror=implicit-function-declaration]
2023 | set_huge_pte_at(mm, address, pvmw.pte, pteval);
| ^~~~~~~~~~~~~~~
| set_huge_swap_pte_at
cc1: some warnings being treated as errors


vim +1931 mm/rmap.c

1883
1884 /* Unexpected PMD-mapped THP? */
1885 VM_BUG_ON_FOLIO(!pvmw.pte, folio);
1886
1887 subpage = folio_page(folio,
1888 pte_pfn(*pvmw.pte) - folio_pfn(folio));
1889 address = pvmw.address;
1890 anon_exclusive = folio_test_anon(folio) &&
1891 PageAnonExclusive(subpage);
1892
1893 if (folio_test_hugetlb(folio)) {
1894 /*
1895 * huge_pmd_unshare may unmap an entire PMD page.
1896 * There is no way of knowing exactly which PMDs may
1897 * be cached for this mm, so we must flush them all.
1898 * start/end were already adjusted above to cover this
1899 * range.
1900 */
1901 flush_cache_range(vma, range.start, range.end);
1902
1903 if (!folio_test_anon(folio)) {
1904 /*
1905 * To call huge_pmd_unshare, i_mmap_rwsem must be
1906 * held in write mode. Caller needs to explicitly
1907 * do this outside rmap routines.
1908 */
1909 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
1910
1911 if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
1912 flush_tlb_range(vma, range.start, range.end);
1913 mmu_notifier_invalidate_range(mm, range.start,
1914 range.end);
1915
1916 /*
1917 * The ref count of the PMD page was dropped
1918 * which is part of the way map counting
1919 * is done for shared PMDs. Return 'true'
1920 * here. When there is no other sharing,
1921 * huge_pmd_unshare returns false and we will
1922 * unmap the actual page and drop map count
1923 * to zero.
1924 */
1925 page_vma_mapped_walk_done(&pvmw);
1926 break;
1927 }
1928 }
1929
1930 /* Nuke the hugetlb page table entry */
> 1931 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
1932 } else {
1933 flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
1934 /* Nuke the page table entry. */
1935 pteval = ptep_clear_flush(vma, address, pvmw.pte);
1936 }
1937
1938 /* Set the dirty flag on the folio now the pte is gone. */
1939 if (pte_dirty(pteval))
1940 folio_mark_dirty(folio);
1941
1942 /* Update high watermark before we lower rss */
1943 update_hiwater_rss(mm);
1944
1945 if (folio_is_zone_device(folio)) {
1946 unsigned long pfn = folio_pfn(folio);
1947 swp_entry_t entry;
1948 pte_t swp_pte;
1949
1950 if (anon_exclusive)
1951 BUG_ON(page_try_share_anon_rmap(subpage));
1952
1953 /*
1954 * Store the pfn of the page in a special migration
1955 * pte. do_swap_page() will wait until the migration
1956 * pte is removed and then restart fault handling.
1957 */
1958 entry = pte_to_swp_entry(pteval);
1959 if (is_writable_device_private_entry(entry))
1960 entry = make_writable_migration_entry(pfn);
1961 else if (anon_exclusive)
1962 entry = make_readable_exclusive_migration_entry(pfn);
1963 else
1964 entry = make_readable_migration_entry(pfn);
1965 swp_pte = swp_entry_to_pte(entry);
1966
1967 /*
1968 * pteval maps a zone device page and is therefore
1969 * a swap pte.
1970 */
1971 if (pte_swp_soft_dirty(pteval))
1972 swp_pte = pte_swp_mksoft_dirty(swp_pte);
1973 if (pte_swp_uffd_wp(pteval))
1974 swp_pte = pte_swp_mkuffd_wp(swp_pte);
1975 set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
1976 trace_set_migration_pte(pvmw.address, pte_val(swp_pte),
1977 compound_order(&folio->page));
1978 /*
1979 * No need to invalidate here it will synchronize on
1980 * against the special swap migration pte.
1981 *
1982 * The assignment to subpage above was computed from a
1983 * swap PTE which results in an invalid pointer.
1984 * Since only PAGE_SIZE pages can currently be
1985 * migrated, just set it to page. This will need to be
1986 * changed when hugepage migrations to device private
1987 * memory are supported.
1988 */
1989 subpage = &folio->page;
1990 } else if (PageHWPoison(subpage)) {
1991 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
1992 if (folio_test_hugetlb(folio)) {
1993 hugetlb_count_sub(folio_nr_pages(folio), mm);
1994 set_huge_swap_pte_at(mm, address,
1995 pvmw.pte, pteval,
1996 vma_mmu_pagesize(vma));
1997 } else {
1998 dec_mm_counter(mm, mm_counter(&folio->page));
1999 set_pte_at(mm, address, pvmw.pte, pteval);
2000 }
2001
2002 } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
2003 /*
2004 * The guest indicated that the page content is of no
2005 * interest anymore. Simply discard the pte, vmscan
2006 * will take care of the rest.
2007 * A future reference will then fault in a new zero
2008 * page. When userfaultfd is active, we must not drop
2009 * this page though, as its main user (postcopy
2010 * migration) will not expect userfaults on already
2011 * copied pages.
2012 */
2013 dec_mm_counter(mm, mm_counter(&folio->page));
2014 /* We have to invalidate as we cleared the pte */
2015 mmu_notifier_invalidate_range(mm, address,
2016 address + PAGE_SIZE);
2017 } else {
2018 swp_entry_t entry;
2019 pte_t swp_pte;
2020
2021 if (arch_unmap_one(mm, vma, address, pteval) < 0) {
2022 if (folio_test_hugetlb(folio))
> 2023 set_huge_pte_at(mm, address, pvmw.pte, pteval);
2024 else
2025 set_pte_at(mm, address, pvmw.pte, pteval);
2026 ret = false;
2027 page_vma_mapped_walk_done(&pvmw);
2028 break;
2029 }
2030 VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) &&
2031 !anon_exclusive, subpage);
2032 if (anon_exclusive &&
2033 page_try_share_anon_rmap(subpage)) {
2034 if (folio_test_hugetlb(folio))
2035 set_huge_pte_at(mm, address, pvmw.pte, pteval);
2036 else
2037 set_pte_at(mm, address, pvmw.pte, pteval);
2038 ret = false;
2039 page_vma_mapped_walk_done(&pvmw);
2040 break;
2041 }
2042
2043 /*
2044 * Store the pfn of the page in a special migration
2045 * pte. do_swap_page() will wait until the migration
2046 * pte is removed and then restart fault handling.
2047 */
2048 if (pte_write(pteval))
2049 entry = make_writable_migration_entry(
2050 page_to_pfn(subpage));
2051 else if (anon_exclusive)
2052 entry = make_readable_exclusive_migration_entry(
2053 page_to_pfn(subpage));
2054 else
2055 entry = make_readable_migration_entry(
2056 page_to_pfn(subpage));
2057
2058 swp_pte = swp_entry_to_pte(entry);
2059 if (pte_soft_dirty(pteval))
2060 swp_pte = pte_swp_mksoft_dirty(swp_pte);
2061 if (pte_uffd_wp(pteval))
2062 swp_pte = pte_swp_mkuffd_wp(swp_pte);
2063 if (folio_test_hugetlb(folio))
2064 set_huge_swap_pte_at(mm, address, pvmw.pte,
2065 swp_pte, vma_mmu_pagesize(vma));
2066 else
2067 set_pte_at(mm, address, pvmw.pte, swp_pte);
2068 trace_set_migration_pte(address, pte_val(swp_pte),
2069 compound_order(&folio->page));
2070 /*
2071 * No need to invalidate here it will synchronize on
2072 * against the special swap migration pte.
2073 */
2074 }
2075
2076 /*
2077 * No need to call mmu_notifier_invalidate_range() it has be
2078 * done above for all cases requiring it to happen under page
2079 * table lock before mmu_notifier_invalidate_range_end()
2080 *
2081 * See Documentation/vm/mmu_notifier.rst
2082 */
2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
2084 if (vma->vm_flags & VM_LOCKED)
2085 mlock_page_drain_local();
2086 folio_put(folio);
2087 }
2088
2089 mmu_notifier_invalidate_range_end(&range);
2090
2091 return ret;
2092 }
2093

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-05-09 09:44:08

by kernel test robot

Subject: Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

Hi Baolin,

I love your patch! Yet something to improve:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on next-20220506]
[cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
config: x86_64-randconfig-a014 (https://download.01.org/0day-ci/archive/20220508/[email protected]/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project a385645b470e2d3a1534aae618ea56b31177639f)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036
git checkout 907981b27213707fdb2f8a24c107d6752a09a773
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

>> mm/rmap.c:1931:13: error: call to undeclared function 'huge_ptep_clear_flush'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
^
mm/rmap.c:1931:13: note: did you mean 'ptep_clear_flush'?
include/linux/pgtable.h:431:14: note: 'ptep_clear_flush' declared here
extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
^
>> mm/rmap.c:1931:11: error: assigning to 'pte_t' from incompatible type 'int'
pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> mm/rmap.c:2023:6: error: call to undeclared function 'set_huge_pte_at'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
set_huge_pte_at(mm, address, pvmw.pte, pteval);
^
mm/rmap.c:2035:6: error: call to undeclared function 'set_huge_pte_at'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
set_huge_pte_at(mm, address, pvmw.pte, pteval);
^
4 errors generated.


vim +/huge_ptep_clear_flush +1931 mm/rmap.c

1883
1884 /* Unexpected PMD-mapped THP? */
1885 VM_BUG_ON_FOLIO(!pvmw.pte, folio);
1886
1887 subpage = folio_page(folio,
1888 pte_pfn(*pvmw.pte) - folio_pfn(folio));
1889 address = pvmw.address;
1890 anon_exclusive = folio_test_anon(folio) &&
1891 PageAnonExclusive(subpage);
1892
1893 if (folio_test_hugetlb(folio)) {
1894 /*
1895 * huge_pmd_unshare may unmap an entire PMD page.
1896 * There is no way of knowing exactly which PMDs may
1897 * be cached for this mm, so we must flush them all.
1898 * start/end were already adjusted above to cover this
1899 * range.
1900 */
1901 flush_cache_range(vma, range.start, range.end);
1902
1903 if (!folio_test_anon(folio)) {
1904 /*
1905 * To call huge_pmd_unshare, i_mmap_rwsem must be
1906 * held in write mode. Caller needs to explicitly
1907 * do this outside rmap routines.
1908 */
1909 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
1910
1911 if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
1912 flush_tlb_range(vma, range.start, range.end);
1913 mmu_notifier_invalidate_range(mm, range.start,
1914 range.end);
1915
1916 /*
1917 * The ref count of the PMD page was dropped
1918 * which is part of the way map counting
1919 * is done for shared PMDs. Return 'true'
1920 * here. When there is no other sharing,
1921 * huge_pmd_unshare returns false and we will
1922 * unmap the actual page and drop map count
1923 * to zero.
1924 */
1925 page_vma_mapped_walk_done(&pvmw);
1926 break;
1927 }
1928 }
1929
1930 /* Nuke the hugetlb page table entry */
> 1931 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
1932 } else {
1933 flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
1934 /* Nuke the page table entry. */
1935 pteval = ptep_clear_flush(vma, address, pvmw.pte);
1936 }
1937
1938 /* Set the dirty flag on the folio now the pte is gone. */
1939 if (pte_dirty(pteval))
1940 folio_mark_dirty(folio);
1941
1942 /* Update high watermark before we lower rss */
1943 update_hiwater_rss(mm);
1944
1945 if (folio_is_zone_device(folio)) {
1946 unsigned long pfn = folio_pfn(folio);
1947 swp_entry_t entry;
1948 pte_t swp_pte;
1949
1950 if (anon_exclusive)
1951 BUG_ON(page_try_share_anon_rmap(subpage));
1952
1953 /*
1954 * Store the pfn of the page in a special migration
1955 * pte. do_swap_page() will wait until the migration
1956 * pte is removed and then restart fault handling.
1957 */
1958 entry = pte_to_swp_entry(pteval);
1959 if (is_writable_device_private_entry(entry))
1960 entry = make_writable_migration_entry(pfn);
1961 else if (anon_exclusive)
1962 entry = make_readable_exclusive_migration_entry(pfn);
1963 else
1964 entry = make_readable_migration_entry(pfn);
1965 swp_pte = swp_entry_to_pte(entry);
1966
1967 /*
1968 * pteval maps a zone device page and is therefore
1969 * a swap pte.
1970 */
1971 if (pte_swp_soft_dirty(pteval))
1972 swp_pte = pte_swp_mksoft_dirty(swp_pte);
1973 if (pte_swp_uffd_wp(pteval))
1974 swp_pte = pte_swp_mkuffd_wp(swp_pte);
1975 set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
1976 trace_set_migration_pte(pvmw.address, pte_val(swp_pte),
1977 compound_order(&folio->page));
1978 /*
1979 * No need to invalidate here it will synchronize on
1980 * against the special swap migration pte.
1981 *
1982 * The assignment to subpage above was computed from a
1983 * swap PTE which results in an invalid pointer.
1984 * Since only PAGE_SIZE pages can currently be
1985 * migrated, just set it to page. This will need to be
1986 * changed when hugepage migrations to device private
1987 * memory are supported.
1988 */
1989 subpage = &folio->page;
1990 } else if (PageHWPoison(subpage)) {
1991 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
1992 if (folio_test_hugetlb(folio)) {
1993 hugetlb_count_sub(folio_nr_pages(folio), mm);
1994 set_huge_swap_pte_at(mm, address,
1995 pvmw.pte, pteval,
1996 vma_mmu_pagesize(vma));
1997 } else {
1998 dec_mm_counter(mm, mm_counter(&folio->page));
1999 set_pte_at(mm, address, pvmw.pte, pteval);
2000 }
2001
2002 } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
2003 /*
2004 * The guest indicated that the page content is of no
2005 * interest anymore. Simply discard the pte, vmscan
2006 * will take care of the rest.
2007 * A future reference will then fault in a new zero
2008 * page. When userfaultfd is active, we must not drop
2009 * this page though, as its main user (postcopy
2010 * migration) will not expect userfaults on already
2011 * copied pages.
2012 */
2013 dec_mm_counter(mm, mm_counter(&folio->page));
2014 /* We have to invalidate as we cleared the pte */
2015 mmu_notifier_invalidate_range(mm, address,
2016 address + PAGE_SIZE);
2017 } else {
2018 swp_entry_t entry;
2019 pte_t swp_pte;
2020
2021 if (arch_unmap_one(mm, vma, address, pteval) < 0) {
2022 if (folio_test_hugetlb(folio))
> 2023 set_huge_pte_at(mm, address, pvmw.pte, pteval);
2024 else
2025 set_pte_at(mm, address, pvmw.pte, pteval);
2026 ret = false;
2027 page_vma_mapped_walk_done(&pvmw);
2028 break;
2029 }
2030 VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) &&
2031 !anon_exclusive, subpage);
2032 if (anon_exclusive &&
2033 page_try_share_anon_rmap(subpage)) {
2034 if (folio_test_hugetlb(folio))
2035 set_huge_pte_at(mm, address, pvmw.pte, pteval);
2036 else
2037 set_pte_at(mm, address, pvmw.pte, pteval);
2038 ret = false;
2039 page_vma_mapped_walk_done(&pvmw);
2040 break;
2041 }
2042
2043 /*
2044 * Store the pfn of the page in a special migration
2045 * pte. do_swap_page() will wait until the migration
2046 * pte is removed and then restart fault handling.
2047 */
2048 if (pte_write(pteval))
2049 entry = make_writable_migration_entry(
2050 page_to_pfn(subpage));
2051 else if (anon_exclusive)
2052 entry = make_readable_exclusive_migration_entry(
2053 page_to_pfn(subpage));
2054 else
2055 entry = make_readable_migration_entry(
2056 page_to_pfn(subpage));
2057
2058 swp_pte = swp_entry_to_pte(entry);
2059 if (pte_soft_dirty(pteval))
2060 swp_pte = pte_swp_mksoft_dirty(swp_pte);
2061 if (pte_uffd_wp(pteval))
2062 swp_pte = pte_swp_mkuffd_wp(swp_pte);
2063 if (folio_test_hugetlb(folio))
2064 set_huge_swap_pte_at(mm, address, pvmw.pte,
2065 swp_pte, vma_mmu_pagesize(vma));
2066 else
2067 set_pte_at(mm, address, pvmw.pte, swp_pte);
2068 trace_set_migration_pte(address, pte_val(swp_pte),
2069 compound_order(&folio->page));
2070 /*
2071 * No need to invalidate here it will synchronize on
2072 * against the special swap migration pte.
2073 */
2074 }
2075
2076 /*
2077 * No need to call mmu_notifier_invalidate_range() it has be
2078 * done above for all cases requiring it to happen under page
2079 * table lock before mmu_notifier_invalidate_range_end()
2080 *
2081 * See Documentation/vm/mmu_notifier.rst
2082 */
2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
2084 if (vma->vm_flags & VM_LOCKED)
2085 mlock_page_drain_local();
2086 folio_put(folio);
2087 }
2088
2089 mmu_notifier_invalidate_range_end(&range);
2090
2091 return ret;
2092 }
2093

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-05-09 11:04:54

by Muchun Song

Subject: Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

On Sun, May 08, 2022 at 05:36:40PM +0800, Baolin Wang wrote:
> Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb, which
> means they can support not only PMD/PUD size hugetlb (2M and 1G), but also
> CONT-PTE/PMD size hugetlb (64K and 32M) when a 4K base page size is used.
>
> When migrating a hugetlb page, we look up the relevant page table entry
> with huge_pte_offset() only once to nuke it and remap it with a migration
> pte entry. This is correct for PMD or PUD size hugetlb, since they always
> contain only one pmd entry or pud entry in the page table.
>
> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb, since
> they can contain several contiguous pte or pmd entries with the same page
> table attributes. So we end up nuking or remapping only one pte or pmd
> entry of a CONT-PTE/PMD size hugetlb page, which is not what hugetlb
> migration expects. The problem is that the subpages' data of a hugetlb
> page can still be modified while the page is being migrated, which can
> cause serious data consistency issues, since we did not nuke the page
> table entries and set migration ptes for all of the hugetlb page's
> subpages.
>
> To fix this issue, switch to huge_ptep_clear_flush() to nuke a hugetlb
> page table entry, and remap it with set_huge_pte_at() and
> set_huge_swap_pte_at() when migrating a hugetlb page; these helpers
> already handle CONT-PTE and CONT-PMD size hugetlb.
>
> Signed-off-by: Baolin Wang <[email protected]>

This looks fine to me.

Reviewed-by: Muchun Song <[email protected]>

Thanks.

2022-05-09 22:33:21

by Mike Kravetz

Subject: Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

On 5/8/22 02:36, Baolin Wang wrote:
> Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb, which
> means they can support not only PMD/PUD size hugetlb (2M and 1G), but also
> CONT-PTE/PMD size hugetlb (64K and 32M) when a 4K base page size is used.
>
> When migrating a hugetlb page, we look up the relevant page table entry
> with huge_pte_offset() only once to nuke it and remap it with a migration
> pte entry. This is correct for PMD or PUD size hugetlb, since they always
> contain only one pmd entry or pud entry in the page table.
>
> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb, since
> they can contain several contiguous pte or pmd entries with the same page
> table attributes. So we end up nuking or remapping only one pte or pmd
> entry of a CONT-PTE/PMD size hugetlb page, which is not what hugetlb
> migration expects. The problem is that the subpages' data of a hugetlb
> page can still be modified while the page is being migrated, which can
> cause serious data consistency issues, since we did not nuke the page
> table entries and set migration ptes for all of the hugetlb page's
> subpages.
>
> To fix this issue, switch to huge_ptep_clear_flush() to nuke a hugetlb
> page table entry, and remap it with set_huge_pte_at() and
> set_huge_swap_pte_at() when migrating a hugetlb page; these helpers
> already handle CONT-PTE and CONT-PMD size hugetlb.
>
> Signed-off-by: Baolin Wang <[email protected]>
> ---
> mm/rmap.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)

With the addition of !CONFIG_HUGETLB_PAGE stubs,

Reviewed-by: Mike Kravetz <[email protected]>
--
Mike Kravetz