2023-09-13 21:32:18

by Zi Yan

Subject: [PATCH v3 0/5] Use nth_page() in place of direct struct page manipulation

From: Zi Yan <[email protected]>

On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
contiguous, since each memory section's memmap might be allocated
independently. hugetlb pages can be larger than a memory section, so
direct struct page manipulation on hugetlb pages/subpages might yield
the wrong struct page. The kernel provides nth_page() to do the
manipulation properly; use it whenever code can see hugetlb pages.
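
For reference, nth_page() is defined in include/linux/mm.h: it resolves
the n-th subpage via its pfn, and falls back to plain pointer arithmetic
only when the memmap is known to be virtually contiguous:

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
#else
#define nth_page(page,n) ((page) + (n))
#endif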

The patches are on top of next-20230913.

Changes:

From v2:
1. Fixed the subject and the commit log of Patch 3 (David Hildenbrand)

From v1:
1. Separated the first patch into three and added Fixes tags for easier backporting.

Zi Yan (5):
  mm/cma: use nth_page() in place of direct struct page manipulation.
  mm/hugetlb: use nth_page() in place of direct struct page
    manipulation.
  mm/memory_hotplug: use pfn math in place of direct struct page
    manipulation.
  fs: use nth_page() in place of direct struct page manipulation.
  mips: use nth_page() in place of direct struct page manipulation.

 arch/mips/mm/cache.c | 2 +-
 fs/hugetlbfs/inode.c | 4 ++--
 mm/cma.c             | 2 +-
 mm/hugetlb.c         | 2 +-
 mm/memory_hotplug.c  | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

--
2.40.1


2023-09-14 01:01:31

by Zi Yan

Subject: [PATCH v3 3/5] mm/memory_hotplug: use pfn math in place of direct struct page manipulation.

From: Zi Yan <[email protected]>

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use pfn
calculation to handle it properly.

Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Cc: <[email protected]>
Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Muchun Song <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
---
mm/memory_hotplug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1b03f4ec6fd2..3b301c4023ff 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 		 */
 		if (HPageMigratable(head))
 			goto found;
-		skip = compound_nr(head) - (page - head);
+		skip = compound_nr(head) - (pfn - page_to_pfn(head));
 		pfn += skip - 1;
 	}
 	return -ENOENT;
--
2.40.1

2023-09-14 01:07:26

by Zi Yan

Subject: [PATCH v3 4/5] fs: use nth_page() in place of direct struct page manipulation.

From: Zi Yan <[email protected]>

When dealing with hugetlb pages, struct page is not guaranteed to be
contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle it
properly.

Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
Cc: <[email protected]>
Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Muchun Song <[email protected]>
---
fs/hugetlbfs/inode.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 7083fa0caaab..14d3d28e41b0 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -295,7 +295,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 	size_t res = 0;
 
 	/* First subpage to start the loop. */
-	page += offset / PAGE_SIZE;
+	page = nth_page(page, offset / PAGE_SIZE);
 	offset %= PAGE_SIZE;
 	while (1) {
 		if (is_raw_hwpoison_page_in_hugepage(page))
@@ -309,7 +309,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
--
2.40.1

2023-09-14 01:15:50

by Zi Yan

Subject: [PATCH v3 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.

From: Zi Yan <[email protected]>

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
nth_page() to handle it properly.

Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Cc: <[email protected]>
Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Muchun Song <[email protected]>
---
mm/cma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index da2967c6a223..2b2494fd6b59 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	 */
 	if (page) {
 		for (i = 0; i < count; i++)
-			page_kasan_tag_reset(page + i);
+			page_kasan_tag_reset(nth_page(page, i));
 	}
 
 	if (ret && !no_warn) {
--
2.40.1

2023-09-14 02:38:39

by Zi Yan

Subject: Re: [PATCH v3 3/5] mm/memory_hotplug: use pfn math in place of direct struct page manipulation.

On 13 Sep 2023, at 16:12, Zi Yan wrote:

> From: Zi Yan <[email protected]>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use pfn
> calculation to handle it properly.
>
> Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
> Cc: <[email protected]>
> Signed-off-by: Zi Yan <[email protected]>
> Reviewed-by: Muchun Song <[email protected]>
> Acked-by: David Hildenbrand <[email protected]>
> ---
> mm/memory_hotplug.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 1b03f4ec6fd2..3b301c4023ff 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>  		 */
>  		if (HPageMigratable(head))
>  			goto found;
> -		skip = compound_nr(head) - (page - head);
> +		skip = compound_nr(head) - (pfn - page_to_pfn(head));
>  		pfn += skip - 1;
>  	}
>  	return -ENOENT;
> --
> 2.40.1

Without the fix, a wrong number of pages might be skipped. Since skip
cannot be negative, scan_movable_pages() will end early and might miss a
movable page, returning -ENOENT, which can in turn make offline_pages()
fail. No bug has been reported; the fix comes from code inspection.
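
As a sketch of the failure mode (the layout and the head_pfn/pfn/skip
locals below are made up for illustration): on SPARSEMEM without
VMEMMAP, the memmaps of two sections can live at unrelated virtual
addresses, so a pointer difference across them is meaningless while a
pfn difference is not:

	/* Hypothetical gigantic page: head near the end of section N,
	 * tail pages in section N+1, whose memmap was allocated
	 * independently. */
	struct page *head = pfn_to_page(head_pfn);	/* section N's memmap   */
	struct page *page = pfn_to_page(pfn);		/* section N+1's memmap */

	/* Pointer difference spans two unrelated allocations: garbage. */
	skip = compound_nr(head) - (page - head);

	/* pfn difference is always the true subpage offset. */
	skip = compound_nr(head) - (pfn - page_to_pfn(head));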

--
Best Regards,
Yan, Zi



2023-09-14 05:49:46

by Andrew Morton

Subject: Re: [PATCH v3 0/5] Use nth_page() in place of direct struct page manipulation

On Wed, 13 Sep 2023 16:12:43 -0400 Zi Yan <[email protected]> wrote:

> On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
> contiguous, since each memory section's memmap might be allocated
> independently. hugetlb pages can be larger than a memory section, so
> direct struct page manipulation on hugetlb pages/subpages might yield
> the wrong struct page. The kernel provides nth_page() to do the
> manipulation properly; use it whenever code can see hugetlb pages.

for (each patch) {

Can we please explain why -stable backporting is recommended?

Such an explanation will, as always, include a description of the
user-visible effects of the bug.

Some of the Fixes: targets are very old: 5 years. Has something
changed to bring these flaws to light? Or is it from code inspection?
}

Thanks.

2023-09-14 05:59:50

by Zi Yan

Subject: [PATCH v3 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation.

From: Zi Yan <[email protected]>

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
nth_page() to handle it properly.

Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
Cc: <[email protected]>
Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Muchun Song <[email protected]>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index af74e83d92aa..8e68e6c53e66 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6469,7 +6469,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		}
 	}
 
-	page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+	page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
 
 	/*
 	 * Note that page may be a sub-page, and with vmemmap
--
2.40.1

2023-09-14 08:58:28

by Zi Yan

Subject: Re: [PATCH v3 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.

On 13 Sep 2023, at 16:12, Zi Yan wrote:

> From: Zi Yan <[email protected]>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
> nth_page() to handle it properly.
>
> Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
> Cc: <[email protected]>
> Signed-off-by: Zi Yan <[email protected]>
> Reviewed-by: Muchun Song <[email protected]>
> ---
> mm/cma.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index da2967c6a223..2b2494fd6b59 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  	 */
>  	if (page) {
>  		for (i = 0; i < count; i++)
> -			page_kasan_tag_reset(page + i);
> +			page_kasan_tag_reset(nth_page(page, i));
>  	}
>  
>  	if (ret && !no_warn) {
> --
> 2.40.1

Without the fix, page_kasan_tag_reset() could reset the tags of the
wrong pages, leading to incorrect KASAN results. No related bug has
been reported; the fix comes from code inspection.


--
Best Regards,
Yan, Zi



2023-09-14 15:17:37

by Zi Yan

Subject: Re: [PATCH v3 4/5] fs: use nth_page() in place of direct struct page manipulation.

On 13 Sep 2023, at 16:12, Zi Yan wrote:

> From: Zi Yan <[email protected]>
>
> When dealing with hugetlb pages, struct page is not guaranteed to be
> contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle it
> properly.
>
> Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
> Cc: <[email protected]>
> Signed-off-by: Zi Yan <[email protected]>
> Reviewed-by: Muchun Song <[email protected]>
> ---
> fs/hugetlbfs/inode.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 7083fa0caaab..14d3d28e41b0 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -295,7 +295,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
>  	size_t res = 0;
>  
>  	/* First subpage to start the loop. */
> -	page += offset / PAGE_SIZE;
> +	page = nth_page(page, offset / PAGE_SIZE);
>  	offset %= PAGE_SIZE;
>  	while (1) {
>  		if (is_raw_hwpoison_page_in_hugepage(page))
> @@ -309,7 +309,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
>  			break;
>  		offset += n;
>  		if (offset == PAGE_SIZE) {
> -			page++;
> +			page = nth_page(page, 1);
>  			offset = 0;
>  		}
>  	}
> --
> 2.40.1

Without the fix, the wrong subpage might be checked for HWPoison,
causing the wrong number of bytes of a page to be copied to user space.
No bug has been reported; the fix comes from code inspection.

--
Best Regards,
Yan, Zi



2023-09-14 22:33:05

by Zi Yan

Subject: Re: [PATCH v3 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation.

On 13 Sep 2023, at 16:12, Zi Yan wrote:

> From: Zi Yan <[email protected]>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
> nth_page() to handle it properly.
>
> Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
> Cc: <[email protected]>
> Signed-off-by: Zi Yan <[email protected]>
> Reviewed-by: Muchun Song <[email protected]>
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index af74e83d92aa..8e68e6c53e66 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6469,7 +6469,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
>  		}
>  	}
>  
> -	page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
> +	page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
>  
>  	/*
>  	 * Note that page may be a sub-page, and with vmemmap
> --
> 2.40.1

Without the fix, the kernel might try to grab a wrong or non-existent
page, leading either to a page that can never be freed or to kernel
memory access errors. No bug has been reported; the fix comes from code
inspection.
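
For intuition, the index that the fixed line feeds into nth_page() is
just the subpage offset within the huge page; a worked example with
assumed sizes (2MB hugetlb page, 4KB base pages, neither taken from the
patch):

	/* huge_page_mask(h) == ~0x1fffffUL for a 2MB huge page, so:
	 *
	 *   address                        = 0x7f0000123456
	 *   address & ~huge_page_mask(h)   = 0x123456  (byte offset in huge page)
	 *   ... >> PAGE_SHIFT              = 0x123     (subpage index, 0..511)
	 *
	 * nth_page(page, 0x123) then resolves that subpage through its
	 * pfn instead of assuming the 0x123 struct pages after "page"
	 * are contiguous. */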


--
Best Regards,
Yan, Zi



2023-09-15 17:13:22

by Andrew Morton

Subject: Re: [PATCH v3 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.

On Wed, 13 Sep 2023 22:16:45 -0400 Zi Yan <[email protected]> wrote:

> No related bug has been reported; the fix comes from code
> inspection.

OK, thanks. Given that none of these appear urgent, I'll move the
series from the 6.6-rcX queue (mm-hotfixes-unstable) into the 6.7-rc1
queue (mm-unstable), while retaining the cc:stable, to give them more
testing time before landing in mainline and -stable kernels.