There are two code paths which invoke __populate_section_memmap():
* sparse_init_nid()
* sparse_add_section()
For both cases, we are sure the memory range is sub-section aligned:
* we pass PAGES_PER_SECTION to sparse_init_nid()
* we check the range with check_pfn_span() before calling
  sparse_add_section()
Also, in the counterpart of __populate_section_memmap(), we don't do
such a calculation and check, since the range is already checked by
check_pfn_span() in __remove_pages().
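For reference, check_pfn_span() already rejects misaligned ranges before
sparse_add_section() is reached. A simplified sketch of that check (not
the exact mm/sparse.c code; the helper name below is illustrative and the
usual mm definitions are assumed) looks roughly like this:

    static int check_pfn_span_sketch(unsigned long pfn, unsigned long nr_pages)
    {
            /*
             * With CONFIG_SPARSEMEM_VMEMMAP the minimum granularity of a
             * hot-added range is a sub-section, otherwise a full section.
             */
            unsigned long min_align = IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ?
                                      PAGES_PER_SUBSECTION : PAGES_PER_SECTION;

            /* Both the start pfn and the number of pages must be aligned. */
            if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align))
                    return -EINVAL;

            return 0;
    }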
Remove the calculation and check to keep it simple and consistent with
its counterpart.
Signed-off-by: Wei Yang <[email protected]>
---
v2:
* add a warn on once for unaligned range, suggested by David
---
mm/sparse-vmemmap.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0db7738d76e9..8d3a1b6287c5 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
struct page * __meminit __populate_section_memmap(unsigned long pfn,
unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
- unsigned long start;
- unsigned long end;
-
- /*
- * The minimum granularity of memmap extensions is
- * PAGES_PER_SUBSECTION as allocations are tracked in the
- * 'subsection_map' bitmap of the section.
- */
- end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
- pfn &= PAGE_SUBSECTION_MASK;
- nr_pages = end - pfn;
-
- start = (unsigned long) pfn_to_page(pfn);
- end = start + nr_pages * sizeof(struct page);
+ unsigned long start = (unsigned long) pfn_to_page(pfn);
+ unsigned long end = start + nr_pages * sizeof(struct page);
+
+ if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
+ !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
+ return NULL;
if (vmemmap_populate(start, end, nid, altmap))
return NULL;
--
2.20.1 (Apple Git-117)
On 03.07.20 05:18, Wei Yang wrote:
> There are two code paths which invoke __populate_section_memmap():
>
> * sparse_init_nid()
> * sparse_add_section()
>
> For both cases, we are sure the memory range is sub-section aligned:
>
> * we pass PAGES_PER_SECTION to sparse_init_nid()
> * we check the range with check_pfn_span() before calling
>   sparse_add_section()
>
> Also, in the counterpart of __populate_section_memmap(), we don't do
> such a calculation and check, since the range is already checked by
> check_pfn_span() in __remove_pages().
>
> Remove the calculation and check to keep it simple and consistent
> with its counterpart.
>
> Signed-off-by: Wei Yang <[email protected]>
>
> ---
> v2:
> * add a warn on once for unaligned range, suggested by David
> ---
> mm/sparse-vmemmap.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 0db7738d76e9..8d3a1b6287c5 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
> struct page * __meminit __populate_section_memmap(unsigned long pfn,
> unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> {
> - unsigned long start;
> - unsigned long end;
> -
> - /*
> - * The minimum granularity of memmap extensions is
> - * PAGES_PER_SUBSECTION as allocations are tracked in the
> - * 'subsection_map' bitmap of the section.
> - */
> - end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
> - pfn &= PAGE_SUBSECTION_MASK;
> - nr_pages = end - pfn;
> -
> - start = (unsigned long) pfn_to_page(pfn);
> - end = start + nr_pages * sizeof(struct page);
> + unsigned long start = (unsigned long) pfn_to_page(pfn);
> + unsigned long end = start + nr_pages * sizeof(struct page);
> +
> + if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
> + !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
> + return NULL;
Nit: indentation of both IS_ALIGNED should match.
Acked-by: David Hildenbrand <[email protected]>
>
> if (vmemmap_populate(start, end, nid, altmap))
> return NULL;
>
--
Thanks,
David / dhildenb
On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>There are two code paths which invoke __populate_section_memmap():
>
> * sparse_init_nid()
> * sparse_add_section()
>
>For both cases, we are sure the memory range is sub-section aligned:
>
> * we pass PAGES_PER_SECTION to sparse_init_nid()
> * we check the range with check_pfn_span() before calling
>   sparse_add_section()
>
>Also, in the counterpart of __populate_section_memmap(), we don't do
>such a calculation and check, since the range is already checked by
>check_pfn_span() in __remove_pages().
>
>Remove the calculation and check to keep it simple and consistent
>with its counterpart.
>
>Signed-off-by: Wei Yang <[email protected]>
>
Hi, Andrew,
Has this one been picked up?
>---
>v2:
> * add a warn on once for unaligned range, suggested by David
>---
> mm/sparse-vmemmap.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
>diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>index 0db7738d76e9..8d3a1b6287c5 100644
>--- a/mm/sparse-vmemmap.c
>+++ b/mm/sparse-vmemmap.c
>@@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
> struct page * __meminit __populate_section_memmap(unsigned long pfn,
> unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> {
>- unsigned long start;
>- unsigned long end;
>-
>- /*
>- * The minimum granularity of memmap extensions is
>- * PAGES_PER_SUBSECTION as allocations are tracked in the
>- * 'subsection_map' bitmap of the section.
>- */
>- end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
>- pfn &= PAGE_SUBSECTION_MASK;
>- nr_pages = end - pfn;
>-
>- start = (unsigned long) pfn_to_page(pfn);
>- end = start + nr_pages * sizeof(struct page);
>+ unsigned long start = (unsigned long) pfn_to_page(pfn);
>+ unsigned long end = start + nr_pages * sizeof(struct page);
>+
>+ if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>+ !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>+ return NULL;
>
> if (vmemmap_populate(start, end, nid, altmap))
> return NULL;
>--
>2.20.1 (Apple Git-117)
--
Wei Yang
Help you, Help me
On 05.08.20 23:49, Wei Yang wrote:
> On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>> There are two code paths which invoke __populate_section_memmap():
>>
>> * sparse_init_nid()
>> * sparse_add_section()
>>
>> For both cases, we are sure the memory range is sub-section aligned:
>>
>> * we pass PAGES_PER_SECTION to sparse_init_nid()
>> * we check the range with check_pfn_span() before calling
>>   sparse_add_section()
>>
>> Also, in the counterpart of __populate_section_memmap(), we don't do
>> such a calculation and check, since the range is already checked by
>> check_pfn_span() in __remove_pages().
>>
>> Remove the calculation and check to keep it simple and consistent
>> with its counterpart.
>>
>> Signed-off-by: Wei Yang <[email protected]>
>>
>
> Hi, Andrew,
>
> Has this one been picked up?
I can spot it in -next via the -mm tree:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=68ad9becb23be14622e39ed36e5b0621a90a41d9
--
Thanks,
David / dhildenb
On Thu, Aug 06, 2020 at 09:29:36AM +0200, David Hildenbrand wrote:
>On 05.08.20 23:49, Wei Yang wrote:
>> On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>>> There are two code paths which invoke __populate_section_memmap():
>>>
>>> * sparse_init_nid()
>>> * sparse_add_section()
>>>
>>> For both cases, we are sure the memory range is sub-section aligned:
>>>
>>> * we pass PAGES_PER_SECTION to sparse_init_nid()
>>> * we check the range with check_pfn_span() before calling
>>>   sparse_add_section()
>>>
>>> Also, in the counterpart of __populate_section_memmap(), we don't do
>>> such a calculation and check, since the range is already checked by
>>> check_pfn_span() in __remove_pages().
>>>
>>> Remove the calculation and check to keep it simple and consistent
>>> with its counterpart.
>>>
>>> Signed-off-by: Wei Yang <[email protected]>
>>>
>>
>> Hi, Andrew,
>>
>> Has this one been picked up?
>
>I can spot it in -next via the -mm tree:
>
>https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=68ad9becb23be14622e39ed36e5b0621a90a41d9
>
Thanks ;-)
Next time I will refer to this repo first.
>
>--
>Thanks,
>
>David / dhildenb
--
Wei Yang
Help you, Help me