2019-09-10 10:40:59

by Alastair D'Silva

Subject: [PATCH 2/2] mm: Add a bounds check in devm_memremap_pages()

From: Alastair D'Silva <[email protected]>

The call to check_hotplug_memory_addressable() validates that the memory
is fully addressable.

Without this call, it is possible to remap pages that are not
physically addressable, resulting in bogus section numbers
being returned from __section_nr().

Signed-off-by: Alastair D'Silva <[email protected]>
---
mm/memremap.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/mm/memremap.c b/mm/memremap.c
index 86432650f829..fd00993caa3e 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)

mem_hotplug_begin();

+	error = check_hotplug_memory_addressable(res->start,
+			resource_size(res));
+	if (error) {
+		mem_hotplug_done();
+		goto err_checkrange;
+	}
+
/*
* For device private memory we call add_pages() as we only need to
* allocate and initialize struct page for the device memory. More-
@@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)

err_add_memory:
kasan_remove_zero_shadow(__va(res->start), resource_size(res));
+ err_checkrange:
err_kasan:
untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
err_pfn_remap:
--
2.21.0
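
check_hotplug_memory_addressable() itself is added by patch 1/2 of this
series and is not quoted in this thread; from the call site above it takes a
physical start address and a size in bytes, and per the changelog its job is
to reject ranges that are not fully addressable, i.e. ranges reaching past
(1 << MAX_PHYSMEM_BITS) - 1 on a SPARSEMEM kernel. A pfn beyond that limit
maps to a section number at or beyond NR_MEM_SECTIONS, which is the "bogus
section numbers" the changelog mentions. The standalone, user-space sketch
below illustrates both points; the constants mirror a typical x86-64 layout
and the helper's body is an assumption inferred from the call site, not the
real patch-1/2 implementation.

/*
 * Standalone user-space sketch, not kernel code.  The constants mirror a
 * typical x86-64 SPARSEMEM layout (4-level paging) and are assumptions for
 * the example; the helper's signature is inferred from the call site
 * check_hotplug_memory_addressable(res->start, resource_size(res)) and its
 * body may differ from the real patch-1/2 implementation.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PHYSMEM_BITS	46	/* 2^46 bytes of physical address space */
#define SECTION_SIZE_BITS	27	/* 128 MiB sections */
#define PAGE_SHIFT		12	/* 4 KiB pages */

#define PFN_SECTION_SHIFT	(SECTION_SIZE_BITS - PAGE_SHIFT)
#define SECTIONS_SHIFT		(MAX_PHYSMEM_BITS - SECTION_SIZE_BITS)
#define NR_MEM_SECTIONS		(1UL << SECTIONS_SHIFT)

/* Same arithmetic as the kernel's pfn_to_section_nr(). */
static unsigned long pfn_to_section_nr(uint64_t pfn)
{
	return pfn >> PFN_SECTION_SHIFT;
}

/* Assumed shape of the check: reject ranges whose last byte is above the limit. */
static int check_hotplug_memory_addressable(uint64_t start, uint64_t size)
{
	const uint64_t max_addr = (1ULL << MAX_PHYSMEM_BITS) - 1;

	if (start + size - 1 > max_addr) {
		fprintf(stderr, "range %#jx-%#jx exceeds maximum %#jx\n",
			(uintmax_t)start, (uintmax_t)(start + size - 1),
			(uintmax_t)max_addr);
		return -E2BIG;
	}
	return 0;
}

int main(void)
{
	uint64_t bad_start = 1ULL << MAX_PHYSMEM_BITS;	/* just past the limit */
	uint64_t size = 1ULL << 30;			/* 1 GiB */

	/*
	 * Without the check, the first pfn of this range yields a section
	 * number equal to NR_MEM_SECTIONS, i.e. one past the last valid
	 * section -- the "bogus section numbers" the changelog refers to.
	 */
	printf("section %lu, valid sections 0..%lu\n",
	       pfn_to_section_nr(bad_start >> PAGE_SHIFT), NR_MEM_SECTIONS - 1);

	/* With the check, the range is rejected up front. */
	printf("check_hotplug_memory_addressable() -> %d\n",
	       check_hotplug_memory_addressable(bad_start, size));
	return 0;
}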


2019-09-10 12:33:14

by Alastair D'Silva

Subject: RE: [PATCH 2/2] mm: Add a bounds check in devm_memremap_pages()

> -----Original Message-----
> From: David Hildenbrand <[email protected]>
> Sent: Tuesday, 10 September 2019 5:39 PM
> To: Alastair D'Silva <[email protected]>; [email protected]
> Cc: Andrew Morton <[email protected]>; Oscar Salvador
> <[email protected]>; Michal Hocko <[email protected]>; Pavel Tatashin
> <[email protected]>; Dan Williams <[email protected]>;
> Wei Yang <[email protected]>; Qian Cai <[email protected]>; Jason
> Gunthorpe <[email protected]>; Logan Gunthorpe <[email protected]>; Ira
> Weiny <[email protected]>; [email protected]; linux-
> [email protected]
> Subject: Re: [PATCH 2/2] mm: Add a bounds check in
> devm_memremap_pages()
>
> On 10.09.19 04:52, Alastair D'Silva wrote:
> > From: Alastair D'Silva <[email protected]>
> >
> > The call to check_hotplug_memory_addressable() validates that the
> > memory is fully addressable.
> >
> > Without this call, it is possible to remap pages that are not
> > physically addressable, resulting in bogus section numbers being
> > returned from __section_nr().
> >
> > Signed-off-by: Alastair D'Silva <[email protected]>
> > ---
> > mm/memremap.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/mm/memremap.c b/mm/memremap.c index
> > 86432650f829..fd00993caa3e 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device
> *dev,
> > struct dev_pagemap *pgmap)
> >
> > mem_hotplug_begin();
> >
> > + error = check_hotplug_memory_addressable(res->start,
> > + resource_size(res));
> > + if (error) {
> > + mem_hotplug_done();
> > + goto err_checkrange;
> > + }
> > +
>
> No need to check under the memory hotplug lock.
>

Thanks, I'll adjust it.

> > /*
> > * For device private memory we call add_pages() as we only need to
> > * allocate and initialize struct page for the device memory. More-
> > @@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev,
> > struct dev_pagemap *pgmap)
> >
> > err_add_memory:
> > kasan_remove_zero_shadow(__va(res->start), resource_size(res));
> > + err_checkrange:
> > err_kasan:
> > untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
> > err_pfn_remap:
> >
>
>
> --
>
> Thanks,
>
> David / dhildenb
>

--
Alastair D'Silva mob: 0423 762 819
skype: alastair_dsilva msn: [email protected]
blog: http://alastair.d-silva.org Twitter: @EvilDeece
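
For reference, one way the adjustment could look (a sketch only, not the
actual follow-up revision): if the call moves to the top of
devm_memremap_pages(), ahead of mem_hotplug_begin() and of any setup that
would need unwinding, the failure path can return directly, so neither the
mem_hotplug_done() call nor the extra err_checkrange label is needed. The
excerpt assumes `error` and `res` are already in scope, as in the function's
declaration block, and returns ERR_PTR(error) like the function's other
early error paths.

	/*
	 * Sketch only: range check performed before mem_hotplug_begin()
	 * and before any state that would need unwinding, so a failure
	 * can bail out directly.
	 */
	error = check_hotplug_memory_addressable(res->start,
			resource_size(res));
	if (error)
		return ERR_PTR(error);

	mem_hotplug_begin();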

2019-09-10 18:50:05

by David Hildenbrand

Subject: Re: [PATCH 2/2] mm: Add a bounds check in devm_memremap_pages()

On 10.09.19 04:52, Alastair D'Silva wrote:
> From: Alastair D'Silva <[email protected]>
>
> The call to check_hotplug_memory_addressable() validates that the memory
> is fully addressable.
>
> Without this call, it is possible to remap pages that are not
> physically addressable, resulting in bogus section numbers
> being returned from __section_nr().
>
> Signed-off-by: Alastair D'Silva <[email protected]>
> ---
> mm/memremap.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 86432650f829..fd00993caa3e 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
>
> mem_hotplug_begin();
>
> + error = check_hotplug_memory_addressable(res->start,
> + resource_size(res));
> + if (error) {
> + mem_hotplug_done();
> + goto err_checkrange;
> + }
> +

No need to check under the memory hotplug lock.

> /*
> * For device private memory we call add_pages() as we only need to
> * allocate and initialize struct page for the device memory. More-
> @@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
>
> err_add_memory:
> kasan_remove_zero_shadow(__va(res->start), resource_size(res));
> + err_checkrange:
> err_kasan:
> untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
> err_pfn_remap:
>


--

Thanks,

David / dhildenb