On 26.09.19 03:34, Alastair D'Silva wrote:
> From: Alastair D'Silva <[email protected]>
>
> On PowerPC, the address ranges allocated to OpenCAPI LPC memory
> are allocated from firmware. These address ranges may be higher
> than what older kernels permit, as we increased the maximum
> permissible address in commit 4ffe713b7587
> ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
> possible that the addressable range may change again in the
> future.
>
> In this scenario, we end up with a bogus section returned from
> __section_nr (see the discussion on the thread "mm: Trigger bug on
> if a section is not found in __section_nr").
>
> Adding a check here means that we fail early and have an
> opportunity to handle the error gracefully, rather than rumbling
> on and potentially accessing an incorrect section.
>
> Further discussion is also on the thread ("powerpc: Perform a bounds
> check in arch_add_memory")
> http://lkml.kernel.org/r/[email protected]
>
> Signed-off-by: Alastair D'Silva <[email protected]>
> ---
> mm/memory_hotplug.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c73f09913165..212804c0f7f5 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
> return 0;
> }
>
> +static int check_hotplug_memory_addressable(unsigned long pfn,
> + unsigned long nr_pages)
> +{
> + unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
> +
> + if (max_addr >> MAX_PHYSMEM_BITS) {
> + WARN(1,
> + "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
> + pfn << PAGE_SHIFT, max_addr,
> + (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
> + return -E2BIG;
> + }
> +
> + return 0;
> +}
> +
> /*
> * Reasonably generic function for adding memory. It is
> * expected that archs that support memory hotplug will
> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
> unsigned long nr, start_sec, end_sec;
> struct vmem_altmap *altmap = restrictions->altmap;
>
> + err = check_hotplug_memory_addressable(pfn, nr_pages);
> + if (err)
> + return err;
> +
> if (altmap) {
> /*
> * Validate altmap is within bounds of the total request
>
I know Michal suggested this, but I still prefer checking early instead
of when we're knees-deep into adding of memory. But as I don't have any
power here, the code looks fine, although I consider the computations in
check_hotplug_memory_addressable() fairly ugly.
--
Thanks,
David / dhildenb
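
For reference, a minimal standalone sketch of the arithmetic under discussion. It is userspace-only illustration, not kernel code; PAGE_SHIFT and MAX_PHYSMEM_BITS are assumed example values rather than any particular kernel config, and the printed limit is simply the highest address representable in MAX_PHYSMEM_BITS bits:

/* Standalone illustration only -- not kernel code. */
#include <stdio.h>

#define PAGE_SHIFT       12   /* assumed example value */
#define MAX_PHYSMEM_BITS 51   /* assumed example value */

static int check_addressable(unsigned long pfn, unsigned long nr_pages)
{
	/* Address of the last byte covered by [pfn, pfn + nr_pages). */
	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;

	if (max_addr >> MAX_PHYSMEM_BITS) {
		fprintf(stderr,
			"range %#lx-%#lx exceeds maximum addressable %#lx\n",
			pfn << PAGE_SHIFT, max_addr,
			(1ul << MAX_PHYSMEM_BITS) - 1);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* One page starting exactly at the 2^MAX_PHYSMEM_BITS boundary. */
	unsigned long bad_pfn = 1ul << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

	check_addressable(bad_pfn, 1);   /* rejected, warning printed */
	return check_addressable(0, 1);  /* accepted, returns 0 */
}
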
On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
> On 26.09.19 03:34, Alastair D'Silva wrote:
> > From: Alastair D'Silva <[email protected]>
> >
> > On PowerPC, the address ranges allocated to OpenCAPI LPC memory
> > are allocated from firmware. These address ranges may be higher
> > than what older kernels permit, as we increased the maximum
> > permissible address in commit 4ffe713b7587
> > ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
> > possible that the addressable range may change again in the
> > future.
> >
> > In this scenario, we end up with a bogus section returned from
> > __section_nr (see the discussion on the thread "mm: Trigger bug on
> > if a section is not found in __section_nr").
> >
> > Adding a check here means that we fail early and have an
> > opportunity to handle the error gracefully, rather than rumbling
> > on and potentially accessing an incorrect section.
> >
> > Further discussion is also on the thread ("powerpc: Perform a bounds
> > check in arch_add_memory")
> > http://lkml.kernel.org/r/[email protected]
> >
> > Signed-off-by: Alastair D'Silva <[email protected]>
> > ---
> > mm/memory_hotplug.c | 20 ++++++++++++++++++++
> > 1 file changed, 20 insertions(+)
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index c73f09913165..212804c0f7f5 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
> > return 0;
> > }
> >
> > +static int check_hotplug_memory_addressable(unsigned long pfn,
> > + unsigned long nr_pages)
> > +{
> > + unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
> > +
> > + if (max_addr >> MAX_PHYSMEM_BITS) {
> > + WARN(1,
> > + "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
> > + pfn << PAGE_SHIFT, max_addr,
> > + (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
> > + return -E2BIG;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > /*
> > * Reasonably generic function for adding memory. It is
> > * expected that archs that support memory hotplug will
> > @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
> > unsigned long nr, start_sec, end_sec;
> > struct vmem_altmap *altmap = restrictions->altmap;
> >
> > + err = check_hotplug_memory_addressable(pfn, nr_pages);
> > + if (err)
> > + return err;
> > +
> > if (altmap) {
> > /*
> > * Validate altmap is within bounds of the total request
> >
>
>
> I know Michal suggested this, but I still prefer checking early instead
> of when we're knees-deep into adding of memory.
What is your concern here? Unwinding the state should be pretty
straightforward from this failure path.
--
Michal Hocko
SUSE Labs
On 26.09.19 09:43, Michal Hocko wrote:
> On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
>> On 26.09.19 03:34, Alastair D'Silva wrote:
>>> From: Alastair D'Silva <[email protected]>
>>>
>>> On PowerPC, the address ranges allocated to OpenCAPI LPC memory
>>> are allocated from firmware. These address ranges may be higher
>>> than what older kernels permit, as we increased the maximum
>>> permissible address in commit 4ffe713b7587
>>> ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
>>> possible that the addressable range may change again in the
>>> future.
>>>
>>> In this scenario, we end up with a bogus section returned from
>>> __section_nr (see the discussion on the thread "mm: Trigger bug on
>>> if a section is not found in __section_nr").
>>>
>>> Adding a check here means that we fail early and have an
>>> opportunity to handle the error gracefully, rather than rumbling
>>> on and potentially accessing an incorrect section.
>>>
>>> Further discussion is also on the thread ("powerpc: Perform a bounds
>>> check in arch_add_memory")
>>> http://lkml.kernel.org/r/[email protected]
>>>
>>> Signed-off-by: Alastair D'Silva <[email protected]>
>>> ---
>>> mm/memory_hotplug.c | 20 ++++++++++++++++++++
>>> 1 file changed, 20 insertions(+)
>>>
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index c73f09913165..212804c0f7f5 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
>>> return 0;
>>> }
>>>
>>> +static int check_hotplug_memory_addressable(unsigned long pfn,
>>> + unsigned long nr_pages)
>>> +{
>>> + unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
>>> +
>>> + if (max_addr >> MAX_PHYSMEM_BITS) {
>>> + WARN(1,
>>> + "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
>>> + pfn << PAGE_SHIFT, max_addr,
>>> + (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
>>> + return -E2BIG;
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +
>>> /*
>>> * Reasonably generic function for adding memory. It is
>>> * expected that archs that support memory hotplug will
>>> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>>> unsigned long nr, start_sec, end_sec;
>>> struct vmem_altmap *altmap = restrictions->altmap;
>>>
>>> + err = check_hotplug_memory_addressable(pfn, nr_pages);
>>> + if (err)
>>> + return err;
>>> +
>>> if (altmap) {
>>> /*
>>> * Validate altmap is within bounds of the total request
>>>
>>
>>
>> I know Michal suggested this, but I still prefer checking early instead
>> of when we're knees-deep into adding of memory.
>
> What is your concern here? Unwinding the state should be pretty
> straightforward from this failure path.
Just the general "check what you can check early without locks"
approach. But yeah, this series is probably not worth a v5, so I can
live with this change just fine :)
--
Thanks,
David / dhildenb
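
As an illustration of that "check early, without locks" preference: the same addressability test could be run at a hotplug entry point before any hotplug state is touched. The sketch below is purely hypothetical placement; the lock and helper names are placeholders, not actual kernel interfaces.

/* Hypothetical sketch; lock/helper names below are placeholders. */
static int hypothetical_add_memory(int nid, u64 start, u64 size)
{
	int rc;

	/* Reject unaddressable ranges before taking any hotplug locks. */
	rc = check_hotplug_memory_addressable(PHYS_PFN(start),
					      PHYS_PFN(size));
	if (rc)
		return rc;

	take_hotplug_locks();                            /* placeholder */
	rc = do_the_actual_memory_add(nid, start, size); /* placeholder */
	release_hotplug_locks();                         /* placeholder */
	return rc;
}
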
On 26.09.19 09:12, David Hildenbrand wrote:
> On 26.09.19 03:34, Alastair D'Silva wrote:
>> From: Alastair D'Silva <[email protected]>
>>
>> On PowerPC, the address ranges allocated to OpenCAPI LPC memory
>> are allocated from firmware. These address ranges may be higher
>> than what older kernels permit, as we increased the maximum
>> permissible address in commit 4ffe713b7587
>> ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
>> possible that the addressable range may change again in the
>> future.
>>
>> In this scenario, we end up with a bogus section returned from
>> __section_nr (see the discussion on the thread "mm: Trigger bug on
>> if a section is not found in __section_nr").
>>
>> Adding a check here means that we fail early and have an
>> opportunity to handle the error gracefully, rather than rumbling
>> on and potentially accessing an incorrect section.
>>
>> Further discussion is also on the thread ("powerpc: Perform a bounds
>> check in arch_add_memory")
>> http://lkml.kernel.org/r/[email protected]
>>
>> Signed-off-by: Alastair D'Silva <[email protected]>
>> ---
>> mm/memory_hotplug.c | 20 ++++++++++++++++++++
>> 1 file changed, 20 insertions(+)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index c73f09913165..212804c0f7f5 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
>> return 0;
>> }
>>
>> +static int check_hotplug_memory_addressable(unsigned long pfn,
>> + unsigned long nr_pages)
>> +{
>> + unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
>> +
>> + if (max_addr >> MAX_PHYSMEM_BITS) {
>> + WARN(1,
>> + "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
>> + pfn << PAGE_SHIFT, max_addr,
>> + (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
>> + return -E2BIG;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> /*
>> * Reasonably generic function for adding memory. It is
>> * expected that archs that support memory hotplug will
>> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>> unsigned long nr, start_sec, end_sec;
>> struct vmem_altmap *altmap = restrictions->altmap;
>>
>> + err = check_hotplug_memory_addressable(pfn, nr_pages);
>> + if (err)
>> + return err;
>> +
>> if (altmap) {
>> /*
>> * Validate altmap is within bounds of the total request
>>
>
>
> I know Michal suggested this, but I still prefer checking early instead
> of when we're knees-deep into adding of memory. But as I don't have any
> power here, the code looks fine, although I consider the computations in
> check_hotplug_memory_addressable() fairly ugly.
>
Forgot to add
Acked-by: David Hildenbrand <[email protected]>
:)
--
Thanks,
David / dhildenb
On Thu, 2019-09-26 at 09:46 +0200, David Hildenbrand wrote:
> On 26.09.19 09:43, Michal Hocko wrote:
> > On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
> > > On 26.09.19 03:34, Alastair D'Silva wrote:
> > > > From: Alastair D'Silva <[email protected]>
> > > >
> > > > On PowerPC, the address ranges allocated to OpenCAPI LPC memory
> > > > are allocated from firmware. These address ranges may be higher
> > > > than what older kernels permit, as we increased the maximum
> > > > permissible address in commit 4ffe713b7587
> > > > ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
> > > > possible that the addressable range may change again in the
> > > > future.
> > > >
> > > > In this scenario, we end up with a bogus section returned from
> > > > __section_nr (see the discussion on the thread "mm: Trigger bug on
> > > > if a section is not found in __section_nr").
> > > >
> > > > Adding a check here means that we fail early and have an
> > > > opportunity to handle the error gracefully, rather than rumbling
> > > > on and potentially accessing an incorrect section.
> > > >
> > > > Further discussion is also on the thread ("powerpc: Perform a bounds
> > > > check in arch_add_memory")
> > > > http://lkml.kernel.org/r/[email protected]
> > > >
> > > > Signed-off-by: Alastair D'Silva <[email protected]>
> > > > ---
> > > > mm/memory_hotplug.c | 20 ++++++++++++++++++++
> > > > 1 file changed, 20 insertions(+)
> > > >
> > > > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > > > index c73f09913165..212804c0f7f5 100644
> > > > --- a/mm/memory_hotplug.c
> > > > +++ b/mm/memory_hotplug.c
> > > > @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
> > > > return 0;
> > > > }
> > > >
> > > > +static int check_hotplug_memory_addressable(unsigned long pfn,
> > > > + unsigned long nr_pages)
> > > > +{
> > > > + unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
> > > > +
> > > > + if (max_addr >> MAX_PHYSMEM_BITS) {
> > > > + WARN(1,
> > > > + "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
> > > > + pfn << PAGE_SHIFT, max_addr,
> > > > + (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
> > > > + return -E2BIG;
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > /*
> > > > * Reasonably generic function for adding memory. It is
> > > > * expected that archs that support memory hotplug will
> > > > @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
> > > > unsigned long nr, start_sec, end_sec;
> > > > struct vmem_altmap *altmap = restrictions->altmap;
> > > >
> > > > + err = check_hotplug_memory_addressable(pfn, nr_pages);
> > > > + if (err)
> > > > + return err;
> > > > +
> > > > if (altmap) {
> > > > /*
> > > > * Validate altmap is within bounds of the total request
> > > >
> > >
> > > I know Michal suggested this, but I still prefer checking early instead
> > > of when we're knees-deep into adding of memory.
> >
> > What is your concern here? Unwinding the state should be pretty
> > straightforward from this failure path.
>
> Just the general "check what you can check early without locks"
> approach. But yeah, this series is probably not worth a v5, so I can
> live with this change just fine :)
>
>
I'm going to spin a V5 anyway - where were you suggesting?
> --
>
> Thanks,
>
> David / dhildenb
--
Alastair D'Silva
Open Source Developer
Linux Technology Centre, IBM Australia
mob: 0423 762 819
On 27.09.19 08:33, Alastair D'Silva wrote:
> On Thu, 2019-09-26 at 09:46 +0200, David Hildenbrand wrote:
>> On 26.09.19 09:43, Michal Hocko wrote:
>>> On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
>>>> On 26.09.19 03:34, Alastair D'Silva wrote:
>>>>> From: Alastair D'Silva <[email protected]>
>>>>>
>>>>> On PowerPC, the address ranges allocated to OpenCAPI LPC memory
>>>>> are allocated from firmware. These address ranges may be higher
>>>>> than what older kernels permit, as we increased the maximum
>>>>> permissible address in commit 4ffe713b7587
>>>>> ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
>>>>> possible that the addressable range may change again in the
>>>>> future.
>>>>>
>>>>> In this scenario, we end up with a bogus section returned from
>>>>> __section_nr (see the discussion on the thread "mm: Trigger bug on
>>>>> if a section is not found in __section_nr").
>>>>>
>>>>> Adding a check here means that we fail early and have an
>>>>> opportunity to handle the error gracefully, rather than rumbling
>>>>> on and potentially accessing an incorrect section.
>>>>>
>>>>> Further discussion is also on the thread ("powerpc: Perform a bounds
>>>>> check in arch_add_memory")
>>>>> http://lkml.kernel.org/r/[email protected]
>>>>>
>>>>> Signed-off-by: Alastair D'Silva <[email protected]>
>>>>> ---
>>>>> mm/memory_hotplug.c | 20 ++++++++++++++++++++
>>>>> 1 file changed, 20 insertions(+)
>>>>>
>>>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>>> index c73f09913165..212804c0f7f5 100644
>>>>> --- a/mm/memory_hotplug.c
>>>>> +++ b/mm/memory_hotplug.c
>>>>> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
>>>>> return 0;
>>>>> }
>>>>>
>>>>> +static int check_hotplug_memory_addressable(unsigned long pfn,
>>>>> + unsigned long nr_pages)
>>>>> +{
>>>>> + unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
>>>>> +
>>>>> + if (max_addr >> MAX_PHYSMEM_BITS) {
>>>>> + WARN(1,
>>>>> + "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
>>>>> + pfn << PAGE_SHIFT, max_addr,
>>>>> + (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
>>>>> + return -E2BIG;
>>>>> + }
>>>>> +
>>>>> + return 0;
>>>>> +}
>>>>> +
>>>>> /*
>>>>> * Reasonably generic function for adding memory. It is
>>>>> * expected that archs that support memory hotplug will
>>>>> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>>>>> unsigned long nr, start_sec, end_sec;
>>>>> struct vmem_altmap *altmap = restrictions->altmap;
>>>>>
>>>>> + err = check_hotplug_memory_addressable(pfn, nr_pages);
>>>>> + if (err)
>>>>> + return err;
>>>>> +
>>>>> if (altmap) {
>>>>> /*
>>>>> * Validate altmap is within bounds of the total request
>>>>>
>>>>
>>>> I know Michal suggested this, but I still prefer checking early instead
>>>> of when we're knees-deep into adding of memory.
>>>
>>> What is your concern here? Unwinding the state should be pretty
>>> straightforward from this failure path.
>>
>> Just the general "check what you can check early without locks"
>> approach. But yeah, this series is probably not worth a v5, so I can
>> live with this change just fine :)
>>
>>
>
> I'm going to spin a V5 anyway - where were you suggesting?
I preferred the previous places where we checked, but I think we settled
on __add_pages(). So I am fine with the changes Oscar proposed. You
might want to turn "max_addr" into a const if you feel fancy. :)
--
Thanks,
David / dhildenb
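
For illustration, one possible shape the helper could take in a v5 with max_addr made const, as suggested above. This is a sketch only, not the actual v5 patch: the printed limit here is simply the highest address representable in MAX_PHYSMEM_BITS bits, and the exact message wording is an assumption.

/* Sketch only -- not the posted v5; message text and limit formatting are assumptions. */
static int check_hotplug_memory_addressable(unsigned long pfn,
		unsigned long nr_pages)
{
	/* Address of the last byte covered by [pfn, pfn + nr_pages). */
	const unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;

	if (max_addr >> MAX_PHYSMEM_BITS) {
		WARN(1,
		     "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
		     pfn << PAGE_SHIFT, max_addr,
		     (1ul << MAX_PHYSMEM_BITS) - 1);
		return -E2BIG;
	}

	return 0;
}
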