2019-02-23 01:28:51

by Jing Xiangfeng

Subject: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

A user can change the node specific hugetlb count via a file such as:
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
The calculated value of count is the total number of huge pages. It can
overflow when a user enters a very large value. If so, the total number
of huge pages ends up being a small value that the user did not expect.
We can simply fix it by setting count to ULONG_MAX and carrying on. This
is more in line with the user's intention of allocating as many huge
pages as possible.
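
To make the overflow concrete, here is a hypothetical example (the values
are purely illustrative, not taken from a real system):

	/* user writes a huge value to node1's nr_hugepages */
	count = ULONG_MAX - 100;
	/* global pool holds 1000 pages, 200 of them on node 1 */
	count += 1000 - 200;	/* unsigned wrap-around: count is now 699 */

Instead of "as many huge pages as possible", the pool is resized toward 699.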

Signed-off-by: Jing Xiangfeng <[email protected]>
---
mm/hugetlb.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index afef616..6688894 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
* per node hstate attribute: adjust count to global,
* but restrict alloc/free to the specified node.
*/
+ unsigned long old_count = count;
count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+ /*
+ * If user specified count causes overflow, set to
+ * largest possible value.
+ */
+ if (count < old_count)
+ count = ULONG_MAX;
init_nodemask_of_node(nodes_allowed, nid);
} else
nodes_allowed = &node_states[N_MEMORY];
--
2.7.4



2019-02-25 00:46:50

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 2/22/19 5:32 PM, Jing Xiangfeng wrote:
> User can change a node specific hugetlb count. i.e.
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> the calculated value of count is a total number of huge pages. It could
> be overflow when a user entering a crazy high value. If so, the total
> number of huge pages could be a small value which is not user expect.
> We can simply fix it by setting count to ULONG_MAX, then it goes on. This
> may be more in line with user's intention of allocating as many huge pages
> as possible.
>
> Signed-off-by: Jing Xiangfeng <[email protected]>

Thank you.

Acked-by: Mike Kravetz <[email protected]>

> ---
> mm/hugetlb.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index afef616..6688894 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
> * per node hstate attribute: adjust count to global,
> * but restrict alloc/free to the specified node.
> */
> + unsigned long old_count = count;
> count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> + /*
> + * If user specified count causes overflow, set to
> + * largest possible value.
> + */
> + if (count < old_count)
> + count = ULONG_MAX;
> init_nodemask_of_node(nodes_allowed, nid);
> } else
> nodes_allowed = &node_states[N_MEMORY];
>

2019-02-25 03:18:22

by David Rientjes

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Sun, 24 Feb 2019, Mike Kravetz wrote:

> > User can change a node specific hugetlb count. i.e.
> > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> > the calculated value of count is a total number of huge pages. It could
> > be overflow when a user entering a crazy high value. If so, the total
> > number of huge pages could be a small value which is not user expect.
> > We can simply fix it by setting count to ULONG_MAX, then it goes on. This
> > may be more in line with user's intention of allocating as many huge pages
> > as possible.
> >
> > Signed-off-by: Jing Xiangfeng <[email protected]>
>
> Thank you.
>
> Acked-by: Mike Kravetz <[email protected]>
>
> > ---
> > mm/hugetlb.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index afef616..6688894 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
> > * per node hstate attribute: adjust count to global,
> > * but restrict alloc/free to the specified node.
> > */
> > + unsigned long old_count = count;
> > count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> > + /*
> > + * If user specified count causes overflow, set to
> > + * largest possible value.
> > + */
> > + if (count < old_count)
> > + count = ULONG_MAX;
> > init_nodemask_of_node(nodes_allowed, nid);
> > } else
> > nodes_allowed = &node_states[N_MEMORY];
> >

Looks like this fixes the overflow issue, but isn't there already a
possible underflow since we don't hold hugetlb_lock? Even if
count == 0, what prevents h->nr_huge_pages_node[nid] being greater than
h->nr_huge_pages here? I think the per hstate values need to be read with
READ_ONCE() and stored on the stack to do any sane bounds checking.
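
For example, a rough and untested sketch of that idea, snapshotting the
counters once so the arithmetic is at least done on self-consistent values
(names match the existing code):

	unsigned long total = READ_ONCE(h->nr_huge_pages);
	unsigned long node = READ_ONCE(h->nr_huge_pages_node[nid]);

	/* guard against a racy snapshot where the node count exceeds the
	 * global count; fall back to the user supplied value otherwise */
	if (node <= total)
		count += total - node;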

2019-02-25 16:50:37

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 2/24/19 7:17 PM, David Rientjes wrote:
> On Sun, 24 Feb 2019, Mike Kravetz wrote:
>
>>> User can change a node specific hugetlb count. i.e.
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
>>> the calculated value of count is a total number of huge pages. It could
>>> be overflow when a user entering a crazy high value. If so, the total
>>> number of huge pages could be a small value which is not user expect.
>>> We can simply fix it by setting count to ULONG_MAX, then it goes on. This
>>> may be more in line with user's intention of allocating as many huge pages
>>> as possible.
>>>
>>> Signed-off-by: Jing Xiangfeng <[email protected]>
>>
>> Thank you.
>>
>> Acked-by: Mike Kravetz <[email protected]>
>>
>>> ---
>>> mm/hugetlb.c | 7 +++++++
>>> 1 file changed, 7 insertions(+)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index afef616..6688894 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
>>> * per node hstate attribute: adjust count to global,
>>> * but restrict alloc/free to the specified node.
>>> */
>>> + unsigned long old_count = count;
>>> count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
>>> + /*
>>> + * If user specified count causes overflow, set to
>>> + * largest possible value.
>>> + */
>>> + if (count < old_count)
>>> + count = ULONG_MAX;
>>> init_nodemask_of_node(nodes_allowed, nid);
>>> } else
>>> nodes_allowed = &node_states[N_MEMORY];
>>>
>
> Looks like this fixes the overflow issue, but isn't there already a
> possible underflow since we don't hold hugetlb_lock? Even if
> count == 0, what prevents h->nr_huge_pages_node[nid] being greater than
> h->nr_huge_pages here? I think the per hstate values need to be read with
> READ_ONCE() and stored on the stack to do any sane bounds checking.

Yes, without holding the lock there is the potential for issues. Looking
back to when the node specific code was added there is a comment about
"re-use/share as much of the existing global hstate attribute initialization
and handling". I suspect that is the reason for these calculations outside
the lock.

As you mention above, nr_huge_pages_node[nid] could be greater than
nr_huge_pages. This is true even if we do READ_ONCE(). So, the code would
need to test for this condition and 'fix up' values or re-read. It is just
racy without holding the lock.

If that is too ugly, then we could just add code for the node specific
adjustments. set_max_huge_pages() is only called from here. It would be
pretty easy to modify set_max_huge_pages() to take the node specific value
and do calculations/adjustments under the lock.
--
Mike Kravetz

2019-02-25 18:20:24

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

> On 2/24/19 7:17 PM, David Rientjes wrote:
>> On Sun, 24 Feb 2019, Mike Kravetz wrote:
>>>> @@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
>>>> * per node hstate attribute: adjust count to global,
>>>> * but restrict alloc/free to the specified node.
>>>> */
>>>> + unsigned long old_count = count;
>>>> count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
>>>> + /*
>>>> + * If user specified count causes overflow, set to
>>>> + * largest possible value.
>>>> + */
>>>> + if (count < old_count)
>>>> + count = ULONG_MAX;
>>>> init_nodemask_of_node(nodes_allowed, nid);
>>>> } else
>>>> nodes_allowed = &node_states[N_MEMORY];
>>>>
>>
>> Looks like this fixes the overflow issue, but isn't there already a
>> possible underflow since we don't hold hugetlb_lock? Even if
>> count == 0, what prevents h->nr_huge_pages_node[nid] being greater than
>> h->nr_huge_pages here? I think the per hstate values need to be read with
>> READ_ONCE() and stored on the stack to do any sane bounds checking.
>
> Yes, without holding the lock there is the potential for issues. Looking
> back to when the node specific code was added there is a comment about
> "re-use/share as much of the existing global hstate attribute initialization
> and handling". I suspect that is the reason for these calculations outside
> the lock.
>
> As you mention above, nr_huge_pages_node[nid] could be greater than
> nr_huge_pages. This is true even if we do READ_ONCE(). So, the code would
> need to test for this condition and 'fix up' values or re-read. It is just
> racy without holding the lock.
>
> If that is too ugly, then we could just add code for the node specific
> adjustments. set_max_huge_pages() is only called from here. It would be
> pretty easy to modify set_max_huge_pages() to take the node specific value
> and do calculations/adjustments under the lock.

Ok, what about just moving the calculation/check inside the lock as in the
untested patch below?

Signed-off-by: Mike Kravetz <[email protected]>
---
mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1c5219193b9e..5afa77dc7bc8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
nodemask_t *nodes_allowed,
}

#define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static int set_max_huge_pages(struct hstate *h, unsigned long count,
+static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
nodemask_t *nodes_allowed)
{
unsigned long min_count, ret;
@@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned
long count,
goto decrease_pool;
}

+ spin_lock(&hugetlb_lock);
+
+ /*
+ * Check for a node specific request. Adjust global count, but
+ * restrict alloc/free to the specified node.
+ */
+ if (nid != NUMA_NO_NODE) {
+ unsigned long old_count = count;
+ count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+ /*
+ * If user specified count causes overflow, set to
+ * largest possible value.
+ */
+ if (count < old_count)
+ count = ULONG_MAX;
+ }
+
/*
* Increase the pool size
* First take pages out of surplus state. Then make up the
@@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned
long count,
* pool might be one hugepage larger than it needs to be, but
* within all the constraints specified by the sysctls.
*/
- spin_lock(&hugetlb_lock);
while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
if (!adjust_pool_surplus(h, nodes_allowed, -1))
break;
@@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool
obey_mempolicy,
nodes_allowed = &node_states[N_MEMORY];
}
} else if (nodes_allowed) {
+ /* Node specific request */
+ init_nodemask_of_node(nodes_allowed, nid);
+ } else {
/*
- * per node hstate attribute: adjust count to global,
- * but restrict alloc/free to the specified node.
+ * Node specific request, but we could not allocate
+ * node mask. Pass in ALL nodes, and clear nid.
*/
- count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
- init_nodemask_of_node(nodes_allowed, nid);
- } else
+ nid = NUMA_NO_NODE;
nodes_allowed = &node_states[N_MEMORY];
+ }

- err = set_max_huge_pages(h, count, nodes_allowed);
+ err = set_max_huge_pages(h, count, nid, nodes_allowed);
if (err)
goto out;

--
2.17.2


2019-02-25 19:17:55

by David Rientjes

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Mon, 25 Feb 2019, Mike Kravetz wrote:

> Ok, what about just moving the calculation/check inside the lock as in the
> untested patch below?
>
> Signed-off-by: Mike Kravetz <[email protected]>
> ---
> mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
> 1 file changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 1c5219193b9e..5afa77dc7bc8 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
> nodemask_t *nodes_allowed,
> }
>
> #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> nodemask_t *nodes_allowed)
> {
> unsigned long min_count, ret;
> @@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned
> long count,
> goto decrease_pool;
> }
>
> + spin_lock(&hugetlb_lock);
> +
> + /*
> + * Check for a node specific request. Adjust global count, but
> + * restrict alloc/free to the specified node.
> + */
> + if (nid != NUMA_NO_NODE) {
> + unsigned long old_count = count;
> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> + /*
> + * If user specified count causes overflow, set to
> + * largest possible value.
> + */
> + if (count < old_count)
> + count = ULONG_MAX;
> + }
> +
> /*
> * Increase the pool size
> * First take pages out of surplus state. Then make up the
> @@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned
> long count,
> * pool might be one hugepage larger than it needs to be, but
> * within all the constraints specified by the sysctls.
> */
> - spin_lock(&hugetlb_lock);
> while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
> if (!adjust_pool_surplus(h, nodes_allowed, -1))
> break;
> @@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool
> obey_mempolicy,
> nodes_allowed = &node_states[N_MEMORY];
> }
> } else if (nodes_allowed) {
> + /* Node specific request */
> + init_nodemask_of_node(nodes_allowed, nid);
> + } else {
> /*
> - * per node hstate attribute: adjust count to global,
> - * but restrict alloc/free to the specified node.
> + * Node specific request, but we could not allocate
> + * node mask. Pass in ALL nodes, and clear nid.
> */
> - count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> - init_nodemask_of_node(nodes_allowed, nid);
> - } else
> + nid = NUMA_NO_NODE;
> nodes_allowed = &node_states[N_MEMORY];
> + }
>
> - err = set_max_huge_pages(h, count, nodes_allowed);
> + err = set_max_huge_pages(h, count, nid, nodes_allowed);
> if (err)
> goto out;
>

Looks good; Jing, could you test that this fixes your case?

2019-02-26 02:24:21

by Jing Xiangfeng

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 2019/2/26 3:17, David Rientjes wrote:
> On Mon, 25 Feb 2019, Mike Kravetz wrote:
>
>> Ok, what about just moving the calculation/check inside the lock as in the
>> untested patch below?
>>
>> Signed-off-by: Mike Kravetz <[email protected]>
>> ---
>> mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
>> 1 file changed, 26 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 1c5219193b9e..5afa77dc7bc8 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
>> nodemask_t *nodes_allowed,
>> }
>>
>> #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
>> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
>> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>> nodemask_t *nodes_allowed)
>> {
>> unsigned long min_count, ret;
>> @@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned
>> long count,
>> goto decrease_pool;
>> }
>>
>> + spin_lock(&hugetlb_lock);
>> +
>> + /*
>> + * Check for a node specific request. Adjust global count, but
>> + * restrict alloc/free to the specified node.
>> + */
>> + if (nid != NUMA_NO_NODE) {
>> + unsigned long old_count = count;
>> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
>> + /*
>> + * If user specified count causes overflow, set to
>> + * largest possible value.
>> + */
>> + if (count < old_count)
>> + count = ULONG_MAX;
>> + }
>> +
>> /*
>> * Increase the pool size
>> * First take pages out of surplus state. Then make up the
>> @@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned
>> long count,
>> * pool might be one hugepage larger than it needs to be, but
>> * within all the constraints specified by the sysctls.
>> */
>> - spin_lock(&hugetlb_lock);
>> while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
>> if (!adjust_pool_surplus(h, nodes_allowed, -1))
>> break;
>> @@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool
>> obey_mempolicy,
>> nodes_allowed = &node_states[N_MEMORY];
>> }
>> } else if (nodes_allowed) {
>> + /* Node specific request */
>> + init_nodemask_of_node(nodes_allowed, nid);
>> + } else {
>> /*
>> - * per node hstate attribute: adjust count to global,
>> - * but restrict alloc/free to the specified node.
>> + * Node specific request, but we could not allocate
>> + * node mask. Pass in ALL nodes, and clear nid.
>> */
>> - count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
>> - init_nodemask_of_node(nodes_allowed, nid);
>> - } else
>> + nid = NUMA_NO_NODE;
>> nodes_allowed = &node_states[N_MEMORY];
>> + }
>>
>> - err = set_max_huge_pages(h, count, nodes_allowed);
>> + err = set_max_huge_pages(h, count, nid, nodes_allowed);
>> if (err)
>> goto out;
>>
>
> Looks good; Jing, could you test that this fixes your case?

Yes, I have tested this patch; it also fixes my case.



2019-02-26 06:22:16

by David Rientjes

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Tue, 26 Feb 2019, Jing Xiangfeng wrote:

> On 2019/2/26 3:17, David Rientjes wrote:
> > On Mon, 25 Feb 2019, Mike Kravetz wrote:
> >
> >> Ok, what about just moving the calculation/check inside the lock as in the
> >> untested patch below?
> >>
> >> Signed-off-by: Mike Kravetz <[email protected]>
> >> ---
> >> mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
> >> 1 file changed, 26 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index 1c5219193b9e..5afa77dc7bc8 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
> >> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
> >> nodemask_t *nodes_allowed,
> >> }
> >>
> >> #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> >> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
> >> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> >> nodemask_t *nodes_allowed)
> >> {
> >> unsigned long min_count, ret;
> >> @@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned
> >> long count,
> >> goto decrease_pool;
> >> }
> >>
> >> + spin_lock(&hugetlb_lock);
> >> +
> >> + /*
> >> + * Check for a node specific request. Adjust global count, but
> >> + * restrict alloc/free to the specified node.
> >> + */
> >> + if (nid != NUMA_NO_NODE) {
> >> + unsigned long old_count = count;
> >> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> >> + /*
> >> + * If user specified count causes overflow, set to
> >> + * largest possible value.
> >> + */
> >> + if (count < old_count)
> >> + count = ULONG_MAX;
> >> + }
> >> +
> >> /*
> >> * Increase the pool size
> >> * First take pages out of surplus state. Then make up the
> >> @@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned
> >> long count,
> >> * pool might be one hugepage larger than it needs to be, but
> >> * within all the constraints specified by the sysctls.
> >> */
> >> - spin_lock(&hugetlb_lock);
> >> while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
> >> if (!adjust_pool_surplus(h, nodes_allowed, -1))
> >> break;
> >> @@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool
> >> obey_mempolicy,
> >> nodes_allowed = &node_states[N_MEMORY];
> >> }
> >> } else if (nodes_allowed) {
> >> + /* Node specific request */
> >> + init_nodemask_of_node(nodes_allowed, nid);
> >> + } else {
> >> /*
> >> - * per node hstate attribute: adjust count to global,
> >> - * but restrict alloc/free to the specified node.
> >> + * Node specific request, but we could not allocate
> >> + * node mask. Pass in ALL nodes, and clear nid.
> >> */
> >> - count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> >> - init_nodemask_of_node(nodes_allowed, nid);
> >> - } else
> >> + nid = NUMA_NO_NODE;
> >> nodes_allowed = &node_states[N_MEMORY];
> >> + }
> >>
> >> - err = set_max_huge_pages(h, count, nodes_allowed);
> >> + err = set_max_huge_pages(h, count, nid, nodes_allowed);
> >> if (err)
> >> goto out;
> >>
> >
> > Looks good; Jing, could you test that this fixes your case?
>
> Yes, I have tested this patch, it can also fix my case.

Great!

Reported-by: Jing Xiangfeng <[email protected]>
Tested-by: Jing Xiangfeng <[email protected]>
Acked-by: David Rientjes <[email protected]>

2019-02-26 19:33:48

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 2/25/19 10:21 PM, David Rientjes wrote:
> On Tue, 26 Feb 2019, Jing Xiangfeng wrote:
>> On 2019/2/26 3:17, David Rientjes wrote:
>>> On Mon, 25 Feb 2019, Mike Kravetz wrote:
>>>
>>>> Ok, what about just moving the calculation/check inside the lock as in the
>>>> untested patch below?
>>>>
>>>> Signed-off-by: Mike Kravetz <[email protected]>

<snip>

>>>
>>> Looks good; Jing, could you test that this fixes your case?
>>
>> Yes, I have tested this patch, it can also fix my case.
>
> Great!
>
> Reported-by: Jing Xiangfeng <[email protected]>
> Tested-by: Jing Xiangfeng <[email protected]>
> Acked-by: David Rientjes <[email protected]>

Thanks Jing and David!

Here is the patch with an updated commit message and above tags:

From: Mike Kravetz <[email protected]>
Date: Tue, 26 Feb 2019 10:43:24 -0800
Subject: [PATCH] hugetlbfs: fix potential over/underflow setting node specific
nr_hugepages

The number of node specific huge pages can be set via a file such as:
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
When a node specific value is specified, the global number of huge
pages must also be adjusted. This adjustment is calculated as the
specified node specific value + (global value - current node value).
If the node specific value provided by the user is large enough, this
calculation could overflow an unsigned long leading to a smaller
than expected number of huge pages.

To fix, check the calculation for overflow. If overflow is detected,
use ULONG_MAX as the requested value. This is in line with the user
request to allocate as many huge pages as possible.

It was also noticed that the above calculation was done outside the
hugetlb_lock. Therefore, the values could be inconsistent and result
in underflow. To fix, the calculation is moved to within the routine
set_max_huge_pages() where the lock is held.

Reported-by: Jing Xiangfeng <[email protected]>
Signed-off-by: Mike Kravetz <[email protected]>
Tested-by: Jing Xiangfeng <[email protected]>
Acked-by: David Rientjes <[email protected]>
---
mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b37e3100b7cc..a7e4223d2df5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
nodemask_t *nodes_allowed,
}

#define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static int set_max_huge_pages(struct hstate *h, unsigned long count,
+static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
nodemask_t *nodes_allowed)
{
unsigned long min_count, ret;
@@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned
long count,
goto decrease_pool;
}

+ spin_lock(&hugetlb_lock);
+
+ /*
+ * Check for a node specific request. Adjust global count, but
+ * restrict alloc/free to the specified node.
+ */
+ if (nid != NUMA_NO_NODE) {
+ unsigned long old_count = count;
+ count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+ /*
+ * If user specified count causes overflow, set to
+ * largest possible value.
+ */
+ if (count < old_count)
+ count = ULONG_MAX;
+ }
+
/*
* Increase the pool size
* First take pages out of surplus state. Then make up the
@@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned
long count,
* pool might be one hugepage larger than it needs to be, but
* within all the constraints specified by the sysctls.
*/
- spin_lock(&hugetlb_lock);
while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
if (!adjust_pool_surplus(h, nodes_allowed, -1))
break;
@@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool
obey_mempolicy,
nodes_allowed = &node_states[N_MEMORY];
}
} else if (nodes_allowed) {
+ /* Node specific request */
+ init_nodemask_of_node(nodes_allowed, nid);
+ } else {
/*
- * per node hstate attribute: adjust count to global,
- * but restrict alloc/free to the specified node.
+ * Node specific request, but we could not allocate
+ * node mask. Pass in ALL nodes, and clear nid.
*/
- count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
- init_nodemask_of_node(nodes_allowed, nid);
- } else
+ nid = NUMA_NO_NODE;
nodes_allowed = &node_states[N_MEMORY];
+ }

- err = set_max_huge_pages(h, count, nodes_allowed);
+ err = set_max_huge_pages(h, count, nid, nodes_allowed);
if (err)
goto out;

--
2.17.2


2019-02-26 22:38:09

by Andrew Morton

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

>
> The number of node specific huge pages can be set via a file such as:
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> When a node specific value is specified, the global number of huge
> pages must also be adjusted. This adjustment is calculated as the
> specified node specific value + (global value - current node value).
> If the node specific value provided by the user is large enough, this
> calculation could overflow an unsigned long leading to a smaller
> than expected number of huge pages.
>
> To fix, check the calculation for overflow. If overflow is detected,
> use ULONG_MAX as the requested value. This is inline with the user
> request to allocate as many huge pages as possible.
>
> It was also noticed that the above calculation was done outside the
> hugetlb_lock. Therefore, the values could be inconsistent and result
> in underflow. To fix, the calculation is moved to within the routine
> set_max_huge_pages() where the lock is held.
>
> ...
>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
> nodemask_t *nodes_allowed,

Please tweak that email client to prevent the wordwraps.

> + /*
> + * Check for a node specific request. Adjust global count, but
> + * restrict alloc/free to the specified node.
> + */
> + if (nid != NUMA_NO_NODE) {
> + unsigned long old_count = count;
> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> + /*
> + * If user specified count causes overflow, set to
> + * largest possible value.
> + */
> + if (count < old_count)
> + count = ULONG_MAX;
> + }

The above two comments explain the code, but do not reveal the
reasoning behind the policy decisions which that code implements.

> ...
>
> + } else {
> /*
> - * per node hstate attribute: adjust count to global,
> - * but restrict alloc/free to the specified node.
> + * Node specific request, but we could not allocate
> + * node mask. Pass in ALL nodes, and clear nid.
> */

Ditto here, somewhat.

The old mantra: comments should explain "why", not "what". Reading the
code tells us the "what".

Thanks.

2019-02-27 00:05:38

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 2/26/19 2:36 PM, Andrew Morton wrote:
>> ...
>>
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
>> nodemask_t *nodes_allowed,
>
> Please tweak that email client to prevent the wordwraps.

Sorry about that.

>> + /*
>> + * Check for a node specific request. Adjust global count, but
>> + * restrict alloc/free to the specified node.
>> + */

Better comment might be:

/*
* Check for a node specific request.
* Changing node specific huge page count may require a corresponding
* change to the global count. In any case, the passed node mask
* (nodes_allowed) will restrict alloc/free to the specified node.
*/

>> + if (nid != NUMA_NO_NODE) {
>> + unsigned long old_count = count;
>> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
>> + /*
>> + * If user specified count causes overflow, set to
>> + * largest possible value.
>> + */

Updated comment:
/*
* User may have specified a large count value which caused the
* above calculation to overflow. In this case, they wanted
* to allocate as many huge pages as possible. Set count to
* largest possible value to align with their intention.
*/

>> + if (count < old_count)
>> + count = ULONG_MAX;
>> + }
>
> The above two comments explain the code, but do not reveal the
> reasoning behind the policy decisions which that code implements.
>
>> ...
>>
>> + } else {
>> /*
>> - * per node hstate attribute: adjust count to global,
>> - * but restrict alloc/free to the specified node.
>> + * Node specific request, but we could not allocate
>> + * node mask. Pass in ALL nodes, and clear nid.
>> */
>
> Ditto here, somewhat.

I was just going to update the comments and send you a new patch, but
your comment got me thinking about this situation. I did not really
change the way this code operates. As a reminder, the original code looks like:

NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);

if (nid == NUMA_NO_NODE) {
/* do something */
} else if (nodes_allowed) {
/* do something else */
} else {
nodes_allowed = &node_states[N_MEMORY];
}

So, the only way we get to that final else is if we can not allocate
a node mask (a kmalloc of a few words). Right? I wonder why we should
even try to continue in this case. Why not just return right there?

The specified count value is either a request to increase or decrease the
number of huge pages. If we can't allocate a few words, we certainly are
not going to find memory to create huge pages. There 'might' be surplus
pages which can be converted to permanent pages. But remember this is a
'node specific' request and we can't allocate a mask to pass down to the
conversion routines. So, chances are good we would operate on the wrong
node. The same goes for a request to 'free' huge pages. Since we can't
allocate a node mask, we are likely to free them from the wrong node.

Unless my reasoning above is incorrect, I think that final else block
in __nr_hugepages_store_common() is wrong.

Any additional thoughts?
--
Mike Kravetz

2019-03-04 06:05:32

by Naoya Horiguchi

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Tue, Feb 26, 2019 at 11:32:24AM -0800, Mike Kravetz wrote:
> On 2/25/19 10:21 PM, David Rientjes wrote:
> > On Tue, 26 Feb 2019, Jing Xiangfeng wrote:
> >> On 2019/2/26 3:17, David Rientjes wrote:
> >>> On Mon, 25 Feb 2019, Mike Kravetz wrote:
> >>>
> >>>> Ok, what about just moving the calculation/check inside the lock as in the
> >>>> untested patch below?
> >>>>
> >>>> Signed-off-by: Mike Kravetz <[email protected]>
>
> <snip>
>
> >>>
> >>> Looks good; Jing, could you test that this fixes your case?
> >>
> >> Yes, I have tested this patch, it can also fix my case.
> >
> > Great!
> >
> > Reported-by: Jing Xiangfeng <[email protected]>
> > Tested-by: Jing Xiangfeng <[email protected]>
> > Acked-by: David Rientjes <[email protected]>
>
> Thanks Jing and David!
>
> Here is the patch with an updated commit message and above tags:
>
> From: Mike Kravetz <[email protected]>
> Date: Tue, 26 Feb 2019 10:43:24 -0800
> Subject: [PATCH] hugetlbfs: fix potential over/underflow setting node specific
> nr_hugepages
>
> The number of node specific huge pages can be set via a file such as:
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> When a node specific value is specified, the global number of huge
> pages must also be adjusted. This adjustment is calculated as the
> specified node specific value + (global value - current node value).
> If the node specific value provided by the user is large enough, this
> calculation could overflow an unsigned long leading to a smaller
> than expected number of huge pages.
>
> To fix, check the calculation for overflow. If overflow is detected,
> use ULONG_MAX as the requested value. This is inline with the user
> request to allocate as many huge pages as possible.
>
> It was also noticed that the above calculation was done outside the
> hugetlb_lock. Therefore, the values could be inconsistent and result
> in underflow. To fix, the calculation is moved to within the routine
> set_max_huge_pages() where the lock is held.
>
> Reported-by: Jing Xiangfeng <[email protected]>
> Signed-off-by: Mike Kravetz <[email protected]>
> Tested-by: Jing Xiangfeng <[email protected]>
> Acked-by: David Rientjes <[email protected]>

Looks good to me with improved comments.
Thanks everyone.

Reviewed-by: Naoya Horiguchi <[email protected]>

> ---
> mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
> 1 file changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b37e3100b7cc..a7e4223d2df5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h,
> nodemask_t *nodes_allowed,
> }
>
> #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> nodemask_t *nodes_allowed)
> {
> unsigned long min_count, ret;
> @@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned
> long count,
> goto decrease_pool;
> }
>
> + spin_lock(&hugetlb_lock);
> +
> + /*
> + * Check for a node specific request. Adjust global count, but
> + * restrict alloc/free to the specified node.
> + */
> + if (nid != NUMA_NO_NODE) {
> + unsigned long old_count = count;
> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> + /*
> + * If user specified count causes overflow, set to
> + * largest possible value.
> + */
> + if (count < old_count)
> + count = ULONG_MAX;
> + }
> +
> /*
> * Increase the pool size
> * First take pages out of surplus state. Then make up the
> @@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned
> long count,
> * pool might be one hugepage larger than it needs to be, but
> * within all the constraints specified by the sysctls.
> */
> - spin_lock(&hugetlb_lock);
> while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
> if (!adjust_pool_surplus(h, nodes_allowed, -1))
> break;
> @@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool
> obey_mempolicy,
> nodes_allowed = &node_states[N_MEMORY];
> }
> } else if (nodes_allowed) {
> + /* Node specific request */
> + init_nodemask_of_node(nodes_allowed, nid);
> + } else {
> /*
> - * per node hstate attribute: adjust count to global,
> - * but restrict alloc/free to the specified node.
> + * Node specific request, but we could not allocate
> + * node mask. Pass in ALL nodes, and clear nid.
> */
> - count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> - init_nodemask_of_node(nodes_allowed, nid);
> - } else
> + nid = NUMA_NO_NODE;
> nodes_allowed = &node_states[N_MEMORY];
> + }
>
> - err = set_max_huge_pages(h, count, nodes_allowed);
> + err = set_max_huge_pages(h, count, nid, nodes_allowed);
> if (err)
> goto out;
>
> --
> 2.17.2
>
>

2019-03-04 13:54:44

by Oscar Salvador

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Tue, Feb 26, 2019 at 04:03:23PM -0800, Mike Kravetz wrote:
> I was just going to update the comments and send you a new patch, but
> but your comment got me thinking about this situation. I did not really
> change the way this code operates. As a reminder, the original code is like:
>
> NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
>
> if (nid == NUMA_NO_NODE) {
> /* do something */
> } else if (nodes_allowed) {
> /* do something else */
> } else {
> nodes_allowed = &node_states[N_MEMORY];
> }
>
> So, the only way we get to that final else if if we can not allocate
> a node mask (kmalloc a few words). Right? I wonder why we should
> even try to continue in this case. Why not just return right there?
>
> The specified count value is either a request to increase number of
> huge pages or decrease. If we can't allocate a few words, we certainly
> are not going to find memory to create huge pages. There 'might' be
> surplus pages which can be converted to permanent pages. But remember
> this is a 'node specific' request and we can't allocate a mask to pass
> down to the conversion routines. So, chances are good we would operate
> on the wrong node. The same goes for a request to 'free' huge pages.
> Since, we can't allocate a node mask we are likely to free them from
> the wrong node.
>
> Unless my reasoning above is incorrect, I think that final else block
> in __nr_hugepages_store_common() is wrong.
>
> Any additional thoughts?

Could we not just kill the NODEMASK_ALLOC there?
__nr_hugepages_store_common() should be called from a rather shallow stack,
and unless I am wrong, the maximum size a nodemask_t can be is 128 bytes
(1024 NUMA nodes).
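
For example, a rough, untested sketch of what that could look like (the
mempolicy handling for the NUMA_NO_NODE case is elided here):

	nodemask_t nodes_allowed;	/* at most 128 bytes on the stack */

	if (nid != NUMA_NO_NODE)
		init_nodemask_of_node(&nodes_allowed, nid);
	else
		nodes_allowed = node_states[N_MEMORY];

	err = set_max_huge_pages(h, count, nid, &nodes_allowed);

With no allocation there is no failure path left to get wrong.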

--
Oscar Salvador
SUSE L3

2019-03-05 00:07:40

by Naoya Horiguchi

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Tue, Feb 26, 2019 at 04:03:23PM -0800, Mike Kravetz wrote:
> On 2/26/19 2:36 PM, Andrew Morton wrote:
...
> >>
> >> + } else {
> >> /*
> >> - * per node hstate attribute: adjust count to global,
> >> - * but restrict alloc/free to the specified node.
> >> + * Node specific request, but we could not allocate
> >> + * node mask. Pass in ALL nodes, and clear nid.
> >> */
> >
> > Ditto here, somewhat.

# I missed this part when reviewing yesterday for some reason, sorry.

>
> I was just going to update the comments and send you a new patch, but
> but your comment got me thinking about this situation. I did not really
> change the way this code operates. As a reminder, the original code is like:
>
> NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
>
> if (nid == NUMA_NO_NODE) {
> /* do something */
> } else if (nodes_allowed) {
> /* do something else */
> } else {
> nodes_allowed = &node_states[N_MEMORY];
> }
>
> So, the only way we get to that final else if if we can not allocate
> a node mask (kmalloc a few words). Right? I wonder why we should
> even try to continue in this case. Why not just return right there?

Simply returning on allocation failure looks better to me.
As you mentioned below, current behavior for this 'else' case is not
helpful for anyone.

Thanks,
Naoya Horiguchi

>
> The specified count value is either a request to increase number of
> huge pages or decrease. If we can't allocate a few words, we certainly
> are not going to find memory to create huge pages. There 'might' be
> surplus pages which can be converted to permanent pages. But remember
> this is a 'node specific' request and we can't allocate a mask to pass
> down to the conversion routines. So, chances are good we would operate
> on the wrong node. The same goes for a request to 'free' huge pages.
> Since, we can't allocate a node mask we are likely to free them from
> the wrong node.
>
> Unless my reasoning above is incorrect, I think that final else block
> in __nr_hugepages_store_common() is wrong.
>
> Any additional thoughts?
> --
> Mike Kravetz
>

2019-03-05 04:18:41

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 3/4/19 4:03 PM, Naoya Horiguchi wrote:
> On Tue, Feb 26, 2019 at 04:03:23PM -0800, Mike Kravetz wrote:
>> On 2/26/19 2:36 PM, Andrew Morton wrote:
> ...
>>>>
>>>> + } else {
>>>> /*
>>>> - * per node hstate attribute: adjust count to global,
>>>> - * but restrict alloc/free to the specified node.
>>>> + * Node specific request, but we could not allocate
>>>> + * node mask. Pass in ALL nodes, and clear nid.
>>>> */
>>>
>>> Ditto here, somewhat.
>
> # I missed this part when reviewing yesterday for some reason, sorry.
>
>>
>> I was just going to update the comments and send you a new patch, but
>> but your comment got me thinking about this situation. I did not really
>> change the way this code operates. As a reminder, the original code is like:
>>
>> NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
>>
>> if (nid == NUMA_NO_NODE) {
>> /* do something */
>> } else if (nodes_allowed) {
>> /* do something else */
>> } else {
>> nodes_allowed = &node_states[N_MEMORY];
>> }
>>
>> So, the only way we get to that final else if if we can not allocate
>> a node mask (kmalloc a few words). Right? I wonder why we should
>> even try to continue in this case. Why not just return right there?
>
> Simply returning on allocation failure looks better to me.
> As you mentioned below, current behavior for this 'else' case is not
> helpful for anyone.

Thanks Naoya. And, thank you Oscar for your suggestion.

I think the simplest thing to do would be to simply return in this case. In
practice, we will likely never hit the condition. If we do, the system is
really low on memory and there will be other more important issues.

The revised patch below updates comments as suggested, and returns -ENOMEM
if we can not allocate a node mask.

Andrew, this is on top of Alexandre Ghiti's "hugetlb: allow to free gigantic
pages regardless of the configuration" patch. Both patches modify
__nr_hugepages_store_common(). Alex's patch is going to change slightly
in this area. Let me know if there is something I can do to help make
merging easier.

From: Mike Kravetz <[email protected]>
Date: Mon, 4 Mar 2019 17:45:11 -0800
Subject: [PATCH] hugetlbfs: fix potential over/underflow setting node specific
nr_hugepages

The number of node specific huge pages can be set via a file such as:
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
When a node specific value is specified, the global number of huge
pages must also be adjusted. This adjustment is calculated as the
specified node specific value + (global value - current node value).
If the node specific value provided by the user is large enough, this
calculation could overflow an unsigned long leading to a smaller
than expected number of huge pages.

To fix, check the calculation for overflow. If overflow is detected,
use ULONG_MAX as the requested value. This is in line with the user
request to allocate as many huge pages as possible.

It was also noticed that the above calculation was done outside the
hugetlb_lock. Therefore, the values could be inconsistent and result
in underflow. To fix, the calculation is moved within the routine
set_max_huge_pages() where the lock is held.

In addition, the code in __nr_hugepages_store_common() which tries to
handle the case of not being able to allocate a node mask would likely
result in incorrect behavior. Luckily, it is very unlikely we will
ever take this path. If we do, simply return ENOMEM.

Reported-by: Jing Xiangfeng <[email protected]>
Signed-off-by: Mike Kravetz <[email protected]>
---
mm/hugetlb.c | 42 +++++++++++++++++++++++++++++++++---------
1 file changed, 33 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c5c4558e4a79..5a190a652cac 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
}

#define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static int set_max_huge_pages(struct hstate *h, unsigned long count,
+static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
nodemask_t *nodes_allowed)
{
unsigned long min_count, ret;
@@ -2289,6 +2289,28 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
goto decrease_pool;
}

+ spin_lock(&hugetlb_lock);
+
+ /*
+ * Check for a node specific request.
+ * Changing node specific huge page count may require a corresponding
+ * change to the global count. In any case, the passed node mask
+ * (nodes_allowed) will restrict alloc/free to the specified node.
+ */
+ if (nid != NUMA_NO_NODE) {
+ unsigned long old_count = count;
+
+ count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+ /*
+ * User may have specified a large count value which caused the
+ * above calculation to overflow. In this case, they wanted
+ * to allocate as many huge pages as possible. Set count to
+ * largest possible value to align with their intention.
+ */
+ if (count < old_count)
+ count = ULONG_MAX;
+ }
+
/*
* Increase the pool size
* First take pages out of surplus state. Then make up the
@@ -2300,7 +2322,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
* pool might be one hugepage larger than it needs to be, but
* within all the constraints specified by the sysctls.
*/
- spin_lock(&hugetlb_lock);
while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
if (!adjust_pool_surplus(h, nodes_allowed, -1))
break;
@@ -2421,16 +2442,19 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
nodes_allowed = &node_states[N_MEMORY];
}
} else if (nodes_allowed) {
+ /* Node specific request */
+ init_nodemask_of_node(nodes_allowed, nid);
+ } else {
/*
- * per node hstate attribute: adjust count to global,
- * but restrict alloc/free to the specified node.
+ * Node specific request, but we could not allocate the few
+ * words required for a node mask. We are unlikely to hit
+ * this condition. Since we can not pass down the appropriate
+ * node mask, just return ENOMEM.
*/
- count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
- init_nodemask_of_node(nodes_allowed, nid);
- } else
- nodes_allowed = &node_states[N_MEMORY];
+ return -ENOMEM;
+ }

- err = set_max_huge_pages(h, count, nodes_allowed);
+ err = set_max_huge_pages(h, count, nid, nodes_allowed);
if (err)
goto out;

--
2.17.2


2019-03-05 21:20:21

by Andrew Morton

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Mon, 4 Mar 2019 20:15:40 -0800 Mike Kravetz <[email protected]> wrote:

> Andrew, this is on top of Alexandre Ghiti's "hugetlb: allow to free gigantic
> pages regardless of the configuration" patch. Both patches modify
> __nr_hugepages_store_common(). Alex's patch is going to change slightly
> in this area.

OK, thanks, I missed that. Are the changes significant?

2019-03-05 21:44:42

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 3/5/19 1:16 PM, Andrew Morton wrote:
> On Mon, 4 Mar 2019 20:15:40 -0800 Mike Kravetz <[email protected]> wrote:
>
>> Andrew, this is on top of Alexandre Ghiti's "hugetlb: allow to free gigantic
>> pages regardless of the configuration" patch. Both patches modify
>> __nr_hugepages_store_common(). Alex's patch is going to change slightly
>> in this area.
>
> OK, thanks, I missed that. Are the changes significant?
>

No, changes should be minor. IIRC, just checking for a condition in an
error path.

--
Mike Kravetz

2019-03-05 22:04:58

by Alexandre Ghiti

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 3/5/19 4:35 PM, Mike Kravetz wrote:
> On 3/5/19 1:16 PM, Andrew Morton wrote:
>> On Mon, 4 Mar 2019 20:15:40 -0800 Mike Kravetz <[email protected]> wrote:
>>
>>> Andrew, this is on top of Alexandre Ghiti's "hugetlb: allow to free gigantic
>>> pages regardless of the configuration" patch. Both patches modify
>>> __nr_hugepages_store_common(). Alex's patch is going to change slightly
>>> in this area.
>> OK, thanks, I missed that. Are the changes significant?
>>
> No, changes should be minor. IIRC, just checking for a condition in an
> error path.
I will send v5 of this patch tomorrow; I was waiting for remarks from the
architecture maintainers. I am still missing an ack for sh, but I'm confident
in this change.

Alex

2019-03-06 09:50:44

by Oscar Salvador

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Mon, Mar 04, 2019 at 08:15:40PM -0800, Mike Kravetz wrote:
> In addition, the code in __nr_hugepages_store_common() which tries to
> handle the case of not being able to allocate a node mask would likely
> result in incorrect behavior. Luckily, it is very unlikely we will
> ever take this path. If we do, simply return ENOMEM.

Hi Mike,

I still think that we could just get rid of the NODEMASK_ALLOC machinery
here; it adds needless complexity IMHO.
Note that before commit 5df66d306ec9 ("mm: fix comment for NODEMASK_ALLOC"),
the comment about the size was wrong, showing a much bigger size than it
actually is, and I would not be surprised if people started to add
NODEMASK_ALLOC here and there because of that.

Actually, there has been some talk about removing NODEMASK_ALLOC altogether,
but some further checks must be done first.

> Reported-by: Jing Xiangfeng <[email protected]>
> Signed-off-by: Mike Kravetz <[email protected]>

But the overall change looks good to me:

Reviewed-by: Oscar Salvador <[email protected]>

> ---
> mm/hugetlb.c | 42 +++++++++++++++++++++++++++++++++---------
> 1 file changed, 33 insertions(+), 9 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index c5c4558e4a79..5a190a652cac 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
> }
>
> #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> nodemask_t *nodes_allowed)
> {
> unsigned long min_count, ret;
> @@ -2289,6 +2289,28 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
> goto decrease_pool;
> }
>
> + spin_lock(&hugetlb_lock);
> +
> + /*
> + * Check for a node specific request.
> + * Changing node specific huge page count may require a corresponding
> + * change to the global count. In any case, the passed node mask
> + * (nodes_allowed) will restrict alloc/free to the specified node.
> + */
> + if (nid != NUMA_NO_NODE) {
> + unsigned long old_count = count;
> +
> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> + /*
> + * User may have specified a large count value which caused the
> + * above calculation to overflow. In this case, they wanted
> + * to allocate as many huge pages as possible. Set count to
> + * largest possible value to align with their intention.
> + */
> + if (count < old_count)
> + count = ULONG_MAX;
> + }
> +
> /*
> * Increase the pool size
> * First take pages out of surplus state. Then make up the
> @@ -2300,7 +2322,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
> * pool might be one hugepage larger than it needs to be, but
> * within all the constraints specified by the sysctls.
> */
> - spin_lock(&hugetlb_lock);
> while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
> if (!adjust_pool_surplus(h, nodes_allowed, -1))
> break;
> @@ -2421,16 +2442,19 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
> nodes_allowed = &node_states[N_MEMORY];
> }
> } else if (nodes_allowed) {
> + /* Node specific request */
> + init_nodemask_of_node(nodes_allowed, nid);
> + } else {
> /*
> - * per node hstate attribute: adjust count to global,
> - * but restrict alloc/free to the specified node.
> + * Node specific request, but we could not allocate the few
> + * words required for a node mask. We are unlikely to hit
> + * this condition. Since we can not pass down the appropriate
> + * node mask, just return ENOMEM.
> */
> - count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> - init_nodemask_of_node(nodes_allowed, nid);
> - } else
> - nodes_allowed = &node_states[N_MEMORY];
> + return -ENOMEM;
> + }
>
> - err = set_max_huge_pages(h, count, nodes_allowed);
> + err = set_max_huge_pages(h, count, nid, nodes_allowed);
> if (err)
> goto out;
>
> --
> 2.17.2
>

--
Oscar Salvador
SUSE L3

2019-03-07 00:19:03

by Mike Kravetz

Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On 3/6/19 1:41 AM, Oscar Salvador wrote:
> On Mon, Mar 04, 2019 at 08:15:40PM -0800, Mike Kravetz wrote:
>> In addition, the code in __nr_hugepages_store_common() which tries to
>> handle the case of not being able to allocate a node mask would likely
>> result in incorrect behavior. Luckily, it is very unlikely we will
>> ever take this path. If we do, simply return ENOMEM.
>
> Hi Mike,
>
> I still thnk that we could just get rid of the NODEMASK_ALLOC machinery
> here, it adds a needlessly complexity IMHO.
> Note that before "(5df66d306ec9: mm: fix comment for NODEMASK_ALLOC)",
> the comment about the size was wrong, showing a much bigger size that it
> actually was, and I would not be surprised if people started to add
> NODEMASK_ALLOC here and there because of that.
>
> Actually, there was a little talk about removing NODEMASK_ALLOC altogether,
> but some further checks must be done before.

Thanks for the information. I too saw or remembered a large byte value. :(
A quick grep doesn't reveal any configurable way to get NODES_SHIFT larger
than 10. Of course, that could change. So, it does seem a bit funny that
NODEMASK_ALLOC() kicks into dynamic allocation mode with NODES_SHIFT > 8.
Although, my desktop distro has NODES_SHIFT set to 10.
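
For reference, the macro in include/linux/nodemask.h looks roughly like this
(paraphrased from memory, so double check the exact tree before relying on it):

	#if NODES_SHIFT > 8 /* nodemask_t > 32 bytes */
	#define NODEMASK_ALLOC(type, name, gfp_flags)	\
			type *name = kmalloc(sizeof(*name), gfp_flags)
	#define NODEMASK_FREE(m)			kfree(m)
	#else
	#define NODEMASK_ALLOC(type, name, gfp_flags)	type _##name, *name = &_##name
	#define NODEMASK_FREE(m)			do {} while (0)
	#endif

So with NODES_SHIFT == 10 we always take the kmalloc path here, even though
the mask itself is only 128 bytes.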

>> Reported-by: Jing Xiangfeng <[email protected]>
>> Signed-off-by: Mike Kravetz <[email protected]>
>
> But the overall change looks good to me:
>
> Reviewed-by: Oscar Salvador <[email protected]>

Thanks.
I'm going to leave it as is for now and put off removal of the dynamic
allocation until a later time. Unless, of course, you get around to removing
NODEMASK_ALLOC altogether. :)
--
Mike Kravetz