2023-10-03 16:31:47

by Yajun Deng

Subject: [PATCH v3] memblock: don't run loop in memblock_add_range() twice

memblock_add_range() currently iterates over the regions twice. The first
round counts the number of regions needed to accommodate the new area;
the second actually inserts them. But the first round isn't really
needed: we only need to check the count before inserting.

Check the count before calling memblock_insert_region(). If the count
has reached the maximum, resize the array first; otherwise, insert
directly.
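
For illustration only, here is a minimal userspace model of this
check-before-insert pattern (toy names such as toy_insert(), not the
actual memblock code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct toy_region { unsigned long base, size; };

static struct toy_region *regions;
static unsigned long cnt, max = 4;

/* Grow the backing array when it is full (stand-in for memblock_double_array()). */
static int toy_double_array(void)
{
	struct toy_region *bigger = realloc(regions, 2 * max * sizeof(*bigger));

	if (!bigger)
		return -1;
	regions = bigger;
	max *= 2;
	return 0;
}

/* Check the count right before inserting; resize only on demand. */
static int toy_insert(unsigned long idx, unsigned long base, unsigned long size)
{
	if (cnt == max && toy_double_array())
		return -1;
	memmove(&regions[idx + 1], &regions[idx],
		(cnt - idx) * sizeof(*regions));
	regions[idx] = (struct toy_region){ base, size };
	cnt++;
	return 0;
}

int main(void)
{
	regions = calloc(max, sizeof(*regions));
	if (!regions)
		return 1;
	for (unsigned long i = 0; i < 10; i++)
		toy_insert(i, i * 0x1000, 0x1000);
	printf("cnt=%lu max=%lu\n", cnt, max);	/* grows from 4 to 16 on demand */
	free(regions);
	return 0;
}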

Also, there is a nested call here; we need to reserve the current array
immediately if slab is unavailable.

Signed-off-by: Yajun Deng <[email protected]>
---
v3: reserve the current array immediately if slab is unavailable.
v2: remove the changes of memblock_double_array.
v1: https://lore.kernel.org/all/[email protected]/
---
mm/memblock.c | 93 +++++++++++++++++++++++----------------------------
1 file changed, 41 insertions(+), 52 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 5a88d6d24d79..71449c0b8bc8 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -588,11 +588,12 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
phys_addr_t base, phys_addr_t size,
int nid, enum memblock_flags flags)
{
- bool insert = false;
phys_addr_t obase = base;
phys_addr_t end = base + memblock_cap_size(base, &size);
- int idx, nr_new, start_rgn = -1, end_rgn;
+ int idx, start_rgn = -1, end_rgn;
struct memblock_region *rgn;
+ int use_slab = slab_is_available();
+ unsigned long ocnt = type->cnt;

if (!size)
return 0;
@@ -608,25 +609,6 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
return 0;
}

- /*
- * The worst case is when new range overlaps all existing regions,
- * then we'll need type->cnt + 1 empty regions in @type. So if
- * type->cnt * 2 + 1 is less than or equal to type->max, we know
- * that there is enough empty regions in @type, and we can insert
- * regions directly.
- */
- if (type->cnt * 2 + 1 <= type->max)
- insert = true;
-
-repeat:
- /*
- * The following is executed twice. Once with %false @insert and
- * then with %true. The first counts the number of regions needed
- * to accommodate the new area. The second actually inserts them.
- */
- base = obase;
- nr_new = 0;
-
for_each_memblock_type(idx, type, rgn) {
phys_addr_t rbase = rgn->base;
phys_addr_t rend = rbase + rgn->size;
@@ -644,15 +626,30 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
WARN_ON(nid != memblock_get_region_node(rgn));
#endif
WARN_ON(flags != rgn->flags);
- nr_new++;
- if (insert) {
- if (start_rgn == -1)
- start_rgn = idx;
- end_rgn = idx + 1;
- memblock_insert_region(type, idx++, base,
- rbase - base, nid,
- flags);
+
+ /*
+ * If type->cnt is equal to type->max, it means there are
+ * not enough empty regions and the array needs to be
+ * resized. Otherwise, insert it directly.
+ *
+ * If slab is unavailable, it means a new array was reserved
+ * in memblock_double_array(). There is a nested call here; we
+ * need to reserve the current array now if its type is
+ * reserved.
+ */
+ if (type->cnt == type->max) {
+ if (memblock_double_array(type, obase, size))
+ return -ENOMEM;
+ else if (!use_slab && type == &memblock.reserved)
+ return memblock_reserve(obase, size);
}
+
+ if (start_rgn == -1)
+ start_rgn = idx;
+ end_rgn = idx + 1;
+ memblock_insert_region(type, idx++, base,
+ rbase - base, nid,
+ flags);
}
/* area below @rend is dealt with, forget about it */
base = min(rend, end);
@@ -660,33 +657,25 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,

/* insert the remaining portion */
if (base < end) {
- nr_new++;
- if (insert) {
- if (start_rgn == -1)
- start_rgn = idx;
- end_rgn = idx + 1;
- memblock_insert_region(type, idx, base, end - base,
- nid, flags);
+
+ if (type->cnt == type->max) {
+ if (memblock_double_array(type, obase, size))
+ return -ENOMEM;
+ else if (!use_slab && type == &memblock.reserved)
+ return memblock_reserve(obase, size);
}
- }

- if (!nr_new)
- return 0;
+ if (start_rgn == -1)
+ start_rgn = idx;
+ end_rgn = idx + 1;
+ memblock_insert_region(type, idx, base, end - base,
+ nid, flags);
+ }

- /*
- * If this was the first round, resize array and repeat for actual
- * insertions; otherwise, merge and return.
- */
- if (!insert) {
- while (type->cnt + nr_new > type->max)
- if (memblock_double_array(type, obase, size) < 0)
- return -ENOMEM;
- insert = true;
- goto repeat;
- } else {
+ if (ocnt != type->cnt)
memblock_merge_regions(type, start_rgn, end_rgn);
- return 0;
- }
+
+ return 0;
}

/**
--
2.25.1


2023-10-05 15:39:50

by Mike Rapoport

Subject: Re: [PATCH v3] memblock: don't run loop in memblock_add_range() twice

On Wed, Oct 04, 2023 at 12:30:45AM +0800, Yajun Deng wrote:
> memblock_add_range() currently iterates over the regions twice. The first
> round counts the number of regions needed to accommodate the new area;
> the second actually inserts them. But the first round isn't really
> needed: we only need to check the count before inserting.
>
> Check the count before calling memblock_insert_region(). If the count
> has reached the maximum, resize the array first; otherwise, insert
> directly.
>
> Also, there is a nested call here; we need to reserve the current array
> immediately if slab is unavailable.

I presume this fixes a bug you found in v2, but are you sure it'll _never_
explode on a machine with a different memory layout and a different sequence
of memblock_reserve() calls?

I don't see this micro-optimization is worth the churn and potential bugs.
NAK.

> Signed-off-by: Yajun Deng <[email protected]>
> ---
> v3: reserve the current array immediately if slab is unavailable.
> v2: remove the changes of memblock_double_array.
> v1: https://lore.kernel.org/all/[email protected]/
> ---
> mm/memblock.c | 93 +++++++++++++++++++++++----------------------------
> 1 file changed, 41 insertions(+), 52 deletions(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 5a88d6d24d79..71449c0b8bc8 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -588,11 +588,12 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
> phys_addr_t base, phys_addr_t size,
> int nid, enum memblock_flags flags)
> {
> - bool insert = false;
> phys_addr_t obase = base;
> phys_addr_t end = base + memblock_cap_size(base, &size);
> - int idx, nr_new, start_rgn = -1, end_rgn;
> + int idx, start_rgn = -1, end_rgn;
> struct memblock_region *rgn;
> + int use_slab = slab_is_available();
> + unsigned long ocnt = type->cnt;
>
> if (!size)
> return 0;
> @@ -608,25 +609,6 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
> return 0;
> }
>
> - /*
> - * The worst case is when new range overlaps all existing regions,
> - * then we'll need type->cnt + 1 empty regions in @type. So if
> - * type->cnt * 2 + 1 is less than or equal to type->max, we know
> - * that there is enough empty regions in @type, and we can insert
> - * regions directly.
> - */
> - if (type->cnt * 2 + 1 <= type->max)
> - insert = true;
> -
> -repeat:
> - /*
> - * The following is executed twice. Once with %false @insert and
> - * then with %true. The first counts the number of regions needed
> - * to accommodate the new area. The second actually inserts them.
> - */
> - base = obase;
> - nr_new = 0;
> -
> for_each_memblock_type(idx, type, rgn) {
> phys_addr_t rbase = rgn->base;
> phys_addr_t rend = rbase + rgn->size;
> @@ -644,15 +626,30 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
> WARN_ON(nid != memblock_get_region_node(rgn));
> #endif
> WARN_ON(flags != rgn->flags);
> - nr_new++;
> - if (insert) {
> - if (start_rgn == -1)
> - start_rgn = idx;
> - end_rgn = idx + 1;
> - memblock_insert_region(type, idx++, base,
> - rbase - base, nid,
> - flags);
> +
> + /*
> + * If type->cnt is equal to type->max, it means there's
> + * not enough empty region and the array needs to be
> + * resized. Otherwise, insert it directly.
> + *
> + * If slab is unavailable, it means a new array was reserved
> + * in memblock_double_array. There is a nested call here, We
> + * need to reserve the current array now if its type is
> + * reserved.
> + */
> + if (type->cnt == type->max) {
> + if (memblock_double_array(type, obase, size))
> + return -ENOMEM;
> + else if (!use_slab && type == &memblock.reserved)
> + return memblock_reserve(obase, size);
> }
> +
> + if (start_rgn == -1)
> + start_rgn = idx;
> + end_rgn = idx + 1;
> + memblock_insert_region(type, idx++, base,
> + rbase - base, nid,
> + flags);
> }
> /* area below @rend is dealt with, forget about it */
> base = min(rend, end);
> @@ -660,33 +657,25 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>
> /* insert the remaining portion */
> if (base < end) {
> - nr_new++;
> - if (insert) {
> - if (start_rgn == -1)
> - start_rgn = idx;
> - end_rgn = idx + 1;
> - memblock_insert_region(type, idx, base, end - base,
> - nid, flags);
> +
> + if (type->cnt == type->max) {
> + if (memblock_double_array(type, obase, size))
> + return -ENOMEM;
> + else if (!use_slab && type == &memblock.reserved)
> + return memblock_reserve(obase, size);
> }
> - }
>
> - if (!nr_new)
> - return 0;
> + if (start_rgn == -1)
> + start_rgn = idx;
> + end_rgn = idx + 1;
> + memblock_insert_region(type, idx, base, end - base,
> + nid, flags);
> + }
>
> - /*
> - * If this was the first round, resize array and repeat for actual
> - * insertions; otherwise, merge and return.
> - */
> - if (!insert) {
> - while (type->cnt + nr_new > type->max)
> - if (memblock_double_array(type, obase, size) < 0)
> - return -ENOMEM;
> - insert = true;
> - goto repeat;
> - } else {
> + if (ocnt != type->cnt)
> memblock_merge_regions(type, start_rgn, end_rgn);
> - return 0;
> - }
> +
> + return 0;
> }
>
> /**
> --
> 2.25.1
>

--
Sincerely yours,
Mike.

2023-10-05 16:35:07

by Yajun Deng

Subject: Re: [PATCH v3] memblock: don't run loop in memblock_add_range() twice


On 2023/10/5 13:19, Mike Rapoport wrote:
> On Wed, Oct 04, 2023 at 12:30:45AM +0800, Yajun Deng wrote:
>> memblock_add_range() currently iterates over the regions twice. The first
>> round counts the number of regions needed to accommodate the new area;
>> the second actually inserts them. But the first round isn't really
>> needed: we only need to check the count before inserting.
>>
>> Check the count before calling memblock_insert_region(). If the count
>> has reached the maximum, resize the array first; otherwise, insert
>> directly.
>>
>> Also, there is a nested call here; we need to reserve the current array
>> immediately if slab is unavailable.
> I presume this fixes a bug you found in v2, but are you sure it'll _never_
> explode on a machine with a different memory layout and a different sequence
> of memblock_reserve() calls?


Not really. It has become complex because it has to deal with the
nested call.

I think v1 is the best solution, if you accept it. It doesn't need to
deal with the nested call, so it would be safer.

I don't think we must do memblock_reserve() in memblock_double_array(). The
parameters 'addr' and 'new_alloc_size' passed to memblock_reserve() there
come from memblock_find_in_range(). Since we can handle these parameters
outside of memblock_find_in_range(), we can also handle 'addr' and
'new_alloc_size' outside of memblock_double_array().
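
Roughly what I have in mind, as a toy userspace sketch (the helper names and
the extra output parameters below are made up for illustration; this is not
the actual v1 diff):

#include <stdio.h>
#include <stdlib.h>

/*
 * The resize helper only allocates the new array and reports where it was
 * placed; the caller then records ("reserves") that range itself, so the
 * helper never has to call back into the bookkeeping code.
 */
static void *grow_array(void *old, size_t new_size,
			void **new_addr, size_t *new_alloc_size)
{
	void *p = realloc(old, new_size);

	if (p) {
		*new_addr = p;
		*new_alloc_size = new_size;
	}
	return p;
}

/* Stand-in for the caller's own reserve step. */
static void reserve_range(void *addr, size_t size)
{
	printf("caller reserves %zu bytes at %p\n", size, addr);
}

int main(void)
{
	void *arr = malloc(64), *addr, *bigger;
	size_t size;

	bigger = grow_array(arr, 128, &addr, &size);
	if (!bigger) {
		free(arr);
		return 1;
	}
	reserve_range(addr, size);	/* done by the caller, not inside grow_array() */
	free(bigger);
	return 0;
}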

> I don't see this micro-optimization is worth the churn and potential bugs.
> NAK.


There are many handouts that tell people the loop in memblock_add_range()
needs to run twice.

I think it's time to change this. I'm trying to show people that running
it twice is unnecessary.

As in v1, we just need to check the count and handle memblock_reserve()
outside of memblock_double_array().


>
>> Signed-off-by: Yajun Deng <[email protected]>
>> ---
>> v3: reserve the current array immediately if slab is unavailable.
>> v2: remove the changes of memblock_double_array.
>> v1: https://lore.kernel.org/all/[email protected]/
>> ---
>> mm/memblock.c | 93 +++++++++++++++++++++++----------------------------
>> 1 file changed, 41 insertions(+), 52 deletions(-)
>>
>> diff --git a/mm/memblock.c b/mm/memblock.c
>> index 5a88d6d24d79..71449c0b8bc8 100644
>> --- a/mm/memblock.c
>> +++ b/mm/memblock.c
>> @@ -588,11 +588,12 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>> phys_addr_t base, phys_addr_t size,
>> int nid, enum memblock_flags flags)
>> {
>> - bool insert = false;
>> phys_addr_t obase = base;
>> phys_addr_t end = base + memblock_cap_size(base, &size);
>> - int idx, nr_new, start_rgn = -1, end_rgn;
>> + int idx, start_rgn = -1, end_rgn;
>> struct memblock_region *rgn;
>> + int use_slab = slab_is_available();
>> + unsigned long ocnt = type->cnt;
>>
>> if (!size)
>> return 0;
>> @@ -608,25 +609,6 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>> return 0;
>> }
>>
>> - /*
>> - * The worst case is when new range overlaps all existing regions,
>> - * then we'll need type->cnt + 1 empty regions in @type. So if
>> - * type->cnt * 2 + 1 is less than or equal to type->max, we know
>> - * that there is enough empty regions in @type, and we can insert
>> - * regions directly.
>> - */
>> - if (type->cnt * 2 + 1 <= type->max)
>> - insert = true;
>> -
>> -repeat:
>> - /*
>> - * The following is executed twice. Once with %false @insert and
>> - * then with %true. The first counts the number of regions needed
>> - * to accommodate the new area. The second actually inserts them.
>> - */
>> - base = obase;
>> - nr_new = 0;
>> -
>> for_each_memblock_type(idx, type, rgn) {
>> phys_addr_t rbase = rgn->base;
>> phys_addr_t rend = rbase + rgn->size;
>> @@ -644,15 +626,30 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>> WARN_ON(nid != memblock_get_region_node(rgn));
>> #endif
>> WARN_ON(flags != rgn->flags);
>> - nr_new++;
>> - if (insert) {
>> - if (start_rgn == -1)
>> - start_rgn = idx;
>> - end_rgn = idx + 1;
>> - memblock_insert_region(type, idx++, base,
>> - rbase - base, nid,
>> - flags);
>> +
>> + /*
>> + * If type->cnt is equal to type->max, it means there's
>> + * not enough empty region and the array needs to be
>> + * resized. Otherwise, insert it directly.
>> + *
>> + * If slab is unavailable, it means a new array was reserved
>> + * in memblock_double_array. There is a nested call here, We
>> + * need to reserve the current array now if its type is
>> + * reserved.
>> + */
>> + if (type->cnt == type->max) {
>> + if (memblock_double_array(type, obase, size))
>> + return -ENOMEM;
>> + else if (!use_slab && type == &memblock.reserved)
>> + return memblock_reserve(obase, size);
>> }
>> +
>> + if (start_rgn == -1)
>> + start_rgn = idx;
>> + end_rgn = idx + 1;
>> + memblock_insert_region(type, idx++, base,
>> + rbase - base, nid,
>> + flags);
>> }
>> /* area below @rend is dealt with, forget about it */
>> base = min(rend, end);
>> @@ -660,33 +657,25 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>>
>> /* insert the remaining portion */
>> if (base < end) {
>> - nr_new++;
>> - if (insert) {
>> - if (start_rgn == -1)
>> - start_rgn = idx;
>> - end_rgn = idx + 1;
>> - memblock_insert_region(type, idx, base, end - base,
>> - nid, flags);
>> +
>> + if (type->cnt == type->max) {
>> + if (memblock_double_array(type, obase, size))
>> + return -ENOMEM;
>> + else if (!use_slab && type == &memblock.reserved)
>> + return memblock_reserve(obase, size);
>> }
>> - }
>>
>> - if (!nr_new)
>> - return 0;
>> + if (start_rgn == -1)
>> + start_rgn = idx;
>> + end_rgn = idx + 1;
>> + memblock_insert_region(type, idx, base, end - base,
>> + nid, flags);
>> + }
>>
>> - /*
>> - * If this was the first round, resize array and repeat for actual
>> - * insertions; otherwise, merge and return.
>> - */
>> - if (!insert) {
>> - while (type->cnt + nr_new > type->max)
>> - if (memblock_double_array(type, obase, size) < 0)
>> - return -ENOMEM;
>> - insert = true;
>> - goto repeat;
>> - } else {
>> + if (ocnt != type->cnt)
>> memblock_merge_regions(type, start_rgn, end_rgn);
>> - return 0;
>> - }
>> +
>> + return 0;
>> }
>>
>> /**
>> --
>> 2.25.1
>>

2023-10-12 09:26:14

by Mike Rapoport

Subject: Re: [PATCH v3] memblock: don't run loop in memblock_add_range() twice

On Thu, Oct 05, 2023 at 11:20:19PM +0800, Yajun Deng wrote:
>
> On 2023/10/5 13:19, Mike Rapoport wrote:
>
> > I don't see this micro-optimization is worth the churn and potential bugs.
> > NAK.
>
> There are many handouts that tell people the loop in memblock_add_range()
> needs to run twice.
>
> I think it's time to change this. I'm trying to show people that running
> it twice is unnecessary.

It might be unnecessary, but it's still simpler than the solutions you
proposed. And, again, removing the second loop is not worth the churn and
potential bugs.

--
Sincerely yours,
Mike.

2023-10-12 09:28:51

by Yajun Deng

Subject: Re: [PATCH v3] memblock: don't run loop in memblock_add_range() twice


On 2023/10/12 17:24, Mike Rapoport wrote:
> On Thu, Oct 05, 2023 at 11:20:19PM +0800, Yajun Deng wrote:
>> On 2023/10/5 13:19, Mike Rapoport wrote:
>>
>>> I don't see this micro-optimization is worth the churn and potential bugs.
>>> NAK.
>> There are many handouts that tell people the loop in memblock_add_range()
>> needs to run twice.
>>
>> I think it's time to change this. I'm trying to show people that running
>> it twice is unnecessary.
> It might be unnecessary, but it's still simpler than the solutions you
> proposed. And, again, removing the second loop is not worth the churn and
> potential bugs.
>


Okay, I got it. Thanks.