2024-02-06 03:08:46

by Baolin Wang

Subject: [PATCH v3] mm: hugetlb: improve the handling of hugetlb allocation failure for freed or in-use hugetlb

alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before
it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and
therefore the newly allocated page is just freed right away. This is
wasteful and it might cause premature failures in those cases.

Address that by moving the allocation down to the only case that needs
it (the hugetlb page is really in the free pages pool). We need to drop
hugetlb_lock to do so and therefore need to recheck the page state after
regaining it.

The patch is more of a cleanup than an actual fix for an existing
problem. There are no known reports of premature failures.

Signed-off-by: Baolin Wang <[email protected]>
---
Changes from v2:
- Update the commit message as suggested by Michal.
- Remove unnecessary comments.
Changes from v1:
- Update the subject line per Muchun.
- Move the allocation into the free hugetlb handling branch per Michal.
---
mm/hugetlb.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9d996fe4ecd9..a05507a2143f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3031,21 +3031,9 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
 	int nid = folio_nid(old_folio);
-	struct folio *new_folio;
+	struct folio *new_folio = NULL;
 	int ret = 0;
 
-	/*
-	 * Before dissolving the folio, we need to allocate a new one for the
-	 * pool to remain stable. Here, we allocate the folio and 'prep' it
-	 * by doing everything but actually updating counters and adding to
-	 * the pool. This simplifies and let us do most of the processing
-	 * under the lock.
-	 */
-	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
-	if (!new_folio)
-		return -ENOMEM;
-	__prep_new_hugetlb_folio(h, new_folio);
-
 retry:
 	spin_lock_irq(&hugetlb_lock);
 	if (!folio_test_hugetlb(old_folio)) {
@@ -3075,6 +3063,16 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 		cond_resched();
 		goto retry;
 	} else {
+		if (!new_folio) {
+			spin_unlock_irq(&hugetlb_lock);
+			new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,
+							      NULL, NULL);
+			if (!new_folio)
+				return -ENOMEM;
+			__prep_new_hugetlb_folio(h, new_folio);
+			goto retry;
+		}
+
 		/*
 		 * Ok, old_folio is still a genuine free hugepage. Remove it from
 		 * the freelist and decrease the counters. These will be
@@ -3102,9 +3100,11 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 
 free_new:
 	spin_unlock_irq(&hugetlb_lock);
-	/* Folio has a zero ref count, but needs a ref to be freed */
-	folio_ref_unfreeze(new_folio, 1);
-	update_and_free_hugetlb_folio(h, new_folio, false);
+	if (new_folio) {
+		/* Folio has a zero ref count, but needs a ref to be freed */
+		folio_ref_unfreeze(new_folio, 1);
+		update_and_free_hugetlb_folio(h, new_folio, false);
+	}
 
 	return ret;
 }
--
2.39.3
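
For context, the pattern the patch introduces can be shown in a minimal,
self-contained sketch: defer the allocation to the only branch that needs
it, drop the lock around the allocation, and recheck the state after the
lock is reacquired. This is ordinary pthreads C rather than kernel code,
and all names in it (struct item, replace_pooled_item, pool_lock) are
hypothetical.

/*
 * Userspace sketch of the drop-lock/allocate/recheck retry pattern.
 * Hypothetical names; not kernel code.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

enum item_state { ITEM_GONE, ITEM_IN_USE, ITEM_FREE };

struct item {
	enum item_state state;
};

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Replace a free pooled item with a freshly allocated one. */
static int replace_pooled_item(struct item **slot)
{
	struct item *new_item = NULL;	/* allocated lazily, like new_folio */
	int ret = 0;

retry:
	pthread_mutex_lock(&pool_lock);
	if ((*slot)->state == ITEM_GONE) {
		/* Already gone: nothing to do, no replacement needed. */
		goto unlock;
	} else if ((*slot)->state == ITEM_IN_USE) {
		ret = -1;	/* busy: let the caller decide what to do */
		goto unlock;
	}

	/* ITEM_FREE: the only case that actually needs a replacement. */
	if (!new_item) {
		/* Drop the lock around the potentially slow allocation. */
		pthread_mutex_unlock(&pool_lock);
		new_item = calloc(1, sizeof(*new_item));
		if (!new_item)
			return -1;	/* would be -ENOMEM in the kernel */
		new_item->state = ITEM_FREE;
		/* The state may have changed while unlocked: recheck. */
		goto retry;
	}

	/* Still free on the recheck: swap in the replacement. */
	free(*slot);
	*slot = new_item;
	new_item = NULL;	/* consumed, must not be freed below */

unlock:
	pthread_mutex_unlock(&pool_lock);
	free(new_item);		/* only non-NULL if the recheck raced */
	return ret;
}

int main(void)
{
	struct item *it = calloc(1, sizeof(*it));

	if (!it)
		return 1;
	it->state = ITEM_FREE;
	printf("replace_pooled_item: %d\n", replace_pooled_item(&it));
	free(it);
	return 0;
}

As in the patch's free_new path, a replacement that turns out to be
surplus after the recheck is simply freed once the lock has been dropped
again.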



2024-02-06 10:09:32

by Michal Hocko

Subject: Re: [PATCH v3] mm: hugetlb: improve the handling of hugetlb allocation failure for freed or in-use hugetlb

On Tue 06-02-24 11:08:11, Baolin Wang wrote:
> alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before
> it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and
> therefore the newly allocated page is just freed right away. This is
> wasteful and it might cause premature failures in those cases.
>
> Address that by moving the allocation down to the only case that needs
> it (the hugetlb page is really in the free pages pool). We need to drop
> hugetlb_lock to do so and therefore need to recheck the page state after
> regaining it.
>
> The patch is more of a cleanup than an actual fix for an existing
> problem. There are no known reports of premature failures.
>
> Signed-off-by: Baolin Wang <[email protected]>

Acked-by: Michal Hocko <[email protected]>
Thanks!

> [...]

--
Michal Hocko
SUSE Labs

2024-02-07 02:25:45

by Muchun Song

Subject: Re: [PATCH v3] mm: hugetlb: improve the handling of hugetlb allocation failure for freed or in-use hugetlb



> On Feb 6, 2024, at 11:08, Baolin Wang <[email protected]> wrote:
>
> alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before
> it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and
> therefore the newly allocated page is just freed right away. This is
> wasteful and it might cause premature failures in those cases.
>
> Address that by moving the allocation down to the only case that needs
> it (the hugetlb page is really in the free pages pool). We need to drop
> hugetlb_lock to do so and therefore need to recheck the page state after
> regaining it.
>
> The patch is more of a cleanup than an actual fix for an existing
> problem. There are no known reports of premature failures.
>
> Signed-off-by: Baolin Wang <[email protected]>

Reviewed-by: Muchun Song <[email protected]>

Thanks

> [...]