2023-02-22 19:54:02

by Peter Xu

Subject: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors

If the memory charge fails, instead of returning the hpage along with an
error, let the function clean up the folio properly, which is what a
function should normally do in this case: either return successfully, or
return with an error and no side effects from a partial run.

This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
on either the anon or shmem path (even though it is safe to do so).

Cc: Yang Shi <[email protected]>
Reviewed-by: David Stevens <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
v1->v2:
- Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
---
mm/khugepaged.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8dbc39896811..941d1c7ea910 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
GFP_TRANSHUGE);
int node = hpage_collapse_find_target_node(cc);
+ struct folio *folio;

if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
return SCAN_ALLOC_HUGE_PAGE_FAIL;
- if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
+
+ folio = page_folio(*hpage);
+ if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
+ folio_put(folio);
+ *hpage = NULL;
return SCAN_CGROUP_CHARGE_FAIL;
+ }
count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
+
return SCAN_SUCCEED;
}

--
2.39.1



2023-02-22 22:53:55

by Yang Shi

Subject: Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors

On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <[email protected]> wrote:
>
> If the memory charge fails, instead of returning the hpage along with an
> error, let the function clean up the folio properly, which is what a
> function should normally do in this case: either return successfully, or
> return with an error and no side effects from a partial run.
>
> This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
> on either the anon or shmem path (even though it is safe to do so).

Thanks for the cleanup. Reviewed-by: Yang Shi <[email protected]>

>
> Cc: Yang Shi <[email protected]>
> Reviewed-by: David Stevens <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> v1->v2:
> - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> ---
> mm/khugepaged.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 8dbc39896811..941d1c7ea910 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> GFP_TRANSHUGE);
> int node = hpage_collapse_find_target_node(cc);
> + struct folio *folio;
>
> if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> return SCAN_ALLOC_HUGE_PAGE_FAIL;
> - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> +
> + folio = page_folio(*hpage);
> + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> + folio_put(folio);
> + *hpage = NULL;
> return SCAN_CGROUP_CHARGE_FAIL;
> + }
> count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> +
> return SCAN_SUCCEED;
> }
>
> --
> 2.39.1
>

2023-03-02 23:22:17

by Zach O'Keefe

Subject: Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors

On Feb 22 14:53, Yang Shi wrote:
> On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <[email protected]> wrote:
> >
> > If the memory charge fails, instead of returning the hpage along with
> > an error, let the function clean up the folio properly, which is what
> > a function should normally do in this case: either return successfully,
> > or return with an error and no side effects from a partial run.
> >
> > This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
> > on either the anon or shmem path (even though it is safe to do so).
>
> Thanks for the cleanup. Reviewed-by: Yang Shi <[email protected]>
>
> >
> > Cc: Yang Shi <[email protected]>
> > Reviewed-by: David Stevens <[email protected]>
> > Acked-by: Johannes Weiner <[email protected]>
> > Signed-off-by: Peter Xu <[email protected]>
> > ---
> > v1->v2:
> > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > ---
> > mm/khugepaged.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 8dbc39896811..941d1c7ea910 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> > gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > GFP_TRANSHUGE);
> > int node = hpage_collapse_find_target_node(cc);
> > + struct folio *folio;
> >
> > if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> > return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > +
> > + folio = page_folio(*hpage);
> > + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > + folio_put(folio);
> > + *hpage = NULL;
> > return SCAN_CGROUP_CHARGE_FAIL;
> > + }
> > count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > +
> > return SCAN_SUCCEED;
> > }
> >
> > --
> > 2.39.1
> >
>

Thanks, Peter.

Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
!NUMA case (where we would preallocate a hugepage), we can depend on put_page()
to take care of that for us.

Regardless, you can have my

Reviewed-by: Zach O'Keefe <[email protected]>

2023-03-03 15:00:09

by Peter Xu

Subject: Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors

On Thu, Mar 02, 2023 at 03:21:50PM -0800, Zach O'Keefe wrote:
> On Feb 22 14:53, Yang Shi wrote:
> > On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <[email protected]> wrote:
> > >
> > > If the memory charge fails, instead of returning the hpage along with
> > > an error, let the function clean up the folio properly, which is what
> > > a function should normally do in this case: either return successfully,
> > > or return with an error and no side effects from a partial run.
> > >
> > > This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
> > > on either the anon or shmem path (even though it is safe to do so).
> >
> > Thanks for the cleanup. Reviewed-by: Yang Shi <[email protected]>
> >
> > >
> > > Cc: Yang Shi <[email protected]>
> > > Reviewed-by: David Stevens <[email protected]>
> > > Acked-by: Johannes Weiner <[email protected]>
> > > Signed-off-by: Peter Xu <[email protected]>
> > > ---
> > > v1->v2:
> > > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > > ---
> > > mm/khugepaged.c | 9 ++++++++-
> > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > index 8dbc39896811..941d1c7ea910 100644
> > > --- a/mm/khugepaged.c
> > > +++ b/mm/khugepaged.c
> > > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> > > gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > > GFP_TRANSHUGE);
> > > int node = hpage_collapse_find_target_node(cc);
> > > + struct folio *folio;
> > >
> > > if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> > > return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > > - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > > +
> > > + folio = page_folio(*hpage);
> > > + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > > + folio_put(folio);
> > > + *hpage = NULL;
> > > return SCAN_CGROUP_CHARGE_FAIL;
> > > + }
> > > count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > > +
> > > return SCAN_SUCCEED;
> > > }
> > >
> > > --
> > > 2.39.1
> > >
> >
>
> Thanks, Peter.
>
> Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
> at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
> !NUMA case (where we would preallocate a hugepage), we can depend on put_page()
> to take care of that for us.

Makes sense to me. I can prepare a separate patch to clean it up.

>
> Regardless, you can have my
>
> Reviewed-by: Zach O'Keefe <[email protected]>

Thanks!

--
Peter Xu