From: Muchun Song
Date: Wed, 2 Nov 2022 14:56:48 +0800
Subject: Re: [PATCH v2 7/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_uncharge_page() to folios
To: Sidhartha Kumar
Cc: linux-kernel@vger.kernel.org, Linux Memory Management List,
    Andrew Morton, Muchun Song, Mike Kravetz, Matthew Wilcox,
    Mina Almasry, linmiaohe@huawei.com, minhquangbui99@gmail.com,
    aneesh.kumar@linux.ibm.com
In-Reply-To: <20221101223059.460937-8-sidhartha.kumar@oracle.com>
References: <20221101223059.460937-1-sidhartha.kumar@oracle.com>
 <20221101223059.460937-8-sidhartha.kumar@oracle.com>

> On Nov 2, 2022, at 06:30, Sidhartha Kumar wrote:
> 
> Continue to use a folio inside free_huge_page() by converting
> hugetlb_cgroup_uncharge_page*() to folios.
> 
> Signed-off-by: Sidhartha Kumar
> Reviewed-by: Mike Kravetz

Reviewed-by: Muchun Song

A nit below.
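For anyone skimming the series, the pattern the patch applies is: resolve
the folio from the page once with page_folio(), then use folio-based
accessors such as folio_nid() in place of the page-based ones. A minimal
sketch, not part of the patch (the example_uncharge() helper name is
hypothetical):

	/* Sketch of the page -> folio pattern this series applies. */
	#include <linux/mm.h>

	static void example_uncharge(struct page *page)
	{
		/* page_folio() maps any page, head or tail, to its folio */
		struct folio *folio = page_folio(page);

		/* folio_nid(folio) replaces page_to_nid(page) */
		int nid = folio_nid(folio);

		(void)nid;	/* per-node accounting would key off nid */
	}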
> ---
>  include/linux/hugetlb_cgroup.h | 16 ++++++++--------
>  mm/hugetlb.c                   | 15 +++++++++------
>  mm/hugetlb_cgroup.c            | 21 ++++++++++-----------
>  3 files changed, 27 insertions(+), 25 deletions(-)
> 
> diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
> index 789b6fef176d..c70f92fe493e 100644
> --- a/include/linux/hugetlb_cgroup.h
> +++ b/include/linux/hugetlb_cgroup.h
> @@ -158,10 +158,10 @@ extern void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
>  extern void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
>  					      struct hugetlb_cgroup *h_cg,
>  					      struct page *page);
> -extern void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
> -					 struct page *page);
> -extern void hugetlb_cgroup_uncharge_page_rsvd(int idx, unsigned long nr_pages,
> -					      struct page *page);
> +extern void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
> +					  struct folio *folio);
> +extern void hugetlb_cgroup_uncharge_folio_rsvd(int idx, unsigned long nr_pages,
> +					       struct folio *folio);
> 
>  extern void hugetlb_cgroup_uncharge_cgroup(int idx, unsigned long nr_pages,
>  					   struct hugetlb_cgroup *h_cg);
> @@ -254,14 +254,14 @@ hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
>  {
>  }
> 
> -static inline void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
> -						struct page *page)
> +static inline void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
> +						 struct folio *folio)
>  {
>  }
> 
> -static inline void hugetlb_cgroup_uncharge_page_rsvd(int idx,
> +static inline void hugetlb_cgroup_uncharge_folio_rsvd(int idx,
>  						     unsigned long nr_pages,
> -						     struct page *page)
> +						     struct folio *folio)
>  {
>  }
>  static inline void hugetlb_cgroup_uncharge_cgroup(int idx,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 387b8d74107d..2ecc0a6cf883 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1726,10 +1726,10 @@ void free_huge_page(struct page *page)
> 
>  	spin_lock_irqsave(&hugetlb_lock, flags);
>  	folio_clear_hugetlb_migratable(folio);
> -	hugetlb_cgroup_uncharge_page(hstate_index(h),
> -				     pages_per_huge_page(h), page);
> -	hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
> -					  pages_per_huge_page(h), page);
> +	hugetlb_cgroup_uncharge_folio(hstate_index(h),
> +				      pages_per_huge_page(h), folio);
> +	hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
> +					   pages_per_huge_page(h), folio);
>  	if (restore_reserve)
>  		h->resv_huge_pages++;
> 
> @@ -2855,6 +2855,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  	struct hugepage_subpool *spool = subpool_vma(vma);
>  	struct hstate *h = hstate_vma(vma);
>  	struct page *page;
> +	struct folio *folio;
>  	long map_chg, map_commit;
>  	long gbl_chg;
>  	int ret, idx;
> @@ -2918,6 +2919,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  	 * a reservation exists for the allocation.
>  	 */
>  	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
> +

Redundant blank line.

>  	if (!page) {
>  		spin_unlock_irq(&hugetlb_lock);
>  		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
> @@ -2932,6 +2934,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  		set_page_refcounted(page);
>  		/* Fall through */
>  	}
> +	folio = page_folio(page);
>  	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
>  	/* If allocation is not consuming a reservation, also store the
>  	 * hugetlb_cgroup pointer on the page.
> @@ -2961,8 +2964,8 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
>  		hugetlb_acct_memory(h, -rsv_adjust);
>  		if (deferred_reserve)
> -			hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
> -					pages_per_huge_page(h), page);
> +			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
> +					pages_per_huge_page(h), folio);
>  	}
>  	return page;
> 
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> index 351ffb40261c..7793401acc12 100644
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -349,11 +349,10 @@ void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
>  /*
>   * Should be called with hugetlb_lock held
>   */
> -static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
> -					   struct page *page, bool rsvd)
> +static void __hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
> +					    struct folio *folio, bool rsvd)
>  {
>  	struct hugetlb_cgroup *h_cg;
> -	struct folio *folio = page_folio(page);
> 
>  	if (hugetlb_cgroup_disabled())
>  		return;
> @@ -371,27 +370,27 @@ static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
>  		css_put(&h_cg->css);
>  	else {
>  		unsigned long usage =
> -			h_cg->nodeinfo[page_to_nid(page)]->usage[idx];
> +			h_cg->nodeinfo[folio_nid(folio)]->usage[idx];
>  		/*
>  		 * This write is not atomic due to fetching usage and writing
>  		 * to it, but that's fine because we call this with
>  		 * hugetlb_lock held anyway.
>  		 */
> -		WRITE_ONCE(h_cg->nodeinfo[page_to_nid(page)]->usage[idx],
> +		WRITE_ONCE(h_cg->nodeinfo[folio_nid(folio)]->usage[idx],
>  			   usage - nr_pages);
>  	}
>  }
> 
> -void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
> -				  struct page *page)
> +void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
> +				   struct folio *folio)
>  {
> -	__hugetlb_cgroup_uncharge_page(idx, nr_pages, page, false);
> +	__hugetlb_cgroup_uncharge_folio(idx, nr_pages, folio, false);
>  }
> 
> -void hugetlb_cgroup_uncharge_page_rsvd(int idx, unsigned long nr_pages,
> -				       struct page *page)
> +void hugetlb_cgroup_uncharge_folio_rsvd(int idx, unsigned long nr_pages,
> +					struct folio *folio)
>  {
> -	__hugetlb_cgroup_uncharge_page(idx, nr_pages, page, true);
> +	__hugetlb_cgroup_uncharge_folio(idx, nr_pages, folio, true);
>  }
> 
>  static void __hugetlb_cgroup_uncharge_cgroup(int idx, unsigned long nr_pages,
> -- 
> 2.31.1