From: Muchun Song
Date: Tue, 22 Jun 2021 17:09:14 +0800
Subject: Re: [External] [PATCH 1/2] hugetlb: remove prep_compound_huge_page cleanup
To: Mike Kravetz
Cc: Linux Memory Management List, LKML, Jann Horn, Youquan Song,
 Andrea Arcangeli, Jan Kara, John Hubbard, "Kirill A. Shutemov",
 Matthew Wilcox, Michal Hocko, Andrew Morton
In-Reply-To: <20210622021423.154662-2-mike.kravetz@oracle.com>
References: <20210622021423.154662-1-mike.kravetz@oracle.com>
 <20210622021423.154662-2-mike.kravetz@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 22, 2021 at 10:15 AM Mike Kravetz wrote:
>
> The routine prep_compound_huge_page is a simple wrapper to call either
> prep_compound_gigantic_page or prep_compound_page. However, it is only
> called from gather_bootmem_prealloc which only processes gigantic pages.
> Eliminate the routine and call prep_compound_gigantic_page directly.
>
> Signed-off-by: Mike Kravetz

Nice clean-up. Thanks.

Reviewed-by: Muchun Song

> ---
>  mm/hugetlb.c | 29 ++++++++++-------------------
>  1 file changed, 10 insertions(+), 19 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 760b5fb836b8..50596b7d6da9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1320,8 +1320,6 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
>  }
>
> -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
> -static void prep_compound_gigantic_page(struct page *page, unsigned int order);
>  #else /* !CONFIG_CONTIG_ALLOC */
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  					int nid, nodemask_t *nodemask)
> @@ -2759,16 +2757,10 @@ int __alloc_bootmem_huge_page(struct hstate *h)
>  	return 1;
>  }
>
> -static void __init prep_compound_huge_page(struct page *page,
> -		unsigned int order)
> -{
> -	if (unlikely(order > (MAX_ORDER - 1)))
> -		prep_compound_gigantic_page(page, order);
> -	else
> -		prep_compound_page(page, order);
> -}
> -
> -/* Put bootmem huge pages into the standard lists after mem_map is up */
> +/*
> + * Put bootmem huge pages into the standard lists after mem_map is up.
> + * Note: This only applies to gigantic (order > MAX_ORDER) pages.
> + */
>  static void __init gather_bootmem_prealloc(void)
>  {
>  	struct huge_bootmem_page *m;
> @@ -2777,20 +2769,19 @@ static void __init gather_bootmem_prealloc(void)
>  		struct page *page = virt_to_page(m);
>  		struct hstate *h = m->hstate;
>
> +		VM_BUG_ON(!hstate_is_gigantic(h));
>  		WARN_ON(page_count(page) != 1);
> -		prep_compound_huge_page(page, huge_page_order(h));
> +		prep_compound_gigantic_page(page, huge_page_order(h));
>  		WARN_ON(PageReserved(page));
>  		prep_new_huge_page(h, page, page_to_nid(page));
>  		put_page(page); /* free it into the hugepage allocator */
>
>  		/*
> -		 * If we had gigantic hugepages allocated at boot time, we need
> -		 * to restore the 'stolen' pages to totalram_pages in order to
> -		 * fix confusing memory reports from free(1) and another
> -		 * side-effects, like CommitLimit going negative.
> +		 * We need to restore the 'stolen' pages to totalram_pages
> +		 * in order to fix confusing memory reports from free(1) and
> +		 * other side-effects, like CommitLimit going negative.
>  		 */
> -		if (hstate_is_gigantic(h))
> -			adjust_managed_page_count(page, pages_per_huge_page(h));
> +		adjust_managed_page_count(page, pages_per_huge_page(h));
>  		cond_resched();
>  	}
>  }
> --
> 2.31.1
>
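
The order check that made the removed wrapper's non-gigantic branch dead code in gather_bootmem_prealloc() can be illustrated outside the kernel. The sketch below is a standalone C program, not kernel code; MAX_ORDER = 11 and a 4 KiB base page size are assumed values for a common x86-64 configuration, giving order 9 for a 2 MiB huge page and order 18 for a 1 GiB gigantic page.

/* Standalone sketch of the dispatch the removed wrapper performed. */
#include <stdio.h>

#define MAX_ORDER 11	/* assumed default; the real value is arch/config dependent */

static const char *prep_choice(unsigned int order)
{
	/* Same test as the removed wrapper: order > MAX_ORDER - 1 means gigantic. */
	return (order > MAX_ORDER - 1) ? "prep_compound_gigantic_page"
				       : "prep_compound_page";
}

int main(void)
{
	/* 2 MiB huge page: order 9; 1 GiB gigantic page: order 18 (4 KiB base pages). */
	unsigned int orders[] = { 9, 18 };

	for (int i = 0; i < 2; i++)
		printf("order %2u -> %s\n", orders[i], prep_choice(orders[i]));

	/*
	 * Bootmem-allocated huge pages are always gigantic, so
	 * gather_bootmem_prealloc() only ever reached the gigantic branch,
	 * which is why the wrapper could be dropped.
	 */
	return 0;
}

Compiled as C99, this prints prep_compound_page for order 9 and prep_compound_gigantic_page for order 18, matching the reasoning in the commit message that only gigantic pages pass through gather_bootmem_prealloc().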