Date: Thu, 21 Sep 2023 17:47:46 +0800
Subject: Re: [PATCH v4 4/8] hugetlb: perform vmemmap restoration on a list of pages
From: Muchun Song
To: Mike Kravetz
Cc: Linux-MM, LKML, Muchun Song, Joao Martins, Oscar Salvador,
 David Hildenbrand, Miaohe Lin, David Rientjes, Anshuman Khandual,
 Naoya Horiguchi, Barry Song <21cnbao@gmail.com>, Michal Hocko,
 Matthew Wilcox, Xiongchun Duan, Andrew Morton
References: <20230918230202.254631-1-mike.kravetz@oracle.com>
 <20230918230202.254631-5-mike.kravetz@oracle.com>
 <20230919205756.GB425719@monkey>
 <2FDB2018-74AE-4514-9B43-01664A8E5DBF@linux.dev>
 <20230921011223.GC4065@monkey>
 <306da2a1-0dd4-e858-930f-211947a466d2@linux.dev>
In-Reply-To: <306da2a1-0dd4-e858-930f-211947a466d2@linux.dev>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2023/9/21 17:31, Muchun Song wrote:
>
>
> On 2023/9/21 09:12, Mike Kravetz wrote:
>> On 09/20/23 11:03, Muchun Song wrote:
>>>> On Sep 20, 2023, at 10:56, Muchun Song wrote:
>>>>> On Sep 20, 2023, at 04:57, Mike Kravetz wrote:
>>>>> On 09/19/23 17:52, Muchun Song wrote:
>>>>>> On 2023/9/19 07:01, Mike Kravetz wrote:
>>>>>>
>>>>>> I still think we should free a non-optimized HugeTLB page if we
>>>>>> encounter an OOM situation instead of continuing to restore
>>>>>> vmemmap pages.  Restoring vmemmap pages will only aggravate the
>>>>>> OOM situation.  The suitable approach is to free a non-optimized
>>>>>> HugeTLB page to satisfy our allocation of vmemmap pages.  What's
>>>>>> your opinion, Mike?
>>>>> I agree.
>>>>>
>>>>> As you mentioned previously, this may complicate this code path a
>>>>> bit.  I will rewrite to make this happen.
>>>> Maybe we could introduce two lists passed to update_and_free_pages_bulk
>>>> (this will be easy for its callers): one for non-optimized huge pages,
>>>> another for optimized ones.  In update_and_free_pages_bulk, we could
>>>> first free the non-optimized huge pages, and then restore vmemmap pages
>>>> for the optimized ones, in which case the code could be simple.
>>>> hugetlb_vmemmap_restore_folios() does not need to add complexity; it
>>>> still continues to restore vmemmap pages and stops once we encounter
>>>> an OOM situation.
>> I am not sure if passing in optimized and non-optimized lists to
>> update_and_free_pages_bulk will help much.  IIUC, it will almost always
>> be the case where only one list has entries.  Is that mostly accurate?
>
> I think you are right. It will be less helpful since most of
> pages will be not optimized when HVO is enabled.

Sorry, correction: **not** should be deleted.

>
>>> BTW, maybe we should try again iff there are some non-optimized huge
>>> pages whose vmemmap pages were previously restored successfully and
>>> could be freed first, then continue to restore the vmemmap pages of
>>> the remaining huge pages.  I think the retry code could be done in
>>> update_and_free_pages_bulk() as well.
>> I came up with a new routine to handle these ENOMEM returns from
>> hugetlb_vmemmap_restore_folios.
>> I 'think' it handles these situations.
>> Here is an updated version of this patch.  Sorry, diff makes it a bit
>> hard to read.
>>
>>  From b13bdccb01730f995191944769f87d0725c289ad Mon Sep 17 00:00:00 2001
>> From: Mike Kravetz
>> Date: Sun, 10 Sep 2023 16:14:50 -0700
>> Subject: [PATCH] hugetlb: perform vmemmap restoration on a list of pages
>>
>> The routine update_and_free_pages_bulk already performs vmemmap
>> restoration on the list of hugetlb pages in a separate step.  In
>> preparation for more functionality to be added in this step, create a
>> new routine hugetlb_vmemmap_restore_folios() that will restore
>> vmemmap for a list of folios.
>>
>> This new routine must provide sufficient feedback about errors and
>> actual restoration performed so that update_and_free_pages_bulk can
>> perform optimally.
>>
>> Special care must be taken when encountering a ENOMEM error from
>> hugetlb_vmemmap_restore_folios.  We want to continue making as much
>> forward progress as possible.  A new routine bulk_vmemmap_restore_enomem
>> handles this specific situation.
>>
>> Signed-off-by: Mike Kravetz
>> ---
>>   mm/hugetlb.c         | 83 ++++++++++++++++++++++++++++++++++----------
>>   mm/hugetlb_vmemmap.c | 39 +++++++++++++++++++++
>>   mm/hugetlb_vmemmap.h | 11 ++++++
>>   3 files changed, 115 insertions(+), 18 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 70fedf8682c4..52abe56cf38a 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1834,38 +1834,85 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
>>           schedule_work(&free_hpage_work);
>>   }
>>
>> -static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
>> +static void bulk_vmemmap_restore_enomem(struct hstate *h,
>> +                        struct list_head *list,
>> +                        unsigned long restored)
>>   {
>>       struct folio *folio, *t_folio;
>> -    bool clear_dtor = false;
>>
>> -    /*
>> -     * First allocate required vmemmmap (if necessary) for all folios on
>> -     * list.  If vmemmap can not be allocated, we can not free folio to
>> -     * lower level allocator, so add back as hugetlb surplus page.
>> -     * add_hugetlb_folio() removes the page from THIS list.
>> -     * Use clear_dtor to note if vmemmap was successfully allocated for
>> -     * ANY page on the list.
>> -     */
>> -    list_for_each_entry_safe(folio, t_folio, list, lru) {
>> -        if (folio_test_hugetlb_vmemmap_optimized(folio)) {
>> +    if (restored) {
>> +        /*
>> +         * On ENOMEM error, free any restored hugetlb pages so that
>> +         * restore of the entire list can be retried.
>> +         * The idea is that by freeing hugetlb pages with vmemmap
>> +         * (those previously restored) we will free up memory so that
>> +         * we can allocate vmemmap for more hugetlb pages.
>> +         * We must examine and possibly free EVERY hugetlb page on list
>> +         * in order to call hugetlb_vmemmap_restore_folios again.
>> +         * This is not optimal, but is an error case that should not
>> +         * happen frequently.
>> +         */
>> +        list_for_each_entry_safe(folio, t_folio, list, lru)
>> +            if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
>> +                list_del(&folio->lru);
>> +                spin_lock_irq(&hugetlb_lock);
>> +                __clear_hugetlb_destructor(h, folio);
>> +                spin_unlock_irq(&hugetlb_lock);
>> +                update_and_free_hugetlb_folio(h, folio, false);
>> +                cond_resched();
>> +            }
>> +    } else {
>> +        /*
>> +         * In the case where vmemmap was not restored for ANY folios,
>> +         * we loop through them trying to restore individually in the
>> +         * hope that someone elsewhere may free enough memory.
>> +         * If unable to restore a page, the hugetlb page is made a
>> +         * surplus page and removed from the list.
>> +         * If are able to restore vmemmap for one hugetlb page, we free
>> +         * it and quit processing the list to retry the bulk operation.
>> +         */
>> +        list_for_each_entry_safe(folio, t_folio, list, lru)
>>               if (hugetlb_vmemmap_restore(h, &folio->page)) {
>>                   spin_lock_irq(&hugetlb_lock);
>>                   add_hugetlb_folio(h, folio, true);
>>                   spin_unlock_irq(&hugetlb_lock);
>> -            } else
>> -                clear_dtor = true;
>> -        }
>> +            } else {
>> +                list_del(&folio->lru);
>> +                spin_lock_irq(&hugetlb_lock);
>> +                __clear_hugetlb_destructor(h, folio);
>> +                spin_unlock_irq(&hugetlb_lock);
>> +                update_and_free_hugetlb_folio(h, folio, false);
>> +                break;
>> +            }
>>       }
>> +}
>> +
>> +static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
>> +{
>> +    int ret;
>> +    unsigned long restored;
>> +    struct folio *folio, *t_folio;
>>
>>       /*
>> -     * If vmemmmap allocation was performed on any folio above, take lock
>> -     * to clear destructor of all folios on list.  This avoids the need to
>> +     * First allocate required vmemmmap (if necessary) for all folios.
>> +     * Carefully handle ENOMEM errors and free up any available hugetlb
>> +     * pages in order to make forward progress.
>> +     */
>> +retry:
>> +    ret = hugetlb_vmemmap_restore_folios(h, list, &restored);
>> +    if (ret == -ENOMEM) {
>> +        bulk_vmemmap_restore_enomem(h, list, restored);
>> +        goto retry;
>> +    }
>> +
>> +    /*
>> +     * If vmemmmap allocation was performed on ANY folio , take lock to
>> +     * clear destructor of all folios on list.  This avoids the need to
>>        * lock/unlock for each individual folio.
>>        * The assumption is vmemmap allocation was performed on all or none
>>        * of the folios on the list.  This is true expect in VERY rare cases.
>>        */
>> -    if (clear_dtor) {
>> +    if (restored) {
>>           spin_lock_irq(&hugetlb_lock);
>>           list_for_each_entry(folio, list, lru)
>>               __clear_hugetlb_destructor(h, folio);
>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>> index 4558b814ffab..cc91edbfb68b 100644
>> --- a/mm/hugetlb_vmemmap.c
>> +++ b/mm/hugetlb_vmemmap.c
>> @@ -480,6 +480,45 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>>       return ret;
>>   }
>>
>> +/**
>> + * hugetlb_vmemmap_restore_folios - restore vmemmap for every folio on the list.
>> + * @h:        struct hstate.
>> + * @folio_list:    list of folios.
>> + * @restored:    Set to number of folios for which vmemmap was restored
>> + *        successfully if caller passes a non-NULL pointer.
>> + *
>> + * Return: %0 if vmemmap exists for all folios on the list.  If an error is
>> + *        encountered restoring vmemmap for ANY folio, an error code
>> + *        will be returned to the caller.  It is then the responsibility
>> + *        of the caller to check the hugetlb vmemmap optimized flag of
>> + *        each folio to determine if vmemmap was actually restored.
>> + *        Note that processing is stopped when first error is encountered.
>> + */
>> +int hugetlb_vmemmap_restore_folios(const struct hstate *h,
>> +                    struct list_head *folio_list,
>> +                    unsigned long *restored)
>
> How about changing the @restored parameter to a list_head type which
> returns the (previously) non-optimized or vmemmap-restore-successful
> huge pages?  In that case, the caller could traverse this returned list
> to free them first, like you have implemented in
> bulk_vmemmap_restore_enomem(); it would be more efficient.  The meaning
> of the return value should also be changed accordingly, since
> update_and_free_pages_bulk() wants to know whether any vmemmap-optimized
> huge page was restored successfully in order to determine if it should
> clear the hugetlb flag.  So hugetlb_vmemmap_restore_folios() could
> return how many huge pages were restored successfully; if a negative
> number is returned, it means there was some error while restoring the
> vmemmap.
>
> Thanks.
>
>> +{
>> +    unsigned long num_restored;
>> +    struct folio *folio;
>> +    int ret = 0;
>> +
>> +    num_restored = 0;
>> +    list_for_each_entry(folio, folio_list, lru) {
>> +        if (folio_test_hugetlb_vmemmap_optimized(folio)) {
>> +            ret = hugetlb_vmemmap_restore(h, &folio->page);
>> +            if (ret)
>> +                goto out;
>> +            else
>> +                num_restored++;
>> +        }
>> +    }
>> +
>> +out:
>> +    if (*restored)
>> +        *restored = num_restored;
>> +    return ret;
>> +}
>> +
>>   /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
>>   static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
>>   {
>> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
>> index c512e388dbb4..bb58453c3cc0 100644
>> --- a/mm/hugetlb_vmemmap.h
>> +++ b/mm/hugetlb_vmemmap.h
>> @@ -19,6 +19,8 @@
>>
>>   #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>>   int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
>> +int hugetlb_vmemmap_restore_folios(const struct hstate *h,
>> +            struct list_head *folio_list, unsigned long *restored);
>>   void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
>>   void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>>
>> @@ -45,6 +47,15 @@ static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *h
>>       return 0;
>>   }
>>
>> +static inline int hugetlb_vmemmap_restore_folios(const struct hstate *h,
>> +                    struct list_head *folio_list,
>> +                    unsigned long *restored)
>> +{
>> +    if (restored)
>> +        *restored = 0;
>> +    return 0;
>> +}
>> +
>>   static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
>>   {
>>   }
>
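For reference, a minimal sketch of the list-based interface suggested above
for hugetlb_vmemmap_restore_folios().  This is only an illustration of the
idea, not code from the posted patch: the non_optimized_list parameter name,
the return convention, and the bookkeeping are assumptions.

/*
 * Sketch only: one possible shape for the suggested interface.
 * Returns the number of folios whose vmemmap was restored, or a negative
 * errno if restoration failed part way through.  Folios whose vmemmap is
 * present (never optimized, or just restored) are moved to
 * @non_optimized_list so the caller can free them first on ENOMEM.
 */
static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
                    struct list_head *folio_list,
                    struct list_head *non_optimized_list)
{
    struct folio *folio, *t_folio;
    long restored = 0;
    int ret;

    list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
        if (folio_test_hugetlb_vmemmap_optimized(folio)) {
            ret = hugetlb_vmemmap_restore(h, &folio->page);
            if (ret)
                return ret;    /* stop at the first failure */
            restored++;
        }
        /* vmemmap is present for this folio; hand it back to the caller */
        list_move(&folio->lru, non_optimized_list);
    }

    return restored;
}

With an interface like this, update_and_free_pages_bulk() could free
everything on non_optimized_list first when a negative value is returned,
then retry with the remaining (still optimized) folios.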