Subject: Re: [PATCH] hugetlb: freeze allocated pages before creating hugetlb pages
To: Mike Kravetz
CC: Muchun Song, Joao Martins, Matthew Wilcox, Michal Hocko, Peter Xu,
 Andrew Morton, Linux-MM, linux-kernel
References: <20220808212836.111749-1-mike.kravetz@oracle.com>
From: Miaohe Lin
Message-ID: <119542cd-939f-3185-1d51-a177d4da1dff@huawei.com>
Date: Tue, 9 Aug 2022 10:48:53 +0800
In-Reply-To: <20220808212836.111749-1-mike.kravetz@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2022/8/9 5:28, Mike Kravetz wrote:
> When creating hugetlb pages, the hugetlb code must first allocate
> contiguous pages from a low level allocator such as buddy, cma or
> memblock.  The pages returned from these low level allocators are
> ref counted.  This creates potential issues with other code taking
> speculative references on these pages before they can be transformed
> to a hugetlb page.  This issue has been addressed with methods and
> code such as that provided in [1].
>
> Recent discussions about vmemmap freeing [2] have indicated that it
> would be beneficial to freeze all sub pages, including the head page
> of pages returned from low level allocators before converting to a
> hugetlb page.  This helps avoid races if we want to replace the page
> containing vmemmap for the head page.
>
> There have been proposals to change at least the buddy allocator to
> return frozen pages as described at [3].  If such a change is made, it
> can be employed by the hugetlb code.  However, as mentioned above
> hugetlb uses several low level allocators so each would need to be
> modified to return frozen pages.  For now, we can manually freeze the
> returned pages.
> This is done in two places:
> 1) alloc_buddy_huge_page, only the returned head page is ref counted.
>    We freeze the head page, retrying once in the VERY rare case where
>    there may be an inflated ref count.
> 2) prep_compound_gigantic_page, for gigantic pages the current code
>    freezes all pages except the head page.  New code will simply
>    freeze the head page as well.
>
> In a few other places, code checks for inflated ref counts on newly
> allocated hugetlb pages.  With the modifications to freeze after
> allocating, this code can be removed.
>
> After hugetlb pages are freshly allocated, they are often added to
> the hugetlb free lists.  Since these pages were previously ref
> counted, this was done via put_page() which would end up calling the
> hugetlb destructor: free_huge_page.  With changes to freeze pages, we
> simply call free_huge_page directly to add the pages to the free
> list.
>
> In a few other places, freshly allocated hugetlb pages were
> immediately put into use, and the expectation was they were already
> ref counted.  In these cases, we must manually ref count the page.
>
> [1] https://lore.kernel.org/linux-mm/20210622021423.154662-3-mike.kravetz@oracle.com/
> [2] https://lore.kernel.org/linux-mm/20220802180309.19340-1-joao.m.martins@oracle.com/
> [3] https://lore.kernel.org/linux-mm/20220531150611.1303156-1-willy@infradead.org/
>
> Signed-off-by: Mike Kravetz
> ---
>  mm/hugetlb.c | 97 +++++++++++++++++++---------------------------------
>  1 file changed, 35 insertions(+), 62 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 28516881a1b2..6b90d85d545b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1769,13 +1769,12 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
>  {
>  	int i, j;
>  	int nr_pages = 1 << order;
> -	struct page *p = page + 1;
> +	struct page *p = page;
>  
>  	/* we rely on prep_new_huge_page to set the destructor */
>  	set_compound_order(page, order);
> -	__ClearPageReserved(page);
>  	__SetPageHead(page);
> -	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
> +	for (i = 0; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
>  		/*
>  		 * For gigantic hugepages allocated through bootmem at
>  		 * boot, it's safer to be consistent with the not-gigantic
> @@ -1814,7 +1813,8 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
>  		} else {
>  			VM_BUG_ON_PAGE(page_count(p), p);
>  		}
> -		set_compound_head(p, page);
> +		if (i != 0)
> +			set_compound_head(p, page);

It seems we forget to unfreeze the head page in the out_error path. If
an unexpected inflated ref count occurs, the ref count of the head page
will become negative in free_gigantic_page?

Thanks for your patch, Mike. I hope this will help solve the races with
memory failure. ;) And I will take a closer look when I have enough
time.