Date: Tue, 23 Feb 2021 11:50:05 +0100
From: Oscar Salvador
To: Muchun Song
Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo@redhat.com,
 bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
 luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com,
 anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes,
 Matthew Wilcox, Michal Hocko, "Song Bao Hua (Barry Song)",
 David Hildenbrand, HORIGUCHI NAOYA(堀口 直也), Joao Martins,
 Xiongchun duan, linux-doc@vger.kernel.org, LKML,
 Linux Memory Management List, linux-fsdevel
Subject: Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
Message-ID: <20210223104957.GA3844@linux>
References: <20210219104954.67390-1-songmuchun@bytedance.com>
 <20210219104954.67390-5-songmuchun@bytedance.com>
 <13a5363c-6af4-1e1f-9a18-972ca18278b5@oracle.com>
 <20210223092740.GA1998@linux>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 23, 2021 at 06:27:07PM +0800, Muchun Song wrote:
> > > > +
> > > > +	if (alloc_huge_page_vmemmap(h, page)) {
> > > > +		int zeroed;
> > > > +
> > > > +		spin_lock(&hugetlb_lock);
> > > > +		INIT_LIST_HEAD(&page->lru);
> > > > +		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > > > +		h->nr_huge_pages++;
> > > > +		h->nr_huge_pages_node[nid]++;
> >
> > I think prep_new_huge_page() does this for us?
>
> Actually, there are some differences. For example, prep_new_huge_page()
> will reset the hugetlb cgroup and do ClearHPageFreed, but we do not need
> those here. Also, prep_new_huge_page() acquires and releases
> hugetlb_lock itself, whereas here we need to hold the lock while we update
> the surplus counter and enqueue the page onto the free list.
> So I do not think reusing prep_new_huge_page() is a good idea.

I see, I missed that.

> > Can this actually happen? AFAIK, a page that lands in update_and_free_page
> > should have a zero refcount; then we increase the reference, and I cannot
> > see how the reference might have changed in the meantime.
>
> I am not sure whether other modules get the page and then put the
> page. I see that gather_surplus_pages() does the same thing, so I copied
> it from there. Looking at the memory_failure routine:
>
> CPU0:                          CPU1:
>                                set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> memory_failure_hugetlb
>   get_hwpoison_page
>     __get_hwpoison_page
>       get_page_unless_zero
>                                put_page_testzero()
>
> Maybe this can happen, but it is a very corner case. If we want to
> deal with it, we can call put_page_testzero() first and then
> set_compound_page_dtor(HUGETLB_PAGE_DTOR).

I have to check further, but it looks like this could actually happen.
Handling this with a VM_BUG_ON is wrong, because memory_failure/soft_offline
are entitled to increase the refcount of the page.

AFAICS:

CPU0:                          CPU1:
                               set_compound_page_dtor(HUGETLB_PAGE_DTOR);
memory_failure_hugetlb
  get_hwpoison_page
    __get_hwpoison_page
      get_page_unless_zero
                               put_page_testzero()
  identify_page_state
    me_huge_page

I think we can reach me_huge_page with either refcount = 1 or refcount = 2,
depending on whether put_page_testzero() has been issued yet.

For now, I would not re-enqueue the page if put_page_testzero == false.
I have to see how this can be handled gracefully.

-- 
Oscar Salvador
SUSE L3