From: Muchun Song
Date: Wed, 24 Feb 2021 11:47:49 +0800
Subject: Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
In-Reply-To: <20210223223157.GA2740@localhost.localdomain>
References: <20210219104954.67390-1-songmuchun@bytedance.com> <20210219104954.67390-5-songmuchun@bytedance.com> <13a5363c-6af4-1e1f-9a18-972ca18278b5@oracle.com> <20210223092740.GA1998@linux> <20210223104957.GA3844@linux> <20210223154128.GA21082@localhost.localdomain> <20210223223157.GA2740@localhost.localdomain>
To: Oscar Salvador
Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes, Matthew Wilcox, Michal Hocko, "Song Bao Hua (Barry Song)", David Hildenbrand, HORIGUCHI NAOYA(堀口 直也), Joao Martins, Xiongchun duan, linux-doc@vger.kernel.org, LKML, Linux Memory Management List, linux-fsdevel
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 24, 2021 at 6:32 AM Oscar Salvador wrote:
>
> On Tue, Feb 23, 2021 at 04:41:28PM +0100, Oscar Salvador wrote:
> > On Tue, Feb 23, 2021 at 11:50:05AM +0100, Oscar Salvador wrote:
> > > > CPU0:                               CPU1:
> > > > set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> > > >                                     memory_failure_hugetlb
> > > >                                       get_hwpoison_page
> > > >                                         __get_hwpoison_page
> > > >                                           get_page_unless_zero
> > > > put_page_testzero()
> > > >
> > > > Maybe this can happen. But it is a very corner case. If we want to
> > > > deal with this, we can put_page_testzero() first and then
> > > > set_compound_page_dtor(HUGETLB_PAGE_DTOR).
> > >
> > > I have to check further, but it looks like this could actually happen.
> > > Handling this with VM_BUG_ON is wrong, because memory_failure/soft_offline
> > > are entitled to increase the refcount of the page.
> > >
> > > AFAICS,
> > >
> > > CPU0:                               CPU1:
> > > set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> > >                                     memory_failure_hugetlb
> > >                                       get_hwpoison_page
> > >                                         __get_hwpoison_page
> > >                                           get_page_unless_zero
> > > put_page_testzero()
> > >                                     identify_page_state
> > >                                       me_huge_page
> > >
> > > I think we can reach me_huge_page with either refcount = 1 or refcount = 2,
> > > depending on whether put_page_testzero has been issued.
> > >
> > > For now, I would not re-enqueue the page if put_page_testzero == false.
> > > I have to see how this can be handled gracefully.
> >
> > I took a brief look.
> > It is not really your patch's fault. Hugetlb <-> memory-failure
> > synchronization is a bit odd; it definitely needs improvement.
> >
> > The thing is, we can have different scenarios here.
> > E.g: by the time we return from put_page_testzero, we might have refcount ==
> > 0 and PageHWPoison, or refcount == 1 and PageHWPoison.
> >
> > The former will let a user get a page from the pool and get a sigbus
> > when it faults in the page, and the latter will be even more odd as we
> > will have a self-refcounted page in the free pool (and hwpoisoned).

I have been looking at dequeue_huge_page_node_exact(). If a PageHWPoison
huge page is on the free pool list, it will not be allocated to the user:
dequeue_huge_page_node_exact() skips PageHWPoison huge pages (see the
sketch after the quoted text below).

> >
> > As I said, it is not this patchset's fault. It just made me realize this
> > problem.
> >
> > I have to think some more about this.
>
> I have been thinking more about this.
> Memory failure events can occur at any time, and we might not be in a
> position where we can handle the error gracefully, meaning that the page
> might end up in a non-desirable state.
>
> E.g: we could flag the page right before enqueueing it.
>
> I still think that VM_BUG_ON should go, as the refcount can perfectly
> well be increased by memory-failure/soft_offline handlers, so BUGing
> there does not make much sense.

Makes sense. I will remove the VM_BUG_ON. (A rough sketch of the race
window we are discussing is at the end of this mail.)

>
> One thing we could do is to check the state of the page we want to
> retrieve from the free hugepage pool.
> We should discard any HWpoisoned ones, and dissolve them.
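
For reference, a simplified sketch of the check I am referring to,
paraphrased from mm/hugetlb.c around this series (the loop shape in the
real tree differs slightly and may change across versions), showing that
hwpoisoned huge pages are never handed out by the free-pool dequeue path:

static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
{
	struct page *page;

	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
		/* Never hand out a hwpoisoned huge page. */
		if (PageHWPoison(page))
			continue;

		list_move(&page->lru, &h->hugepage_activelist);
		set_page_refcounted(page);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		return page;
	}

	return NULL;
}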
>
> The thing is, memory-failure/soft_offline should allocate a new hugepage
> for the free pool, to keep the pool stable.
> Something like [1].
>
> Anyway, this is orthogonal to this patch, and something I will work on
> soon.
>
> [1] https://lore.kernel.org/linux-mm/20210222135137.25717-2-osalvador@suse.de/T/#u

Thanks for your efforts on this.

> --
> Oscar Salvador
> SUSE L3
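
P.S. To make the window above concrete, here is a rough, illustrative
sketch. This is not the code from this series; the surrounding function
is hypothetical, though set_compound_page_dtor(), put_page_testzero()
and enqueue_huge_page() are the real helpers involved:

/*
 * Illustrative only: the ordering that opens the window.  Once the
 * hugetlb destructor is visible, a concurrent memory_failure_hugetlb()
 * can pin the page via get_page_unless_zero(), so put_page_testzero()
 * below may legitimately return false.
 */
static void hugetlb_requeue_sketch(struct hstate *h, struct page *page)
{
	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);

	/*
	 * Window: CPU1 may run get_hwpoison_page() ->
	 * get_page_unless_zero() right here and take a reference.
	 */

	if (put_page_testzero(page))
		enqueue_huge_page(h, page);
	/*
	 * else: memory-failure/soft_offline owns a reference.  A
	 * VM_BUG_ON() here would be wrong, and re-enqueueing the page
	 * would leave a self-refcounted page in the free pool.
	 */
}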