References: <20210117151053.24600-1-songmuchun@bytedance.com>
 <20210117151053.24600-5-songmuchun@bytedance.com>
 <59d18082-248a-7014-b917-625d759c572@google.com>
In-Reply-To: <59d18082-248a-7014-b917-625d759c572@google.com>
From: Muchun Song
Date: Mon, 25 Jan 2021 11:58:27 +0800
Subject: Re: [External] Re: [PATCH v13 04/12] mm: hugetlb: defer freeing of HugeTLB pages
To: David Rientjes
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com,
 bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
 luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com,
 anshuman.khandual@arm.com,
 jroedel@suse.de, Mina Almasry, Matthew Wilcox, Oscar Salvador,
 Michal Hocko, "Song Bao Hua (Barry Song)", David Hildenbrand,
 HORIGUCHI NAOYA (堀口 直也), Xiongchun duan, linux-doc@vger.kernel.org,
 LKML, Linux Memory Management List, linux-fsdevel
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jan 25, 2021 at 7:55 AM David Rientjes wrote:
>
> On Sun, 17 Jan 2021, Muchun Song wrote:
>
> > In the subsequent patch, we should allocate the vmemmap pages when
> > freeing HugeTLB pages. But update_and_free_page() is always called
> > while holding hugetlb_lock, so we cannot use GFP_KERNEL to allocate
> > the vmemmap pages. However, we can defer the actual freeing to a
> > kworker to avoid using GFP_ATOMIC to allocate the vmemmap pages.
> >
> > update_hpage_vmemmap_workfn() is where the call to allocate vmemmap
> > pages will be inserted.
>
> I think it's reasonable to assume that userspace can release free hugetlb
> pages from the pool on oom conditions when reclaim has become too
> expensive. This approach now requires that we can allocate vmemmap pages
> in a potential oom condition as a prerequisite for freeing memory, which
> seems less than ideal.
>
> And, by doing this through a kworker, we can presumably get queued behind
> another work item that requires memory to make forward progress in this
> oom condition.
>
> Two thoughts:
>
> - We're going to be freeing the hugetlb page after we can allocate the
>   vmemmap pages, so why do we need to allocate with GFP_KERNEL? Can't we
>   simply dip into memory reserves using GFP_ATOMIC (and thus can be
>   holding hugetlb_lock) because we know we'll be freeing more memory than
>   we'll be allocating?

Right.

>   I think requiring a GFP_KERNEL allocation to block to free memory for
>   vmemmap when we'll be freeing memory ourselves is dubious. This
>   simplifies all of this.

Thanks for your thoughts. I just thought that we could go to reclaim
when there is no memory in the system. But we cannot block when using
GFP_KERNEL. Actually, we cannot handle a memory allocation failure.
In the next patch, I sleep for 100ms and then retry the allocation
when it fails.

> - If the answer is that we actually have to use GFP_KERNEL for other
>   reasons, what are your thoughts on pre-allocating the vmemmap as opposed
>   to deferring to a kworker? In other words, preallocate the necessary
>   memory with GFP_KERNEL and put it on a linked list in struct hstate
>   before acquiring hugetlb_lock.

put_page() can be called from atomic context, so we cannot sleep in
__free_huge_page(). It seems a little tricky. Right?
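For reference, a minimal sketch of the pre-allocation idea under discussion
(untested; the function name and the nr_vmemmap_pages_per_hpage() helper are
assumptions for illustration, not code from this series):

static int hugetlb_vmemmap_prealloc(struct hstate *h, struct list_head *pages)
{
        int i;

        /*
         * Allocate the vmemmap pages in a sleepable context, before
         * hugetlb_lock is taken, and stash them on a caller-provided list.
         */
        for (i = 0; i < nr_vmemmap_pages_per_hpage(h); i++) {
                struct page *page = alloc_page(GFP_KERNEL);

                if (!page)
                        goto out_free;
                list_add(&page->lru, pages);
        }
        return 0;

out_free:
        /* Undo the partial allocation on failure. */
        while (!list_empty(pages)) {
                struct page *page = list_first_entry(pages, struct page, lru);

                list_del(&page->lru);
                __free_page(page);
        }
        return -ENOMEM;
}

The only point of the shape above is that the GFP_KERNEL allocation happens
outside hugetlb_lock and the pages travel on a list, so the freeing path under
the lock would only consume pre-allocated memory.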
>
> > Signed-off-by: Muchun Song
> > Reviewed-by: Mike Kravetz
> > ---
> >  mm/hugetlb.c         | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> >  mm/hugetlb_vmemmap.c | 12 --------
> >  mm/hugetlb_vmemmap.h | 17 ++++++++++++
> >  3 files changed, 89 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 140135fc8113..c165186ec2cf 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1292,15 +1292,85 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >                                                   unsigned int order) { }
> >  #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static void __free_hugepage(struct hstate *h, struct page *page);
> > +
> > +/*
> > + * update_and_free_page() is always called while holding hugetlb_lock, so we
> > + * cannot use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
> > + * actual freeing to a workqueue to avoid using GFP_ATOMIC to allocate the
> > + * vmemmap pages.
> > + *
> > + * update_hpage_vmemmap_workfn() is where the call to allocate vmemmap pages
> > + * will be inserted.
> > + *
> > + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of pages
> > + * to be freed and frees them one-by-one. As the page->mapping pointer is going
> > + * to be cleared in update_hpage_vmemmap_workfn() anyway, it is reused as the
> > + * llist_node structure of a lockless linked list of huge pages to be freed.
> > + */
> > +static LLIST_HEAD(hpage_update_freelist);
> > +
> > +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> >  {
> > -        int i;
> > +        struct llist_node *node;
> > +
> > +        node = llist_del_all(&hpage_update_freelist);
> > +
> > +        while (node) {
> > +                struct page *page;
> > +                struct hstate *h;
> > +
> > +                page = container_of((struct address_space **)node,
> > +                                    struct page, mapping);
> > +                node = node->next;
> > +                page->mapping = NULL;
> > +                h = page_hstate(page);
> > +
> > +                spin_lock(&hugetlb_lock);
> > +                __free_hugepage(h, page);
> > +                spin_unlock(&hugetlb_lock);
> >
> > +                cond_resched();
>
> Wouldn't it be better to hold hugetlb_lock for the iteration rather than
> constantly dropping it and reacquiring it? Use
> cond_resched_lock(&hugetlb_lock) instead?

Great. We can use it. Thanks.
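For illustration, a sketch of how the loop above might look if it held
hugetlb_lock across the whole iteration and used cond_resched_lock() as
suggested (based only on the hunk quoted above; not the final patch):

static void update_hpage_vmemmap_workfn(struct work_struct *work)
{
        struct llist_node *node;

        node = llist_del_all(&hpage_update_freelist);

        /* Hold hugetlb_lock across the iteration instead of re-taking it. */
        spin_lock(&hugetlb_lock);
        while (node) {
                struct page *page;
                struct hstate *h;

                page = container_of((struct address_space **)node,
                                    struct page, mapping);
                node = node->next;
                page->mapping = NULL;
                h = page_hstate(page);

                __free_hugepage(h, page);

                /*
                 * cond_resched_lock() drops and re-takes hugetlb_lock only
                 * when a reschedule (or lock contention) is pending, so the
                 * common case avoids the repeated unlock/lock.
                 */
                cond_resched_lock(&hugetlb_lock);
        }
        spin_unlock(&hugetlb_lock);
}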