Date: Mon, 22 Mar 2021 15:19:21 +0100
From: Michal Hocko
To: Mike Kravetz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt,
    Oscar Salvador, David Hildenbrand, Muchun Song, David Rientjes,
    Miaohe Lin, Peter Zijlstra, Matthew Wilcox, HORIGUCHI NAOYA,
    "Aneesh Kumar K. V", Waiman Long, Peter Xu, Mina Almasry,
    Andrew Morton
Subject: Re: [RFC PATCH 4/8] hugetlb: call update_and_free_page without hugetlb_lock
References: <20210319224209.150047-1-mike.kravetz@oracle.com>
    <20210319224209.150047-5-mike.kravetz@oracle.com>
In-Reply-To: <20210319224209.150047-5-mike.kravetz@oracle.com>

On Fri 19-03-21 15:42:05, Mike Kravetz wrote:
> With the introduction of remove_hugetlb_page(), there is no need for
> update_and_free_page to hold the hugetlb lock. Change all callers to
> drop the lock before calling.
>
> With additional code modifications, this will allow loops which decrease
> the huge page pool to drop the hugetlb_lock with each page to reduce
> long hold times.
>
> The ugly unlock/lock cycle in free_pool_huge_page will be removed in
> a subsequent patch which restructures free_pool_huge_page.
>
> Signed-off-by: Mike Kravetz

Looks good to me. I will not ack it right now though. I am still
crawling through the series and want to get a full picture. So far it
looks promising ;).

> ---
>  mm/hugetlb.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ae185d3315e0..3028cf10d504 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1362,14 +1362,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  				1 << PG_writeback);
>  	}
>  	if (hstate_is_gigantic(h)) {
> -		/*
> -		 * Temporarily drop the hugetlb_lock, because
> -		 * we might block in free_gigantic_page().
> -		 */
> -		spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> -		spin_lock(&hugetlb_lock);
>  	} else {
>  		__free_pages(page, huge_page_order(h));
>  	}
> @@ -1435,16 +1429,18 @@ static void __free_huge_page(struct page *page)
>
>  	if (HPageTemporary(page)) {
>  		remove_hugetlb_page(h, page, false);
> +		spin_unlock(&hugetlb_lock);
>  		update_and_free_page(h, page);
>  	} else if (h->surplus_huge_pages_node[nid]) {
>  		/* remove the page from active list */
>  		remove_hugetlb_page(h, page, true);
> +		spin_unlock(&hugetlb_lock);
>  		update_and_free_page(h, page);
>  	} else {
>  		arch_clear_hugepage_flags(page);
>  		enqueue_huge_page(h, page);
> +		spin_unlock(&hugetlb_lock);
>  	}
> -	spin_unlock(&hugetlb_lock);
>  }
>
>  /*
> @@ -1725,7 +1721,13 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>  			list_entry(h->hugepage_freelists[node].next,
>  					struct page, lru);
>  			remove_hugetlb_page(h, page, acct_surplus);
> +			/*
> +			 * unlock/lock around update_and_free_page is temporary
> +			 * and will be removed with subsequent patch.
> +			 */
> +			spin_unlock(&hugetlb_lock);
>  			update_and_free_page(h, page);
> +			spin_lock(&hugetlb_lock);
>  			ret = 1;
>  			break;
>  		}
> @@ -1794,8 +1796,9 @@ int dissolve_free_huge_page(struct page *page)
>  		}
>  		remove_hugetlb_page(h, page, false);
>  		h->max_huge_pages--;
> +		spin_unlock(&hugetlb_lock);
>  		update_and_free_page(h, head);
> -		rc = 0;
> +		return 0;
>  	}
>  out:
>  	spin_unlock(&hugetlb_lock);
> @@ -2572,7 +2575,9 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
>  			remove_hugetlb_page(h, page, false);
>  			h->free_huge_pages--;
>  			h->free_huge_pages_node[page_to_nid(page)]--;
> +			spin_unlock(&hugetlb_lock);
>  			update_and_free_page(h, page);
> +			spin_lock(&hugetlb_lock);
>
>  			/*
>  			 * update_and_free_page could have dropped lock so
> --
> 2.30.2

-- 
Michal Hocko
SUSE Labs