Date: Sun, 23 Jun 2019 22:03:42 -0700
From: Ira Weiny
To: Pingfan Liu
Cc: linux-mm@kvack.org, Mike Kravetz, Oscar Salvador, David Hildenbrand,
    Andrew Morton, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/hugetlb: allow gigantic page allocation to migrate away smaller huge page
Message-ID: <20190624050341.GB30102@iweiny-DESK2.sc.intel.com>
References: <1561350068-8966-1-git-send-email-kernelfans@gmail.com>
In-Reply-To: <1561350068-8966-1-git-send-email-kernelfans@gmail.com>
On Mon, Jun 24, 2019 at 12:21:08PM +0800, Pingfan Liu wrote:
> The current pfn_range_valid_gigantic() rejects a pud huge page allocation
> if there is a pmd huge page inside the candidate range.
>
> But pud huge pages are the rarer resource, since they must be aligned on a
> 1GB boundary on x86. It is worth migrating a pmd huge page away to make
> room for a pud huge page.
>
> The same logic applies to pgd and pud huge pages.

I'm sorry, but I don't quite understand why we should do this.  Is this a
bug or an optimization?  It sounds like an optimization.

> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> Cc: Mike Kravetz
> Cc: Oscar Salvador
> Cc: David Hildenbrand
> Cc: Andrew Morton
> Cc: linux-kernel@vger.kernel.org
> ---
>  mm/hugetlb.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ac843d3..02d1978 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1081,7 +1081,11 @@ static bool pfn_range_valid_gigantic(struct zone *z,
>  			unsigned long start_pfn, unsigned long nr_pages)
>  {
>  	unsigned long i, end_pfn = start_pfn + nr_pages;
> -	struct page *page;
> +	struct page *page = pfn_to_page(start_pfn);
> +
> +	if (PageHuge(page))
> +		if (compound_order(compound_head(page)) >= nr_pages)

I don't think you want compound_order() here.

Ira

> +			return false;
>
>  	for (i = start_pfn; i < end_pfn; i++) {
>  		if (!pfn_valid(i))
> @@ -1098,8 +1102,6 @@ static bool pfn_range_valid_gigantic(struct zone *z,
>  		if (page_count(page) > 0)
>  			return false;
>
> -		if (PageHuge(page))
> -			return false;
>  	}
>
>  	return true;
> --
> 2.7.5
>