From: Reinette Chatre
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com
Cc: gavin.hindman@intel.com, vikas.shivappa@linux.intel.com, dave.hansen@intel.com, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org, Reinette Chatre, linux-mm@kvack.org, Andrew Morton, Mike Kravetz, Michal Hocko, Vlastimil Babka
Subject: [RFC PATCH V2 21/22] mm/hugetlb: Enable large allocations through gigantic page API
Date: Tue, 13 Feb 2018 07:47:05 -0800
X-Mailer: git-send-email 2.13.6

Memory allocation within the kernel as supported by the SLAB allocators is limited by the maximum allocatable page order. With the default maximum page order of 11 it is not possible for the SLAB allocators to allocate more than 4MB.

Larger contiguous allocations are currently possible within the kernel through the gigantic page support, but their creation is currently directed from userspace. Expose the gigantic page support within the kernel to enable memory allocations that cannot be fulfilled by the SLAB allocators.

Suggested-by: Dave Hansen
Signed-off-by: Reinette Chatre
Cc: linux-mm@kvack.org
Cc: Andrew Morton
Cc: Mike Kravetz
Cc: Michal Hocko
Cc: Vlastimil Babka
---
 include/linux/hugetlb.h |  2 ++
 mm/hugetlb.c            | 10 ++++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 82a25880714a..8f2125dc8a86 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -349,6 +349,8 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask);
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			pgoff_t idx);
+struct page *alloc_gigantic_page(int nid, unsigned int order, gfp_t gfp_mask);
+void free_gigantic_page(struct page *page, unsigned int order);
 
 /* arch callback */
 int __init __alloc_bootmem_huge_page(struct hstate *h);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9a334f5fb730..f3f5e4ef3144 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1060,7 +1060,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }
 
-static void free_gigantic_page(struct page *page, unsigned int order)
+void free_gigantic_page(struct page *page, unsigned int order)
 {
 	free_contig_range(page_to_pfn(page), 1 << order);
 }
@@ -1108,17 +1108,15 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 	return zone_spans_pfn(zone, last_pfn);
 }
 
-static struct page *alloc_gigantic_page(int nid, struct hstate *h)
+struct page *alloc_gigantic_page(int nid, unsigned int order, gfp_t gfp_mask)
 {
-	unsigned int order = huge_page_order(h);
 	unsigned long nr_pages = 1 << order;
 	unsigned long ret, pfn, flags;
 	struct zonelist *zonelist;
 	struct zone *zone;
 	struct zoneref *z;
-	gfp_t gfp_mask;
 
-	gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+	gfp_mask = gfp_mask | __GFP_THISNODE;
 	zonelist = node_zonelist(nid, gfp_mask);
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), NULL) {
 		spin_lock_irqsave(&zone->lock, flags);
@@ -1155,7 +1153,7 @@ static struct page *alloc_fresh_gigantic_page_node(struct hstate *h, int nid)
 {
 	struct page *page;
 
-	page = alloc_gigantic_page(nid, h);
+	page = alloc_gigantic_page(nid, huge_page_order(h), htlb_alloc_mask(h));
 	if (page) {
 		prep_compound_gigantic_page(page, huge_page_order(h));
 		prep_new_huge_page(h, page, nid);
-- 
2.13.6
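With the two symbols above exported, an in-kernel user could allocate a physically contiguous region larger than the SLAB limit. A non-buildable sketch of such a caller (the example_* names, the GFP_KERNEL mask, and the error handling are illustrative assumptions, not part of this patch; note that alloc_gigantic_page ORs in __GFP_THISNODE itself):

```c
#include <linux/hugetlb.h>
#include <linux/gfp.h>

/* Allocate 2^order physically contiguous pages on node @nid,
 * beyond what the SLAB allocators' maximum page order permits. */
static struct page *example_alloc_big_buffer(int nid, unsigned int order)
{
	/* Returns NULL if no suitable contiguous range could be found. */
	return alloc_gigantic_page(nid, order, GFP_KERNEL);
}

static void example_free_big_buffer(struct page *page, unsigned int order)
{
	free_gigantic_page(page, order);
}
```

The existing hugetlb caller (alloc_fresh_gigantic_page_node in the diff above) follows the same pattern, passing huge_page_order(h) and htlb_alloc_mask(h) for its hstate.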