From: Yunfeng Ye
Subject: [PATCH] mm: Support memblock alloc on the exact node for sparse_buffer_init()
Date: Wed, 18 Sep 2019 12:22:29 +0800

Currently, when memblock_find_in_range_node() fails to find memory on the
requested node, it retries with %NUMA_NO_NODE and takes memory from other
nodes.
This is usually fine, but when a large allocation cannot be satisfied
on a node while smaller ones still can, it is preferable to fall back to
smaller node-local allocations rather than pull the large allocation from
other nodes.

sparse_buffer_init() is such a case: it preallocates a large chunk of memory
for the page management structures, which need a lot of memory. If the node
cannot provide one large block, the allocation can be split into smaller
node-local pieces instead of being taken from other nodes.

Add the %MEMBLOCK_ALLOC_EXACT_NODE flag for this situation. It behaves like
%MEMBLOCK_ALLOC_ACCESSIBLE, except that it does not fall back to other nodes
when the requested node fails to satisfy the allocation. If the large
contiguous allocation in sparse_buffer_init() fails, small blocks are
allocated section by section later.

Signed-off-by: Yunfeng Ye
---
 include/linux/memblock.h | 1 +
 mm/memblock.c            | 3 ++-
 mm/sparse.c              | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f491690..9a81d9c 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -339,6 +339,7 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
 #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
 #define MEMBLOCK_ALLOC_ACCESSIBLE	0
 #define MEMBLOCK_ALLOC_KASAN	1
+#define MEMBLOCK_ALLOC_EXACT_NODE	2
 
 /* We are using top down, so it is safe to use 0 here */
 #define MEMBLOCK_LOW_LIMIT 0
diff --git a/mm/memblock.c b/mm/memblock.c
index 7d4f61a..dbd52c3c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -277,6 +277,7 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
 
 	/* pump up @end */
 	if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
+	    end == MEMBLOCK_ALLOC_EXACT_NODE ||
 	    end == MEMBLOCK_ALLOC_KASAN)
 		end = memblock.current_limit;
 
@@ -1365,7 +1366,7 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 	if (found && !memblock_reserve(found, size))
 		goto done;
 
-	if (nid != NUMA_NO_NODE) {
+	if (end != MEMBLOCK_ALLOC_EXACT_NODE && nid != NUMA_NO_NODE) {
 		found = memblock_find_in_range_node(size, align, start,
 						    end, NUMA_NO_NODE,
 						    flags);
diff --git a/mm/sparse.c b/mm/sparse.c
index 72f010d..828db46 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -477,7 +477,7 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
 
 	sparsemap_buf =
 		memblock_alloc_try_nid_raw(size, PAGE_SIZE, addr,
-					MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+					MEMBLOCK_ALLOC_EXACT_NODE, nid);
 	sparsemap_buf_end = sparsemap_buf + size;
 }
 
-- 
2.7.4.huawei.3
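
For reference, here is an illustrative sketch (not part of the patch) of how a
caller could combine the new MEMBLOCK_ALLOC_EXACT_NODE limit with a node-local
fallback to smaller allocations. try_large_then_small() and the halving
strategy are hypothetical; memblock_alloc_try_nid_raw(), MEMBLOCK_LOW_LIMIT
and PAGE_SIZE are the existing interfaces already used above.
sparse_buffer_init() itself does not need such a loop, since the section-by-
section allocation mentioned in the commit message provides the fallback.

/*
 * Illustrative sketch only -- not part of the patch.  It demonstrates the
 * intended semantics of MEMBLOCK_ALLOC_EXACT_NODE; the helper and the
 * halving strategy are hypothetical.
 */
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/memblock.h>

static void * __init try_large_then_small(phys_addr_t size, int nid,
					  phys_addr_t *got)
{
	void *buf;

	for (; size >= PAGE_SIZE; size /= 2) {
		/*
		 * Passing MEMBLOCK_ALLOC_EXACT_NODE as @max_addr behaves like
		 * MEMBLOCK_ALLOC_ACCESSIBLE, except that
		 * memblock_alloc_range_nid() will not retry with
		 * NUMA_NO_NODE: the request either succeeds on @nid or fails.
		 */
		buf = memblock_alloc_try_nid_raw(size, PAGE_SIZE,
						 MEMBLOCK_LOW_LIMIT,
						 MEMBLOCK_ALLOC_EXACT_NODE,
						 nid);
		if (buf) {
			*got = size;	/* report how much was obtained */
			return buf;
		}
	}

	*got = 0;
	return NULL;
}

With this patch applied, sparse_buffer_init() simply passes
MEMBLOCK_ALLOC_EXACT_NODE directly and, if that large allocation fails,
relies on the later per-section allocations instead of a loop like the one
above.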