From: Gang Li
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v3 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization
Date: Tue, 2 Jan 2024 21:12:48 +0800
Message-Id: <20240102131249.76622-7-gang.li@linux.dev>
In-Reply-To: <20240102131249.76622-1-gang.li@linux.dev>
References: <20240102131249.76622-1-gang.li@linux.dev>

By distributing both the allocation and the initialization tasks across
multiple threads, the initialization of 2M hugetlb pages becomes faster,
thereby improving boot speed.
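For a sense of how the work gets split, here is a minimal user-space sketch
of the chunking that the two padata_mt_job configurations in this patch
produce; the node count and page count below are made-up example values,
not taken from the test machines listed next:

	#include <stdio.h>

	int main(void)
	{
		/* Example values only: a 4-node machine asked for 10240 2M pages. */
		unsigned long max_huge_pages = 10240;
		unsigned long nr_nodes = 4;

		/*
		 * Allocation phase: up to two workers per memory node, each taking
		 * chunks of at least max_huge_pages / nodes / 2 pages, mirroring
		 * the job.min_chunk / job.max_threads setup in the patch.
		 */
		unsigned long alloc_min_chunk = max_huge_pages / nr_nodes / 2;
		unsigned long alloc_max_threads = nr_nodes * 2;

		/*
		 * Vmemmap-optimize phase: one unit of work per node, so at most
		 * one worker walks each node's hugetlb free list.
		 */
		unsigned long vmemmap_max_threads = nr_nodes;

		printf("alloc: <= %lu threads, chunks of >= %lu pages\n",
		       alloc_max_threads, alloc_min_chunk);
		printf("vmemmap: <= %lu threads, one node free list each\n",
		       vmemmap_max_threads);
		return 0;
	}

With these example numbers each of the (at most) eight allocation workers
takes chunks of at least 1280 pages, so the work is never split into pieces
too small to be worth a thread.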
Here are some test results:

      test            no patch (ms)   patched (ms)   saved
 -------------------  --------------  -------------  --------
  256c2t(4 node) 2M            3336           1051    68.52%
  128c1t(2 node) 2M            1943            716    63.15%

Signed-off-by: Gang Li
---
 mm/hugetlb.c | 72 ++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 53 insertions(+), 19 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a71bc1622b53b..d1629df5f399f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include <linux/padata.h>
 #include
 #include
@@ -3510,6 +3511,38 @@ static void __init hugetlb_hstate_alloc_pages_report(unsigned long allocated, st
 	}
 }
 
+static void __init hugetlb_alloc_node(unsigned long start, unsigned long end, void *arg)
+{
+	struct hstate *h = (struct hstate *)arg;
+	int i, num = end - start;
+	nodemask_t node_alloc_noretry;
+	unsigned long flags;
+	int next_nid_to_alloc = 0;
+
+	/* Bit mask controlling how hard we retry per-node allocations.*/
+	nodes_clear(node_alloc_noretry);
+
+	for (i = 0; i < num; ++i) {
+		struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
+						&node_alloc_noretry, &next_nid_to_alloc);
+		if (!folio)
+			break;
+		spin_lock_irqsave(&hugetlb_lock, flags);
+		__prep_account_new_huge_page(h, folio_nid(folio));
+		enqueue_hugetlb_folio(h, folio);
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
+		cond_resched();
+	}
+}
+
+static void __init hugetlb_vmemmap_optimize_node(unsigned long start, unsigned long end, void *arg)
+{
+	struct hstate *h = (struct hstate *)arg;
+	int nid = start;
+
+	hugetlb_vmemmap_optimize_folios(h, &h->hugepage_freelists[nid]);
+}
+
 static unsigned long __init hugetlb_hstate_alloc_pages_gigantic(struct hstate *h)
 {
 	unsigned long i;
@@ -3529,26 +3562,27 @@ static unsigned long __init hugetlb_hstate_alloc_pages_gigantic(struct hstate *h
 
 static unsigned long __init hugetlb_hstate_alloc_pages_non_gigantic(struct hstate *h)
 {
-	unsigned long i;
-	struct folio *folio;
-	LIST_HEAD(folio_list);
-	nodemask_t node_alloc_noretry;
-
-	/* Bit mask controlling how hard we retry per-node allocations.*/
-	nodes_clear(node_alloc_noretry);
-
-	for (i = 0; i < h->max_huge_pages; ++i) {
-		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
-						&node_alloc_noretry);
-		if (!folio)
-			break;
-		list_add(&folio->lru, &folio_list);
-		cond_resched();
-	}
-
-	prep_and_add_allocated_folios(h, &folio_list);
+	struct padata_mt_job job = {
+		.fn_arg		= h,
+		.align		= 1,
+		.numa_aware	= true
+	};
 
-	return i;
+	job.thread_fn	= hugetlb_alloc_node;
+	job.start	= 0;
+	job.size	= h->max_huge_pages;
+	job.min_chunk	= h->max_huge_pages / num_node_state(N_MEMORY) / 2;
+	job.max_threads	= num_node_state(N_MEMORY) * 2;
+	padata_do_multithreaded(&job);
+
+	job.thread_fn	= hugetlb_vmemmap_optimize_node;
+	job.start	= 0;
+	job.size	= num_node_state(N_MEMORY);
+	job.min_chunk	= 1;
+	job.max_threads	= num_node_state(N_MEMORY);
+	padata_do_multithreaded(&job);
+
+	return h->nr_huge_pages;
 }
 
 /*
-- 
2.20.1