Date: Wed, 26 Jul 2023 14:01:13 +0300
From: Mike Rapoport
To: Usama Arif
Cc: linux-mm@kvack.org, muchun.song@linux.dev, mike.kravetz@oracle.com,
    linux-kernel@vger.kernel.org, fam.zheng@bytedance.com,
    liangma@liangbit.com, simon.evans@bytedance.com,
    punit.agrawal@bytedance.com
Subject: Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
Message-ID: <20230726110113.GT1901145@kernel.org>
References: <20230724134644.1299963-1-usama.arif@bytedance.com>
 <20230724134644.1299963-3-usama.arif@bytedance.com>
In-Reply-To: <20230724134644.1299963-3-usama.arif@bytedance.com>

On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
> This propagates the hugepage size from the memblock APIs
> (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
> so that it can be stored in struct memblock_region.
> This does not introduce any functional change and hugepage_size is not
> used in this commit. It is just setup for the next commit, where
> hugepage_size is used to skip initialization of struct pages that will
> be freed later when HVO is enabled.
> 
> Signed-off-by: Usama Arif
> ---
>  arch/arm64/mm/kasan_init.c                   |  2 +-
>  arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
>  arch/powerpc/platforms/pseries/setup.c       |  4 +-
>  arch/powerpc/sysdev/dart_iommu.c             |  2 +-
>  include/linux/memblock.h                     |  8 ++-
>  mm/cma.c                                     |  4 +-
>  mm/hugetlb.c                                 |  6 +-
>  mm/memblock.c                                | 60 ++++++++++++--------
>  mm/mm_init.c                                 |  2 +-
>  mm/sparse-vmemmap.c                          |  2 +-
>  tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
>  11 files changed, 56 insertions(+), 38 deletions(-)
> 
[ snip ]
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index f71ff9f0ec81..bb8019540d73 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -63,6 +63,7 @@ struct memblock_region {
>  #ifdef CONFIG_NUMA
>  	int nid;
>  #endif
> +	phys_addr_t hugepage_size;
>  };
>  
>  /**
> @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
>  				      phys_addr_t start, phys_addr_t end);
>  phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
>  				      phys_addr_t align, phys_addr_t start,
> -				      phys_addr_t end, int nid, bool exact_nid);
> +				      phys_addr_t end, int nid, bool exact_nid,
> +				      phys_addr_t hugepage_size);

Rather than adding yet another parameter to memblock_phys_alloc_range(),
we can add an API that sets a flag on the reserved regions. With this,
the hugetlb reservation code can set that flag when HVO is enabled, and
memmap_init_reserved_pages() will skip regions that have it set. A rough
sketch of what this could look like is at the end of this mail.

>  phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
>  
>  static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
> @@ -415,7 +417,7 @@ void *memblock_alloc_exact_nid_raw(phys_addr_t size, phys_addr_t align,
>  				   int nid);
>  void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
>  				 phys_addr_t min_addr, phys_addr_t max_addr,
> -				 int nid);
> +				 int nid, phys_addr_t hugepage_size);
>  void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
>  			     phys_addr_t min_addr, phys_addr_t max_addr,
>  			     int nid);
> @@ -431,7 +433,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
>  {
>  	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
>  					  MEMBLOCK_ALLOC_ACCESSIBLE,
> -					  NUMA_NO_NODE);
> +					  NUMA_NO_NODE, 0);
>  }
>  
>  static inline void *memblock_alloc_from(phys_addr_t size,

-- 
Sincerely yours,
Mike.
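
P.S. For illustration only, a rough sketch of the flag-based approach.
The new names below (MEMBLOCK_RSRV_NOINIT and
memblock_reserved_mark_noinit()) are hypothetical, not existing
memblock interfaces; the hugetlb call site and the memblock.memory part
of memmap_init_reserved_pages() are elided.

/* include/linux/memblock.h (hypothetical): a new reserved-region flag */
#define MEMBLOCK_RSRV_NOINIT	0x10	/* skip struct page init for this region */

/* mm/memblock.c (hypothetical): flag an already reserved range */
int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base,
						  phys_addr_t size)
{
	struct memblock_type *type = &memblock.reserved;
	int start_rgn, end_rgn, i, ret;

	/* split regions so the flag covers exactly [base, base + size) */
	ret = memblock_isolate_range(type, base, size, &start_rgn, &end_rgn);
	if (ret)
		return ret;

	for (i = start_rgn; i < end_rgn; i++)
		type->regions[i].flags |= MEMBLOCK_RSRV_NOINIT;

	return 0;
}

/* mm/memblock.c: reserved-region loop of memmap_init_reserved_pages() */
static void __init memmap_init_reserved_pages(void)
{
	struct memblock_region *region;
	phys_addr_t start, end;
	int nid;

	/* ... handling of memblock.memory regions stays as it is ... */

	/* initialize struct pages for the reserved regions */
	for_each_reserved_mem_region(region) {
		/* HVO frees these struct pages later, skip the init */
		if (region->flags & MEMBLOCK_RSRV_NOINIT)
			continue;

		nid = memblock_get_region_node(region);
		start = region->base;
		end = start + region->size;

		reserve_bootmem_region(start, end, nid);
	}
}

The hugetlb bootmem path would then call memblock_reserved_mark_noinit()
on the range it just reserved whenever HVO is enabled, so nothing new
has to be threaded through memblock_alloc_try_nid_raw() and
memblock_alloc_range_nid().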