Subject: Re: [PATCH] arm64: mm: decrease the section size to reduce the memory reserved for the page map
To: "Song Bao Hua (Barry Song)", Mike Rapoport, Will Deacon
Cc: steve.capper@arm.com, catalin.marinas@arm.com, linux-kernel@vger.kernel.org, nsaenzjulienne@suse.de, "liwei (CM)", butao, linux-arm-kernel@lists.infradead.org, fengbaopeng
References: <20201204014443.43329-1-liwei213@huawei.com> <20201204111347.GA844@willie-the-truck> <20201204114400.GT123287@linux.ibm.com> <60cb36d5dfcb4f9c904a83b520ecfe84@hisilicon.com>
From: Anshuman Khandual
Date: Mon, 7 Dec 2020 13:00:33 +0530
In-Reply-To: <60cb36d5dfcb4f9c904a83b520ecfe84@hisilicon.com>
On 12/7/20 7:10 AM, Song Bao Hua (Barry Song) wrote:
>
>
>> -----Original Message-----
>> From: Mike Rapoport [mailto:rppt@linux.ibm.com]
>> Sent: Saturday, December 5, 2020 12:44 AM
>> To: Will Deacon
>> Cc: liwei (CM); catalin.marinas@arm.com; fengbaopeng; nsaenzjulienne@suse.de;
>> steve.capper@arm.com; Song Bao Hua (Barry Song);
>> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org; butao
>> Subject: Re: [PATCH] arm64: mm: decrease the section size to reduce the memory
>> reserved for the page map
>>
>> On Fri, Dec 04, 2020 at 11:13:47AM +0000, Will Deacon wrote:
>>> On Fri, Dec 04, 2020 at 09:44:43AM +0800, Wei Li wrote:
>>>> For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP
>>>> does not free the reserved memory for the page map; decreasing the
>>>> section size can reduce the waste of reserved memory.
>>>>
>>>> Signed-off-by: Wei Li
>>>> Signed-off-by: Baopeng Feng
>>>> Signed-off-by: Xia Qing
>>>> ---
>>>>  arch/arm64/include/asm/sparsemem.h | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
>>>> index 1f43fcc79738..8963bd3def28 100644
>>>> --- a/arch/arm64/include/asm/sparsemem.h
>>>> +++ b/arch/arm64/include/asm/sparsemem.h
>>>> @@ -7,7 +7,7 @@
>>>>
>>>>  #ifdef CONFIG_SPARSEMEM
>>>>  #define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
>>>> -#define SECTION_SIZE_BITS	30
>>>> +#define SECTION_SIZE_BITS	27
>>>
>>> We chose '30' to avoid running out of bits in the page flags. What changed?
>>
>> I think that for 64-bit there are still plenty of free bits. I didn't
>> check now, but when I played with SPARSEMEM on m68k there were 8 bits
>> for section out of 32.
>>
>>> With this patch, I can trigger:
>>>
>>> ./include/linux/mmzone.h:1170:2: error: Allocator MAX_ORDER exceeds SECTION_SIZE
>>> #error Allocator MAX_ORDER exceeds SECTION_SIZE
>>>
>>> if I bump up NR_CPUS and NODES_SHIFT.
>>
>> I don't think it's related to NR_CPUS and NODES_SHIFT.
>> This seems rather to be 64K pages causing this.
>>
>> Not that it shouldn't be addressed.
>
> Right now, only 4K pages will define ARM64_SWAPPER_USES_SECTION_MAPS.
> Other cases will use vmemmap_populate_basepages().
> The original patch should only be addressing the issue on 4K pages:
> https://lore.kernel.org/lkml/20200812010655.96339-1-liwei213@huawei.com/
>
> Would we do something like the below?
>
> #ifdef CONFIG_ARM64_4K_PAGES
> #define SECTION_SIZE_BITS 27
> #else
> #define SECTION_SIZE_BITS 30
> #endif

This is a bit arbitrary. Probably 27 can be reduced further for the 4K page
size. Instead, we should make SECTION_SIZE_BITS explicitly depend upon
MAX_ORDER. IOW, the section size should be the same as the highest order
page in the buddy allocator. CONFIG_FORCE_MAX_ZONEORDER is always defined
on arm64. A quick test shows SECTION_SIZE_BITS would be 22 on 4K pages and
29 on 64K pages. As a fallback, SECTION_SIZE_BITS can still be 30 in case
CONFIG_FORCE_MAX_ZONEORDER is not defined.

--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -7,7 +7,7 @@

 #ifdef CONFIG_SPARSEMEM
 #define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
-#define SECTION_SIZE_BITS	30
+#define SECTION_SIZE_BITS	(CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)
 #endif
 #endif

A similar approach exists on the ia64 platform as well.