Subject: Re: [PATCH 1/2] x86/mm/ident_map: Add PUD level 1GB page support
From: Xunlei Pang
Reply-To: xlpang@redhat.com
To: Yinghai Lu, Xunlei Pang
Cc: Linux Kernel Mailing List, kexec@lists.infradead.org, Andrew Morton, Eric Biederman, Dave Young, the arch/x86 maintainers, Ingo Molnar, "H. Peter Anvin", Thomas Gleixner
Date: Wed, 26 Apr 2017 10:47:06 +0800
Message-ID: <59000A2A.7040402@redhat.com>
References: <1493111582-28261-1-git-send-email-xlpang@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 04/26/2017 at 03:49 AM, Yinghai Lu wrote:
> On Tue, Apr 25, 2017 at 2:13 AM, Xunlei Pang wrote:
>> The current kernel_ident_mapping_init() creates the identity
>> mapping using 2MB pages (PMD level); this patch adds 1GB
>> page (PUD level) support.
>>
>> This is useful on large machines to save some reserved memory
>> (as paging structures) in the kdump case when kexec sets up
>> identity mappings before booting into the new kernel.
>>
>> We will utilize this new support in the following patch.
>>
>> Signed-off-by: Xunlei Pang
>> ---
>>  arch/x86/boot/compressed/pagetable.c | 2 +-
>>  arch/x86/include/asm/init.h          | 3 ++-
>>  arch/x86/kernel/machine_kexec_64.c   | 2 +-
>>  arch/x86/mm/ident_map.c              | 13 ++++++++++++-
>>  arch/x86/power/hibernate_64.c        | 2 +-
>>  5 files changed, 17 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c
>> index 56589d0..1d78f17 100644
>> --- a/arch/x86/boot/compressed/pagetable.c
>> +++ b/arch/x86/boot/compressed/pagetable.c
>> @@ -70,7 +70,7 @@ static void *alloc_pgt_page(void *context)
>>   * Due to relocation, pointers must be assigned at run time not build time.
>>   */
>>  static struct x86_mapping_info mapping_info = {
>> -	.pmd_flag = __PAGE_KERNEL_LARGE_EXEC,
>> +	.page_flag = __PAGE_KERNEL_LARGE_EXEC,
>>  };
>>
>>  /* Locates and clears a region for a new top level page table. */
>> diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
>> index 737da62..46eab1a 100644
>> --- a/arch/x86/include/asm/init.h
>> +++ b/arch/x86/include/asm/init.h
>> @@ -4,8 +4,9 @@
>>  struct x86_mapping_info {
>>  	void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
>>  	void *context;			 /* context for alloc_pgt_page */
>> -	unsigned long pmd_flag;		 /* page flag for PMD entry */
>> +	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
>>  	unsigned long offset;		 /* ident mapping offset */
>> +	bool use_pud_page;		 /* PUD level 1GB page support */
> how about use direct_gbpages instead?
> use_pud_page is confusing.
ok

>
>> };
>>
>> int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
>> diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
>> index 085c3b3..1d4f2b0 100644
>> --- a/arch/x86/kernel/machine_kexec_64.c
>> +++ b/arch/x86/kernel/machine_kexec_64.c
>> @@ -113,7 +113,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
>>  	struct x86_mapping_info info = {
>>  		.alloc_pgt_page	= alloc_pgt_page,
>>  		.context	= image,
>> -		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
>> +		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,
>>  	};
>>  	unsigned long mstart, mend;
>>  	pgd_t *level4p;
>> diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
>> index 04210a2..0ad0280 100644
>> --- a/arch/x86/mm/ident_map.c
>> +++ b/arch/x86/mm/ident_map.c
>> @@ -13,7 +13,7 @@ static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
>>  		if (pmd_present(*pmd))
>>  			continue;
>>
>> -		set_pmd(pmd, __pmd((addr - info->offset) | info->pmd_flag));
>> +		set_pmd(pmd, __pmd((addr - info->offset) | info->page_flag));
>>  	}
>>  }
>>
>> @@ -30,6 +30,17 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
>>  		if (next > end)
>>  			next = end;
>>
>> +		if (info->use_pud_page) {
>> +			pud_t pudval;
>> +
>> +			if (pud_present(*pud))
>> +				continue;
>> +
>> +			pudval = __pud((addr - info->offset) | info->page_flag);
>> +			set_pud(pud, pudval);
> should mask addr with PUD_MASK:
> 	addr &= PUD_MASK;
> 	set_pud(pud, __pud((addr - info->offset) | info->page_flag));

Yes, will update, thanks for the catch.
Regards,
Xunlei

>
>
>> +			continue;
>> +		}
>> +
>>  		if (pud_present(*pud)) {
>>  			pmd = pmd_offset(pud, 0);
>>  			ident_pmd_init(info, pmd, addr, next);
>> diff --git a/arch/x86/power/hibernate_64.c b/arch/x86/power/hibernate_64.c
>> index 6a61194..a6e21fe 100644
>> --- a/arch/x86/power/hibernate_64.c
>> +++ b/arch/x86/power/hibernate_64.c
>> @@ -104,7 +104,7 @@ static int set_up_temporary_mappings(void)
>>  {
>>  	struct x86_mapping_info info = {
>>  		.alloc_pgt_page	= alloc_pgt_page,
>> -		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
>> +		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,
>>  		.offset		= __PAGE_OFFSET,
>>  	};
>>  	unsigned long mstart, mend;
>> --
>> 1.8.3.1
>>