Date: Thu, 8 Dec 2016 16:40:00 +0800
From: Baoquan He
To: Alexander Kuleshov
Cc: linux-kernel@vger.kernel.org, tglx@linutronix.de, hpa@zytor.com,
	mingo@redhat.com, x86@kernel.org, keescook@chromium.org,
	yinghai@kernel.org, bp@suse.de, thgarnie@google.com, luto@kernel.org,
	anderson@redhat.com, dyoung@redhat.com, xlpang@redhat.com
Subject: Re: [PATCH 1/2] x86/64: Make kernel text mapping always take one whole page table in early boot code
Message-ID: <20161208084000.GB3328@x1>
References: <1481183765-4166-1-git-send-email-bhe@redhat.com>
	<1481183765-4166-2-git-send-email-bhe@redhat.com>
	<20161208082432.GB3998@localhost.localdomain>
In-Reply-To: <20161208082432.GB3998@localhost.localdomain>

On 12/08/16 at 02:24pm, Alexander Kuleshov wrote:
> On 12-08-16, Baoquan He wrote:
> > In early boot code level2_kernel_pgt is used to map kernel text. And its
> > size varies according to KERNEL_IMAGE_SIZE and fixed at compiling time.
> > In fact we can make it always takes 512 entries of one whople page table,
> > because later function cleanup_highmap will clean up the unused entries.
> > With the help of this change kernel text mapping size can be decided at
> > runtime later, 512M if kaslr is disabled, 1G if kaslr is enabled.
>
> s/whople/whole

Will change. Thanks!

>
> > Signed-off-by: Baoquan He
> > ---
> >  arch/x86/include/asm/page_64_types.h |  3 ++-
> >  arch/x86/kernel/head_64.S            | 15 ++++++++-------
> >  arch/x86/mm/init_64.c                |  2 +-
> >  3 files changed, 11 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
> > index 9215e05..62a20ea 100644
> > --- a/arch/x86/include/asm/page_64_types.h
> > +++ b/arch/x86/include/asm/page_64_types.h
> > @@ -56,8 +56,9 @@
> >   * are fully set up. If kernel ASLR is configured, it can extend the
> >   * kernel page table mapping, reducing the size of the modules area.
> >   */
> > +#define KERNEL_MAPPING_SIZE_EXT	(1024 * 1024 * 1024)
> >  #if defined(CONFIG_RANDOMIZE_BASE)
> > -#define KERNEL_IMAGE_SIZE	(1024 * 1024 * 1024)
> > +#define KERNEL_IMAGE_SIZE	KERNEL_MAPPING_SIZE_EXT
> >  #else
> >  #define KERNEL_IMAGE_SIZE	(512 * 1024 * 1024)
> >  #endif
> >
> > diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> > index b4421cc..c4b40e7c9 100644
> > --- a/arch/x86/kernel/head_64.S
> > +++ b/arch/x86/kernel/head_64.S
> > @@ -453,17 +453,18 @@ NEXT_PAGE(level3_kernel_pgt)
> >
> >  NEXT_PAGE(level2_kernel_pgt)
> >  	/*
> > -	 * 512 MB kernel mapping. We spend a full page on this pagetable
> > -	 * anyway.
> > +	 * Kernel image size is limited to 512 MB. The kernel code+data+bss
> > +	 * must not be bigger than that.
> >  	 *
> > -	 * The kernel code+data+bss must not be bigger than that.
> > +	 * We spend a full page on this pagetable anyway, so take the whole
> > +	 * page here so that the kernel mapping size can be decided at runtime,
> > +	 * 512M if no kaslr, 1G if kaslr enabled. Later cleanup_highmap will
> > +	 * clean up those unused entries.
> >  	 *
> > -	 * (NOTE: at +512MB starts the module area, see MODULES_VADDR.
> > -	 *  If you want to increase this then increase MODULES_VADDR
> > -	 *  too.)
> > +	 * The module area starts after kernel mapping area.
> >  	 */
> >  	PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
> > -		KERNEL_IMAGE_SIZE/PMD_SIZE)
> > +		PTRS_PER_PMD)
> >
> >  NEXT_PAGE(level2_fixmap_pgt)
> >  	.fill	506,8,0
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 14b9dd7..e95b977 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -307,7 +307,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
> >  void __init cleanup_highmap(void)
> >  {
> >  	unsigned long vaddr = __START_KERNEL_map;
> > -	unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;
> > +	unsigned long vaddr_end = __START_KERNEL_map + KERNEL_MAPPING_SIZE_EXT;
> >  	unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
> >  	pmd_t *pmd = level2_kernel_pgt;
> >
> > --
> > 2.5.5
> >
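
As a side note on the sizing, in case it helps review: each PMD entry maps a
2M large page, so taking all PTRS_PER_PMD (512) entries of the
level2_kernel_pgt page covers 1G, while the non-kaslr 512M image limit only
needs 256 of them; the surplus entries are exactly what cleanup_highmap()
removes later. A tiny userspace sketch of that arithmetic, with the macro
values hard-coded here for illustration rather than pulled from the real
kernel headers:

#include <stdio.h>

/* Illustrative values only; in the kernel these come from the pgtable headers. */
#define PMD_SIZE	(2UL * 1024 * 1024)	/* one PMD entry maps 2M */
#define PTRS_PER_PMD	512UL			/* entries in one page-table page */

int main(void)
{
	unsigned long whole_page = PTRS_PER_PMD * PMD_SIZE;	/* full level2_kernel_pgt */
	unsigned long no_kaslr   = 512UL * 1024 * 1024;		/* 512M image limit */

	printf("whole level2_kernel_pgt maps: %lu MB\n", whole_page >> 20);
	printf("entries needed for 512M:      %lu\n", no_kaslr / PMD_SIZE);
	printf("entries cleanup_highmap can drop: %lu\n",
	       PTRS_PER_PMD - no_kaslr / PMD_SIZE);
	return 0;
}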