From: "Kirill A. Shutemov"
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin"
Cc: Linus Torvalds, Andy Lutomirski, Borislav Petkov, Andi Kleen,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov"
Subject: [PATCH 8/9] x86/mm: Make __VIRTUAL_MASK_SHIFT dynamic
Date: Wed, 14 Feb 2018 14:16:55 +0300
Message-Id: <20180214111656.88514-9-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180214111656.88514-1-kirill.shutemov@linux.intel.com>
References: <20180214111656.88514-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

For boot-time switching between paging modes, we need to be able to
adjust virtual mask shifts.

The change doesn't affect the kernel image size much:

    text    data     bss      dec    hex filename
 8628892 4734340 1368064 14731296 e0c820 vmlinux.before
 8628966 4734340 1368064 14731370 e0c86a vmlinux.after

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/entry/entry_64.S            | 12 ++++++++++++
 arch/x86/include/asm/page_64_types.h |  2 +-
 arch/x86/mm/dump_pagetables.c        | 12 ++++++++++--
 arch/x86/mm/kaslr.c                  |  4 +++-
 4 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index cd216c9431e1..1608b13a0b36 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -260,8 +260,20 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
	 * Change top bits to match most significant bit (47th or 56th bit
	 * depending on paging mode) in the address.
	 */
+#ifdef CONFIG_X86_5LEVEL
+	testl	$1, pgtable_l5_enabled(%rip)
+	jz	1f
+	shl	$(64 - 57), %rcx
+	sar	$(64 - 57), %rcx
+	jmp	2f
+1:
+	shl	$(64 - 48), %rcx
+	sar	$(64 - 48), %rcx
+2:
+#else
	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
+#endif

	/* If this changed %rcx, it was not canonical */
	cmpq	%rcx, %r11
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index d54a3d5b5b3b..fa7dc7cd8c19 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -56,7 +56,7 @@
 #define __PHYSICAL_MASK_SHIFT	52

 #ifdef CONFIG_X86_5LEVEL
-#define __VIRTUAL_MASK_SHIFT	56
+#define __VIRTUAL_MASK_SHIFT	(pgtable_l5_enabled ? 56 : 47)
 #else
 #define __VIRTUAL_MASK_SHIFT	47
 #endif
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 420058b05d39..9efee6f464ab 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -85,8 +85,12 @@ static struct addr_marker address_markers[] = {
	[VMALLOC_START_NR]	= { 0UL,		"vmalloc() Area" },
	[VMEMMAP_START_NR]	= { 0UL,		"Vmemmap" },
 #ifdef CONFIG_KASAN
-	[KASAN_SHADOW_START_NR]	= { KASAN_SHADOW_START,	"KASAN shadow" },
-	[KASAN_SHADOW_END_NR]	= { KASAN_SHADOW_END,	"KASAN shadow end" },
+	/*
+	 * These fields get initialized with the (dynamic)
+	 * KASAN_SHADOW_{START,END} values in pt_dump_init().
+	 */
+	[KASAN_SHADOW_START_NR]	= { 0UL,		"KASAN shadow" },
+	[KASAN_SHADOW_END_NR]	= { 0UL,		"KASAN shadow end" },
 #endif
 #ifdef CONFIG_MODIFY_LDT_SYSCALL
	[LDT_NR]		= { 0UL,		"LDT remap" },
@@ -571,6 +575,10 @@ static int __init pt_dump_init(void)
 #ifdef CONFIG_MODIFY_LDT_SYSCALL
	address_markers[LDT_NR].start_address = LDT_BASE_ADDR;
 #endif
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
+	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
+#endif
 #endif
 #ifdef CONFIG_X86_32
	address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 515b98a8ccee..d079878c6cbc 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
	unsigned long *base;
	unsigned long size_tb;
 } kaslr_regions[] = {
-	{ &page_offset_base, 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT) /* Maximum */ },
+	{ &page_offset_base, 0 },
	{ &vmalloc_base, VMALLOC_SIZE_TB },
	{ &vmemmap_base, 1 },
 };
@@ -93,6 +93,8 @@ void __init kernel_randomize_memory(void)
	if (!kaslr_memory_enabled())
		return;

+	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+
	/*
	 * Update Physical memory mapping to available and
	 * add padding if needed (especially for memory hotplug support).
-- 
2.15.1