Subject: [PATCH 03/11] x86/mm: introduce "default" kernel PTE mask
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, aarcange@redhat.com, luto@kernel.org,
    torvalds@linux-foundation.org, keescook@google.com, hughd@google.com,
    jgross@suse.com, x86@kernel.org, namit@vmware.com
From: Dave Hansen
Date: Fri, 06 Apr 2018 13:55:06 -0700
References: <20180406205501.24A1A4E7@viggo.jf.intel.com>
In-Reply-To: <20180406205501.24A1A4E7@viggo.jf.intel.com>
Message-Id: <20180406205506.030DB6B6@viggo.jf.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen

The __PAGE_KERNEL_* page permissions are "raw".  They contain bits
that may or may not be supported on the current processor.  They need
to be filtered by a mask (currently __supported_pte_mask) to turn them
into a value that we can actually set in a PTE.

These __PAGE_KERNEL_* values all contain _PAGE_GLOBAL.  But, with PTI,
we want to be able to support _PAGE_GLOBAL (have the bit set in
__supported_pte_mask) but not have it appear in any of these masks by
default.

This patch creates a new mask, __default_kernel_pte_mask, and applies
it when creating all of the PAGE_KERNEL_* masks.  This makes
PAGE_KERNEL_* safe to use anywhere (they only contain supported bits).
It also ensures that PAGE_KERNEL_* contains _PAGE_GLOBAL on PTI=n
kernels but clears _PAGE_GLOBAL when PTI=y.

We also make __default_kernel_pte_mask a non-GPL exported symbol
because there are plenty of driver-available interfaces that take
PAGE_KERNEL_* permissions.

Signed-off-by: Dave Hansen
Cc: Andrea Arcangeli
Cc: Andy Lutomirski
Cc: Linus Torvalds
Cc: Kees Cook
Cc: Hugh Dickins
Cc: Juergen Gross
Cc: x86@kernel.org
Cc: Nadav Amit
---

 b/arch/x86/include/asm/pgtable_types.h |   27 +++++++++++++++------------
 b/arch/x86/mm/init.c                   |    6 ++++++
 b/arch/x86/mm/init_32.c                |    8 +++++++-
 b/arch/x86/mm/init_64.c                |    5 +++++
 4 files changed, 33 insertions(+), 13 deletions(-)
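Note for readers (illustration only, not part of the patch): the new
filtering can be sketched as a freestanding C program.  The bit value
and the userspace scaffolding below are stand-ins; the point is that
__supported_pte_mask still advertises _PAGE_GLOBAL, while the default
mask drops it under PTI, so anything built through default_pgprot()
comes out non-Global:

#include <assert.h>
#include <stdint.h>

typedef uint64_t pteval_t;

#define _PAGE_GLOBAL	(1ULL << 8)	/* stand-in for the real bit */

static pteval_t __supported_pte_mask;		/* what the CPU supports */
static pteval_t __default_kernel_pte_mask;	/* default for kernel mappings */

#define default_pgprot(x)	((x) & __default_kernel_pte_mask)

int main(void)
{
	int pti_enabled = 1;
	pteval_t __PAGE_KERNEL = _PAGE_GLOBAL;	/* | other permission bits */

	__supported_pte_mask = ~0ULL;		/* CPU supports Global pages */
	__default_kernel_pte_mask = __supported_pte_mask;
	if (pti_enabled)
		__default_kernel_pte_mask &= ~_PAGE_GLOBAL;

	/* PAGE_KERNEL-style values are pre-filtered, so Global is gone: */
	assert(!(default_pgprot(__PAGE_KERNEL) & _PAGE_GLOBAL));
	return 0;
}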
diff -puN arch/x86/include/asm/pgtable_types.h~KERN-pgprot-default arch/x86/include/asm/pgtable_types.h
--- a/arch/x86/include/asm/pgtable_types.h~KERN-pgprot-default	2018-04-06 10:47:54.732796127 -0700
+++ b/arch/x86/include/asm/pgtable_types.h	2018-04-06 10:47:54.741796127 -0700
@@ -196,19 +196,21 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_NOENC	(__PAGE_KERNEL)
 #define __PAGE_KERNEL_NOENC_WP	(__PAGE_KERNEL_WP)
 
-#define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
-#define PAGE_KERNEL_NOENC	__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
-#define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
-#define PAGE_KERNEL_EXEC_NOENC	__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX		__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
-#define PAGE_KERNEL_NOCACHE	__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
-#define PAGE_KERNEL_LARGE	__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
-#define PAGE_KERNEL_LARGE_EXEC	__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
-#define PAGE_KERNEL_VVAR	__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+#define default_pgprot(x)	__pgprot((x) & __default_kernel_pte_mask)
 
-#define PAGE_KERNEL_IO		__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE	__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#define PAGE_KERNEL		default_pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_NOENC	default_pgprot(__PAGE_KERNEL)
+#define PAGE_KERNEL_RO		default_pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC	default_pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC_NOENC	default_pgprot(__PAGE_KERNEL_EXEC)
+#define PAGE_KERNEL_RX		default_pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE	default_pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE	default_pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC	default_pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR	default_pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO		default_pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE	default_pgprot(__PAGE_KERNEL_IO_NOCACHE)
 
 #endif	/* __ASSEMBLY__ */
@@ -483,6 +485,7 @@ static inline pgprot_t pgprot_large_2_4k
 typedef struct page *pgtable_t;
 
 extern pteval_t __supported_pte_mask;
+extern pteval_t __default_kernel_pte_mask;
 extern void set_nx(void);
 extern int nx_enabled;
diff -puN arch/x86/mm/init_32.c~KERN-pgprot-default arch/x86/mm/init_32.c
--- a/arch/x86/mm/init_32.c~KERN-pgprot-default	2018-04-06 10:47:54.733796127 -0700
+++ b/arch/x86/mm/init_32.c	2018-04-06 10:47:54.741796127 -0700
@@ -558,8 +558,14 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
+#define DEFAULT_PTE_MASK ~(_PAGE_NX | _PAGE_GLOBAL)
+/* Bits supported by the hardware: */
+pteval_t __supported_pte_mask __read_mostly = DEFAULT_PTE_MASK;
+/* Bits allowed in normal kernel mappings: */
+pteval_t __default_kernel_pte_mask __read_mostly = DEFAULT_PTE_MASK;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
+/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */
+EXPORT_SYMBOL(__default_kernel_pte_mask);
 
 /* user-defined highmem size */
 static unsigned int highmem_pages = -1;
diff -puN arch/x86/mm/init_64.c~KERN-pgprot-default arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c~KERN-pgprot-default	2018-04-06 10:47:54.735796127 -0700
+++ b/arch/x86/mm/init_64.c	2018-04-06 10:47:54.742796127 -0700
@@ -65,8 +65,13 @@
  * around without checking the pgd every time.
  */
 
+/* Bits supported by the hardware: */
 pteval_t __supported_pte_mask __read_mostly = ~0;
+/* Bits allowed in normal kernel mappings: */
+pteval_t __default_kernel_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
+/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */
+EXPORT_SYMBOL(__default_kernel_pte_mask);
 
 int force_personality32;
diff -puN arch/x86/mm/init.c~KERN-pgprot-default arch/x86/mm/init.c
--- a/arch/x86/mm/init.c~KERN-pgprot-default	2018-04-06 10:47:54.737796127 -0700
+++ b/arch/x86/mm/init.c	2018-04-06 10:47:54.742796127 -0700
@@ -190,6 +190,12 @@ static void __init probe_page_size_mask(
 		enable_global_pages();
 	}
 
+	/* By default, everything is supported: */
+	__default_kernel_pte_mask = __supported_pte_mask;
+	/* Except with PTI, where the kernel is mostly non-Global: */
+	if (cpu_feature_enabled(X86_FEATURE_PTI))
+		__default_kernel_pte_mask &= ~_PAGE_GLOBAL;
+
 	/* Enable 1 GB linear kernel mappings if available: */
 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
 		printk(KERN_INFO "Using GB pages for direct mapping\n");
_
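
Usage note (illustrative only, not part of this series): because
PAGE_KERNEL_* now expands through default_pgprot(), which reads
__default_kernel_pte_mask from a header-visible macro, any module that
hands these protections to an exported interface such as vmap()
automatically picks up the filtered value; that is why the symbol is a
plain EXPORT_SYMBOL rather than a GPL-only export.  A minimal,
hypothetical module fragment, assuming a kernel with this series
applied:

#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/gfp.h>

static void *mapping;

static int __init demo_init(void)
{
	struct page *pages[1];

	pages[0] = alloc_page(GFP_KERNEL);
	if (!pages[0])
		return -ENOMEM;

	/*
	 * PAGE_KERNEL references __default_kernel_pte_mask, so on PTI=y
	 * kernels this mapping is created non-Global with no driver change.
	 */
	mapping = vmap(pages, 1, VM_MAP, PAGE_KERNEL);
	if (!mapping) {
		__free_page(pages[0]);
		return -ENOMEM;
	}
	return 0;
}

static void __exit demo_exit(void)
{
	struct page *page = vmalloc_to_page(mapping);

	vunmap(mapping);
	__free_page(page);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");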