Subject: [PATCH 03/11] x86/mm: introduce "default" kernel PTE mask
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, aarcange@redhat.com, luto@kernel.org,
    torvalds@linux-foundation.org, keescook@google.com, hughd@google.com,
    jgross@suse.com, x86@kernel.org, namit@vmware.com
From: Dave Hansen
Date: Tue, 03 Apr 2018 18:09:50 -0700
References: <20180404010946.6186729B@viggo.jf.intel.com>
In-Reply-To: <20180404010946.6186729B@viggo.jf.intel.com>
Message-Id: <20180404010950.F395E46F@viggo.jf.intel.com>
From: Dave Hansen

The __PAGE_KERNEL_* page permissions are "raw".  They contain bits
that may or may not be supported on the current processor.  They need
to be filtered by a mask (currently __supported_pte_mask) to turn them
into a value that we can actually set in a PTE.

These __PAGE_KERNEL_* values all contain _PAGE_GLOBAL.  But, with PTI,
we want to be able to support _PAGE_GLOBAL (have the bit set in
__supported_pte_mask) but not have it appear in any of these values by
default.

This patch creates a new mask, __default_kernel_pte_mask, and applies
it when creating all of the PAGE_KERNEL_* values.  This makes
PAGE_KERNEL_* safe to use anywhere (they contain only supported bits).
It also ensures that PAGE_KERNEL_* contains _PAGE_GLOBAL on PTI=n
kernels but clears _PAGE_GLOBAL when PTI=y.

We also make __default_kernel_pte_mask a non-GPL exported symbol
because there are plenty of driver-available interfaces that take
PAGE_KERNEL_* permissions.

Signed-off-by: Dave Hansen
Cc: Andrea Arcangeli
Cc: Andy Lutomirski
Cc: Linus Torvalds
Cc: Kees Cook
Cc: Hugh Dickins
Cc: Juergen Gross
Cc: x86@kernel.org
Cc: Nadav Amit
---
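[ Reviewer note, not intended for the changelog: a minimal,
  self-contained user-space sketch of the filtering idea.  The bit
  values below are simplified stand-ins, not the real pgtable_types.h
  definitions, and default_pgprot() is modeled as a function rather
  than the macro added by this patch; it only shows how clearing
  _PAGE_GLOBAL from __default_kernel_pte_mask makes PAGE_KERNEL lose
  the Global bit when PTI is enabled. ]

#include <stdio.h>
#include <stdint.h>

typedef uint64_t pteval_t;

/* Simplified stand-ins for the real x86 PTE bits: */
#define _PAGE_PRESENT	(1ULL << 0)
#define _PAGE_RW	(1ULL << 1)
#define _PAGE_GLOBAL	(1ULL << 8)
#define _PAGE_NX	(1ULL << 63)

/* "Raw" kernel permissions, Global bit included: */
#define __PAGE_KERNEL	(_PAGE_PRESENT | _PAGE_RW | _PAGE_GLOBAL | _PAGE_NX)

/* Bits allowed in normal kernel mappings (starts as "everything"): */
static pteval_t __default_kernel_pte_mask = ~0ULL;

/* Same idea as the default_pgprot() macro: filter the raw value. */
static pteval_t default_pgprot(pteval_t raw)
{
	return raw & __default_kernel_pte_mask;
}

int main(void)
{
	int pti_enabled = 1;	/* pretend X86_FEATURE_PTI is set */

	if (pti_enabled)
		__default_kernel_pte_mask &= ~_PAGE_GLOBAL;

	printf("raw __PAGE_KERNEL:    0x%016llx\n",
	       (unsigned long long)__PAGE_KERNEL);
	printf("filtered PAGE_KERNEL: 0x%016llx\n",
	       (unsigned long long)default_pgprot(__PAGE_KERNEL));
	return 0;
}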
 b/arch/x86/include/asm/pgtable_types.h |   27 +++++++++++++++------------
 b/arch/x86/mm/init.c                   |    6 ++++++
 b/arch/x86/mm/init_32.c                |    8 +++++++-
 b/arch/x86/mm/init_64.c                |    5 +++++
 4 files changed, 33 insertions(+), 13 deletions(-)

diff -puN arch/x86/include/asm/pgtable_types.h~KERN-pgprot-default arch/x86/include/asm/pgtable_types.h
--- a/arch/x86/include/asm/pgtable_types.h~KERN-pgprot-default	2018-04-02 16:41:13.662605176 -0700
+++ b/arch/x86/include/asm/pgtable_types.h	2018-04-02 16:41:13.672605176 -0700
@@ -196,19 +196,21 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_NOENC	(__PAGE_KERNEL)
 #define __PAGE_KERNEL_NOENC_WP	(__PAGE_KERNEL_WP)
 
-#define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
-#define PAGE_KERNEL_NOENC	__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
-#define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
-#define PAGE_KERNEL_EXEC_NOENC	__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX		__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
-#define PAGE_KERNEL_NOCACHE	__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
-#define PAGE_KERNEL_LARGE	__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
-#define PAGE_KERNEL_LARGE_EXEC	__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
-#define PAGE_KERNEL_VVAR	__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+#define default_pgprot(x)	__pgprot((x) & __default_kernel_pte_mask)
 
-#define PAGE_KERNEL_IO		__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE	__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#define PAGE_KERNEL		default_pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_NOENC	default_pgprot(__PAGE_KERNEL)
+#define PAGE_KERNEL_RO		default_pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC	default_pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC_NOENC	default_pgprot(__PAGE_KERNEL_EXEC)
+#define PAGE_KERNEL_RX		default_pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE	default_pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE	default_pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC	default_pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR	default_pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO		default_pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE	default_pgprot(__PAGE_KERNEL_IO_NOCACHE)
 
 #endif	/* __ASSEMBLY__ */
 
@@ -483,6 +485,7 @@ static inline pgprot_t pgprot_large_2_4k
 typedef struct page *pgtable_t;
 
 extern pteval_t __supported_pte_mask;
+extern pteval_t __default_kernel_pte_mask;
 extern void set_nx(void);
 extern int nx_enabled;
 
diff -puN arch/x86/mm/init_32.c~KERN-pgprot-default arch/x86/mm/init_32.c
--- a/arch/x86/mm/init_32.c~KERN-pgprot-default	2018-04-02 16:41:13.664605176 -0700
+++ b/arch/x86/mm/init_32.c	2018-04-02 16:41:13.672605176 -0700
@@ -558,8 +558,14 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
+#define DEFAULT_PTE_MASK ~(_PAGE_NX | _PAGE_GLOBAL)
+/* Bits supported by the hardware: */
+pteval_t __supported_pte_mask __read_mostly = DEFAULT_PTE_MASK;
+/* Bits allowed in normal kernel mappings: */
+pteval_t __default_kernel_pte_mask __read_mostly = DEFAULT_PTE_MASK;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
+/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */
+EXPORT_SYMBOL(__default_kernel_pte_mask);
 
 /* user-defined highmem size */
 static unsigned int highmem_pages = -1;
diff -puN arch/x86/mm/init_64.c~KERN-pgprot-default arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c~KERN-pgprot-default	2018-04-02 16:41:13.666605176 -0700
+++ b/arch/x86/mm/init_64.c	2018-04-02 16:41:13.673605176 -0700
@@ -65,8 +65,13 @@
  * around without checking the pgd every time.
  */
 
+/* Bits supported by the hardware: */
 pteval_t __supported_pte_mask __read_mostly = ~0;
+/* Bits allowed in normal kernel mappings: */
+pteval_t __default_kernel_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
+/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */
+EXPORT_SYMBOL(__default_kernel_pte_mask);
 
 int force_personality32;
 
diff -puN arch/x86/mm/init.c~KERN-pgprot-default arch/x86/mm/init.c
--- a/arch/x86/mm/init.c~KERN-pgprot-default	2018-04-02 16:41:13.668605176 -0700
+++ b/arch/x86/mm/init.c	2018-04-02 16:41:13.673605176 -0700
@@ -190,6 +190,12 @@ static void __init probe_page_size_mask(
 		enable_global_pages();
 	}
 
+	/* By default, everything is supported: */
+	__default_kernel_pte_mask = __supported_pte_mask;
+	/* Except with PTI, where the kernel is mostly non-Global: */
+	if (cpu_feature_enabled(X86_FEATURE_PTI))
+		__default_kernel_pte_mask &= ~_PAGE_GLOBAL;
+
 	/* Enable 1 GB linear kernel mappings if available: */
 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
 		printk(KERN_INFO "Using GB pages for direct mapping\n");
_
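
[ Reviewer note, not part of the patch: a hypothetical out-of-tree
  module sketch illustrating why the export is EXPORT_SYMBOL() rather
  than EXPORT_SYMBOL_GPL().  PAGE_KERNEL now expands to
  default_pgprot(...), which references __default_kernel_pte_mask, so
  any module that passes PAGE_KERNEL to a driver-available interface
  such as vmap() ends up linking against the new symbol.  The module
  name and code below are made up for illustration. ]

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static struct page *page;
static void *vaddr;

static int __init pte_mask_demo_init(void)
{
	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* PAGE_KERNEL here pulls in __default_kernel_pte_mask: */
	vaddr = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
	if (!vaddr) {
		__free_page(page);
		return -ENOMEM;
	}

	pr_info("pte_mask_demo: page mapped at %p\n", vaddr);
	return 0;
}

static void __exit pte_mask_demo_exit(void)
{
	vunmap(vaddr);
	__free_page(page);
}

module_init(pte_mask_demo_init);
module_exit(pte_mask_demo_exit);
MODULE_LICENSE("Dual BSD/GPL");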