From: Tom Lendacky <Thomas.Lendacky@amd.com>
Subject: [RFC PATCH v3 07/20] x86: Provide general kernel support for memory encryption
Cc: Rik van Riel, Radim Krčmář, Arnd Bergmann, Jonathan Corbet,
 Matt Fleming, Joerg Roedel, Konrad Rzeszutek Wilk, Paolo Bonzini,
 Larry Woodman, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
 "H. Peter Anvin", Andrey Ryabinin, Alexander Potapenko,
 Thomas Gleixner, Dmitry Vyukov
Date: Wed, 9 Nov 2016 18:35:53 -0600
Message-ID: <20161110003553.3280.38587.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
References: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: linux-kernel@vger.kernel.org
Adding general kernel support for memory encryption includes:

- Modify and create some page table macros to include the Secure Memory
  Encryption (SME) memory encryption mask
- Modify and create some macros for calculating physical and virtual
  memory addresses (a standalone sketch of these helpers follows below)
- Provide an SME initialization routine to update the protection map with
  the memory encryption mask so that it is used by default
- #undef CONFIG_AMD_MEM_ENCRYPT in the compressed boot path
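For illustration only (not part of the patch): a minimal user-space sketch
of the physical/virtual address helper pattern introduced here. The mask
position (bit 47) and the PAGE_OFFSET value are made-up stand-ins; the real
sme_me_mask is discovered from CPU state during early boot.

/* sme_helpers_sketch.c -- illustration only, not kernel code */
#include <assert.h>
#include <stdio.h>

static unsigned long sme_me_mask = 1UL << 47;   /* hypothetical C-bit */
#define PAGE_OFFSET     0xffff880000000000UL    /* example direct-map base */

#define __pa(x)         ((unsigned long)(x) - PAGE_OFFSET)
#define __va(x)         ((void *)(((unsigned long)(x) & ~sme_me_mask) + PAGE_OFFSET))
#define __sme_pa(x)     (__pa(x) | sme_me_mask)

int main(void)
{
        void *vaddr = (void *)(PAGE_OFFSET + 0x1000);

        /* The "encrypted" physical address carries the mask... */
        unsigned long epa = __sme_pa(vaddr);

        /* ...and __va() must strip it to recover the virtual address. */
        assert(__va(epa) == vaddr);

        printf("pa=%#lx enc-pa=%#lx\n", __pa(vaddr), epa);
        return 0;
}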
Signed-off-by: Tom Lendacky <Thomas.Lendacky@amd.com>
---
 arch/x86/boot/compressed/pagetable.c |    7 +++++
 arch/x86/include/asm/fixmap.h        |    7 +++++
 arch/x86/include/asm/mem_encrypt.h   |   14 +++++++++++
 arch/x86/include/asm/page.h          |    4 ++-
 arch/x86/include/asm/pgtable.h       |   20 +++++++++------
 arch/x86/include/asm/pgtable_types.h |   45 ++++++++++++++++++++++------------
 arch/x86/include/asm/processor.h     |    3 ++
 arch/x86/kernel/espfix_64.c          |    2 +-
 arch/x86/kernel/head64.c             |   12 ++++++++-
 arch/x86/kernel/head_64.S            |   18 +++++++-------
 arch/x86/mm/kasan_init_64.c          |    4 ++-
 arch/x86/mm/mem_encrypt.c            |   20 +++++++++++++++
 arch/x86/mm/pageattr.c               |    3 ++
 13 files changed, 119 insertions(+), 40 deletions(-)

diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c
index 56589d0..411c443 100644
--- a/arch/x86/boot/compressed/pagetable.c
+++ b/arch/x86/boot/compressed/pagetable.c
@@ -15,6 +15,13 @@
 #define __pa(x)  ((unsigned long)(x))
 #define __va(x)  ((void *)((unsigned long)(x)))
 
+/*
+ * The pgtable.h and mm/ident_map.c includes make use of the SME related
+ * information which is not used in the compressed image support. Un-define
+ * the SME support to avoid any compile and link errors.
+ */
+#undef CONFIG_AMD_MEM_ENCRYPT
+
 #include "misc.h"
 
 /* These actually do the work of building the kernel identity maps. */
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 8554f96..83e91f0 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx,
 }
 #endif
 
+/*
+ * Fixmap settings used with memory encryption
+ *   - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory
+ *     encryption mask is not part of the page attributes
+ */
+#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_IO_NOCACHE
+
 #include <asm-generic/fixmap.h>
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a105796..5f1976d 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,14 +15,28 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/init.h>
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
 
+void __init sme_early_init(void);
+
+#define __sme_pa(x)             (__pa((x)) | sme_me_mask)
+#define __sme_pa_nodebug(x)     (__pa_nodebug((x)) | sme_me_mask)
+
 #else   /* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask             0UL
 
+static inline void __init sme_early_init(void)
+{
+}
+
+#define __sme_pa                __pa
+#define __sme_pa_nodebug        __pa_nodebug
+
 #endif  /* CONFIG_AMD_MEM_ENCRYPT */
 
 #endif  /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index cf8f619..b1f7bf6 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -15,6 +15,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/mem_encrypt.h>
+
 struct page;
 
 #include <linux/range.h>
@@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
 
 #ifndef __va
-#define __va(x)			((void *)((unsigned long)(x)+PAGE_OFFSET))
+#define __va(x)			((void *)(((unsigned long)(x) & ~sme_me_mask) + PAGE_OFFSET))
 #endif
 
 #define __boot_va(x)		__va(x)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 437feb4..00c07d8 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -5,6 +5,7 @@
 
 #include <asm/page.h>
 #include <asm/pgtable_types.h>
+#include <asm/mem_encrypt.h>
 
 /*
  * Macro to mark a page protection value as UC-
@@ -155,17 +156,22 @@ static inline int pte_special(pte_t pte)
 
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
+	return (pte_val(pte) & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+	return (pmd_val(pmd) & ~sme_me_mask & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pud_pfn(pud_t pud)
 {
-	return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+	return (pud_val(pud) & ~sme_me_mask & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+}
+
+static inline unsigned long pgd_pfn(pgd_t pgd)
+{
+	return (pgd_val(pgd) & ~sme_me_mask) >> PAGE_SHIFT;
 }
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
@@ -565,8 +571,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pmd_page(pmd)		\
-	pfn_to_page((pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT)
+#define pmd_page(pmd)	pfn_to_page(pmd_pfn(pmd))
 
 /*
  * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD]
@@ -634,8 +639,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pud_page(pud)		\
-	pfn_to_page((pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT)
+#define pud_page(pud)	pfn_to_page(pud_pfn(pud))
 
 /* Find an entry in the second-level page table.. */
 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
@@ -675,7 +679,7 @@ static inline unsigned long pgd_page_vaddr(pgd_t pgd)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pgd_page(pgd)		pfn_to_page(pgd_val(pgd) >> PAGE_SHIFT)
+#define pgd_page(pgd)	pfn_to_page(pgd_pfn(pgd))
 
 /* to find an entry in a page-table-directory. */
 static inline unsigned long pud_index(unsigned long address)
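For illustration only (not part of the patch): why the pfn accessors above
clear sme_me_mask in addition to the existing pfn masks. The C-bit sits
inside the physical-address field of a page-table entry, so PTE_PFN_MASK
alone would leave it set and produce a bogus pfn. A small standalone model,
again with a made-up bit-47 mask:

/* pte_pfn_sketch.c -- illustration only, not kernel code */
#include <stdio.h>

#define PAGE_SHIFT      12
#define PTE_PFN_MASK    0x000ffffffffff000UL    /* bits 12-51 on x86-64 */

static unsigned long sme_me_mask = 1UL << 47;   /* hypothetical C-bit */

int main(void)
{
        /* PTE: page at 0x1000, encryption bit set, flags in the low bits */
        unsigned long pte = 0x1000 | sme_me_mask | 0x63;

        unsigned long bad  = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;
        unsigned long good = (pte & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT;

        printf("without ~sme_me_mask: pfn=%#lx\n", bad);   /* huge, wrong   */
        printf("with    ~sme_me_mask: pfn=%#lx\n", good);  /* 0x1, correct  */
        return 0;
}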
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f1218f5..cbfb83e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -2,7 +2,9 @@
 #define _ASM_X86_PGTABLE_DEFS_H
 
 #include <linux/const.h>
+
 #include <asm/page_types.h>
+#include <asm/mem_encrypt.h>
 
 #define FIRST_USER_ADDRESS	0UL
 
@@ -121,10 +123,10 @@
 
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
-#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
-			 _PAGE_ACCESSED | _PAGE_DIRTY)
-#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
-			 _PAGE_DIRTY)
+#define _PAGE_TABLE_NO_ENC	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |\
+				 _PAGE_ACCESSED | _PAGE_DIRTY)
+#define _KERNPG_TABLE_NO_ENC	(_PAGE_PRESENT | _PAGE_RW |		\
+				 _PAGE_ACCESSED | _PAGE_DIRTY)
 
 /*
  * Set of bits not changed in pte_modify.  The pte's
@@ -191,18 +193,29 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
 
-#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
-#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
-#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
-#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-
-#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#ifndef __ASSEMBLY__
+
+#define _PAGE_ENC	sme_me_mask
+
+#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC)
+#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+			 _PAGE_DIRTY | _PAGE_ENC)
+
+#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+
+#endif	/* __ASSEMBLY__ */
 
 /*         xwr */
 #define __P000	PAGE_NONE
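For illustration only (not part of the patch): _PAGE_ENC expands to the
runtime variable sme_me_mask rather than a constant. That is why the
definitions above are guarded by #ifndef __ASSEMBLY__ and why head_64.S
(further below) switches its statically initialized early page tables to
the *_NO_ENC variants. A compile-time model of the constraint; the 0x63
value mirrors PRESENT|RW|ACCESSED|DIRTY:

/* no_enc_sketch.c -- illustration only, compiles as a standalone unit */
unsigned long sme_me_mask;                      /* set at runtime during boot */

#define _PAGE_ENC               sme_me_mask
#define _KERNPG_TABLE_NO_ENC    0x63UL          /* link-time constant: usable anywhere */
#define _KERNPG_TABLE           (_KERNPG_TABLE_NO_ENC | _PAGE_ENC)

/* static unsigned long entry = 0x1000 | _KERNPG_TABLE;   */
/*     ^ would fail: "initializer element is not constant" */

unsigned long make_entry(unsigned long pa)
{
        return pa | _KERNPG_TABLE;              /* fine: evaluated at runtime */
}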
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 984a7bf..963368e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -22,6 +22,7 @@ struct vm86;
 #include <asm/nops.h>
 #include <asm/special_insns.h>
 #include <asm/fpu/types.h>
+#include <asm/mem_encrypt.h>
 
 #include <linux/personality.h>
 #include <linux/cache.h>
@@ -207,7 +208,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
 
 static inline void load_cr3(pgd_t *pgdir)
 {
-	write_cr3(__pa(pgdir));
+	write_cr3(__sme_pa(pgdir));
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 04f89ca..51566d7 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -193,7 +193,7 @@ void init_espfix_ap(int cpu)
 
 	pte_p = pte_offset_kernel(&pmd, addr);
 	stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));
-	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+	pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask));
 	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
 		set_pte(&pte_p[n*PTE_STRIDE], pte);
 
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 54a2372..0540789 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -28,6 +28,7 @@
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
 #include <asm/kasan.h>
+#include <asm/mem_encrypt.h>
 
 /*
  * Manage page tables very early on.
@@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void)
 {
 	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
 	next_early_pgt = 0;
-	write_cr3(__pa_nodebug(early_level4_pgt));
+	write_cr3(__sme_pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -157,6 +158,13 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	clear_page(init_level4_pgt);
 
+	/*
+	 * SME support may update early_pmd_flags to include the memory
+	 * encryption mask, so it needs to be called before anything
+	 * that may generate a page fault.
+	 */
+	sme_early_init();
+
 	kasan_early_init();
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 9a28aad..e8a7272 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -127,7 +127,7 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PGDIR_SHIFT, %rax
 
-	leaq	(4096 + _KERNPG_TABLE)(%rbx), %rdx
+	leaq	(4096 + _KERNPG_TABLE_NO_ENC)(%rbx), %rdx
 	addq	%r12, %rdx
 	movq	%rdx, 0(%rbx,%rax,8)
 	movq	%rdx, 8(%rbx,%rax,8)
@@ -448,7 +448,7 @@ GLOBAL(name)
 	__INITDATA
 NEXT_PAGE(early_level4_pgt)
 	.fill	511,8,0
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NO_ENC
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -460,15 +460,15 @@ NEXT_PAGE(init_level4_pgt)
 	.fill	512,8,0
 #else
 NEXT_PAGE(init_level4_pgt)
-	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NO_ENC
 	.org    init_level4_pgt + L4_PAGE_OFFSET*8, 0
-	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NO_ENC
 	.org    init_level4_pgt + L4_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
-	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NO_ENC
 
 NEXT_PAGE(level3_ident_pgt)
-	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NO_ENC
 	.fill	511, 8, 0
 NEXT_PAGE(level2_ident_pgt)
 	/* Since I easily can, map the first 1G.
@@ -480,8 +480,8 @@ NEXT_PAGE(level2_ident_pgt)
 NEXT_PAGE(level3_kernel_pgt)
 	.fill	L3_START_KERNEL,8,0
 	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
-	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE_NO_ENC
+	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NO_ENC
 
 NEXT_PAGE(level2_kernel_pgt)
 	/*
@@ -499,7 +499,7 @@ NEXT_PAGE(level2_kernel_pgt)
 
 NEXT_PAGE(level2_fixmap_pgt)
 	.fill	506,8,0
-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NO_ENC
 	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
 	.fill	5,8,0
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0493c17..0608dc8 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -68,7 +68,7 @@ static struct notifier_block kasan_die_notifier = {
 void __init kasan_early_init(void)
 {
 	int i;
-	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC;
 	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
 	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
 
@@ -130,7 +130,7 @@ void __init kasan_init(void)
 	 */
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
+		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC);
 		set_pte(&kasan_zero_pte[i], pte);
 	}
 	/* Flush TLBs again to be sure that write protection applied. */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1ed75a4..d642cc5 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -11,6 +11,10 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+
+extern pmdval_t early_pmd_flags;
 
 /*
  * Since sme_me_mask is set early in the boot process it must reside in
@@ -19,3 +23,19 @@
  */
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
+
+void __init sme_early_init(void)
+{
+	unsigned int i;
+
+	if (!sme_me_mask)
+		return;
+
+	early_pmd_flags |= sme_me_mask;
+
+	__supported_pte_mask |= sme_me_mask;
+
+	/* Update the protection map with memory encryption mask */
+	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+}
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e3353c9..b8e6bb5 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1974,6 +1974,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 	if (!(page_flags & _PAGE_RW))
 		cpa.mask_clr = __pgprot(_PAGE_RW);
 
+	if (!(page_flags & _PAGE_ENC))
+		cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC);
+
 	cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags);
 
 	retval = __change_page_attr_set_clr(&cpa, 0);
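For illustration only (not part of the patch): a user-space model of the
protection_map update performed by sme_early_init() above. Once the mask is
OR'd into every entry, the kernel's protection lookups for new mappings hand
back encrypted-by-default attributes. The array size and attribute values
here are toy stand-ins for the real pgprot bits:

/* protection_map_sketch.c -- illustration only, not kernel code */
#include <stdio.h>

#define NPROT 16                                /* private/shared x r/w/x combos */

static unsigned long sme_me_mask = 1UL << 47;   /* hypothetical C-bit */
static unsigned long protection_map[NPROT];     /* toy stand-in for the kernel table */

static void toy_sme_early_init(void)
{
        unsigned int i;

        if (!sme_me_mask)                       /* no SME: leave the table alone */
                return;

        for (i = 0; i < NPROT; i++)
                protection_map[i] |= sme_me_mask;
}

int main(void)
{
        protection_map[1] = 0x25;               /* pretend read-only attribute bits */

        toy_sme_early_init();

        /* every subsequent protection lookup now carries the mask */
        printf("entry 1: %#lx\n", protection_map[1]);
        return 0;
}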