From: Tom Lendacky <Thomas.Lendacky@amd.com>
Subject: [PATCH v2 4/5] x86/mm: Prepare sme_encrypt_kernel() for PAGE aligned encryption
To: x86@kernel.org
Cc: Brijesh Singh, linux-kernel@vger.kernel.org, Ingo Molnar,
    Borislav Petkov, "H. Peter Anvin", Thomas Gleixner
Date: Thu, 21 Dec 2017 16:03:21 -0600
Message-ID: <20171221220321.30632.70405.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20171221220242.30632.5031.stgit@tlendack-t1.amdoffice.net>
References: <20171221220242.30632.5031.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

In preparation for encrypting more than just the kernel, the encryption
support in sme_encrypt_kernel() needs to support 4KB page aligned
encryption instead of just 2MB large page aligned encryption.
Update the routines that populate the PGD to support non-2MB aligned
addresses.  This is done by creating PTE page tables for the start
and end portion of the address range that fall outside of the 2MB
alignment.  This results in, at most, two extra pages to hold the
PTE entries for each mapping of a range.

Signed-off-by: Tom Lendacky <Thomas.Lendacky@amd.com>
---
 arch/x86/mm/mem_encrypt.c      |  124 +++++++++++++++++++++++++++++++++++-----
 arch/x86/mm/mem_encrypt_boot.S |   20 +++++-
 2 files changed, 122 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 9b180f8..ea61170 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -469,6 +469,7 @@ struct sme_populate_pgd_data {
 	pgd_t		*pgd;
 
 	pmdval_t	pmd_flags;
+	pteval_t	pte_flags;
 	unsigned long	paddr;
 
 	unsigned long	vaddr;
@@ -494,6 +495,7 @@ static void __init sme_clear_pgd(struct sme_populate_pgd_data *ppd)
 #define PGD_FLAGS		_KERNPG_TABLE_NOENC
 #define P4D_FLAGS		_KERNPG_TABLE_NOENC
 #define PUD_FLAGS		_KERNPG_TABLE_NOENC
+#define PMD_FLAGS		_KERNPG_TABLE_NOENC
 
 #define PMD_FLAGS_LARGE		(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL)
@@ -503,7 +505,15 @@ static void __init sme_clear_pgd(struct sme_populate_pgd_data *ppd)
 
 #define PMD_FLAGS_ENC		(PMD_FLAGS_LARGE | _PAGE_ENC)
 
-static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
+#define PTE_FLAGS		(__PAGE_KERNEL_EXEC & ~_PAGE_GLOBAL)
+
+#define PTE_FLAGS_DEC		PTE_FLAGS
+#define PTE_FLAGS_DEC_WP	((PTE_FLAGS_DEC & ~_PAGE_CACHE_MASK) | \
+				 (_PAGE_PAT | _PAGE_PWT))
+
+#define PTE_FLAGS_ENC		(PTE_FLAGS | _PAGE_ENC)
+
+static pmd_t __init *sme_prepare_pgd(struct sme_populate_pgd_data *ppd)
 {
 	pgd_t *pgd_p;
 	p4d_t *p4d_p;
@@ -554,7 +564,7 @@ static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
 	pud_p += pud_index(ppd->vaddr);
 	if (native_pud_val(*pud_p)) {
 		if (native_pud_val(*pud_p) & _PAGE_PSE)
-			return;
+			return NULL;
 
 		pmd_p = (pmd_t *)(native_pud_val(*pud_p) & ~PTE_FLAGS_MASK);
 	} else {
@@ -568,17 +578,57 @@ static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
 		native_set_pud(pud_p, pud);
 	}
 
+	return pmd_p;
+}
+
+static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
+{
+	pmd_t *pmd_p;
+
+	pmd_p = sme_prepare_pgd(ppd);
+	if (!pmd_p)
+		return;
+
 	pmd_p += pmd_index(ppd->vaddr);
 	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
 		native_set_pmd(pmd_p, native_make_pmd(ppd->paddr | ppd->pmd_flags));
 }
 
-static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
-				   pmdval_t pmd_flags)
+static void __init sme_populate_pgd(struct sme_populate_pgd_data *ppd)
 {
-	ppd->pmd_flags = pmd_flags;
+	pmd_t *pmd_p;
+	pte_t *pte_p;
+
+	pmd_p = sme_prepare_pgd(ppd);
+	if (!pmd_p)
+		return;
+
+	pmd_p += pmd_index(ppd->vaddr);
+	if (native_pmd_val(*pmd_p)) {
+		if (native_pmd_val(*pmd_p) & _PAGE_PSE)
+			return;
+
+		pte_p = (pte_t *)(native_pmd_val(*pmd_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pmd_t pmd;
 
+		pte_p = ppd->pgtable_area;
+		memset(pte_p, 0, sizeof(*pte_p) * PTRS_PER_PTE);
+		ppd->pgtable_area += sizeof(*pte_p) * PTRS_PER_PTE;
+
+		pmd = native_make_pmd((pteval_t)pte_p + PMD_FLAGS);
+		native_set_pmd(pmd_p, pmd);
+	}
+
+	pte_p += pte_index(ppd->vaddr);
+	if (!native_pte_val(*pte_p))
+		native_set_pte(pte_p,
+			       native_make_pte(ppd->paddr | ppd->pte_flags));
+}
+
+static void __init __sme_map_range_pmd(struct sme_populate_pgd_data *ppd)
+{
 	while (ppd->vaddr < ppd->vaddr_end) {
 		sme_populate_pgd_large(ppd);
 
@@ -587,33 +637,71 @@ static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
 	}
 }
 
+static void __init __sme_map_range_pte(struct sme_populate_pgd_data *ppd)
+{
+	while (ppd->vaddr < ppd->vaddr_end) {
+		sme_populate_pgd(ppd);
+
+		ppd->vaddr += PAGE_SIZE;
+		ppd->paddr += PAGE_SIZE;
+	}
+}
+
+static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
+				   pmdval_t pmd_flags, pteval_t pte_flags)
+{
+	unsigned long vaddr_end;
+
+	ppd->pmd_flags = pmd_flags;
+	ppd->pte_flags = pte_flags;
+
+	/* Save original end value since we modify the struct value */
+	vaddr_end = ppd->vaddr_end;
+
+	/* If start is not 2MB aligned, create PTE entries */
+	ppd->vaddr_end = ALIGN(ppd->vaddr, PMD_PAGE_SIZE);
+	__sme_map_range_pte(ppd);
+
+	/* Create PMD entries */
+	ppd->vaddr_end = vaddr_end & PMD_PAGE_MASK;
+	__sme_map_range_pmd(ppd);
+
+	/* If end is not 2MB aligned, create PTE entries */
+	ppd->vaddr_end = vaddr_end;
+	__sme_map_range_pte(ppd);
+}
+
 static void __init sme_map_range_encrypted(struct sme_populate_pgd_data *ppd)
 {
-	__sme_map_range(ppd, PMD_FLAGS_ENC);
+	__sme_map_range(ppd, PMD_FLAGS_ENC, PTE_FLAGS_ENC);
 }
 
 static void __init sme_map_range_decrypted(struct sme_populate_pgd_data *ppd)
 {
-	__sme_map_range(ppd, PMD_FLAGS_DEC);
+	__sme_map_range(ppd, PMD_FLAGS_DEC, PTE_FLAGS_DEC);
 }
 
 static void __init sme_map_range_decrypted_wp(struct sme_populate_pgd_data *ppd)
 {
-	__sme_map_range(ppd, PMD_FLAGS_DEC_WP);
+	__sme_map_range(ppd, PMD_FLAGS_DEC_WP, PTE_FLAGS_DEC_WP);
 }
 
 static unsigned long __init sme_pgtable_calc(unsigned long len)
 {
-	unsigned long p4d_size, pud_size, pmd_size;
+	unsigned long p4d_size, pud_size, pmd_size, pte_size;
 	unsigned long total;
 
 	/*
 	 * Perform a relatively simplistic calculation of the pagetable
-	 * entries that are needed. That mappings will be covered by 2MB
-	 * PMD entries so we can conservatively calculate the required
+	 * entries that are needed. Those mappings will be covered mostly
+	 * by 2MB PMD entries so we can conservatively calculate the required
 	 * number of P4D, PUD and PMD structures needed to perform the
-	 * mappings. Incrementing the count for each covers the case where
-	 * the addresses cross entries.
+	 * mappings.  For mappings that are not 2MB aligned, PTE mappings
+	 * would be needed for the start and end portion of the address range
+	 * that fall outside of the 2MB alignment.  This results in, at most,
+	 * two extra pages to hold PTE entries for each range that is mapped.
+	 * Incrementing the count for each covers the case where the
+	 * addresses cross entries.
 	 */
 	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
 		p4d_size = (ALIGN(len, PGDIR_SIZE) / PGDIR_SIZE) + 1;
@@ -627,8 +715,9 @@ static unsigned long __init sme_pgtable_calc(unsigned long len)
 	}
 	pmd_size = (ALIGN(len, PUD_SIZE) / PUD_SIZE) + 1;
 	pmd_size *= sizeof(pmd_t) * PTRS_PER_PMD;
+	pte_size = 2 * sizeof(pte_t) * PTRS_PER_PTE;
 
-	total = p4d_size + pud_size + pmd_size;
+	total = p4d_size + pud_size + pmd_size + pte_size;
 
 	/*
 	 * Now calculate the added pagetable structures needed to populate
@@ -711,10 +800,13 @@ void __init sme_encrypt_kernel(void)
 
 	/*
 	 * The total workarea includes the executable encryption area and
-	 * the pagetable area.
+	 * the pagetable area.  The start of the workarea is already 2MB
+	 * aligned, align the end of the workarea on a 2MB boundary so
+	 * that we don't try to create/allocate PTE entries from the
+	 * workarea before it is mapped.
 	 */
 	workarea_len = execute_len + pgtable_area_len;
-	workarea_end = workarea_start + workarea_len;
+	workarea_end = ALIGN(workarea_start + workarea_len, PMD_PAGE_SIZE);
 
 	/*
 	 * Set the address to the start of where newly created pagetable
diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
index de36884..23a8a9e 100644
--- a/arch/x86/mm/mem_encrypt_boot.S
+++ b/arch/x86/mm/mem_encrypt_boot.S
@@ -104,6 +104,7 @@ ENTRY(__enc_copy)
 	mov	%rdx, %cr4
 
 	push	%r15
+	push	%r12
 
 	movq	%rcx, %r9		/* Save kernel length */
 	movq	%rdi, %r10		/* Save encrypted kernel address */
@@ -119,21 +120,27 @@ ENTRY(__enc_copy)
 
 	wbinvd				/* Invalidate any cache entries */
 
-	/* Copy/encrypt 2MB at a time */
+	/* Copy/encrypt up to 2MB at a time */
+	movq	$PMD_PAGE_SIZE, %r12
 1:
+	cmpq	%r12, %r9
+	jnb	2f
+	movq	%r9, %r12
+
+2:
 	movq	%r11, %rsi		/* Source - decrypted kernel */
 	movq	%r8, %rdi		/* Dest - intermediate copy buffer */
-	movq	$PMD_PAGE_SIZE, %rcx	/* 2MB length */
+	movq	%r12, %rcx
 	rep	movsb
 
 	movq	%r8, %rsi		/* Source - intermediate copy buffer */
 	movq	%r10, %rdi		/* Dest - encrypted kernel */
-	movq	$PMD_PAGE_SIZE, %rcx	/* 2MB length */
+	movq	%r12, %rcx
 	rep	movsb
 
-	addq	$PMD_PAGE_SIZE, %r11
-	addq	$PMD_PAGE_SIZE, %r10
-	subq	$PMD_PAGE_SIZE, %r9	/* Kernel length decrement */
+	addq	%r12, %r11
+	addq	%r12, %r10
+	subq	%r12, %r9		/* Kernel length decrement */
 	jnz	1b			/* Kernel length not zero? */
 
 	/* Restore PAT register */
@@ -142,6 +149,7 @@ ENTRY(__enc_copy)
 	mov	%r15, %rdx		/* Restore original PAT value */
 	wrmsr
 
+	pop	%r12
 	pop	%r15
 
 	ret