From: Tom Lendacky
Subject: [PATCH v5 22/32] x86, swiotlb: DMA support for memory encryption
CC: Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann,
    Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel,
    Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Brijesh Singh,
    Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin",
    Andrey Ryabinin, Alexander Potapenko, Dave Young, Thomas Gleixner,
    Dmitry Vyukov
Date: Tue, 18 Apr 2017 16:20:10 -0500
Message-ID: <20170418212010.10190.78119.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net>
References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
X-Mailing-List: linux-kernel@vger.kernel.org

Since DMA addresses will effectively look like 48-bit addresses when the
memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
device performing the DMA does not support 48 bits. SWIOTLB will be
initialized to create decrypted bounce buffers for use by these devices.
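
To make the addressing argument concrete, here is a standalone user-space
sketch (not part of the patch; the encryption bit position and the sample
values are invented for illustration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t sme_me_mask = 1ULL << 47;	/* assumed C-bit position */
	uint64_t paddr = 0x12345000;		/* sample buffer address */
	uint64_t dma_addr = paddr | sme_me_mask; /* what phys_to_dma() yields */
	uint64_t dev_mask = (1ULL << 32) - 1;	/* device with a 32-bit DMA mask */

	printf("dma_addr = %#llx\n", (unsigned long long)dma_addr);
	if (dma_addr & ~dev_mask)
		printf("device cannot reach it; SWIOTLB must bounce\n");
	return 0;
}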
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/dma-mapping.h |    5 ++-
 arch/x86/include/asm/mem_encrypt.h |    5 +++
 arch/x86/kernel/pci-dma.c          |   11 +++++--
 arch/x86/kernel/pci-nommu.c        |    2 +
 arch/x86/kernel/pci-swiotlb.c      |    8 ++++-
 arch/x86/mm/mem_encrypt.c          |   22 ++++++++++++++
 include/linux/mem_encrypt.h        |   10 ++++++
 include/linux/swiotlb.h            |    1 +
 init/main.c                        |   13 ++++++++
 lib/swiotlb.c                      |   56 +++++++++++++++++++++++++++++++-----
 10 files changed, 116 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 08a0838..d75430a 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -12,6 +12,7 @@
 #include <asm/io.h>
 #include <asm/swiotlb.h>
 #include <linux/dma-contiguous.h>
+#include <linux/mem_encrypt.h>
 
 #ifdef CONFIG_ISA
 # define ISA_DMA_BIT_MASK DMA_BIT_MASK(24)
@@ -62,12 +63,12 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 
 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
-	return paddr;
+	return __sme_set(paddr);
 }
 
 static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 {
-	return daddr;
+	return __sme_clr(daddr);
 }
 
 #endif /* CONFIG_X86_DMA_REMAP */

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 130d7fe..0637b4b 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -36,6 +36,11 @@ void __init sme_early_decrypt(resource_size_t paddr,
 
 void __init sme_early_init(void);
 
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void);
+
+void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #ifndef sme_me_mask

diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 3a216ec..72d96d4 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -93,9 +93,12 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
 	if (gfpflags_allow_blocking(flag)) {
 		page = dma_alloc_from_contiguous(dev, count, get_order(size),
 						 flag);
-		if (page && page_to_phys(page) + size > dma_mask) {
-			dma_release_from_contiguous(dev, page, count);
-			page = NULL;
+		if (page) {
+			addr = phys_to_dma(dev, page_to_phys(page));
+			if (addr + size > dma_mask) {
+				dma_release_from_contiguous(dev, page, count);
+				page = NULL;
+			}
 		}
 	}
 	/* fallback */
@@ -104,7 +107,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
 	if (!page)
 		return NULL;
 
-	addr = page_to_phys(page);
+	addr = phys_to_dma(dev, page_to_phys(page));
 
 	if (addr + size > dma_mask) {
 		__free_pages(page, get_order(size));

diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index a88952e..98b576a 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -30,7 +30,7 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
 				 enum dma_data_direction dir,
 				 unsigned long attrs)
 {
-	dma_addr_t bus = page_to_phys(page) + offset;
+	dma_addr_t bus = phys_to_dma(dev, page_to_phys(page)) + offset;
 	WARN_ON(size == 0);
 	if (!check_addr("map_single", dev, bus, size))
 		return DMA_ERROR_CODE;
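
The following toy model (again a standalone sketch, not kernel code; the
mask value is a stand-in) shows the two properties the hunks above rely
on: __sme_clr() undoes __sme_set(), and both are no-ops when SME is
inactive, i.e. when sme_me_mask is zero:

#include <assert.h>
#include <stdint.h>

static uint64_t sme_me_mask = 1ULL << 47;	/* stand-in mask value */

static uint64_t sme_set(uint64_t x) { return x | sme_me_mask; }
static uint64_t sme_clr(uint64_t x) { return x & ~sme_me_mask; }

int main(void)
{
	uint64_t paddr = 0x1000;

	/* dma_to_phys(dev, phys_to_dma(dev, x)) must round-trip to x */
	assert(sme_clr(sme_set(paddr)) == paddr);

	/* with SME inactive the mask is zero and both are no-ops */
	sme_me_mask = 0;
	assert(sme_set(paddr) == paddr && sme_clr(paddr) == paddr);
	return 0;
}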

diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index 1e23577..a75fee7 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -12,6 +12,8 @@
 #include <asm/dma.h>
 #include <asm/xen/swiotlb-xen.h>
 #include <asm/iommu_table.h>
+#include <asm/mem_encrypt.h>
+
 int swiotlb __read_mostly;
 
 void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
@@ -64,11 +66,13 @@ void x86_swiotlb_free_coherent(struct device *dev, size_t size,
  * pci_swiotlb_detect_override - set swiotlb to 1 if necessary
  *
  * This returns non-zero if we are forced to use swiotlb (by the boot
- * option).
+ * option). If memory encryption is enabled then swiotlb will be set
+ * to 1 so that bounce buffers are allocated and used for devices that
+ * do not support the addressing range required for the encryption mask.
  */
 int __init pci_swiotlb_detect_override(void)
 {
-	if (swiotlb_force == SWIOTLB_FORCE)
+	if ((swiotlb_force == SWIOTLB_FORCE) || sme_active())
 		swiotlb = 1;
 
 	return swiotlb;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 2321f05..30b07a3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -16,11 +16,14 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 #include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/swiotlb.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
 #include <asm/setup.h>
 #include <asm/bootparam.h>
+#include <asm/cacheflush.h>
 
 /*
  * Since SME related variables are set early in the boot process they must
@@ -194,6 +197,25 @@ void __init sme_early_init(void)
 		protection_map[i] = pgprot_encrypted(protection_map[i]);
 }
 
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void)
+{
+	if (!sme_me_mask)
+		return;
+
+	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
+	swiotlb_update_mem_attributes();
+}
+
+void swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
+{
+	WARN(PAGE_ALIGN(size) != size,
+	     "size is not page-aligned (%#lx)\n", size);
+
+	/* Make the SWIOTLB buffer area decrypted */
+	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
+}
+
 void __init sme_encrypt_kernel(void)
 {
 }

diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 14a7b9f..3c384d1 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -32,6 +32,16 @@ static inline bool sme_active(void)
 
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
+#ifndef __sme_set
+/*
+ * The __sme_set() and __sme_clr() macros are useful for adding or removing
+ * the encryption mask from a value (e.g. when dealing with pagetable
+ * entries).
+ */
+#define __sme_set(x)	((unsigned long)(x) | sme_me_mask)
+#define __sme_clr(x)	((unsigned long)(x) & ~sme_me_mask)
+#endif
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __MEM_ENCRYPT_H__ */

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 4ee479f..15e7160 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -35,6 +35,7 @@ enum swiotlb_force {
 extern unsigned long swiotlb_nr_tbl(void);
 unsigned long swiotlb_size_or_default(void);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
+extern void __init swiotlb_update_mem_attributes(void);
 
 /*
  * Enumeration for sync targets
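
A note on the size handling in swiotlb_set_mem_attributes() above:
set_memory_decrypted() operates on whole pages, hence the WARN() on
unaligned sizes and the size >> PAGE_SHIFT conversion. A standalone
sketch using the usual x86 page constants (sample size invented):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long size = 5 * PAGE_SIZE + 123;	/* deliberately unaligned */

	if (PAGE_ALIGN(size) != size)	/* this is the WARN() case above */
		printf("size %#lx is not page-aligned\n", size);

	size = PAGE_ALIGN(size);
	printf("marking %lu pages decrypted\n", size >> PAGE_SHIFT);	/* 6 */
	return 0;
}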

diff --git a/init/main.c b/init/main.c
index b0c11cb..e5b4fb7 100644
--- a/init/main.c
+++ b/init/main.c
@@ -467,6 +467,10 @@ void __init __weak thread_stack_cache_init(void)
 }
 #endif
 
+void __init __weak mem_encrypt_init(void)
+{
+}
+
 /*
  * Set up kernel memory allocators
  */
@@ -614,6 +618,15 @@ asmlinkage __visible void __init start_kernel(void)
 	 */
 	locking_selftest();
 
+	/*
+	 * This needs to be called before any devices perform DMA
+	 * operations that might use the SWIOTLB bounce buffers.
+	 * This call will mark the bounce buffers as decrypted so
+	 * that their usage will not cause "plain-text" data to be
+	 * decrypted when accessed.
+	 */
+	mem_encrypt_init();
+
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start && !initrd_below_start_ok &&
 	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
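
The init/main.c hunk relies on weak linkage: the generic kernel provides
an empty mem_encrypt_init() and the x86 definition in mem_encrypt.c
overrides it at link time. A minimal two-file sketch of that pattern
(illustrative names, not the kernel build; compile with: cc generic.c arch.c):

/* generic.c -- weak default, analogous to the init/main.c stub */
#include <stdio.h>

void __attribute__((weak)) mem_encrypt_init(void)
{
	/* nothing to do if no strong definition is linked in */
}

int main(void)
{
	mem_encrypt_init();	/* binds to the strong version when present */
	return 0;
}

/* arch.c -- strong override, analogous to the x86 version */
#include <stdio.h>

void mem_encrypt_init(void)
{
	printf("arch override ran\n");
}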

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index a8d74a7..74d6557 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -30,6 +30,7 @@
 #include <linux/highmem.h>
 #include <linux/gfp.h>
 #include <linux/scatterlist.h>
+#include <linux/mem_encrypt.h>
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -155,6 +156,17 @@ unsigned long swiotlb_size_or_default(void)
 	return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
 
+void __weak swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
+{
+}
+
+/* For swiotlb, clear memory encryption mask from dma addresses */
+static dma_addr_t swiotlb_phys_to_dma(struct device *hwdev,
+				      phys_addr_t address)
+{
+	return __sme_clr(phys_to_dma(hwdev, address));
+}
+
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
 				      volatile void *address)
@@ -183,6 +195,31 @@ void swiotlb_print_info(void)
 	       bytes >> 20, vstart, vend - 1);
 }
 
+/*
+ * Early SWIOTLB allocation may be too early to allow an architecture to
+ * perform the desired operations.  This function allows the architecture to
+ * call SWIOTLB when the operations are possible.  It needs to be called
+ * before the SWIOTLB memory is used.
+ */
+void __init swiotlb_update_mem_attributes(void)
+{
+	void *vaddr;
+	unsigned long bytes;
+
+	if (no_iotlb_memory || late_alloc)
+		return;
+
+	vaddr = phys_to_virt(io_tlb_start);
+	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_mem_attributes(vaddr, bytes);
+	memset(vaddr, 0, bytes);
+
+	vaddr = phys_to_virt(io_tlb_overflow_buffer);
+	bytes = PAGE_ALIGN(io_tlb_overflow);
+	swiotlb_set_mem_attributes(vaddr, bytes);
+	memset(vaddr, 0, bytes);
+}
+
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
 	void *v_overflow_buffer;
@@ -320,6 +357,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	io_tlb_start = virt_to_phys(tlb);
 	io_tlb_end = io_tlb_start + bytes;
 
+	swiotlb_set_mem_attributes(tlb, bytes);
 	memset(tlb, 0, bytes);
 
 	/*
@@ -330,6 +368,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!v_overflow_buffer)
 		goto cleanup2;
 
+	swiotlb_set_mem_attributes(v_overflow_buffer, io_tlb_overflow);
+	memset(v_overflow_buffer, 0, io_tlb_overflow);
 	io_tlb_overflow_buffer = virt_to_phys(v_overflow_buffer);
 
 	/*
@@ -581,7 +621,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 		return SWIOTLB_MAP_ERROR;
 	}
 
-	start_dma_addr = phys_to_dma(hwdev, io_tlb_start);
+	start_dma_addr = swiotlb_phys_to_dma(hwdev, io_tlb_start);
 	return swiotlb_tbl_map_single(hwdev, start_dma_addr, phys, size,
 				      dir, attrs);
 }
@@ -702,7 +742,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 		goto err_warn;
 
 	ret = phys_to_virt(paddr);
-	dev_addr = phys_to_dma(hwdev, paddr);
+	dev_addr = swiotlb_phys_to_dma(hwdev, paddr);
 
 	/* Confirm address can be DMA'd by device */
 	if (dev_addr + size - 1 > dma_mask) {
@@ -812,10 +852,10 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	map = map_single(dev, phys, size, dir, attrs);
 	if (map == SWIOTLB_MAP_ERROR) {
 		swiotlb_full(dev, size, dir, 1);
-		return phys_to_dma(dev, io_tlb_overflow_buffer);
+		return swiotlb_phys_to_dma(dev, io_tlb_overflow_buffer);
 	}
 
-	dev_addr = phys_to_dma(dev, map);
+	dev_addr = swiotlb_phys_to_dma(dev, map);
 
 	/* Ensure that the address returned is DMA'ble */
 	if (dma_capable(dev, dev_addr, size))
@@ -824,7 +864,7 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	attrs |= DMA_ATTR_SKIP_CPU_SYNC;
 	swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
 
-	return phys_to_dma(dev, io_tlb_overflow_buffer);
+	return swiotlb_phys_to_dma(dev, io_tlb_overflow_buffer);
 }
 EXPORT_SYMBOL_GPL(swiotlb_map_page);
 
@@ -958,7 +998,7 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 				sg_dma_len(sgl) = 0;
 				return 0;
 			}
-			sg->dma_address = phys_to_dma(hwdev, map);
+			sg->dma_address = swiotlb_phys_to_dma(hwdev, map);
 		} else
 			sg->dma_address = dev_addr;
 		sg_dma_len(sg) = sg->length;
@@ -1026,7 +1066,7 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 int
 swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
 {
-	return (dma_addr == phys_to_dma(hwdev, io_tlb_overflow_buffer));
+	return (dma_addr == swiotlb_phys_to_dma(hwdev, io_tlb_overflow_buffer));
 }
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
 
@@ -1039,6 +1079,6 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 int
 swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
+	return swiotlb_phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
 }
 EXPORT_SYMBOL(swiotlb_dma_supported);
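
Putting the pieces together, the map path above reduces to the following
decision. This is a toy model, not the kernel code, and all constants
are invented for the sketch:

#include <stdio.h>
#include <stdint.h>

#define SME_MASK	(1ULL << 47)	/* stand-in encryption bit */
#define IO_TLB_PHYS	0x20000000ULL	/* pretend bounce pool address */

static int dma_capable(uint64_t dev_mask, uint64_t dma_addr, uint64_t size)
{
	return dma_addr + size - 1 <= dev_mask;
}

/* models swiotlb_map_page(): direct map if possible, else bounce */
static uint64_t map_page(uint64_t dev_mask, uint64_t paddr, uint64_t size)
{
	uint64_t dev_addr = paddr | SME_MASK;		/* phys_to_dma() */

	if (dma_capable(dev_mask, dev_addr, size))
		return dev_addr;			/* fast path */

	/* the bounce buffer address goes out with the mask cleared, as
	 * in swiotlb_phys_to_dma(), since the pool is mapped decrypted */
	return (IO_TLB_PHYS | SME_MASK) & ~SME_MASK;
}

int main(void)
{
	printf("64-bit device: %#llx\n",
	       (unsigned long long)map_page(~0ULL, 0x1000, 4096));
	printf("32-bit device: %#llx\n",
	       (unsigned long long)map_page(0xffffffffULL, 0x1000, 4096));
	return 0;
}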