Date: Fri, 23 Mar 2018 12:53:52 -0700
From: tip-bot for Christoph Hellwig <tipbot@zytor.com>
Cc: konrad.wilk@oracle.com, dwmw2@infradead.org, mingo@kernel.org,
    hch@lst.de, torvalds@linux-foundation.org, peterz@infradead.org,
    joro@8bytes.org, jdmason@kudzu.us, tglx@linutronix.de, mulix@mulix.org,
    thomas.lendacky@amd.com, hpa@zytor.com, linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
In-Reply-To: <20180319103826.12853-14-hch@lst.de>
References: <20180319103826.12853-14-hch@lst.de>
Subject: [tip:x86/dma] dma/direct: Handle force decryption for DMA coherent buffers in common code
Git-Commit-ID: c10f07aa27dadf5ab5b3d58c48c91a467f80db49

Commit-ID:  c10f07aa27dadf5ab5b3d58c48c91a467f80db49
Gitweb:     https://git.kernel.org/tip/c10f07aa27dadf5ab5b3d58c48c91a467f80db49
Author:     Christoph Hellwig <hch@lst.de>
AuthorDate: Mon, 19 Mar 2018 11:38:25 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 20 Mar 2018 10:01:59 +0100

dma/direct: Handle force decryption for DMA coherent buffers in common code

With that in place, the generic DMA-direct routines can be used to
allocate non-encrypted bounce buffers, and the x86 SEV case can use the
generic swiotlb ops, including nice features such as using CMA
allocations.

Note that I'm not too happy about using sev_active() in DMA-direct, but
I couldn't come up with a good enough name for a wrapper to make it
worth adding.

Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Muli Ben-Yehuda <mulix@mulix.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: iommu@lists.linux-foundation.org
Link: http://lkml.kernel.org/r/20180319103826.12853-14-hch@lst.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
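For context, the consumer side is unchanged by this patch: a coherent
allocation still goes through dma_alloc_coherent(), and under SEV it now
comes back from dma_direct_alloc() with the encryption bit already
cleared. A minimal sketch of that driver-side view -- the driver name,
ring size and device below are hypothetical, not part of this patch:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/sizes.h>

/* Hypothetical driver setup: allocate a 64 KiB coherent ring buffer.
 * With this patch, dma_direct_alloc() hands back memory that already
 * has the encryption bit cleared when sev_active() is true, and the
 * returned dma_handle is computed via __phys_to_dma(). */
static void *my_drv_alloc_ring(struct device *dev, dma_addr_t *ring_dma)
{
	return dma_alloc_coherent(dev, SZ_64K, ring_dma, GFP_KERNEL);
}

static void my_drv_free_ring(struct device *dev, void *ring,
			     dma_addr_t ring_dma)
{
	/* dma_direct_free() re-encrypts the pages before freeing them. */
	dma_free_coherent(dev, SZ_64K, ring, ring_dma);
}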
---
 arch/x86/mm/mem_encrypt.c | 73 ++---------------------------------------------
 lib/dma-direct.c          | 32 +++++++++++++++++----
 2 files changed, 29 insertions(+), 76 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1b396422d26f..b2de398d1fd3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -195,58 +195,6 @@ void __init sme_early_init(void)
 		swiotlb_force = SWIOTLB_FORCE;
 }
 
-static void *sev_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
-		       gfp_t gfp, unsigned long attrs)
-{
-	unsigned int order;
-	struct page *page;
-	void *vaddr = NULL;
-
-	order = get_order(size);
-	page = alloc_pages_node(dev_to_node(dev), gfp, order);
-	if (page) {
-		dma_addr_t addr;
-
-		/*
-		 * Since we will be clearing the encryption bit, check the
-		 * mask with it already cleared.
-		 */
-		addr = __phys_to_dma(dev, page_to_phys(page));
-		if ((addr + size) > dev->coherent_dma_mask) {
-			__free_pages(page, get_order(size));
-		} else {
-			vaddr = page_address(page);
-			*dma_handle = addr;
-		}
-	}
-
-	if (!vaddr)
-		vaddr = swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
-
-	if (!vaddr)
-		return NULL;
-
-	/* Clear the SME encryption bit for DMA use if not swiotlb area */
-	if (!is_swiotlb_buffer(dma_to_phys(dev, *dma_handle))) {
-		set_memory_decrypted((unsigned long)vaddr, 1 << order);
-		memset(vaddr, 0, PAGE_SIZE << order);
-		*dma_handle = __sme_clr(*dma_handle);
-	}
-
-	return vaddr;
-}
-
-static void sev_free(struct device *dev, size_t size, void *vaddr,
-		     dma_addr_t dma_handle, unsigned long attrs)
-{
-	/* Set the SME encryption bit for re-use if not swiotlb area */
-	if (!is_swiotlb_buffer(dma_to_phys(dev, dma_handle)))
-		set_memory_encrypted((unsigned long)vaddr,
-				     1 << get_order(size));
-
-	swiotlb_free_coherent(dev, size, vaddr, dma_handle);
-}
-
 static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 {
 	pgprot_t old_prot, new_prot;
@@ -399,20 +347,6 @@ bool sev_active(void)
 }
 EXPORT_SYMBOL(sev_active);
 
-static const struct dma_map_ops sev_dma_ops = {
-	.alloc			= sev_alloc,
-	.free			= sev_free,
-	.map_page		= swiotlb_map_page,
-	.unmap_page		= swiotlb_unmap_page,
-	.map_sg			= swiotlb_map_sg_attrs,
-	.unmap_sg		= swiotlb_unmap_sg_attrs,
-	.sync_single_for_cpu	= swiotlb_sync_single_for_cpu,
-	.sync_single_for_device	= swiotlb_sync_single_for_device,
-	.sync_sg_for_cpu	= swiotlb_sync_sg_for_cpu,
-	.sync_sg_for_device	= swiotlb_sync_sg_for_device,
-	.mapping_error		= swiotlb_dma_mapping_error,
-};
-
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -423,12 +357,11 @@ void __init mem_encrypt_init(void)
 	swiotlb_update_mem_attributes();
 
 	/*
-	 * With SEV, DMA operations cannot use encryption. New DMA ops
-	 * are required in order to mark the DMA areas as decrypted or
-	 * to use bounce buffers.
+	 * With SEV, DMA operations cannot use encryption, we need to use
+	 * SWIOTLB to bounce buffer DMA operation.
 	 */
 	if (sev_active())
-		dma_ops = &sev_dma_ops;
+		dma_ops = &swiotlb_dma_ops;
 
 	/*
 	 * With SEV, we need to unroll the rep string I/O instructions.
diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index c9e8e21cb334..1277d293d4da 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -9,6 +9,7 @@
 #include <linux/scatterlist.h>
 #include <linux/dma-contiguous.h>
 #include <linux/pfn.h>
+#include <linux/set_memory.h>
 
 #define DIRECT_MAPPING_ERROR	0
 
@@ -20,6 +21,14 @@
 #define ARCH_ZONE_DMA_BITS 24
 #endif
 
+/*
+ * For AMD SEV all DMA must be to unencrypted addresses.
+ */
+static inline bool force_dma_unencrypted(void)
+{
+	return sev_active();
+}
+
 static bool
 check_addr(struct device *dev, dma_addr_t dma_addr, size_t size,
 		const char *caller)
@@ -37,7 +46,9 @@ check_addr(struct device *dev, dma_addr_t dma_addr, size_t size,
 
 static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 {
-	return phys_to_dma(dev, phys) + size - 1 <= dev->coherent_dma_mask;
+	dma_addr_t addr = force_dma_unencrypted() ?
+		__phys_to_dma(dev, phys) : phys_to_dma(dev, phys);
+	return addr + size - 1 <= dev->coherent_dma_mask;
 }
 
 void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
@@ -46,6 +57,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	int page_order = get_order(size);
 	struct page *page = NULL;
+	void *ret;
 
 	/* GFP_DMA32 and GFP_DMA are no ops without the corresponding zones: */
 	if (dev->coherent_dma_mask <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
@@ -78,10 +90,15 @@ again:
 
 	if (!page)
 		return NULL;
-
-	*dma_handle = phys_to_dma(dev, page_to_phys(page));
-	memset(page_address(page), 0, size);
-	return page_address(page);
+	ret = page_address(page);
+	if (force_dma_unencrypted()) {
+		set_memory_decrypted((unsigned long)ret, 1 << page_order);
+		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
+	} else {
+		*dma_handle = phys_to_dma(dev, page_to_phys(page));
+	}
+	memset(ret, 0, size);
+	return ret;
 }
 
 /*
@@ -92,9 +109,12 @@ void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_addr, unsigned long attrs)
 {
 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned int page_order = get_order(size);
 
+	if (force_dma_unencrypted())
+		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
 	if (!dma_release_from_contiguous(dev, virt_to_page(cpu_addr), count))
-		free_pages((unsigned long)cpu_addr, get_order(size));
+		free_pages((unsigned long)cpu_addr, page_order);
 }
 
 static dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
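A footnote on the dma_coherent_ok() change: phys_to_dma() on SME/SEV
systems sets the encryption bit in the returned address, so an address
that will be handed out decrypted must be checked against the coherent
mask via __phys_to_dma(), which leaves the bit clear. A standalone,
userspace-compilable illustration of the arithmetic -- the C-bit
position and the device mask below are made-up example values, not
real platform constants:

/* Illustration only: why the mask check must strip the encryption bit.
 * Compile with any C compiler; bit 47 and the 32-bit mask are example
 * values, not taken from the kernel. */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_C_BIT  (1ULL << 47)        /* pretend SME encryption bit */
#define COHERENT_MASK  ((1ULL << 32) - 1)  /* a 32-bit-capable device */

int main(void)
{
	uint64_t phys = 0x1000;            /* a page well below 4 GiB */
	uint64_t size = 4096;
	uint64_t encrypted = phys | EXAMPLE_C_BIT;

	/* With the C-bit set, the address looks far out of range ... */
	printf("encrypted passes:   %d\n",
	       encrypted + size - 1 <= COHERENT_MASK);
	/* ... while the same page with the bit clear is fine, which is
	 * why force_dma_unencrypted() selects __phys_to_dma() above. */
	printf("unencrypted passes: %d\n",
	       phys + size - 1 <= COHERENT_MASK);
	return 0;
}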