Date: Tue, 14 Apr 2020 17:04:58 -0700 (PDT)
From: David Rientjes
To: Christoph Hellwig, Tom Lendacky
Cc: Brijesh Singh, Jon Grimm, Joerg Roedel, linux-kernel@vger.kernel.org,
    iommu@lists.linux-foundation.org, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org
Subject: [patch 4/7] dma-direct: atomic allocations must come from atomic coherent pools

When a device requires unencrypted memory and the context does not allow
blocking, memory must be returned from the atomic coherent pools.

This avoids the remap when CONFIG_DMA_DIRECT_REMAP is not enabled and the
config only requires CONFIG_DMA_COHERENT_POOL.  This will be used for
CONFIG_AMD_MEM_ENCRYPT in a subsequent patch.
Keep all memory in these pools unencrypted.  When set_memory_decrypted()
fails, the memory is not added to the pool.  If adding memory to the
genpool fails and the subsequent set_memory_encrypted() also fails, there
is no alternative other than leaking the memory.

Signed-off-by: David Rientjes
---
 kernel/dma/direct.c | 46 ++++++++++++++++++++++++++++++++++++++-------
 kernel/dma/pool.c   | 27 +++++++++++++++++++++++---
 2 files changed, 63 insertions(+), 10 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a834ee22f8ff..07ecc5c4d134 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -76,6 +76,39 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+/*
+ * Decrypting memory is allowed to block, so if this device requires
+ * unencrypted memory it must come from atomic pools.
+ */
+static inline bool dma_should_alloc_from_pool(struct device *dev, gfp_t gfp,
+					      unsigned long attrs)
+{
+	if (!IS_ENABLED(CONFIG_DMA_COHERENT_POOL))
+		return false;
+	if (gfpflags_allow_blocking(gfp))
+		return false;
+	if (force_dma_unencrypted(dev))
+		return true;
+	if (!IS_ENABLED(CONFIG_DMA_DIRECT_REMAP))
+		return false;
+	if (dma_alloc_need_uncached(dev, attrs))
+		return true;
+	return false;
+}
+
+static inline bool dma_should_free_from_pool(struct device *dev,
+					     unsigned long attrs)
+{
+	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL))
+		return true;
+	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
+	    !force_dma_unencrypted(dev))
+		return false;
+	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP))
+		return true;
+	return false;
+}
+
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp, unsigned long attrs)
 {
@@ -125,9 +158,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 
-	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    dma_alloc_need_uncached(dev, attrs) &&
-	    !gfpflags_allow_blocking(gfp)) {
+	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
 		ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
 		if (!ret)
 			return NULL;
@@ -204,6 +235,11 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 {
 	unsigned int page_order = get_order(size);
 
+	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
+	if (dma_should_free_from_pool(dev, attrs) &&
+	    dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
+		return;
+
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
 	    !force_dma_unencrypted(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
@@ -211,10 +247,6 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 		return;
 	}
 
-	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
-		return;
-
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)cpu_addr,
 				     1 << page_order);
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 9e2da17ed17b..cf052314d9e4 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <linux/set_memory.h>
 #include
 #include
 #include
@@ -53,22 +54,42 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 
 	arch_dma_prep_coherent(page, pool_size);
 
+#ifdef CONFIG_DMA_DIRECT_REMAP
 	addr = dma_common_contiguous_remap(page, pool_size,
 					   pgprot_dmacoherent(PAGE_KERNEL),
 					   __builtin_return_address(0));
 	if (!addr)
 		goto free_page;
-
+#else
+	addr = page_to_virt(page);
+#endif
+	/*
+	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
+	 * shrink so no re-encryption occurs in dma_direct_free_pages().
+	 */
+	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
+				   1 << order);
+	if (ret)
+		goto remove_mapping;
 	ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
 				pool_size, NUMA_NO_NODE);
 	if (ret)
-		goto remove_mapping;
+		goto encrypt_mapping;
 
 	return 0;
 
+encrypt_mapping:
+	ret = set_memory_encrypted((unsigned long)page_to_virt(page),
+				   1 << order);
+	if (WARN_ON_ONCE(ret)) {
+		/* Decrypt succeeded but encrypt failed, purposely leak */
+		goto out;
+	}
 remove_mapping:
+#ifdef CONFIG_DMA_DIRECT_REMAP
	dma_common_free_remap(addr, pool_size);
-free_page:
+#endif
+free_page: __maybe_unused
 	if (!dma_release_from_contiguous(NULL, page, 1 << order))
 		__free_pages(page, order);
 out:
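
The allocation-side rule in the changelog (atomic context plus a need for unencrypted or uncached memory forces the atomic pools) can be sketched outside the kernel as a plain predicate.  In this userspace model the Kconfig options and kernel helpers are stood in for by booleans; none of the names below are kernel code, they only mirror the logic of dma_should_alloc_from_pool() in the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Booleans standing in for Kconfig options and kernel helpers. */
struct alloc_ctx {
	bool coherent_pool;     /* IS_ENABLED(CONFIG_DMA_COHERENT_POOL) */
	bool direct_remap;      /* IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) */
	bool blocking_allowed;  /* gfpflags_allow_blocking(gfp) */
	bool force_unencrypted; /* force_dma_unencrypted(dev) */
	bool need_uncached;     /* dma_alloc_need_uncached(dev, attrs) */
};

static bool should_alloc_from_pool(const struct alloc_ctx *c)
{
	if (!c->coherent_pool)
		return false;           /* no pools configured at all */
	if (c->blocking_allowed)
		return false;           /* can decrypt/remap inline instead */
	if (c->force_unencrypted)
		return true;            /* decryption blocks: must use a pool */
	if (!c->direct_remap)
		return false;
	return c->need_uncached;        /* uncached remap also blocks */
}
```

Note the ordering: force_dma_unencrypted() is checked before the CONFIG_DMA_DIRECT_REMAP test, which is what lets a CONFIG_DMA_COHERENT_POOL-only config (no remap) still serve SEV-style devices from the pools.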
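
The error ordering described in the changelog (decrypt before adding to the genpool; on a failed add, try to re-encrypt before freeing; if that also fails, deliberately leak) can likewise be modeled as a small state function.  This is a userspace sketch, with booleans standing in for set_memory_decrypted(), gen_pool_add_virt() and the recovery set_memory_encrypted() succeeding; it is not the kernel code itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Possible fates of a page in the pool-expansion path. */
enum expand_outcome {
	POOL_ADDED,   /* decrypted and handed to the genpool */
	PAGE_FREED,   /* a step failed; page safely freed */
	PAGE_LEAKED,  /* decrypt ok, add failed, re-encrypt failed: leak */
};

static enum expand_outcome expand(bool decrypt_ok, bool add_ok,
				  bool reencrypt_ok)
{
	if (!decrypt_ok)
		return PAGE_FREED;   /* still-encrypted memory is never added */
	if (add_ok)
		return POOL_ADDED;
	if (!reencrypt_ok)
		return PAGE_LEAKED;  /* cannot free memory whose state is unknown */
	return PAGE_FREED;           /* re-encrypted, safe to release */
}
```

The leak branch corresponds to the WARN_ON_ONCE() under the encrypt_mapping label: returning possibly-decrypted memory to the page allocator would hand unencrypted pages to unsuspecting users, so leaking is the lesser evil.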