From: David Stevens
To: Robin Murphy, Christoph Hellwig
Cc: Joerg Roedel, Will Deacon, Lu Baolu, Tom Murphy,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	David Stevens
Subject: [PATCH v4 2/6] dma-iommu: fix arch_sync_dma for map
Date: Fri, 13 Aug 2021 16:38:35 +0900
Message-Id: <20210813073839.1562438-3-stevensd@google.com>
In-Reply-To: <20210813073839.1562438-1-stevensd@google.com>
References: <20210813073839.1562438-1-stevensd@google.com>

From: David Stevens

When calling arch_sync_dma, we need to pass it the memory that's actually being used
for dma. When using swiotlb bounce buffers, this is the bounce buffer.
Fold __iommu_dma_map_swiotlb into iommu_dma_map_page so it can sync the
right phys_addr_t.

Now that iommu_dma_map_sg delegates to a function which takes care of
architectural syncing in the untrusted device case, the call to
iommu_dma_sync_sg_for_device can be moved so it only occurs for trusted
devices. Doing the sync there for untrusted devices never really
worked, since it needs to be able to target swiotlb buffers.

This also moves the architectural sync to before the call to
__iommu_dma_map, to guarantee that untrusted devices can't see stale
data they shouldn't see.

Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
Signed-off-by: David Stevens
---
 drivers/iommu/dma-iommu.c | 105 +++++++++++++++++---------------------
 1 file changed, 47 insertions(+), 58 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 968e0150c95e..e9119ff93535 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -536,52 +536,6 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 	return iova + iova_off;
 }
 
-static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
-		size_t org_size, dma_addr_t dma_mask, bool coherent,
-		enum dma_data_direction dir, unsigned long attrs)
-{
-	int prot = dma_info_to_prot(dir, coherent, attrs);
-	struct iommu_domain *domain = iommu_get_dma_domain(dev);
-	struct iommu_dma_cookie *cookie = domain->iova_cookie;
-	struct iova_domain *iovad = &cookie->iovad;
-	size_t aligned_size = org_size;
-	void *padding_start;
-	size_t padding_size;
-	dma_addr_t iova;
-
-	/*
-	 * If both the physical buffer start address and size are
-	 * page aligned, we don't need to use a bounce page.
-	 */
-	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
-	    iova_offset(iovad, phys | org_size)) {
-		aligned_size = iova_align(iovad, org_size);
-		phys = swiotlb_tbl_map_single(dev, phys, org_size,
-					      aligned_size, dir, attrs);
-
-		if (phys == DMA_MAPPING_ERROR)
-			return DMA_MAPPING_ERROR;
-
-		/* Cleanup the padding area. */
-		padding_start = phys_to_virt(phys);
-		padding_size = aligned_size;
-
-		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-		    (dir == DMA_TO_DEVICE ||
-		     dir == DMA_BIDIRECTIONAL)) {
-			padding_start += org_size;
-			padding_size -= org_size;
-		}
-
-		memset(padding_start, 0, padding_size);
-	}
-
-	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
-		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
-	return iova;
-}
-
 static void __iommu_dma_free_pages(struct page **pages, int count)
 {
 	while (count--)
@@ -842,14 +796,50 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
 	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
-	dma_addr_t dma_handle;
+	int prot = dma_info_to_prot(dir, coherent, attrs);
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	size_t aligned_size = size;
+	dma_addr_t iova, dma_mask = dma_get_mask(dev);
+
+	/*
+	 * If both the physical buffer start address and size are
+	 * page aligned, we don't need to use a bounce page.
+	 */
+	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
+	    iova_offset(iovad, phys | size)) {
+		void *padding_start;
+		size_t padding_size;
+
+		aligned_size = iova_align(iovad, size);
+		phys = swiotlb_tbl_map_single(dev, phys, size,
+					      aligned_size, dir, attrs);
+
+		if (phys == DMA_MAPPING_ERROR)
+			return DMA_MAPPING_ERROR;
+
+		/* Cleanup the padding area. */
+		padding_start = phys_to_virt(phys);
+		padding_size = aligned_size;
+
+		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+		    (dir == DMA_TO_DEVICE ||
+		     dir == DMA_BIDIRECTIONAL)) {
+			padding_start += size;
+			padding_size -= size;
+		}
 
-	dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
-			coherent, dir, attrs);
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    dma_handle != DMA_MAPPING_ERROR)
+		memset(padding_start, 0, padding_size);
+	}
+
+	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
-	return dma_handle;
+
+	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
+	return iova;
 }
 
 static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
@@ -953,9 +943,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	int i;
 
 	for_each_sg(sg, s, nents, i) {
-		sg_dma_address(s) = __iommu_dma_map_swiotlb(dev, sg_phys(s),
-				s->length, dma_get_mask(dev),
-				dev_is_dma_coherent(dev), dir, attrs);
+		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
+				s->offset, s->length, dir, attrs);
 		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
 			goto out_unmap;
 		sg_dma_len(s) = s->length;
@@ -992,12 +981,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	    iommu_deferred_attach(dev, domain))
 		return 0;
 
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
-
 	if (dev_is_untrusted(dev))
 		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
 
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
+
 	/*
 	 * Work out how much IOVA space we need, and align the segments to
 	 * IOVA granules for the IOMMU driver to handle. With some clever
-- 
2.33.0.rc1.237.g0d66db33f3-goog
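
For readers who want the end state at a glance, the flow that
iommu_dma_map_page() ends up with after this patch is roughly the
following. This is a condensed sketch reassembled from the hunk above,
with the padding-area cleanup elided and the step comments added for
illustration; the patch itself is authoritative:

static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;
	bool coherent = dev_is_dma_coherent(dev);
	int prot = dma_info_to_prot(dir, coherent, attrs);
	struct iommu_domain *domain = iommu_get_dma_domain(dev);
	struct iommu_dma_cookie *cookie = domain->iova_cookie;
	struct iova_domain *iovad = &cookie->iovad;
	size_t aligned_size = size;
	dma_addr_t iova, dma_mask = dma_get_mask(dev);

	/* Step 1: bounce through swiotlb first, so that "phys" below refers
	 * to the buffer the (untrusted) device will actually access. */
	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
	    iova_offset(iovad, phys | size)) {
		aligned_size = iova_align(iovad, size);
		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
					      dir, attrs);
		if (phys == DMA_MAPPING_ERROR)
			return DMA_MAPPING_ERROR;
		/* ... padding-area cleanup as in the hunk above ... */
	}

	/* Step 2: architectural sync on the (possibly bounced) address,
	 * before the mapping becomes visible to the device. */
	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(phys, size, dir);

	/* Step 3: only now install the IOVA mapping. */
	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
	return iova;
}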