From: David Stevens
To: Robin Murphy, Christoph Hellwig
Cc: Joerg Roedel, Will Deacon, Lu Baolu, Tom Murphy,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	David Stevens
Subject: [PATCH v5 7/7] dma-iommu: account for min_align_mask
Date: Mon, 16 Aug 2021 11:57:55 +0900
Message-Id: <20210816025755.2906695-8-stevensd@google.com>
In-Reply-To: <20210816025755.2906695-1-stevensd@google.com>
References: <20210816025755.2906695-1-stevensd@google.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: David Stevens

For devices which set min_align_mask, swiotlb preserves the offset of
the original physical address within that mask. Since __iommu_dma_map
accounts for non-aligned addresses, passing a non-aligned swiotlb
address with the swiotlb-aligned size results in the offset being
accounted for twice in the size passed to iommu_map_atomic. The extra
page exposed to DMA is also not cleaned up by __iommu_dma_unmap, since
that function unmaps with the correct size. This causes mapping
failures if the iova gets reused, due to collisions in the iommu page
tables.
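To make the arithmetic concrete, here is a minimal user-space sketch,
assuming a 4 KiB IOVA granule and a hypothetical buffer bounced to
offset 0x800 with length 0x1800; iova_align()/iova_offset() are modeled
with a plain power-of-two granule rather than the real struct
iova_domain helpers:

#include <stdio.h>
#include <stddef.h>

#define GRANULE 0x1000UL	/* assumed 4 KiB IOVA granule */

/* Simplified stand-ins for the kernel's iova_align()/iova_offset(). */
static size_t iova_align(size_t size)      { return (size + GRANULE - 1) & ~(GRANULE - 1); }
static size_t iova_offset(unsigned long p) { return p & (GRANULE - 1); }

int main(void)
{
	unsigned long phys = 0x800;	/* offset preserved by min_align_mask */
	size_t size = 0x1800;		/* original mapping length */
	size_t aligned_size = iova_align(size);				/* 0x2000 */

	/* Old code: __iommu_dma_map() re-adds the offset to the already
	 * aligned size, so three granules end up mapped... */
	size_t mapped_old = iova_align(aligned_size + iova_offset(phys));	/* 0x3000 */
	/* ...while unmap (and the fixed map path) only covers two. */
	size_t mapped_new = iova_align(size + iova_offset(phys));		/* 0x2000 */

	printf("old: 0x%zx, new: 0x%zx\n", mapped_old, mapped_new);
	return 0;
}

With the old call, the extra granule at the end of the mapping is never
unmapped, which is exactly the stale entry that later collides when the
iova range is reused.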
To fix this, pass the original size to __iommu_dma_map, since that
function already handles alignment.

Additionally, when swiotlb returns non-aligned addresses, there is
padding at the start of the bounce buffer that needs to be cleared.

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens
---
 drivers/iommu/dma-iommu.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b1b0327cc2f6..b44b367839c2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -788,7 +788,6 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
-	size_t aligned_size = size;
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
 
 	/*
@@ -797,8 +796,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	 */
 	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_use_swiotlb(dev) &&
 	    iova_offset(iovad, phys | size)) {
-		void *padding_start;
-		size_t padding_size;
+		void *tlb_start;
+		size_t aligned_size, iova_off, mapping_end_off;
 
 		aligned_size = iova_align(iovad, size);
 		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
@@ -807,24 +806,27 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		if (phys == DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 
-		/* Cleanup the padding area. */
-		padding_start = phys_to_virt(phys);
-		padding_size = aligned_size;
+		iova_off = iova_offset(iovad, phys);
+		tlb_start = phys_to_virt(phys - iova_off);
 
 		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 		    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
-			padding_start += size;
-			padding_size -= size;
+			/* Cleanup the padding area. */
+			mapping_end_off = iova_off + size;
+			memset(tlb_start, 0, iova_off);
+			memset(tlb_start + mapping_end_off, 0,
+			       aligned_size - mapping_end_off);
+		} else {
+			/* Nothing was sync'ed, so clear the whole buffer. */
+			memset(tlb_start, 0, aligned_size);
 		}
-
-		memset(padding_start, 0, padding_size);
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 
-	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
 }
-- 
2.33.0.rc1.237.g0d66db33f3-goog
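For reference, the head/tail clearing added in the last hunk can be
pictured with hypothetical numbers: a 0x1400-byte buffer bounced to
offset 0x800 inside an aligned_size = 0x2000 region leaves 0x800 bytes
of padding before the data and 0x400 bytes after it, and only that
padding (or, if nothing was bounced in, the whole region) is zeroed so
stale swiotlb contents are not exposed to the device. A minimal
user-space sketch of just that arithmetic, assuming a 4 KiB granule and
ignoring real swiotlb slot allocation:

#include <stdio.h>
#include <string.h>

#define GRANULE 0x1000UL	/* assumed 4 KiB IOVA granule */

int main(void)
{
	unsigned long phys = 0x800;	/* offset kept by min_align_mask */
	size_t size = 0x1400;		/* original mapping length */
	size_t aligned_size = (size + GRANULE - 1) & ~(GRANULE - 1);	/* 0x2000 */
	unsigned char slot[0x2000];	/* stands in for the buffer at tlb_start */

	size_t iova_off = phys & (GRANULE - 1);		/* 0x800: padding before the data */
	size_t mapping_end_off = iova_off + size;	/* 0x1c00: end of the real data */

	/* Data was bounced in (DMA_TO_DEVICE/BIDIRECTIONAL): clear only the
	 * padding around it, mirroring the two memset() calls in the patch. */
	memset(slot, 0, iova_off);
	memset(slot + mapping_end_off, 0, aligned_size - mapping_end_off);

	/* Otherwise nothing was copied in, so the whole region would be
	 * cleared instead: memset(slot, 0, aligned_size); */

	printf("head padding: 0x%zx, tail padding: 0x%zx\n",
	       iova_off, aligned_size - mapping_end_off);
	return 0;
}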