From: David Stevens
To: Robin Murphy, Christoph Hellwig
Cc: Joerg Roedel, Will Deacon, Lu Baolu, Tom Murphy,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	David Stevens
Subject: [PATCH v4 5/6] swiotlb: support aligned swiotlb buffers
Date: Fri, 13 Aug 2021 16:38:38 +0900
Message-Id: <20210813073839.1562438-6-stevensd@google.com>
In-Reply-To: <20210813073839.1562438-1-stevensd@google.com>
References: <20210813073839.1562438-1-stevensd@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: David Stevens

Add an argument to swiotlb_tbl_map_single that specifies the desired
alignment of the allocated buffer. This is used by dma-iommu to ensure
the buffer is aligned to the iova granule size when using swiotlb with
untrusted sub-granule mappings. This addresses an issue where adjacent
slots could be exposed to the untrusted device if
IO_TLB_SIZE < iova granule < PAGE_SIZE.
Signed-off-by: David Stevens
---
 drivers/iommu/dma-iommu.c |  4 ++--
 include/linux/swiotlb.h   |  3 ++-
 kernel/dma/swiotlb.c      | 11 +++++++----
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index bad813d63ea6..b1b0327cc2f6 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -801,8 +801,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		size_t padding_size;
 
 		aligned_size = iova_align(iovad, size);
-		phys = swiotlb_tbl_map_single(dev, phys, size,
-					      aligned_size, dir, attrs);
+		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
+					      iova_mask(iovad), dir, attrs);
 		if (phys == DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..93d82e43eb3a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -44,7 +44,8 @@ extern void __init swiotlb_update_mem_attributes(void);
 
 phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
 		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs);
+		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
+		unsigned long attrs);
 
 extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 		phys_addr_t tlb_addr,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e50df8d8f87e..d4c45d8cd1fa 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -427,7 +427,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * allocate a buffer from that IO TLB pool.
  */
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+		size_t alloc_size, unsigned int alloc_align_mask)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -450,6 +450,7 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
 	if (alloc_size >= PAGE_SIZE)
 		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
+	stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
 
 	spin_lock_irqsave(&mem->lock, flags);
 	if (unlikely(nslots > mem->nslabs - mem->used))
@@ -504,7 +505,8 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+		unsigned int alloc_align_mask, enum dma_data_direction dir,
+		unsigned long attrs)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
@@ -524,7 +526,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = find_slots(dev, orig_addr,
+			   alloc_size + offset, alloc_align_mask);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -636,7 +639,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
 			      swiotlb_force);
 
-	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
+	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, 0, dir,
 			attrs);
 	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
-- 
2.33.0.rc1.237.g0d66db33f3-goog