From: Lu Baolu
To: David Woodhouse, Joerg Roedel, Bjorn Helgaas, Christoph Hellwig
Cc: ashok.raj@intel.com, jacob.jun.pan@intel.com, alan.cox@intel.com,
    kevin.tian@intel.com, mika.westerberg@linux.intel.com, Ingo Molnar,
    Greg Kroah-Hartman, pengfei.xu@intel.com, Konrad Rzeszutek Wilk,
    Marek Szyprowski, Robin Murphy, Jonathan Corbet, Boris Ostrovsky,
    Juergen Gross, Stefano Stabellini, Steven Rostedt,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    Lu Baolu, Jacob Pan
Subject: [PATCH v4 9/9] iommu/vt-d: Use bounce buffer for untrusted devices
Date: Mon, 3 Jun 2019 09:16:20 +0800
Message-Id: <20190603011620.31999-10-baolu.lu@linux.intel.com>
In-Reply-To: <20190603011620.31999-1-baolu.lu@linux.intel.com>
References: <20190603011620.31999-1-baolu.lu@linux.intel.com>

The Intel VT-d hardware uses paging for DMA remapping, so the minimum
mapped window is one page in size. Device drivers may map buffers that
do not fill a whole IOMMU page, which lets the device access possibly
unrelated memory in the same page; a malicious device could exploit
this to mount DMA attacks. To address this, the Intel IOMMU driver
uses bounce pages for buffers that don't fill whole IOMMU pages.

Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
Tested-by: Mika Westerberg
---
 drivers/iommu/intel-iommu.c | 128 ++++++++++++++++++++++++++++++++----
 1 file changed, 117 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 2f54734d1c43..4bf744a1c239 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -52,6 +52,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "irq_remapping.h"
 #include "intel-pasid.h"
@@ -3537,6 +3538,19 @@ __intel_map_single(struct device *dev, phys_addr_t paddr, size_t size,
         if (!iova_pfn)
                 goto error;
 
+        if (device_needs_bounce(dev)) {
+                dma_addr_t ret_addr;
+
+                ret_addr = iommu_bounce_map(dev, iova_pfn << PAGE_SHIFT,
+                                            paddr, size, dir, attrs);
+                if (ret_addr == DMA_MAPPING_ERROR)
+                        goto error;
+                trace_bounce_map_single(dev, iova_pfn << PAGE_SHIFT,
+                                        paddr, size);
+
+                return ret_addr;
+        }
+
         /*
          * Check if DMAR supports zero-length reads on write only
          * mappings..
@@ -3620,14 +3634,28 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size,
                 start_pfn = mm_to_dma_pfn(iova_pfn);
                 last_pfn = start_pfn + nrpages - 1;
 
-                freelist = domain_unmap(domain, start_pfn, last_pfn);
+                if (device_needs_bounce(dev))
+                        for_each_sg(sglist, sg, nelems, i) {
+                                iommu_bounce_unmap(dev, sg_dma_address(sg),
+                                                   sg->length, dir, attrs);
+                                trace_bounce_unmap_sg(dev, i, nelems,
+                                                      sg_dma_address(sg),
+                                                      sg_phys(sg), sg->length);
+                        }
+                else
+                        freelist = domain_unmap(domain, start_pfn, last_pfn);
         } else {
                 iova_pfn = IOVA_PFN(dev_addr);
                 nrpages = aligned_nrpages(dev_addr, size);
                 start_pfn = mm_to_dma_pfn(iova_pfn);
                 last_pfn = start_pfn + nrpages - 1;
 
-                freelist = domain_unmap(domain, start_pfn, last_pfn);
+                if (device_needs_bounce(dev)) {
+                        iommu_bounce_unmap(dev, dev_addr, size, dir, attrs);
+                        trace_bounce_unmap_single(dev, dev_addr, size);
+                } else {
+                        freelist = domain_unmap(domain, start_pfn, last_pfn);
+                }
         }
 
         if (dev_is_pci(dev))
@@ -3774,6 +3802,26 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
                 prot |= DMA_PTE_WRITE;
 
         start_vpfn = mm_to_dma_pfn(iova_pfn);
+        if (device_needs_bounce(dev)) {
+                for_each_sg(sglist, sg, nelems, i) {
+                        dma_addr_t ret_addr;
+
+                        ret_addr = iommu_bounce_map(dev,
+                                        start_vpfn << VTD_PAGE_SHIFT,
+                                        sg_phys(sg), sg->length, dir, attrs);
+                        if (ret_addr == DMA_MAPPING_ERROR)
+                                break;
+
+                        trace_bounce_map_sg(dev, i, nelems, ret_addr,
+                                            sg_phys(sg), sg->length);
+
+                        sg->dma_address = ret_addr;
+                        sg->dma_length = sg->length;
+                        start_vpfn += aligned_nrpages(sg->offset, sg->length);
+                }
+
+                return i;
+        }
 
         ret = domain_sg_mapping(domain, start_vpfn, sglist, size, prot);
         if (unlikely(ret)) {
@@ -3787,16 +3835,74 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 
         return nelems;
 }
 
+static void
+intel_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
+                          size_t size, enum dma_data_direction dir)
+{
+        if (!iommu_need_mapping(dev))
+                dma_direct_sync_single_for_cpu(dev, addr, size, dir);
+
+        if (device_needs_bounce(dev))
+                iommu_bounce_sync(dev, addr, size, dir, SYNC_FOR_CPU);
+}
+
+static void
+intel_sync_single_for_device(struct device *dev, dma_addr_t addr,
+                             size_t size, enum dma_data_direction dir)
+{
+        if (!iommu_need_mapping(dev))
+                dma_direct_sync_single_for_device(dev, addr, size, dir);
+
+        if (device_needs_bounce(dev))
+                iommu_bounce_sync(dev, addr, size, dir, SYNC_FOR_DEVICE);
+}
+
+static void
+intel_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
+                      int nelems, enum dma_data_direction dir)
+{
+        struct scatterlist *sg;
+        int i;
+
+        if (!iommu_need_mapping(dev))
+                dma_direct_sync_sg_for_cpu(dev, sglist, nelems, dir);
+
+        if (device_needs_bounce(dev))
+                for_each_sg(sglist, sg, nelems, i)
+                        iommu_bounce_sync(dev, sg_dma_address(sg),
+                                          sg_dma_len(sg), dir, SYNC_FOR_CPU);
+}
+
+static void
+intel_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
+                         int nelems, enum dma_data_direction dir)
+{
+        struct scatterlist *sg;
+        int i;
+
+        if (!iommu_need_mapping(dev))
+                dma_direct_sync_sg_for_device(dev, sglist, nelems, dir);
+
+        if (device_needs_bounce(dev))
+                for_each_sg(sglist, sg, nelems, i)
+                        iommu_bounce_sync(dev, sg_dma_address(sg),
+                                          sg_dma_len(sg), dir, SYNC_FOR_DEVICE);
+}
+
 static const struct dma_map_ops intel_dma_ops = {
-        .alloc = intel_alloc_coherent,
-        .free = intel_free_coherent,
-        .map_sg = intel_map_sg,
-        .unmap_sg = intel_unmap_sg,
-        .map_page = intel_map_page,
-        .unmap_page = intel_unmap_page,
-        .map_resource = intel_map_resource,
-        .unmap_resource = intel_unmap_resource,
-        .dma_supported = dma_direct_supported,
+        .alloc                   = intel_alloc_coherent,
+        .free                    = intel_free_coherent,
+        .map_sg                  = intel_map_sg,
+        .unmap_sg                = intel_unmap_sg,
+        .map_page                = intel_map_page,
+        .unmap_page              = intel_unmap_page,
+        .sync_single_for_cpu     = intel_sync_single_for_cpu,
+        .sync_single_for_device  = intel_sync_single_for_device,
+        .sync_sg_for_cpu         = intel_sync_sg_for_cpu,
+        .sync_sg_for_device      = intel_sync_sg_for_device,
+        .map_resource            = intel_map_resource,
+        .unmap_resource          = intel_unmap_resource,
+        .dma_supported           = dma_direct_supported,
 };
 
 static inline int iommu_domain_cache_init(void)
-- 
2.17.1
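
For readers outside the IOMMU subsystem, the idea in the commit message
can be modelled in a few lines of plain userspace C. The sketch below is
illustrative only: IOMMU_PAGE_SIZE, needs_bounce() and bounce_map() are
hypothetical stand-ins, not the helpers used in the patch (the driver
relies on device_needs_bounce(), iommu_bounce_map() and the swiotlb-backed
bounce machinery introduced earlier in this series). It only shows why a
buffer that does not fill whole IOMMU pages gets copied into page-aligned
memory before being exposed to the device.

/*
 * Userspace model of the bounce-page idea described in the commit
 * message above.  Names and sizes here are illustrative, not the
 * kernel API.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define IOMMU_PAGE_SIZE 4096UL
#define IOMMU_PAGE_MASK (~(IOMMU_PAGE_SIZE - 1))

/*
 * A buffer shares IOMMU pages with other data unless it starts on a
 * page boundary and is a whole number of pages long.
 */
static int needs_bounce(uintptr_t addr, size_t size)
{
        return (addr & ~IOMMU_PAGE_MASK) || (size & ~IOMMU_PAGE_MASK);
}

/*
 * "Map" a buffer for device DMA.  If it does not fill whole IOMMU
 * pages, copy it into freshly allocated, page-aligned memory so the
 * pages exposed to the device contain nothing but this buffer.
 */
static void *bounce_map(void *buf, size_t size)
{
        size_t span = (size + IOMMU_PAGE_SIZE - 1) & IOMMU_PAGE_MASK;
        void *bounce;

        if (!needs_bounce((uintptr_t)buf, size))
                return buf;                     /* safe to map in place */

        bounce = aligned_alloc(IOMMU_PAGE_SIZE, span);
        if (!bounce)
                return NULL;
        memset(bounce, 0, span);                /* no stale data in the padding */
        memcpy(bounce, buf, size);              /* CPU -> device direction */
        return bounce;
}

int main(void)
{
        char small[100] = "sub-page network buffer";
        void *exposed = bounce_map(small, sizeof(small));

        printf("driver buffer %p, address exposed to device %p (%s)\n",
               (void *)small, exposed,
               exposed == (void *)small ? "mapped in place" : "bounced");
        if (exposed && exposed != (void *)small)
                free(exposed);
        return 0;
}

In the real driver, keeping the bounce page and the original buffer in
sync as data moves in each direction is the job of the sync callbacks
added above, i.e. iommu_bounce_sync() with SYNC_FOR_CPU / SYNC_FOR_DEVICE.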