Cc: baolu.lu@linux.intel.com, David Woodhouse, Joerg Roedel, Bjorn Helgaas,
    Christoph Hellwig, ashok.raj@intel.com, jacob.jun.pan@intel.com,
    alan.cox@intel.com, kevin.tian@intel.com, mika.westerberg@linux.intel.com,
    Ingo Molnar, Greg Kroah-Hartman, pengfei.xu@intel.com, Marek Szyprowski,
    Robin Murphy, Jonathan Corbet, Boris Ostrovsky, Juergen Gross,
    Stefano Stabellini, Steven Rostedt, iommu@lists.linux-foundation.org,
    linux-kernel@vger.kernel.org, Jacob Pan, Alan Cox, Mika Westerberg
Subject: Re: [PATCH v4 4/9] iommu: Add bounce page APIs
To: Konrad Rzeszutek Wilk
References: <20190603011620.31999-1-baolu.lu@linux.intel.com>
 <20190603011620.31999-5-baolu.lu@linux.intel.com>
 <20190610155614.GV28796@char.us.oracle.com>
From: Lu Baolu
Message-ID: <8d34c6be-f847-3871-d098-2f3aa1933924@linux.intel.com>
Date: Wed, 12 Jun 2019 08:45:53 +0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1
MIME-Version: 1.0
In-Reply-To: <20190610155614.GV28796@char.us.oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi,

On 6/10/19 11:56 PM, Konrad Rzeszutek Wilk wrote:
> On Mon, Jun 03, 2019 at 09:16:15AM +0800, Lu Baolu wrote:
>> IOMMU hardware always uses paging for DMA remapping. The
>> minimum mapped window is a page size. Device drivers may map
>> buffers that don't fill a whole IOMMU window, which allows a
>> device to access possibly unrelated memory; a malicious device
>> can exploit this to perform a DMA attack.
>>
>> This introduces the bounce buffer mechanism for DMA buffers
>> which don't fill a minimal IOMMU page. It can be used by
>> various vendor-specific IOMMU drivers as long as the DMA
>> domain is managed by the generic IOMMU layer. The following
>> APIs are added:
>>
>> * iommu_bounce_map(dev, addr, paddr, size, dir, attrs)
>>   - Map a buffer starting at DMA address @addr in bounce page
>>     manner. For buffer parts that don't span a whole minimal
>>     IOMMU page, the bounce page policy is applied: a bounce
>>     page mapped by swiotlb is used as the DMA target in the
>>     IOMMU page table. Otherwise, the physical address @paddr
>>     is mapped instead.
>>
>> * iommu_bounce_unmap(dev, addr, size, dir, attrs)
>>   - Unmap the buffer mapped with iommu_bounce_map(). The bounce
>>     page is torn down after the bounced data has been synced.
>>
>> * iommu_bounce_sync(dev, addr, size, dir, target)
>>   - Sync the bounced data in case the bounce mapped buffer is
>>     reused.
>>
>> All of these APIs are guarded by the kernel option
>> IOMMU_BOUNCE_PAGE, so they can be compiled out for cases where
>> bounce pages are not needed, for example, embedded cases.
>>
>> Cc: Ashok Raj
>> Cc: Jacob Pan
>> Cc: Kevin Tian
>> Cc: Alan Cox
>> Cc: Mika Westerberg
>> Signed-off-by: Lu Baolu
>> ---
>>  drivers/iommu/Kconfig |  14 +++++
>>  drivers/iommu/iommu.c | 119 ++++++++++++++++++++++++++++++++++++++++++
>>  include/linux/iommu.h |  35 +++++++++++++
>>  3 files changed, 168 insertions(+)
>>
>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index 83664db5221d..d837ec3f359b 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -86,6 +86,20 @@ config IOMMU_DEFAULT_PASSTHROUGH
>>
>>  	  If unsure, say N here.
>>
>> +config IOMMU_BOUNCE_PAGE
>> +	bool "Use bounce page for untrusted devices"
>> +	depends on IOMMU_API
>> +	select SWIOTLB
>
> I think you want:
> 	depends on IOMMU_API && SWIOTLB
>
> As people may want to have IOMMU and SWIOTLB, and not
> IOMMU_BOUNCE_PAGE enabled.

Yes. Yours makes more sense.
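So the stanza would end up looking roughly like this (just a sketch, not
tested yet; help text elided):

config IOMMU_BOUNCE_PAGE
	bool "Use bounce page for untrusted devices"
	depends on IOMMU_API && SWIOTLB
	help
	  ...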
Best regards,
Baolu

>
>> +	help
>> +	  IOMMU hardware always uses paging for DMA remapping. The minimum
>> +	  mapped window is a page size. Device drivers may map buffers not
>> +	  filling the whole IOMMU window. This allows a device to access
>> +	  possibly unrelated memory, and a malicious device can exploit this
>> +	  to perform a DMA attack. Select this to use a bounce page for a
>> +	  buffer which doesn't fill a whole IOMMU page.
>> +
>> +	  If unsure, say N here.
>> +
>>  config OF_IOMMU
>>  	def_bool y
>>  	depends on OF && IOMMU_API
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index 2a906386bb8e..fa44f681a82b 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -2246,3 +2246,122 @@ int iommu_sva_get_pasid(struct iommu_sva *handle)
>>  	return ops->sva_get_pasid(handle);
>>  }
>>  EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
>> +
>> +#ifdef CONFIG_IOMMU_BOUNCE_PAGE
>> +
>> +/*
>> + * Bounce buffer support for external devices:
>> + *
>> + * IOMMU hardware always uses paging for DMA remapping. The minimum mapped
>> + * window is a page size. Device drivers may map buffers not filling the
>> + * whole IOMMU window. This allows a device to access possibly unrelated
>> + * memory, and a malicious device can exploit this to perform a DMA attack.
>> + * Use bounce pages for buffers which don't fill whole IOMMU pages.
>> + */
>> +
>> +static inline size_t
>> +get_aligned_size(struct iommu_domain *domain, dma_addr_t addr, size_t size)
>> +{
>> +	unsigned long page_size = 1 << __ffs(domain->pgsize_bitmap);
>> +	unsigned long offset = page_size - 1;
>> +
>> +	return ALIGN((addr & offset) + size, page_size);
>> +}
>> +
>> +dma_addr_t iommu_bounce_map(struct device *dev, dma_addr_t iova,
>> +			    phys_addr_t paddr, size_t size,
>> +			    enum dma_data_direction dir,
>> +			    unsigned long attrs)
>> +{
>> +	struct iommu_domain *domain;
>> +	unsigned int min_pagesz;
>> +	phys_addr_t tlb_addr;
>> +	size_t aligned_size;
>> +	int prot = 0;
>> +	int ret;
>> +
>> +	domain = iommu_get_dma_domain(dev);
>> +	if (!domain)
>> +		return DMA_MAPPING_ERROR;
>> +
>> +	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
>> +		prot |= IOMMU_READ;
>> +	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
>> +		prot |= IOMMU_WRITE;
>> +
>> +	aligned_size = get_aligned_size(domain, paddr, size);
>> +	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
>> +
>> +	/*
>> +	 * If both the physical buffer start address and size are
>> +	 * page aligned, we don't need to use a bounce page.
>> +	 */
>> +	if (!IS_ALIGNED(paddr | size, min_pagesz)) {
>> +		tlb_addr = swiotlb_tbl_map_single(dev,
>> +				__phys_to_dma(dev, io_tlb_start),
>> +				paddr, size, aligned_size, dir, attrs);
>> +		if (tlb_addr == DMA_MAPPING_ERROR)
>> +			return DMA_MAPPING_ERROR;
>> +	} else {
>> +		tlb_addr = paddr;
>> +	}
>> +
>> +	ret = iommu_map(domain, iova, tlb_addr, aligned_size, prot);
>> +	if (ret) {
>> +		swiotlb_tbl_unmap_single(dev, tlb_addr, size,
>> +					 aligned_size, dir, attrs);
>> +		return DMA_MAPPING_ERROR;
>> +	}
>> +
>> +	return iova;
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_bounce_map);
>> +
>> +static inline phys_addr_t
>> +iova_to_tlb_addr(struct iommu_domain *domain, dma_addr_t addr)
>> +{
>> +	if (unlikely(!domain->ops || !domain->ops->iova_to_phys))
>> +		return 0;
>> +
>> +	return domain->ops->iova_to_phys(domain, addr);
>> +}
>> +
>> +void iommu_bounce_unmap(struct device *dev, dma_addr_t iova, size_t size,
>> +			enum dma_data_direction dir, unsigned long attrs)
>> +{
>> +	struct iommu_domain *domain;
>> +	phys_addr_t tlb_addr;
>> +	size_t aligned_size;
>> +
>> +	domain = iommu_get_dma_domain(dev);
>> +	if (WARN_ON(!domain))
>> +		return;
>> +
>> +	aligned_size = get_aligned_size(domain, iova, size);
>> +	tlb_addr = iova_to_tlb_addr(domain, iova);
>> +	if (WARN_ON(!tlb_addr))
>> +		return;
>> +
>> +	iommu_unmap(domain, iova, aligned_size);
>> +	if (is_swiotlb_buffer(tlb_addr))
>> +		swiotlb_tbl_unmap_single(dev, tlb_addr, size,
>> +					 aligned_size, dir, attrs);
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_bounce_unmap);
>> +
>> +void iommu_bounce_sync(struct device *dev, dma_addr_t addr, size_t size,
>> +		       enum dma_data_direction dir, enum dma_sync_target target)
>> +{
>> +	struct iommu_domain *domain;
>> +	phys_addr_t tlb_addr;
>> +
>> +	domain = iommu_get_dma_domain(dev);
>> +	if (WARN_ON(!domain))
>> +		return;
>> +
>> +	tlb_addr = iova_to_tlb_addr(domain, addr);
>> +	if (is_swiotlb_buffer(tlb_addr))
>> +		swiotlb_tbl_sync_single(dev, tlb_addr, size, dir, target);
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_bounce_sync);
>> +#endif /* CONFIG_IOMMU_BOUNCE_PAGE */
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index 91af22a344e2..814c0da64692 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -25,6 +25,8 @@
>>  #include
>>  #include
>>  #include
>> +#include
>> +#include
>>
>>  #define IOMMU_READ	(1 << 0)
>>  #define IOMMU_WRITE	(1 << 1)
>> @@ -499,6 +501,39 @@ int iommu_sva_set_ops(struct iommu_sva *handle,
>>  			  const struct iommu_sva_ops *ops);
>>  int iommu_sva_get_pasid(struct iommu_sva *handle);
>>
>> +#ifdef CONFIG_IOMMU_BOUNCE_PAGE
>> +dma_addr_t iommu_bounce_map(struct device *dev, dma_addr_t iova,
>> +			    phys_addr_t paddr, size_t size,
>> +			    enum dma_data_direction dir,
>> +			    unsigned long attrs);
>> +void iommu_bounce_unmap(struct device *dev, dma_addr_t iova, size_t size,
>> +			enum dma_data_direction dir, unsigned long attrs);
>> +void iommu_bounce_sync(struct device *dev, dma_addr_t addr, size_t size,
>> +		       enum dma_data_direction dir,
>> +		       enum dma_sync_target target);
>> +#else
>> +static inline
>> +dma_addr_t iommu_bounce_map(struct device *dev, dma_addr_t iova,
>> +			    phys_addr_t paddr, size_t size,
>> +			    enum dma_data_direction dir,
>> +			    unsigned long attrs)
>> +{
>> +	return DMA_MAPPING_ERROR;
>> +}
>> +
>> +static inline
>> +void iommu_bounce_unmap(struct device *dev, dma_addr_t iova, size_t size,
>> +			enum dma_data_direction dir, unsigned long attrs)
>> +{
>> +}
>> +
>> +static inline
>> +void iommu_bounce_sync(struct device *dev, dma_addr_t addr, size_t size,
>> +		       enum dma_data_direction dir, enum dma_sync_target target)
>> +{
>> +}
>> +#endif /* CONFIG_IOMMU_BOUNCE_PAGE */
>> +
>>  #else /* CONFIG_IOMMU_API */
>>
>>  struct iommu_ops {};
>> --
>> 2.17.1
>>
>
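P.S. For anyone picking these helpers up out of context, here is a rough,
untested sketch of how a driver-side map/unmap callback could use them for
an untrusted device. Everything named example_*() is made up purely for
illustration; only the iommu_bounce_*() calls come from this patch.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/mm.h>

/* Hypothetical driver-specific IOVA allocator, not part of this patch. */
extern dma_addr_t example_alloc_iova(struct device *dev, size_t size);
extern void example_free_iova(struct device *dev, dma_addr_t iova, size_t size);

static dma_addr_t example_map_page(struct device *dev, struct page *page,
				   unsigned long offset, size_t size,
				   enum dma_data_direction dir,
				   unsigned long attrs)
{
	phys_addr_t paddr = page_to_phys(page) + offset;
	dma_addr_t iova;

	/* Reserve an IOVA range in whatever way the driver normally does. */
	iova = example_alloc_iova(dev, size);
	if (iova == DMA_MAPPING_ERROR)
		return DMA_MAPPING_ERROR;

	/*
	 * iommu_bounce_map() bounces the buffer through swiotlb when it does
	 * not both start and end on IOMMU page boundaries, and maps @paddr
	 * directly otherwise.
	 */
	return iommu_bounce_map(dev, iova, paddr, size, dir, attrs);
}

static void example_unmap_page(struct device *dev, dma_addr_t iova,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs)
{
	/* Tears down the IOMMU mapping and the bounce page, if one was used. */
	iommu_bounce_unmap(dev, iova, size, dir, attrs);
	example_free_iova(dev, iova, size);
}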