From: Lu Baolu
To: David Woodhouse, Joerg Roedel
Cc: ashok.raj@intel.com, jacob.jun.pan@intel.com, alan.cox@intel.com,
	kevin.tian@intel.com, mika.westerberg@linux.intel.com,
	pengfei.xu@intel.com, Konrad Rzeszutek Wilk, Christoph Hellwig,
	Marek Szyprowski, Robin Murphy, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, Lu Baolu, Jacob Pan
Subject: [PATCH v3 09/10] iommu/vt-d: Add dma sync ops for untrusted devices
Date: Sun, 21 Apr 2019 09:17:18 +0800
Message-Id: <20190421011719.14909-10-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190421011719.14909-1-baolu.lu@linux.intel.com>
References: <20190421011719.14909-1-baolu.lu@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This adds the DMA sync ops for DMA buffers used by any untrusted device.
Such buffers need to be synced explicitly because they might have been
mapped with bounce pages.

Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
Tested-by: Mika Westerberg
---
 drivers/iommu/Kconfig       |  1 +
 drivers/iommu/intel-iommu.c | 96 +++++++++++++++++++++++++++++++++----
 2 files changed, 88 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index b918c22ca25b..f3191ec29e45 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -194,6 +194,7 @@ config INTEL_IOMMU
 	select IOMMU_IOVA
 	select NEED_DMA_MAP_STATE
 	select DMAR_TABLE
+	select IOMMU_BOUNCE_PAGE
 	help
 	  DMA remapping (DMAR) devices support enables independent address
 	  translations for Direct Memory Access (DMA) from devices.
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0d80f26b8a72..ed941ec9b9d5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3683,16 +3683,94 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 	return nelems;
 }
 
+static void
+intel_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
+			  size_t size, enum dma_data_direction dir)
+{
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	iommu_bounce_sync_single(dev, addr, size, dir, SYNC_FOR_CPU);
+}
+
+static void
+intel_sync_single_for_device(struct device *dev, dma_addr_t addr,
+			     size_t size, enum dma_data_direction dir)
+{
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	iommu_bounce_sync_single(dev, addr, size, dir, SYNC_FOR_DEVICE);
+}
+
+static void
+intel_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
+		      int nelems, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	for_each_sg(sglist, sg, nelems, i)
+		iommu_bounce_sync_single(dev, sg_dma_address(sg),
+					 sg_dma_len(sg), dir, SYNC_FOR_CPU);
+}
+
+static void
+intel_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
+			 int nelems, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	for_each_sg(sglist, sg, nelems, i)
+		iommu_bounce_sync_single(dev, sg_dma_address(sg),
+					 sg_dma_len(sg), dir, SYNC_FOR_DEVICE);
+}
+
 static const struct dma_map_ops intel_dma_ops = {
-	.alloc = intel_alloc_coherent,
-	.free = intel_free_coherent,
-	.map_sg = intel_map_sg,
-	.unmap_sg = intel_unmap_sg,
-	.map_page = intel_map_page,
-	.unmap_page = intel_unmap_page,
-	.map_resource = intel_map_resource,
-	.unmap_resource = intel_unmap_page,
-	.dma_supported = dma_direct_supported,
+	.alloc			= intel_alloc_coherent,
+	.free			= intel_free_coherent,
+	.map_sg			= intel_map_sg,
+	.unmap_sg		= intel_unmap_sg,
+	.map_page		= intel_map_page,
+	.unmap_page		= intel_unmap_page,
+	.sync_single_for_cpu	= intel_sync_single_for_cpu,
+	.sync_single_for_device	= intel_sync_single_for_device,
+	.sync_sg_for_cpu	= intel_sync_sg_for_cpu,
+	.sync_sg_for_device	= intel_sync_sg_for_device,
+	.map_resource		= intel_map_resource,
+	.unmap_resource		= intel_unmap_page,
+	.dma_supported		= dma_direct_supported,
 };
 
 static inline int iommu_domain_cache_init(void)
-- 
2.17.1