From: Lu Baolu
To: Joerg Roedel, Tom Murphy, David Woodhouse, Christoph Hellwig
Cc: Ashok Raj, Tvrtko Ursulin, Intel-gfx@lists.freedesktop.org, iommu@lists.linux-foundation.org,
    linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v3 1/6] iommu: Handle freelists when using deferred flushing in iommu drivers
Date: Sat, 12 Sep 2020 11:21:55 +0800
Message-Id: <20200912032200.11489-2-baolu.lu@linux.intel.com>
In-Reply-To: <20200912032200.11489-1-baolu.lu@linux.intel.com>
References: <20200912032200.11489-1-baolu.lu@linux.intel.com>

From: Tom Murphy

Allow iommu_unmap_fast() to return the newly freed page-table pages and
pass the resulting freelist to queue_iova() in the dma-iommu ops path.
This is useful for IOMMU drivers (in this case the Intel IOMMU driver)
which must wait for the IOTLB flush to complete before the freed/unmapped
page-table pages can be released. This way we can still batch IOTLB flush
operations while handling the freelists.

Signed-off-by: Tom Murphy
Signed-off-by: Lu Baolu
---
 drivers/iommu/dma-iommu.c   | 30 ++++++++++++++------
 drivers/iommu/intel/iommu.c | 55 ++++++++++++++++++++++++-------------
 include/linux/iommu.h       |  1 +
 3 files changed, 59 insertions(+), 27 deletions(-)
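Reviewer note (not part of the change; git-am ignores everything before
the first "diff --git"): below is a minimal, self-contained userspace
sketch of the freelist lifecycle this patch sets up. The mock struct
page, chain_freed_page() and entry_dtor() are illustrative stand-ins,
not kernel API; only the chain-then-deferred-free pattern mirrors
iommu_dma_entry_dtor() and the gather->freelist handling in the diff.

#include <stdio.h>
#include <stdlib.h>

struct page {
	void *vaddr;		/* stands in for page_address(page) */
	struct page *freelist;	/* next freed page-table page in chain */
};

/* Unmap path: push a newly freed page-table page onto the chain. */
static struct page *chain_freed_page(struct page *head, struct page *pg)
{
	pg->freelist = head;
	return pg;
}

/* Deferred path: after the IOTLB flush, walk and free the chain. */
static void entry_dtor(struct page *freelist)
{
	while (freelist) {
		struct page *next = freelist->freelist;

		printf("freeing page-table page at %p\n", freelist->vaddr);
		free(freelist->vaddr);
		free(freelist);
		freelist = next;
	}
}

int main(void)
{
	struct page *freelist = NULL;

	/*
	 * "Unmap" three page-table pages. None may be freed yet: a
	 * stale IOTLB entry could still let a device reach them.
	 */
	for (int i = 0; i < 3; i++) {
		struct page *pg = calloc(1, sizeof(*pg));

		pg->vaddr = malloc(4096);
		freelist = chain_freed_page(freelist, pg);
	}

	/* ... batched, deferred IOTLB flush happens here ... */

	entry_dtor(freelist);	/* now it is safe to free the pages */
	return 0;
}

In kernel terms the same shape appears after this patch: unmap chains
pages onto gather->freelist, and they are freed only from the new
intel_iommu_tlb_sync() or from the flush-queue entry destructor.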
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5141d49a046b..82c071b2d5c8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,18 @@ struct iommu_dma_cookie {
 	struct iommu_domain		*fq_domain;
 };
 
+static void iommu_dma_entry_dtor(unsigned long data)
+{
+	struct page *freelist = (struct page *)data;
+
+	while (freelist) {
+		unsigned long p = (unsigned long)page_address(freelist);
+
+		freelist = freelist->freelist;
+		free_page(p);
+	}
+}
+
 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
 {
 	if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
@@ -344,7 +356,8 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
 			DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
 		cookie->fq_domain = domain;
-		init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL);
+		init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
+				      iommu_dma_entry_dtor);
 	}
 
 	if (!dev)
@@ -438,7 +451,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 }
 
 static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
-		dma_addr_t iova, size_t size)
+		dma_addr_t iova, size_t size, struct page *freelist)
 {
 	struct iova_domain *iovad = &cookie->iovad;
 
@@ -447,7 +460,8 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		cookie->msi_iova -= size;
 	else if (cookie->fq_domain)	/* non-strict mode */
 		queue_iova(iovad, iova_pfn(iovad, iova),
-				size >> iova_shift(iovad), 0);
+				size >> iova_shift(iovad),
+				(unsigned long)freelist);
 	else
 		free_iova_fast(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad));
@@ -472,7 +486,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 
 	if (!cookie->fq_domain)
 		iommu_tlb_sync(domain, &iotlb_gather);
-	iommu_dma_free_iova(cookie, dma_addr, size);
+	iommu_dma_free_iova(cookie, dma_addr, size, iotlb_gather.freelist);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -494,7 +508,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-		iommu_dma_free_iova(cookie, iova, size);
+		iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -649,7 +663,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 out_free_sg:
 	sg_free_table(&sgt);
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
 	__iommu_dma_free_pages(pages, count);
 	return NULL;
@@ -900,7 +914,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len);
+	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 	return 0;
@@ -1194,7 +1208,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 87b17bac04c2..63ee30c689a7 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1208,17 +1208,17 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
    pages can only be freed after the IOTLB flush has been done. */
 static struct page *domain_unmap(struct dmar_domain *domain,
 				 unsigned long start_pfn,
-				 unsigned long last_pfn)
+				 unsigned long last_pfn,
+				 struct page *freelist)
 {
-	struct page *freelist;
-
 	BUG_ON(!domain_pfn_supported(domain, start_pfn));
 	BUG_ON(!domain_pfn_supported(domain, last_pfn));
 	BUG_ON(start_pfn > last_pfn);
 
 	/* we don't need lock here; nobody else touches the iova range */
 	freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw),
-				       domain->pgd, 0, start_pfn, last_pfn, NULL);
+				       domain->pgd, 0, start_pfn, last_pfn,
+				       freelist);
 
 	/* free pgd */
 	if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) {
@@ -1976,7 +1976,8 @@ static void domain_exit(struct dmar_domain *domain)
 	if (domain->pgd) {
 		struct page *freelist;
 
-		freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
+		freelist = domain_unmap(domain, 0,
+					DOMAIN_MAX_PFN(domain->gaw), NULL);
 		dma_free_pagelist(freelist);
 	}
 
@@ -3532,7 +3533,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
 	if (dev_is_pci(dev))
 		pdev = to_pci_dev(dev);
 
-	freelist = domain_unmap(domain, start_pfn, last_pfn);
+	freelist = domain_unmap(domain, start_pfn, last_pfn, NULL);
 	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
 			!has_iova_flush_queue(&domain->iovad)) {
 		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
@@ -4595,7 +4596,8 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
 			struct page *freelist;
 
 			freelist = domain_unmap(si_domain,
-						start_vpfn, last_vpfn);
+						start_vpfn, last_vpfn,
+						NULL);
 
 			rcu_read_lock();
 			for_each_active_iommu(iommu, drhd)
@@ -5570,10 +5572,8 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 				struct iommu_iotlb_gather *gather)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
-	struct page *freelist = NULL;
 	unsigned long start_pfn, last_pfn;
-	unsigned int npages;
-	int iommu_id, level = 0;
+	int level = 0;
 
 	/* Cope with horrid API which requires us to unmap more than the
 	   size argument if it happens to be a large-page mapping. */
@@ -5585,22 +5585,38 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 	start_pfn = iova >> VTD_PAGE_SHIFT;
 	last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT;
 
-	freelist = domain_unmap(dmar_domain, start_pfn, last_pfn);
-
-	npages = last_pfn - start_pfn + 1;
-
-	for_each_domain_iommu(iommu_id, dmar_domain)
-		iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
-				      start_pfn, npages, !freelist, 0);
-
-	dma_free_pagelist(freelist);
+	gather->freelist = domain_unmap(dmar_domain, start_pfn,
+					last_pfn, gather->freelist);
 
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;
 
+	iommu_iotlb_gather_add_page(domain, gather, iova, size);
+
 	return size;
 }
 
+static void intel_iommu_tlb_sync(struct iommu_domain *domain,
+				 struct iommu_iotlb_gather *gather)
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	unsigned long iova_pfn = IOVA_PFN(gather->start);
+	size_t size = gather->end - gather->start;
+	unsigned long start_pfn, last_pfn;
+	unsigned long nrpages;
+	int iommu_id;
+
+	nrpages = aligned_nrpages(gather->start, size);
+	start_pfn = mm_to_dma_pfn(iova_pfn);
+	last_pfn = start_pfn + nrpages - 1;
+
+	for_each_domain_iommu(iommu_id, dmar_domain)
+		iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
+				      start_pfn, nrpages, !gather->freelist, 0);
+
+	dma_free_pagelist(gather->freelist);
+}
+
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
 					    dma_addr_t iova)
 {
@@ -6060,6 +6076,7 @@ const struct iommu_ops intel_iommu_ops = {
 	.aux_get_pasid		= intel_iommu_aux_get_pasid,
 	.map			= intel_iommu_map,
 	.unmap			= intel_iommu_unmap,
+	.iotlb_sync		= intel_iommu_tlb_sync,
 	.iova_to_phys		= intel_iommu_iova_to_phys,
 	.probe_device		= intel_iommu_probe_device,
 	.probe_finalize		= intel_iommu_probe_finalize,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index fee209efb756..f0d56360a976 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -180,6 +180,7 @@ struct iommu_iotlb_gather {
 	unsigned long		start;
 	unsigned long		end;
 	size_t			pgsize;
+	struct page		*freelist;
 };
 
 /**
-- 
2.17.1