From: Lu Baolu
To: Joerg Roedel, Tom Murphy, David Woodhouse, Christoph Hellwig
Cc: Ashok Raj, Tvrtko Ursulin, Intel-gfx@lists.freedesktop.org,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    Lu Baolu
Subject: [PATCH v4 1/7] iommu: Handle freelists when using deferred flushing in iommu drivers
Date: Sun, 27 Sep 2020 14:34:31 +0800
Message-Id: <20200927063437.13988-2-baolu.lu@linux.intel.com>
In-Reply-To: <20200927063437.13988-1-baolu.lu@linux.intel.com>
References: <20200927063437.13988-1-baolu.lu@linux.intel.com>

From: Tom Murphy

Allow iommu_unmap_fast() to return the newly freed page table pages and
pass that freelist on to queue_iova() in the dma-iommu ops path. This
is useful for IOMMU drivers (here the Intel IOMMU driver) that must
wait for the IOTLB flush to complete before the page table pages freed
by an unmap can actually be released. This way IOTLB flushes can still
be batched while the freelists are handled correctly.

Signed-off-by: Tom Murphy
Signed-off-by: Lu Baolu
---
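Not part of the patch itself: below is a rough sketch, for review
purposes, of how the deferred-flush path is expected to use the new
freelist plumbing. It only uses helpers touched by this series; the
wrapper function name and its parameters are illustrative, and the
locking and error handling of the real callers are omitted.

#include <linux/iommu.h>
#include <linux/iova.h>

static void example_deferred_unmap(struct iommu_domain *domain,
                                   struct iova_domain *iovad,
                                   unsigned long iova, size_t size)
{
        struct iommu_iotlb_gather gather;

        iommu_iotlb_gather_init(&gather);

        /* The driver chains the page table pages freed by the unmap on
         * gather.freelist instead of releasing them immediately. */
        iommu_unmap_fast(domain, iova, size, &gather);

        /* Non-strict mode: defer the IOTLB flush and hand the freelist
         * to the flush queue as the per-entry destructor data ... */
        queue_iova(iovad, iova_pfn(iovad, iova),
                   size >> iova_shift(iovad),
                   (unsigned long)gather.freelist);

        /* ... so iommu_dma_entry_dtor() frees those pages only after
         * the queued flush has actually been performed. */
}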
 drivers/iommu/dma-iommu.c   | 29 +++++++++++++------
 drivers/iommu/intel/iommu.c | 55 ++++++++++++++++++++++++-------------
 include/linux/iommu.h       |  1 +
 3 files changed, 58 insertions(+), 27 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cd6e3c70ebb3..1b8ef3a2cbc3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,18 @@ struct iommu_dma_cookie {
 	struct iommu_domain		*fq_domain;
 };
 
+static void iommu_dma_entry_dtor(unsigned long data)
+{
+	struct page *freelist = (struct page *)data;
+
+	while (freelist) {
+		unsigned long p = (unsigned long)page_address(freelist);
+
+		freelist = freelist->freelist;
+		free_page(p);
+	}
+}
+
 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
 {
 	if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
@@ -344,7 +356,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
 			DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
 		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
-					  NULL))
+					  iommu_dma_entry_dtor))
 			pr_warn("iova flush queue initialization failed\n");
 		else
 			cookie->fq_domain = domain;
@@ -441,7 +453,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 }
 
 static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
-		dma_addr_t iova, size_t size)
+		dma_addr_t iova, size_t size, struct page *freelist)
 {
 	struct iova_domain *iovad = &cookie->iovad;
 
@@ -450,7 +462,8 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		cookie->msi_iova -= size;
 	else if (cookie->fq_domain)	/* non-strict mode */
 		queue_iova(iovad, iova_pfn(iovad, iova),
-				size >> iova_shift(iovad), 0);
+				size >> iova_shift(iovad),
+				(unsigned long)freelist);
 	else
 		free_iova_fast(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad));
@@ -475,7 +488,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 
 	if (!cookie->fq_domain)
 		iommu_iotlb_sync(domain, &iotlb_gather);
-	iommu_dma_free_iova(cookie, dma_addr, size);
+	iommu_dma_free_iova(cookie, dma_addr, size, iotlb_gather.freelist);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -497,7 +510,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-		iommu_dma_free_iova(cookie, iova, size);
+		iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -649,7 +662,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 out_free_sg:
 	sg_free_table(&sgt);
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
 	__iommu_dma_free_pages(pages, count);
 	return NULL;
@@ -900,7 +913,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len);
+	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 	return 0;
@@ -1194,7 +1207,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 722545f61ba2..fdd514c8b2d4 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1243,17 +1243,17 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
    pages can only be freed after the IOTLB flush has been done. */
 static struct page *domain_unmap(struct dmar_domain *domain,
 				 unsigned long start_pfn,
-				 unsigned long last_pfn)
+				 unsigned long last_pfn,
+				 struct page *freelist)
 {
-	struct page *freelist;
-
 	BUG_ON(!domain_pfn_supported(domain, start_pfn));
 	BUG_ON(!domain_pfn_supported(domain, last_pfn));
 	BUG_ON(start_pfn > last_pfn);
 
 	/* we don't need lock here; nobody else touches the iova range */
 	freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw),
-				       domain->pgd, 0, start_pfn, last_pfn, NULL);
+				       domain->pgd, 0, start_pfn, last_pfn,
+				       freelist);
 
 	/* free pgd */
 	if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) {
@@ -2011,7 +2011,8 @@ static void domain_exit(struct dmar_domain *domain)
 	if (domain->pgd) {
 		struct page *freelist;
 
-		freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
+		freelist = domain_unmap(domain, 0,
+					DOMAIN_MAX_PFN(domain->gaw), NULL);
 		dma_free_pagelist(freelist);
 	}
 
@@ -3567,7 +3568,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
 	if (dev_is_pci(dev))
 		pdev = to_pci_dev(dev);
 
-	freelist = domain_unmap(domain, start_pfn, last_pfn);
+	freelist = domain_unmap(domain, start_pfn, last_pfn, NULL);
 	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
 			!has_iova_flush_queue(&domain->iovad)) {
 		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
@@ -4630,7 +4631,8 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
 			struct page *freelist;
 
 			freelist = domain_unmap(si_domain,
-						start_vpfn, last_vpfn);
+						start_vpfn, last_vpfn,
+						NULL);
 
 			rcu_read_lock();
 			for_each_active_iommu(iommu, drhd)
@@ -5603,10 +5605,8 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 				struct iommu_iotlb_gather *gather)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
-	struct page *freelist = NULL;
 	unsigned long start_pfn, last_pfn;
-	unsigned int npages;
-	int iommu_id, level = 0;
+	int level = 0;
 
 	/* Cope with horrid API which requires us to unmap more than the
 	   size argument if it happens to be a large-page mapping. */
@@ -5618,22 +5618,38 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 	start_pfn = iova >> VTD_PAGE_SHIFT;
 	last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT;
 
-	freelist = domain_unmap(dmar_domain, start_pfn, last_pfn);
-
-	npages = last_pfn - start_pfn + 1;
-
-	for_each_domain_iommu(iommu_id, dmar_domain)
-		iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
-				      start_pfn, npages, !freelist, 0);
-
-	dma_free_pagelist(freelist);
+	gather->freelist = domain_unmap(dmar_domain, start_pfn,
+					last_pfn, gather->freelist);
 
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;
 
+	iommu_iotlb_gather_add_page(domain, gather, iova, size);
+
 	return size;
 }
 
+static void intel_iommu_tlb_sync(struct iommu_domain *domain,
+				 struct iommu_iotlb_gather *gather)
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	unsigned long iova_pfn = IOVA_PFN(gather->start);
+	size_t size = gather->end - gather->start;
+	unsigned long start_pfn, last_pfn;
+	unsigned long nrpages;
+	int iommu_id;
+
+	nrpages = aligned_nrpages(gather->start, size);
+	start_pfn = mm_to_dma_pfn(iova_pfn);
+	last_pfn = start_pfn + nrpages - 1;
+
+	for_each_domain_iommu(iommu_id, dmar_domain)
+		iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
+				      start_pfn, nrpages, !gather->freelist, 0);
+
+	dma_free_pagelist(gather->freelist);
+}
+
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
 					    dma_addr_t iova)
 {
@@ -6093,6 +6109,7 @@ const struct iommu_ops intel_iommu_ops = {
 	.aux_get_pasid		= intel_iommu_aux_get_pasid,
 	.map			= intel_iommu_map,
 	.unmap			= intel_iommu_unmap,
+	.iotlb_sync		= intel_iommu_tlb_sync,
 	.iova_to_phys		= intel_iommu_iova_to_phys,
 	.probe_device		= intel_iommu_probe_device,
 	.probe_finalize		= intel_iommu_probe_finalize,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2ad26d8b4ab9..8109e061d46e 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -180,6 +180,7 @@ struct iommu_iotlb_gather {
 	unsigned long		start;
 	unsigned long		end;
 	size_t			pgsize;
+	struct page		*freelist;
 };
 
 /**
-- 
2.17.1