From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche, Damien Le Moal, Amir Goldstein, josef@toxicpanda.com, "Martin K. Petersen", daniel@iogearbox.net, Dan Williams, jack@suse.com, Zhu Yanjun
Subject: [RFC 04/16] iommu/dma: Provide an interface to allow preallocating IOVA
Date: Tue, 5 Mar 2024 12:15:14 +0200
X-Mailer: git-send-email 2.44.0
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Leon Romanovsky

Separate IOVA allocation into a dedicated callback so that the IOVA can be
cached and reused in fast paths for devices which support the ODP
(on-demand paging) mechanism.

Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 50 +++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 50ccc4f1ef81..e55726783501 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -356,7 +356,7 @@ int iommu_dma_init_fq(struct iommu_domain *domain)
 	atomic_set(&cookie->fq_timer_on, 0);
 	/*
 	 * Prevent incomplete fq state being observable. Pairs with path from
-	 * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova()
+	 * __iommu_dma_unmap() through __iommu_dma_free_iova() to queue_iova()
 	 */
 	smp_wmb();
 	WRITE_ONCE(cookie->fq_domain, domain);
@@ -760,7 +760,7 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
 	}
 }
 
-static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
+static dma_addr_t __iommu_dma_alloc_iova(struct iommu_domain *domain,
 		size_t size, u64 dma_limit, struct device *dev)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
@@ -806,7 +806,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	return (dma_addr_t)iova << shift;
 }
 
-static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
+static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		dma_addr_t iova, size_t size, struct iommu_iotlb_gather *gather)
 {
 	struct iova_domain *iovad = &cookie->iovad;
@@ -843,7 +843,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	if (!iotlb_gather.queued)
 		iommu_iotlb_sync(domain, &iotlb_gather);
 
-	iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
+	__iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -861,12 +861,12 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 
 	size = iova_align(iovad, size + iova_off);
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) {
-		iommu_dma_free_iova(cookie, iova, size, NULL);
+		__iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -970,7 +970,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 		return NULL;
 
 	size = iova_align(iovad, size);
-	iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
 	if (!iova)
 		goto out_free_pages;
 
@@ -1004,7 +1004,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 out_free_sg:
 	sg_free_table(sgt);
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
+	__iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
 	__iommu_dma_free_pages(pages, count);
 	return NULL;
@@ -1436,7 +1436,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	if (!iova_len)
 		return __finalise_sg(dev, sg, nents, 0);
 
-	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
+	iova = __iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova) {
 		ret = -ENOMEM;
 		goto out_restore_sg;
@@ -1453,7 +1453,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
+	__iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 out:
@@ -1706,6 +1706,30 @@ static size_t iommu_dma_opt_mapping_size(void)
 	return iova_rcache_range();
 }
 
+static dma_addr_t iommu_dma_alloc_iova(struct device *dev, size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	dma_addr_t dma_mask = dma_get_mask(dev);
+
+	size = iova_align(iovad, size);
+	return __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+}
+
+static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova,
+		size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	struct iommu_iotlb_gather iotlb_gather;
+
+	size = iova_align(iovad, size);
+	iommu_iotlb_gather_init(&iotlb_gather);
+	__iommu_dma_free_iova(cookie, iova, size, &iotlb_gather);
+}
+
 static const struct dma_map_ops iommu_dma_ops = {
 	.flags = DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc = iommu_dma_alloc,
@@ -1728,6 +1752,8 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.unmap_resource = iommu_dma_unmap_resource,
 	.get_merge_boundary = iommu_dma_get_merge_boundary,
 	.opt_mapping_size = iommu_dma_opt_mapping_size,
+	.alloc_iova = iommu_dma_alloc_iova,
+	.free_iova = iommu_dma_free_iova,
 };
 
 /*
@@ -1776,7 +1802,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
 	if (!iova)
 		goto out_free_page;
 
@@ -1790,7 +1816,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
+	__iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
-- 
2.44.0
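
[Editor's note: for context, a minimal sketch of how the new .alloc_iova/.free_iova
callbacks might be reached from the DMA API core. The dma_alloc_iova() and
dma_free_iova() wrappers below are hypothetical, assumed to be added elsewhere in
this RFC series; only the dma_map_ops members registered above come from this
patch. Failure is reported as a zero IOVA, matching the "if (!iova)" convention
used by the callers in dma-iommu.c.]

/*
 * Hypothetical dispatch helpers (not part of this patch), shown only to
 * illustrate how a caller would reach the new .alloc_iova/.free_iova ops.
 */
#include <linux/dma-map-ops.h>

dma_addr_t dma_alloc_iova(struct device *dev, size_t size)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* No callback means this backend cannot preallocate IOVA. */
	if (!ops || !ops->alloc_iova)
		return 0;
	return ops->alloc_iova(dev, size);
}

void dma_free_iova(struct device *dev, dma_addr_t iova, size_t size)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (ops && ops->free_iova)
		ops->free_iova(dev, iova, size);
}

[A caller wanting a reusable range would then do roughly
"iova = dma_alloc_iova(dev, len); ... dma_free_iova(dev, iova, len);",
with the actual mapping of pages into that preallocated range expected to be
handled by later patches in the series.]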