From: Sven Peter <sven@svenpeter.dev>
To: iommu@lists.linux-foundation.org
Cc: Sven Peter, Joerg Roedel, Will Deacon, Robin Murphy, Arnd Bergmann,
	Mohamed Mediouni, Alexander Graf, Hector Martin, Alyssa Rosenzweig,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/8] iommu/dma: Support PAGE_SIZE < iovad->granule allocations
Date: Sat, 28 Aug 2021 17:36:39 +0200
Message-Id: <20210828153642.19396-6-sven@svenpeter.dev>
X-Mailer: git-send-email 2.30.1 (Apple Git-130)
In-Reply-To: <20210828153642.19396-1-sven@svenpeter.dev>
References: <20210828153642.19396-1-sven@svenpeter.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

Noncontiguous allocations must be made up of individual blocks in a way
that allows those blocks to be mapped contiguously in IOVA space. For
IOMMU page sizes larger than the CPU page size this can be done by
allocating all individual blocks from pools with
order >= get_order(iovad->granule). Some spillover pages might be
allocated at the end, which can however immediately be freed.

Signed-off-by: Sven Peter <sven@svenpeter.dev>
---
 drivers/iommu/dma-iommu.c | 99 +++++++++++++++++++++++++++++++++++----
 1 file changed, 89 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index a091cff5829d..e57966bcfae1 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -618,6 +619,9 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 {
         struct page **pages;
         unsigned int i = 0, nid = dev_to_node(dev);
+        unsigned int j;
+        unsigned long min_order = __fls(order_mask);
+        unsigned int min_order_size = 1U << min_order;
 
         order_mask &= (2U << MAX_ORDER) - 1;
         if (!order_mask)
@@ -657,15 +661,37 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
                                 split_page(page, order);
                         break;
                 }
-                if (!page) {
-                        __iommu_dma_free_pages(pages, i);
-                        return NULL;
+
+                /*
+                 * If we have no valid page here we might be trying to allocate
+                 * the last block consisting of 1<
[...]
         unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
+        struct scatterlist *last_sg;
         struct page **pages;
         dma_addr_t iova;
+        phys_addr_t orig_s_phys;
+        size_t orig_s_len, orig_s_off, s_iova_off, iova_size;
 
         if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
             iommu_deferred_attach(dev, domain))
                 return NULL;
 
         min_size = alloc_sizes & -alloc_sizes;
-        if (min_size < PAGE_SIZE) {
+        if (iovad->granule > PAGE_SIZE) {
+                if (size < iovad->granule) {
+                        /* ensure a single contiguous allocation */
+                        min_size = ALIGN(size, PAGE_SIZE*(1U<
[...]
-        iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
+        iova_size = iova_align(iovad, size);
+        iova = iommu_dma_alloc_iova(domain, iova_size, dev->coherent_dma_mask, dev);
         if (!iova)
                 goto out_free_pages;
 
-        if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
+        last_sg = __sg_alloc_table_from_pages(sgt, pages, count, 0, iova_size,
+                                              UINT_MAX, NULL, 0, GFP_KERNEL);
+        if (IS_ERR(last_sg))
                 goto out_free_iova;
 
         if (!(ioprot & IOMMU_CACHE)) {
@@ -721,18 +760,58 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
                         arch_dma_prep_coherent(sg_page(sg), sg->length);
         }
 
+        if (iovad->granule > PAGE_SIZE) {
+                if (size < iovad->granule) {
+                        /*
+                         * we only have a single sg list entry here that is
+                         * likely not aligned to iovad->granule. adjust the
+                         * entry to represent the encapsulating IOMMU page
+                         * and then later restore everything to its original
+                         * values, similar to the impedance matching done in
+                         * iommu_dma_map_sg.
+                         */
+                        orig_s_phys = sg_phys(sgt->sgl);
+                        orig_s_len = sgt->sgl->length;
+                        orig_s_off = sgt->sgl->offset;
+                        s_iova_off = iova_offset(iovad, orig_s_phys);
+
+                        sg_set_page(sgt->sgl,
+                                phys_to_page(orig_s_phys - s_iova_off),
+                                iova_align(iovad, orig_s_len + s_iova_off),
+                                sgt->sgl->offset & ~s_iova_off);
+                } else {
+                        /*
+                         * convince iommu_map_sg_atomic to map the last block
+                         * even though it may be too small.
+                         */
+                        orig_s_len = last_sg->length;
+                        last_sg->length = iova_align(iovad, last_sg->length);
+                }
+        }
+
         if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
-                        < size)
+                        < iova_size)
                 goto out_free_sg;
 
+        if (iovad->granule > PAGE_SIZE) {
+                if (size < iovad->granule) {
+                        sg_set_page(sgt->sgl, phys_to_page(orig_s_phys),
+                                    orig_s_len, orig_s_off);
+
+                        iova += s_iova_off;
+                } else {
+                        last_sg->length = orig_s_len;
+                }
+        }
+
         sgt->sgl->dma_address = iova;
-        sgt->sgl->dma_length = size;
+        sgt->sgl->dma_length = iova_size;
         return pages;
 
 out_free_sg:
         sg_free_table(sgt);
 out_free_iova:
-        iommu_dma_free_iova(cookie, iova, size, NULL);
+        iommu_dma_free_iova(cookie, iova, iova_size, NULL);
 out_free_pages:
         __iommu_dma_free_pages(pages, count);
         return NULL;
-- 
2.25.1
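
The sizing argument in the commit message can be checked with plain arithmetic.
The following is a small standalone C sketch, not part of the patch and not
kernel code: it mimics get_order(iovad->granule) in userspace, and the 4 KiB
page size, 16 KiB granule, and 5-page request are purely illustrative
assumptions. It shows that any block whose order is at least
get_order(granule) covers a whole number of IOMMU granules (so such blocks can
be mapped back to back in IOVA space), and how a final block that overshoots
the request leaves a few spillover pages that can be freed again immediately.

#include <stdio.h>

/* Illustrative, assumed values: 4 KiB CPU pages, 16 KiB IOMMU granule. */
#define CPU_PAGE_SHIFT  12
#define CPU_PAGE_SIZE   (1UL << CPU_PAGE_SHIFT)
#define IOMMU_GRANULE   16384UL

/*
 * Userspace stand-in for get_order(granule): smallest order such that one
 * block of (CPU_PAGE_SIZE << order) bytes covers at least one granule.
 */
static unsigned int granule_order(void)
{
        unsigned int order = 0;

        while ((CPU_PAGE_SIZE << order) < IOMMU_GRANULE)
                order++;
        return order;
}

int main(void)
{
        unsigned int min_order = granule_order();
        unsigned long block_pages = 1UL << min_order;
        unsigned long need_pages = 5;   /* e.g. a 20 KiB request (assumed) */
        unsigned long blocks, spill;

        /*
         * Every block of order >= min_order is a whole number of granules,
         * so physically contiguous blocks can be chained and still be
         * mapped with granule-sized IOMMU pages into one IOVA range.
         */
        printf("min order %u -> block size %lu bytes = %lu granule(s)\n",
               min_order, CPU_PAGE_SIZE << min_order,
               (CPU_PAGE_SIZE << min_order) / IOMMU_GRANULE);

        /*
         * If the request is not a multiple of the block size, the final
         * block overshoots; the tail pages are the spillover that can be
         * freed right away.
         */
        blocks = (need_pages + block_pages - 1) / block_pages;
        spill = blocks * block_pages - need_pages;
        printf("%lu page(s) requested: %lu block(s), %lu spillover page(s)\n",
               need_pages, blocks, spill);
        return 0;
}

With the assumed values this prints a minimum order of 2 (one 16 KiB block per
granule) and, for the 5-page example, two blocks with 3 spillover pages.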