Date: Wed, 1 Sep 2021 17:10:09 -0400
From: Alyssa Rosenzweig
To: Sven Peter
Cc: iommu@lists.linux-foundation.org, Joerg Roedel, Will Deacon,
    Robin Murphy, Arnd Bergmann, Mohamed Mediouni, Alexander Graf,
    Hector Martin, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/8] iommu/dma: Disable get_sgtable for granule > PAGE_SIZE
References: <20210828153642.19396-1-sven@svenpeter.dev>
            <20210828153642.19396-4-sven@svenpeter.dev>

> My biggest issue is that I do not understand how this function is supposed
> to be used correctly. It would work fine as-is if it only ever gets passed
> buffers allocated by the coherent API, but there's no way to check or
> guarantee that. There may also be callers making assumptions that no longer
> hold when iovad->granule > PAGE_SIZE.
>
> Regarding your case: I'm not convinced the function is meant to be used
> there.
> If I understand it correctly, your code first allocates memory with
> dma_alloc_coherent (which possibly creates an sgt internally and then maps
> it with iommu_map_sg), then coerces that back into an sgt with
> dma_get_sgtable, and then maps that sgt to another iommu domain with
> dma_map_sg while assuming that the result will be contiguous in IOVA space.
> It'll work out because dma_alloc_coherent is the very thing meant to
> allocate pages that can be mapped into kernel and device VA space as a
> single contiguous block, and because both of your IOMMUs are different
> instances of the same HW block. Anything allocated by dma_alloc_coherent
> for the first IOMMU will have the right shape that will allow it to be
> mapped as a single contiguous block for the second IOMMU.
>
> What could be done in your case is to instead use the IOMMU API, allocate
> the pages yourself (while ensuring the sgt you create is made up of blocks
> with size and physaddr aligned to max(domain_a->granule, domain_b->granule))
> and then just use iommu_map_sg for both domains, which actually comes with
> the guarantee that the result will be a single contiguous block in IOVA
> space and doesn't require the sgt roundtrip.

In principle I agree. I am getting the sense this function can't be used
correctly in general, and yet it is the function that's meant to be used. If
my interpretation of prior LKML discussion holds, the problems run far deeper
than my code, or indeed than page size problems...

If the right way to handle this is with the IOMMU and IOVA APIs, I really
wish that dance were wrapped up in a safe helper function instead of being
open coded in every driver that does cross-device sharing. We might even call
that helper... hmm... dma_map_sg.... *ducks*
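For the curious, the open-coded version of that dance might look roughly like
the sketch below. This is untested pseudocode from me, not from the series:
map_shared_buffer is a made-up name, iova_a/iova_b must already be reserved by
the caller in each domain's IOVA space, error unwinding is elided, and
iommu_map_sg is shown with the five-argument form contemporary with this
thread. The granule is derived from pgsize_bitmap as a stand-in for Sven's
domain->granule.

#include <linux/gfp.h>
#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>

static int map_shared_buffer(struct iommu_domain *dom_a, unsigned long iova_a,
			     struct iommu_domain *dom_b, unsigned long iova_b,
			     size_t size, struct sg_table *sgt)
{
	/* Chunks must satisfy the larger of the two minimum page sizes. */
	size_t granule = max(1UL << __ffs(dom_a->pgsize_bitmap),
			     1UL << __ffs(dom_b->pgsize_bitmap));
	unsigned int i, nents = DIV_ROUND_UP(size, granule);
	size_t total = (size_t)nents * granule;
	struct scatterlist *sg;
	int ret;

	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
	if (ret)
		return ret;

	/*
	 * Buddy allocations are naturally aligned to their size, so every
	 * chunk has size and physaddr aligned to both domains' granules.
	 */
	for_each_sg(sgt->sgl, sg, nents, i) {
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
						get_order(granule));

		if (!page)
			return -ENOMEM; /* a real version frees prior chunks */
		sg_set_page(sg, page, granule, 0);
	}

	/* Each mapping is guaranteed contiguous in that domain's IOVA space. */
	if (iommu_map_sg(dom_a, iova_a, sgt->sgl, nents,
			 IOMMU_READ | IOMMU_WRITE) != total)
		return -EINVAL;
	if (iommu_map_sg(dom_b, iova_b, sgt->sgl, nents,
			 IOMMU_READ | IOMMU_WRITE) != total)
		return -EINVAL;

	return 0;
}

Unmapping would need iommu_unmap plus freeing each page, which is exactly the
kind of boilerplate a common helper could hide.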
For context for people-other-than-Sven, the code in question from my tree
appears below the break.

---------------------------------------------------------------------------------

/*
 * Allocate an IOVA-contiguous buffer mapped to the DCP. The buffer need not
 * be physically contiguous, however we should save the sgtable in case the
 * buffer needs to be later mapped for PIODMA.
 */
static bool dcpep_cb_allocate_buffer(struct apple_dcp *dcp, void *out, void *in)
{
	struct dcp_allocate_buffer_resp *resp = out;
	struct dcp_allocate_buffer_req *req = in;
	void *buf;

	resp->dva_size = ALIGN(req->size, 4096);
	resp->mem_desc_id = ++dcp->nr_mappings;

	if (resp->mem_desc_id >= ARRAY_SIZE(dcp->mappings)) {
		dev_warn(dcp->dev, "DCP overflowed mapping table, ignoring");
		return true;
	}

	buf = dma_alloc_coherent(dcp->dev, resp->dva_size, &resp->dva,
				 GFP_KERNEL);
	if (!buf) {
		/* Don't hand a NULL buffer to dma_get_sgtable. */
		dev_warn(dcp->dev, "failed to allocate buffer for DCP");
		return true;
	}

	dma_get_sgtable(dcp->dev, &dcp->mappings[resp->mem_desc_id], buf,
			resp->dva, resp->dva_size);

	WARN_ON(resp->mem_desc_id == 0);
	return true;
}

/*
 * Callback to map a buffer allocated with allocate_buf for PIODMA usage.
 * PIODMA is separate from the main DCP and uses its own IOVA space on a
 * dedicated stream of the display DART, rather than the expected DCP DART.
 *
 * XXX: This relies on dma_get_sgtable in concert with dma_map_sgtable, which
 * is a "fundamentally unsafe" operation according to the docs. And yet
 * everyone does it...
 */
static bool dcpep_cb_map_piodma(struct apple_dcp *dcp, void *out, void *in)
{
	struct dcp_map_buf_resp *resp = out;
	struct dcp_map_buf_req *req = in;
	struct sg_table *map;

	if (req->buffer >= ARRAY_SIZE(dcp->mappings))
		goto reject;

	map = &dcp->mappings[req->buffer];

	if (!map->sgl)
		goto reject;

	/* XNU leaks a kernel VA here, breaking kASLR. Don't do that. */
	resp->vaddr = 0;

	/* Use the PIODMA device instead of DCP to map against the right IOMMU. */
	resp->ret = dma_map_sgtable(dcp->piodma, map, DMA_BIDIRECTIONAL, 0);

	if (resp->ret)
		dev_warn(dcp->dev, "failed to map for piodma %d\n", resp->ret);
	else
		resp->dva = sg_dma_address(map->sgl);

	resp->ret = 0;
	return true;

reject:
	dev_warn(dcp->dev, "denying map of invalid buffer %llx for piodma\n",
		 req->buffer);
	resp->ret = EINVAL;
	return true;
}
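(Not shown above: teardown. Something along these lines would be needed so the
PIODMA mapping and the coherent buffer don't leak. dcpep_free_buffer is a name
I just made up, and it assumes buf/dva/size get tracked per mapping, which the
code above doesn't yet do.)

static void dcpep_free_buffer(struct apple_dcp *dcp, unsigned int id,
			      void *buf, dma_addr_t dva, size_t size)
{
	struct sg_table *map = &dcp->mappings[id];

	if (map->sgl) {
		/* Undo dma_map_sgtable on the PIODMA device... */
		dma_unmap_sgtable(dcp->piodma, map, DMA_BIDIRECTIONAL, 0);
		/* ...and release the sgt that dma_get_sgtable populated. */
		sg_free_table(map);
	}

	dma_free_coherent(dcp->dev, size, buf, dva);
}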