Subject: Re: [PATCH v2] drm/prime: fix extracting of the DMA addresses from a scatterlist
From: Robin Murphy
To: Marek Szyprowski, "Ruhl, Michael J", dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Bartlomiej Zolnierkiewicz, David Airlie, Shane Francis, stable@vger.kernel.org, Thomas Zimmermann, Alex Deucher
Date: Mon, 30 Mar 2020 14:10:14 +0100
Message-ID: <95fe655b-e68e-bea4-e8ea-3c4abc3021e7@arm.com>
In-Reply-To: <8a09916d-5413-f9a8-bafa-2d8f0b8f892f@samsung.com>
On 2020-03-29 10:55 am, Marek Szyprowski wrote:
> Hi Michael,
>
> On 2020-03-27 19:31, Ruhl, Michael J wrote:
>>> -----Original Message-----
>>> From: Marek Szyprowski
>>> Sent: Friday, March 27, 2020 12:21 PM
>>> To: dri-devel@lists.freedesktop.org; linux-samsung-soc@vger.kernel.org;
>>> linux-kernel@vger.kernel.org
>>> Cc: Marek Szyprowski; stable@vger.kernel.org; Bartlomiej
>>> Zolnierkiewicz; Maarten Lankhorst; Maxime Ripard; Thomas Zimmermann;
>>> David Airlie; Daniel Vetter; Alex Deucher; Shane Francis;
>>> Ruhl, Michael J
>>> Subject: [PATCH v2] drm/prime: fix extracting of the DMA addresses
>>> from a scatterlist
>>>
>>> Scatterlist elements contain both pages and DMA addresses, but one
>>> should not assume a 1:1 relation between them. The sg->length is the
>>> size of the physical memory chunk described by sg->page, while
>>> sg_dma_len(sg) is the size of the DMA (IO virtual) chunk described by
>>> sg_dma_address(sg).
>>>
>>> The proper way to extract both the pages and the DMA addresses of the
>>> whole buffer described by a scatterlist is to iterate independently
>>> over the sg->page/sg->length and sg_dma_address(sg)/sg_dma_len(sg)
>>> entries.
>>>
>>> Fixes: 42e67b479eab ("drm/prime: use dma length macro when mapping sg")
>>> Signed-off-by: Marek Szyprowski
>>> Reviewed-by: Alex Deucher
>>> ---
>>>  drivers/gpu/drm/drm_prime.c | 37 +++++++++++++++++++++++++------------
>>>  1 file changed, 25 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
>>> index 1de2cde2277c..282774e469ac 100644
>>> --- a/drivers/gpu/drm/drm_prime.c
>>> +++ b/drivers/gpu/drm/drm_prime.c
>>> @@ -962,27 +962,40 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
>>>  	unsigned count;
>>>  	struct scatterlist *sg;
>>>  	struct page *page;
>>> -	u32 len, index;
>>> +	u32 page_len, page_index;
>>>  	dma_addr_t addr;
>>> +	u32 dma_len, dma_index;
>>>
>>> -	index = 0;
>>> +	/*
>>> +	 * Scatterlist elements contain both pages and DMA addresses, but
>>> +	 * one should not assume a 1:1 relation between them. The
>>> +	 * sg->length is the size of the physical memory chunk described
>>> +	 * by sg->page, while sg_dma_len(sg) is the size of the DMA
>>> +	 * (IO virtual) chunk described by sg_dma_address(sg).
>>> +	 */
>>
>> Is there an example of what the scatterlist would look like in this
>> case?
>
> The DMA framework or IOMMU is allowed to join consecutive chunks while
> mapping, if such an operation is supported by the hardware. Here is an
> example:
>
> Let's assume that we have a scatterlist with four 4KiB pages at the
> physical addresses 0x12000000, 0x13011000, 0x13012000 and 0x11011000.
> The total size of the buffer is 16KiB. After mapping this scatterlist
> to a device behind an IOMMU, it may end up as a contiguous buffer at
> 0xf0010000 in the DMA (IOVA) address space.
> The scatterlist will look like this:
>
> sg[0].page     = 0x12000000
> sg[0].len      = 4096
> sg[0].dma_addr = 0xf0010000
> sg[0].dma_len  = 16384
> sg[1].page     = 0x13011000
> sg[1].len      = 4096
> sg[1].dma_addr = 0
> sg[1].dma_len  = 0
> sg[2].page     = 0x13012000
> sg[2].len      = 4096
> sg[2].dma_addr = 0
> sg[2].dma_len  = 0
> sg[3].page     = 0x11011000
> sg[3].len      = 4096
> sg[3].dma_addr = 0
> sg[3].dma_len  = 0
>
> (I've intentionally written the page as a physical address to make it
> easier to understand; in real SGs it is stored as a struct page
> pointer.)
>
>> Does each SG entry always have the page and dma info? Or could you
>> have entries that have page information only, and entries that have
>> dma info only?
>
> When an SG is not mapped yet, it contains only the ->page and ->len
> entries. I'm not aware of SGs with DMA information only, but in theory
> it might be possible to have such.
>
>> If the same entry has different size info (page_len = PAGE_SIZE,
>> dma_len = 4 * PAGE_SIZE?), are we guaranteed that the arrays (pages
>> and addrs) have been sized correctly?
>
> There are never more DMA-related entries than physical page entries.
> If there is a 1:1 mapping between physical memory and the DMA (IOVA)
> space, then each SG entry will have len == dma_len, and dma_addr will
> describe the same chunk as the page entry. The DMA mapping framework
> is only allowed to join entries while mapping to DMA (IOVA).

Nit: even in a 1:1 mapping, merging would still be permitted (subject to
dma_parms constraints) during a bounce-buffer copy, or if the caller
simply generates a naive list like so:

sg[0].page = 0x12000000
sg[0].len  = 4096
sg[1].page = 0x12001000
sg[1].len  = 4096

dma_map_sg() =>

sg[0].dma_addr = 0x12000000
sg[0].dma_len  = 8192
sg[1].dma_addr = 0
sg[1].dma_len  = 0

I'm not sure that any non-IOMMU DMA API implementations actually take
advantage of this, but they are *allowed* to ;)

Robin.