Subject: Re: [PATCH 2/3] drm/etnaviv: fix dma configuration of the virtual device
From: Robin Murphy
To: Lucas Stach, Michael Walle, etnaviv@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Cc: Lukas F. Hartmann, Marek Vasut, Russell King, Christian Gmeiner,
    David Airlie, Daniel Vetter
Date: Thu, 26 Aug 2021 21:15:13 +0100
Message-ID: <01fa99f2-8d19-0cd2-232f-4ba1f3171f24@arm.com>
References: <20210826121006.685257-1-michael@walle.cc>
    <20210826121006.685257-3-michael@walle.cc>

On 2021-08-26 16:17, Lucas Stach wrote:
> On Thursday, 2021-08-26 at 16:00 +0100, Robin Murphy wrote:
>> On 2021-08-26 13:10, Michael Walle wrote:
>>> The DMA configuration of the virtual device is inherited from the first
>>> actual etnaviv device. Unfortunately, this doesn't work with an IOMMU:
>>>
>>> [ 5.191008] Failed to set up IOMMU for device (null); retaining platform DMA ops
>>>
>>> This is because there is no iommu_group associated with the device. The
>>> group is set in iommu_group_add_device(), which is eventually called by
>>> device_add() via the platform bus:
>>>
>>>   device_add()
>>>     blocking_notifier_call_chain()
>>>       iommu_bus_notifier()
>>>         iommu_probe_device()
>>>           __iommu_probe_device()
>>>             iommu_group_get_for_dev()
>>>               iommu_group_add_device()
>>>
>>> Move of_dma_configure() into the probe function, which is called after
>>> device_add(). Normally, the platform code will already call it itself
>>> if .of_node is set. Unfortunately, this isn't the case here.
>>>
>>> Also move the dma mask assignments to probe() to keep all DMA related
>>> settings together.
>>
>> I assume the driver must already keep track of the real GPU platform
>> device in order to map registers, request interrupts, etc. correctly -
>> can't it also correctly use that device for DMA API calls and avoid the
>> need for these shenanigans altogether?
>>
> Not without a bigger rework.
> There are still quite a few midlayer issues in DRM, where dma-buf
> imports are dma-mapped and cached via the virtual DRM device instead
> of the real GPU device. Also, etnaviv is able to coalesce multiple
> Vivante GPUs in a single system under one virtual DRM device, which
> is used on i.MX6, where the 2D and 3D GPUs are separate peripherals
> but have the same DMA constraints.

Sure, I wouldn't expect it to be trivial to fix properly, but I wanted
to point out that this is essentially a hack, relying on an implicit
side-effect of of_dma_configure() which is already slated for removal.
As such, I for one am not going to be too sympathetic if it stops
working in future.

Furthermore, even today it doesn't work in general - it might be OK for
LS1028A with a single GPU block behind an SMMU, but as soon as you have
multiple GPU blocks with distinct SMMU StreamIDs, or behind different
IOMMU instances, then you're stuffed again. Although in fact I think
it's also broken even for LS1028A, since AFAICS there's no guarantee
that the relevant SMMU instance will actually be probed, or the SMMU
driver even loaded, when etnaviv_pdev_probe() runs.

> Effectively we would need to handle N devices for the dma-mapping in a
> lot of places instead of only dealing with the one virtual DRM device.
> It would probably be the right thing to do anyway, but it's not
> something that can be changed short-term. I'm also not yet sure about
> the performance implications, as we might run into some cache
> maintenance bottlenecks if we dma-synchronize buffers to multiple real
> devices instead of doing it a single time with the virtual DRM device.
> I know, I know, this has a lot of assumptions baked in that could fall
> apart if someone builds a SoC with multiple Vivante GPUs that have
> differing DMA constraints, but up until now hardware designers have
> not been *that* crazy, fortunately.
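[Editor's note: as a rough, untested illustration of the per-device
handling Lucas describes - the field names and pipe-count constant are
approximations of the etnaviv structures, not the literal driver code -
syncing one imported buffer would turn from a single call on the
virtual DRM device into a loop over every real GPU device:]

```c
/*
 * Hypothetical sketch only: cache maintenance repeated for each real
 * GPU device instead of done once via the virtual DRM device. The
 * etnaviv_drm_private layout and ETNAVIV_MAX_PIPES are assumptions
 * modelled on the driver, not guaranteed to match it.
 */
static void etnaviv_sync_sgt_for_all_gpus(struct etnaviv_drm_private *priv,
					  struct sg_table *sgt)
{
	int i;

	for (i = 0; i < ETNAVIV_MAX_PIPES; i++) {
		struct etnaviv_gpu *gpu = priv->gpu[i];

		/* Skip pipes that were never populated on this SoC. */
		if (gpu)
			dma_sync_sgtable_for_device(gpu->dev, sgt,
						    DMA_TO_DEVICE);
	}
}
```

[This is exactly the potential bottleneck mentioned above: on an i.MX6
with both 2D and 3D GPUs present, every sync would walk the cache lines
once per GPU rather than once in total.]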
I'm not too familiar with the component stuff, but would it be viable to
just have etnaviv_gpu_platform_probe() set up the first GPU which comes
along as the master component and fundamental DRM device, then treat any
subsequent ones as subcomponents as before? That would at least stand to
be more robust in terms of obviating the of_dma_configure() hack (only
actual bus code should ever be calling that), even if it won't do
anything for the multiple-IOMMU-mapping or differing-DMA-constraints
problems.

Thanks,
Robin.
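[Editor's note: the shape of Robin's suggestion might look something
like the following hand-waved sketch. The "first probe wins" flag, the
etnaviv_add_components() helper, and the ops structures are invented
for illustration; only component_master_add_with_match() and
component_add() are the real component-framework entry points.]

```c
/*
 * Hypothetical sketch: the first Vivante GPU to probe registers itself
 * as the component master (and hence the DRM device); any GPUs probing
 * later register as plain components, as before. No virtual platform
 * device is created, so the of_dma_configure() hack is unnecessary -
 * each GPU keeps the DMA configuration the bus code gave it.
 */
static bool etnaviv_master_registered;

static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;

	if (!etnaviv_master_registered) {
		struct component_match *match = NULL;

		/*
		 * Hypothetical helper: add a match entry for every
		 * etnaviv-compatible node, including this one.
		 */
		etnaviv_add_components(dev, &match);

		etnaviv_master_registered = true;
		return component_master_add_with_match(dev,
						       &etnaviv_master_ops,
						       match);
	}

	return component_add(dev, &gpu_ops);
}
```

[DMA API calls would then naturally go through the per-GPU struct
device, though as noted this alone fixes neither the multiple-IOMMU
case nor differing DMA constraints between GPUs.]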