Subject: Re: [PATCH 2/3] drm/msm: add DRM_MSM_GEM_SYNC_CACHE for non-coherent cache maintenance
From: Jonathan Marek
To: Christoph Hellwig
Cc: freedreno@lists.freedesktop.org, Rob Clark, Sean Paul, David Airlie,
    Daniel Vetter, "open list:DRM DRIVER FOR MSM ADRENO GPU",
    "open list:DRM DRIVER FOR MSM ADRENO GPU", open list,
    iommu@lists.linux-foundation.org, Joerg Roedel, Robin Murphy
Date: Tue, 6 Oct 2020 09:19:32 -0400
Message-ID: <148a1660-f0fc-7163-2240-6b94725342b5@marek.ca>
In-Reply-To: <20201006072306.GA12834@infradead.org>
References: <20201001002709.21361-1-jonathan@marek.ca>
 <20201001002709.21361-3-jonathan@marek.ca>
 <20201002075321.GA7547@infradead.org>
 <20201005082914.GA31702@infradead.org>
 <3e0b91be-e4a4-4ea5-7d58-6e71b8d51932@marek.ca>
 <20201006072306.GA12834@infradead.org>
List-ID: linux-kernel@vger.kernel.org

On 10/6/20 3:23 AM, Christoph Hellwig wrote:
> On Mon, Oct 05, 2020 at 10:35:43AM -0400, Jonathan Marek wrote:
>> The cache synchronization doesn't have anything to do with the IOMMU (for
>> example: cache synchronization would be useful in cases where drm/msm
>> doesn't use an IOMMU).
>
> It has to do with doing DMA. And we have two frameworks for doing DMA:
> the DMA API, which is for general driver use and which as part of its
> design includes cache maintenance hidden behind the concept of
> ownership transfers; and the much more bare-bones IOMMU API.
>
> If people want to use the "raw" IOMMU API with non-cache-coherent
> devices we'll need a cache maintenance API that goes along with it.
> It could either be formally part of the IOMMU API or be separate.
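
To make sure we are talking about the same thing, here is the difference
as I see it, as a rough sketch only (none of this is actual drm/msm code;
dev, domain, vaddr, iova and paddr are stand-ins):

#include <linux/dma-mapping.h>
#include <linux/iommu.h>

static void dma_api_example(struct device *dev, void *vaddr, size_t size)
{
	/* DMA API: cache maintenance is hidden in the ownership transfers */
	dma_addr_t dma = dma_map_single(dev, vaddr, size, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return;

	/* ... device reads the buffer ... */

	/* unmap hands ownership back to the CPU, syncing caches as needed */
	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
}

static int raw_iommu_example(struct iommu_domain *domain, unsigned long iova,
			     phys_addr_t paddr, size_t size)
{
	/* raw IOMMU API: only sets up the translation, never touches caches */
	int ret = iommu_map(domain, iova, paddr, size, IOMMU_READ | IOMMU_WRITE);

	/*
	 * nothing here cleans/invalidates the CPU caches; this is where a
	 * maintenance call (arch_sync_dma_for_device(paddr, size, dir) today,
	 * or a new IOMMU-level wrapper around it) would have to go
	 */
	return ret;
}

With the first one the cache maintenance comes "for free" with the
ownership transfer; with the second one nothing ever touches the CPU
caches, which is the gap this patch is trying to fill.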
>
>> What is needed is to call arch_sync_dma_for_{cpu,device} (which is what I
>> went with initially, but then decided to re-use drm/msm's
>> sync_for_{cpu,device}). But you are also saying those functions aren't for
>> driver use, and I doubt IOMMU maintainers will want to add wrappers for
>> these functions just to satisfy this "not for driver use" requirement.
>
> arch_sync_dma_for_{cpu,device} are low-level helpers (and not very
> great ones at that). They definitely should not be used by drivers.
> They would be very useful building blocks for an IOMMU cache
> maintenance API.
>
> Of course the best outcome would be if we could find a way for the MSM
> drm driver to just use the DMA API and not deal with the lower-level
> abstractions. Do you remember why the driver went for use of the IOMMU
> API?
>

One example of why drm/msm can't use the DMA API is multiple page table
support (which is landing in 5.10), something that definitely couldn't
work with the DMA API.

Another is being able to choose the address for mappings, which AFAIK
the DMA API can't do. (Somewhat related: qcom hardware often has ranges
of allowed addresses, which the dma_mask mechanism fails to represent;
what I see is drivers using dma_mask as a "maximum address", and since
addresses are allocated from the top it generally works.)

But let us imagine drm/msm switches to the DMA API anyway: a2xx GPUs
have their own very basic MMU (implemented by msm_gpummu.c), which would
then need to implement dma_map_ops, and those ops would have to call
arch_sync_dma_for_{cpu,device}. So drm/msm still ends up calling
arch_sync_dma_for_{cpu,device} in that scenario.
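
To illustrate that last point, this is roughly what I mean. Purely a
sketch: the gpummu_* helpers below are made-up stand-ins for
msm_gpummu's page table handling, and as you said
arch_sync_dma_for_{cpu,device} isn't really meant to be called from
something like this today.

#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>	/* arch_sync_dma_for_{cpu,device} */

/* made-up stand-ins for msm_gpummu internals */
extern dma_addr_t gpummu_map_phys(struct device *dev, phys_addr_t phys,
				  size_t size);
extern phys_addr_t gpummu_iova_to_phys(struct device *dev, dma_addr_t iova);

static dma_addr_t gpummu_dma_map_page(struct device *dev, struct page *page,
				      unsigned long offset, size_t size,
				      enum dma_data_direction dir,
				      unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;

	/* the gpummu is not cache coherent, so the ops still end up here: */
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(phys, size, dir);

	return gpummu_map_phys(dev, phys, size);
}

static void gpummu_dma_sync_single_for_cpu(struct device *dev,
					   dma_addr_t iova, size_t size,
					   enum dma_data_direction dir)
{
	arch_sync_dma_for_cpu(gpummu_iova_to_phys(dev, iova), size, dir);
}

static const struct dma_map_ops gpummu_dma_ops = {
	.map_page		= gpummu_dma_map_page,
	.sync_single_for_cpu	= gpummu_dma_sync_single_for_cpu,
	/* .unmap_page, .sync_single_for_device, etc. omitted */
};

So even in a pure DMA API world the arch_sync_dma_for_{cpu,device}
calls don't go away, they just move into a dma_map_ops implementation
that drm/msm would have to provide.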