From: Sumit Semwal
Date: Tue, 22 Jan 2019 23:03:22 +0530
Subject: Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap
To: "Andrew F. Davis"
Cc: Brian Starkey, Liam Mark, Laura Abbott, Greg Kroah-Hartman, Arve Hjønnevåg, devel@driverdev.osuosl.org, linux-kernel@vger.kernel.org, dri-devel, nd, Daniel Vetter
List-ID: linux-kernel.vger.kernel.org

Hello everyone,

Sincere apologies for chiming in a bit late here; I was off due to some health issues. Also, adding Daniel Vetter to the mix, since he has been one of the core people who shaped dma-buf into what it is today.

On Tue, 22 Jan 2019 at 02:51, Andrew F. Davis wrote:
>
> On 1/21/19 5:22 AM, Brian Starkey wrote:
> > Hi,
> >
> > Sorry for being a bit sporadic on this. I was out travelling last week
> > with little time for email.
> >
> > On Fri, Jan 18, 2019 at 11:16:31AM -0600, Andrew F. Davis wrote:
> >> On 1/17/19 7:11 PM, Liam Mark wrote:
> >>> On Thu, 17 Jan 2019, Andrew F. Davis wrote:
> >>>
> >>>> On 1/16/19 4:54 PM, Liam Mark wrote:
> >>>>> On Wed, 16 Jan 2019, Andrew F.
Davis wrote:
> >>>>>
> >>>>>> On 1/16/19 9:19 AM, Brian Starkey wrote:
> >>>>>>> Hi :-)
> >>>>>>>
> >>>>>>> On Tue, Jan 15, 2019 at 12:40:16PM -0600, Andrew F. Davis wrote:
> >>>>>>>> On 1/15/19 12:38 PM, Andrew F. Davis wrote:
> >>>>>>>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>>>>>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>>>
> >>>>>>>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>>>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance here.
> >>>>>>>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>>>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed anyway.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Signed-off-by: Andrew F. Davis
> >>>>>>>>>>>>> ---
> >>>>>>>>>>>>>  drivers/staging/android/ion/ion.c | 7 ++++---
> >>>>>>>>>>>>>  1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>>>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>  	table = a->table;
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> -	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>>>>>>>> -			direction))
> >>>>>>>>>>>>> +	if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >>>>>>>>>>>>> +			      direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>>>>>>>
> >>>>>>>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>>>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache maintenance.
> >>>>>>>>>>>> If the calls to {begin,end}_cpu_access were made before the call to
> >>>>>>>>>>>> dma_buf_attach then there won't have been a device attached so the calls
> >>>>>>>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> That should be okay though, if you have no attachments (or all
> >>>>>>>>>>> attachments are IO-coherent) then there is no need for cache
> >>>>>>>>>>> maintenance. Unless you mean a sequence where a non-io-coherent device
> >>>>>>>>>>> is attached later after data has already been written. Does that
> >>>>>>>>>>> sequence need supporting?
> >>>>>>>>>>
> >>>>>>>>>> Yes, but also I think there are cases where CPU access can happen before
> >>>>>>>>>> in Android, but I will focus on later for now.
> >>>>>>>>>>
> >>>>>>>>>>> DMA-BUF doesn't have to allocate the backing
> >>>>>>>>>>> memory until map_dma_buf() time, and that should only happen after all
> >>>>>>>>>>> the devices have attached so it can know where to put the buffer. So we
> >>>>>>>>>>> shouldn't expect any CPU access to buffers before all the devices are
> >>>>>>>>>>> attached and mapped, right?
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Here is an example where CPU access can happen later in Android.
> >>>>>>>>>>
> >>>>>>>>>> Camera device records video -> software post processing -> video device
> >>>>>>>>>> (who does compression of raw data) and writes to a file
> >>>>>>>>>>
> >>>>>>>>>> In this example assume the buffer is cached and the devices are not
> >>>>>>>>>> IO-coherent (quite common).
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> This is the start of the problem, having cached mappings of memory that
> >>>>>>>>> is also being accessed non-coherently is going to cause issues one way
> >>>>>>>>> or another.
On top of the speculative cache fills that have to be
> >>>>>>>>> constantly fought back against with CMOs like below, some coherent
> >>>>>>>>> interconnects behave badly when you mix coherent and non-coherent access
> >>>>>>>>> (snoop filters get messed up).
> >>>>>>>>>
> >>>>>>>>> The solution is to either always have the addresses marked non-coherent
> >>>>>>>>> (like device memory, no-map carveouts), or if you really want to use
> >>>>>>>>> regular system memory allocated at runtime, then all cached mappings of
> >>>>>>>>> it need to be dropped, even the kernel logical address (as painful
> >>>>>>>>> as that would be).
> >>>>>>>
> >>>>>>> Ouch :-( I wasn't aware of these potential interconnect issues. How
> >>>>>>> "real" is that? It seems that we aren't really hitting that today on
> >>>>>>> real devices.
> >>>>>>>
> >>>>>>
> >>>>>> Sadly there is at least one real device like this now (TI AM654). We
> >>>>>> spent some time working with the ARM interconnect spec designers to see
> >>>>>> if this was allowed behavior; the final conclusion was that mixing coherent and
> >>>>>> non-coherent accesses is never a good idea. So we have been working to
> >>>>>> try to minimize any cases of mixed attributes [0]: if a region is
> >>>>>> coherent then everyone in the system needs to treat it as such, and
> >>>>>> vice-versa. Even clever CMOs that work on other systems won't save you
> >>>>>> here. :(
> >>>>>>
> >>>>>> [0] https://github.com/ARM-software/arm-trusted-firmware/pull/1553
> >>>>>>
> >
> > "Never a good idea" - but I think it should still be well defined by
> > the ARMv8 ARM (Section B2.8). Does this apply to your system?
> > > > "If the mismatched attributes for a memory location all assign th= e > > same shareability attribute to a Location that has a cacheable > > attribute, any loss of uniprocessor semantics, ordering, or coher= ency > > within a shareability domain can be avoided by use of software ca= che > > management" > > > > https://developer.arm.com/docs/ddi0487/latest/arm-architecture-re= ference-manual-armv8-for-armv8-a-architecture-profile > > > > If the cache is invalidated when switching between access types, > > shouldn't the snoop filters get un-messed-up? > > > > The details of the issue are this, our coherent interconnect (MSMC) ha= s > a snoop filter (for those following along at home it's a list of which > cache lines are currently inside each connected master so snoop reques= ts > can be filtered for masters that wont care). When a "NoSnoop"(non-cach= ed > or non-shareable) transaction is received for a location from any mast= er > it assumes that location cannot be in the cache of *any* master (as th= e > correct cache-line state transition a given core will take for that li= ne > is not defined by ARM spec), so it drops all records of that line. The > only way to recover from this is for every master to invalidate the li= ne > and pick it back up again so the snoop filer can re-learn who really h= as > it again. Invalidate on one core also doesn't propagate to the differe= nt > cores as those are requests are also blocked by the now confused snoop > filter, so each and every core must manually do it.. 
>
> It behaves much more like later in the ARMv8 ARM (Section B2.8):
>
> "If the mismatched attributes for a Location mean that multiple
> cacheable accesses to the Location might be made with different
> shareability attributes, then uniprocessor semantics, ordering, and
> coherency are guaranteed only if:
> • Each PE that accesses the Location with a cacheable attribute performs
> a clean and invalidate of the Location before and after accessing that
> Location.
> • A DMB barrier with scope that covers the full shareability of the
> accesses is placed between any accesses to the same memory Location that
> use different attributes."
>
> >>>>>>
> >>>>>>>>>
> >>>>>>>>>> ION buffer is allocated.
> >>>>>>>>>>
> >>>>>>>>>> //Camera device records video
> >>>>>>>>>> dma_buf_attach
> >>>>>>>>>> dma_map_attachment (buffer needs to be cleaned)
> >>>>>>>>>
> >>>>>>>>> Why does the buffer need to be cleaned here? I just got through reading
> >>>>>>>>> the thread linked by Laura in the other reply. I do like +Brian's
> >>>>>>>>
> >>>>>>>> Actually +Brian this time :)
> >>>>>>>>
> >>>>>>>>> suggestion of tracking if the buffer has had CPU access since the last
> >>>>>>>>> time and only flushing the cache if it has. As unmapped heaps never get
> >>>>>>>>> CPU mapped, this would never be the case for unmapped heaps; it solves my
> >>>>>>>>> problem.
> >>>>>>>>>
> >>>>>>>>>> [camera device writes to buffer]
> >>>>>>>>>> dma_buf_unmap_attachment (buffer needs to be invalidated)
> >>>>>>>>>
> >>>>>>>>> It doesn't know there will be any further CPU access; it could get freed
> >>>>>>>>> after this for all we know. The invalidate can be saved until the CPU
> >>>>>>>>> requests access again.
> >>>>>>>
> >>>>>>> We don't have any API to allow the invalidate to happen on CPU access
> >>>>>>> if all devices already detached. We need a struct device pointer to
> >>>>>>> give to the DMA API, otherwise on arm64 there'll be no invalidate.
> >>>>>>>
> >>>>>>> I had a chat with a few people internally after the previous
> >>>>>>> discussion with Liam. One suggestion was to use
> >>>>>>> DMA_ATTR_SKIP_CPU_SYNC in unmap_dma_buf, but only if there's at least
> >>>>>>> one other device attached (guarantees that we can do an invalidate in
> >>>>>>> the future if begin_cpu_access is called). If the last device
> >>>>>>> detaches, do a sync then.
> >>>>>>>
> >>>>>>> Conversely, in map_dma_buf, we would track if there was any CPU access
> >>>>>>> and use/skip the sync appropriately.
> >>>>>>>
> >>>>>>
> >>>>>> Now that I think this all through, I agree this patch is probably wrong.
> >>>>>> The real fix needs to be better handling in dma_map_sg() to deal
> >>>>>> with the case of the memory not being mapped (what I'm dealing with for
> >>>>>> unmapped heaps), and for cases when the memory in question is not cached
> >>>>>> (Liam's issue, I think). For both these cases dma_map_sg() does the
> >>>>>> wrong thing.
> >>>>>>
> >>>>>>> I did start poking the code to check out how that would look, but then
> >>>>>>> Christmas happened and I'm still catching back up.
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>> dma_buf_detach (device cannot stay attached because it is being sent down
> >>>>>>>>>> the pipeline and Camera doesn't know the end of the use case)
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> This seems like a broken use-case. I understand the desire to keep
> >>>>>>>>> everything as modular as possible and separate the steps, but at this
> >>>>>>>>> point no one owns this buffer's backing memory, not the CPU or any
> >>>>>>>>> device. I would go as far as to say DMA-BUF should be free now to
> >>>>>>>>> de-allocate the backing storage if it wants; that way it could get ready
> >>>>>>>>> for the next attachment, which may change the required backing memory
> >>>>>>>>> completely.
> >>>>>>>>>
> >>>>>>>>> All devices should attach before the first mapping, and only let go
> >>>>>>>>> after the task is complete; otherwise this buffer's data needs to be copied off
> >>>>>>>>> to a different location, or the CPU needs to take ownership in-between.
> >>>>>>>>>
> >>>>>>>
> >>>>>>> Yeah.. that's certainly the theory. Are there any DMA-BUF
> >>>>>>> implementations which actually do that? I hear it quoted a lot,
> >>>>>>> because that's what the docs say - but if the reality doesn't match
> >>>>>>> it, maybe we should change the docs.
> >>>>>>>
> >>>>>>
> >>>>>> Do you mean on the userspace side? I'm not sure; it seems like Android
> >>>>>> might be doing this wrong from what I can gather. From the kernel side, if
> >>>>>> you mean the "de-allocate the backing storage", we will have some cases
> >>>>>> like this soon, so I want to make sure userspace is not abusing DMA-BUF
> >>>>>> in ways not specified in the documentation. Changing the docs to force
> >>>>>> the backing memory to always be allocated breaks the central goal of
> >>>>>> having attach/map in DMA-BUF separate.
> >
> > Actually I meant in the kernel, in exporters. I haven't seen anyone
> > using the API as it was intended (defer allocation until first map,
> > migrate between different attachments, etc.). Mostly, backing storage
> > seems to get allocated at the point of export, and device mappings are
> > often held persistently (e.g. the DRM prime code maps the buffer at
> > import time, and keeps it mapped: drm_gem_prime_import_dev).
> >

So I suppose some clarification on the 'intended use' part of dma-buf
about deferred allocation is due, so here it is: (Daniel, please feel
free to chime in with your opinion here)

- dma-buf was of course designed as a framework to help intelligent
exporters defer allocation until first map, and be able to migrate
backing storage if required, etc.
At the same time, it is not a
_requirement_ on any exporter, so exporters so far have just used it as
a convenient mechanism for zero-copy.
- ION is one of the few dma-buf exporters in the kernel which satisfies
a certain set of expectations from its users.

> I haven't either, which is a shame as it allows for some really useful
> management strategies for shared memory resources. I'm working on one
> such case right now; maybe I'll get to be the first to upstream one :)
>

That will be a really good thing! Though perhaps we ought to think about
whether, for what you're trying to do, ION is the right place, or whether
you should have a device-specific exporter, available to users via
dma-buf APIs?

> > I wasn't aware that CPU access before first device access was
> > considered an abuse of the API - it seems like a valid thing to want
> > to do.
> >
>
> That's just it, I don't know if it is an abuse of the API; I'm trying to get
> some clarity on that. If we do want to allow early CPU access then that
> seems to be in contrast to the idea of deferred allocation until first
> device map. What is supposed to back the buffer if no devices have
> attached or mapped yet? Just some system memory, followed by migration on
> the first attach to the proper backing? Seems too time-wasteful to
> have a valid use.
>
> Maybe it should be up to the exporter if early CPU access is allowed?
>
> I'm hoping someone with authority over the DMA-BUF framework can clarify
> original intentions here.

I don't think dma-buf as a framework stops early CPU access, and the
exporter can definitely decide on that by implementing
begin_cpu_access / end_cpu_access operations to not allow early CPU
access, if it so desires.

>
> >>>>>>
> >>>>>>>>>> //buffer is sent down the pipeline
> >>>>>>>>>>
> >>>>>>>>>> // Userspace software post processing occurs
> >>>>>>>>>> mmap buffer
> >>>>>>>>>
> >>>>>>>>> Perhaps the invalidate should happen here in mmap.
> >>>>>>>>>
> >>>>>>>>>> DMA_BUF_IOCTL_SYNC IOCTL with flags DMA_BUF_SYNC_START // No CMO since no
> >>>>>>>>>> devices attached to buffer
> >>>>>>>>>
> >>>>>>>>> And that should be okay: mmap does the sync, and if no devices are
> >>>>>>>>> attached nothing could have changed the underlying memory in the
> >>>>>>>>> mean-time, so DMA_BUF_SYNC_START can safely be a no-op as it is.
> >>>>>>>
> >>>>>>> Yeah, that's true - so long as you did an invalidate in unmap_dma_buf.
> >>>>>>> Liam was saying that it's too painful for them to do that every time a
> >>>>>>> device unmaps - when in many cases (device->device, no CPU) it's not
> >>>>>>> needed.
> >>>>>>
> >>>>>> Invalidates are painless, at least compared to a real cache flush: just
> >>>>>> set the invalid bit vs actually writing out lines. I thought the issue
> >>>>>> was on the map side.
> >>>>>>
> >>>>>
> >>>>> Invalidates aren't painless for us because we have a coherent system cache,
> >>>>> so clean lines get written out.
> >>>>
> >>>> That seems very broken; why would clean lines ever need to be written
> >>>> out? That defeats the whole point of having the invalidate separate from
> >>>> clean. How do you deal with stale cache lines? I guess in your case this
> >>>> is what forces you to have to use uncached memory for DMA-able memory.
> >>>>
> >>>
> >>> My understanding is that our ARM invalidate is a clean + invalidate. I had
> >>> concerns about the clean lines being written to the system cache as
> >>> part of the 'clean', but the following 'invalidate' would take care of
> >>> actually invalidating the lines (so nothing broken).
> >>> But I am probably wrong on this and it is probably smart enough not to do the
> >>> writing of the clean lines.
> >>>
> >>
> >> You are correct that for a lot of ARM cores "invalidate" is always a
> >> "clean + invalidate".
At first I thought this was kinda silly, as there
> >> is now no way to mark a dirty line invalid without it getting written
> >> out first, but if you think about it, any dirty cache-line can be written
> >> out (cleaned) at any time anyway, so this doesn't actually change system
> >> behavior. You should just not write to memory (make the line dirty)
> >> anything you don't want eventually written out.
> >>
> >> Point two: it's not just smart enough to not write out clean lines, it
> >> is guaranteed not to write them out by the spec. Otherwise, since
> >> cache-lines can be randomly filled, if those same clean lines got written
> >> out on invalidate operations there would be no way to maintain coherency
> >> and things would be written over top of each other all over the place.
> >>
> >>> But regardless, targets supporting a coherent system cache are a legitimate
> >>> configuration, and an invalidate on this configuration does have to go to
> >>> the bus to invalidate the system cache (which isn't free), so I don't think
> >>> you can make the assumption that invalidates are cheap so that it is okay
> >>> to do them (even if they are not needed) on every dma unmap.
> >>>
> >>
> >> Very true. CMOs need to be broadcast to other coherent masters on a
> >> coherent interconnect (and the interconnect itself if it has a cache as
> >> well (L3)), so not 100% free, but almost: just the infinitesimal cost of
> >> the cache tag check in hardware. If there are no non-coherent devices
> >> attached then the CMOs are no-ops; if there are, then the data needs to
> >> be written out either way, and doing it on every access as is done with
> >> uncached memory (minus any write combining) will blow away any saving made
> >> from the one less CMO. Either way you lose with uncached mappings of
> >> memory. If I'm wrong I would love to know.
> >>
> >
> > From what I understand, the current DMA APIs are not equipped to
> > handle having coherent and non-coherent devices attached at the same
> > time. The buffer is either in "CPU land" or "Device land"; there's no
> > smaller granule of "Coherent Device land" or "Non-Coherent Device
> > land".
> >
> > I think if there are devices which are making coherent accesses, and
> > devices which are making non-coherent accesses, then we can't support
> > them being attached at the same time without some enhancements to the
> > APIs.
> >
>
> I think you are right, we only handle sync to/from the CPU out to
> "Device land". To sync from device to device I'm not sure there is
> anything right now; they all have to be able to talk to each other
> without any maintenance from the host CPU.
>
> This will probably lead to some interesting cases like in OpenVX, where a
> key selling point is keeping the host out of the loop and letting the remote
> devices do all the sharing between themselves.
>
> >>>>> And these invalidates can occur on fairly large buffers.
> >>>>>
> >>>>> That is why we haven't gone with using cached ION memory and "tracking CPU
> >>>>> access", because it only solves half the problem, ie there isn't a way to
> >>>>> safely skip the invalidate (because we can't read the future).
> >>>>> Our solution was to go with uncached ION memory (when possible), but as
> >>>>> you can see in other discussions, upstream support for uncached memory has
> >>>>> its own issues.
> >>>>>
> >
> > @Liam, in your problematic use-cases, are both devices detached when
> > the buffer moves between them?
> >
> > 1) dev 1 map, access, unmap
> > 2) dev 1 detach
> > 3) (maybe) CPU access
> > 4) dev 2 attach
> > 5) dev 2 map, access
> >
> > I still think a pretty pragmatic solution is to use
> > DMA_ATTR_SKIP_CPU_SYNC until the last device detaches. That won't work
> > if your access sequence looks like above...
> >
> > ...however, if your sequence looks like above, then you probably need
> > to keep at least one of the devices attached anyway. Otherwise, per
> > the API, the buffer could get migrated after 2)/before 5). That will
> > surely hurt a lot more than an invalidate.
> >
> >>>>
> >>>> Sounds like you need to fix upstream support then; finding a way to drop
> >>>> all cacheable mappings of memory you want to make uncached mappings for
> >>>> seems to be the only solution.
> >>>>
> >>>
> >>> I think we can probably agree that there wouldn't be a good way to remove
> >>> cached mappings without causing an unacceptable performance degradation,
> >>> since it would fragment all the nice 1GB kernel mappings we have.
> >>>
> >>> So I am trying to find an alternative solution.
> >>>
> >>
> >> I'm not sure there is a better solution. How hard is this solution to
> >> implement anyway? The kernel already has to make gaps and cut up that
> >> nice 1GB mapping when you make a reserved memory space in the lowmem
> >> area, so all the logic is probably already implemented. Just need to
> >> allow it to be hooked into from Ion when doing the uncached mappings.
> >>
> >
> > I haven't looked recently, but I'm not sure the early memblock code
> > can be reused as-is at runtime. I seem to remember it makes a bunch of
> > assumptions about the fact that it's running "early".
> >
> > If CPU-uncached mappings of normal system memory are really the way
> > forward, I could envisage a heap which maintains a pool of chunks of
> > memory which it removed from the kernel mapping. The pool could grow
> > (remove more pages from the kernel mapping)/shrink (add them back to
> > the kernel mapping) as needed.
> >
> > John Reitan implemented a compound-page heap, which used compaction to
> > get a pool of 2MB contiguous pages. Something like that would at least
> > prevent needing full 4kB granularity when removing things from the
> > kernel mapping.
> >
> > Even better, could it somehow be restricted to a region which is
> > already fragmented? (e.g. the one which was used for the default CMA
> > heap)
> >
> > Thanks,
> > -Brian
> >
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>> [CPU reads/writes to the buffer]
> >>>>>>>>>> DMA_BUF_IOCTL_SYNC IOCTL with flags DMA_BUF_SYNC_END // No CMO since no
> >>>>>>>>>> devices attached to buffer
> >>>>>>>>>> munmap buffer
> >>>>>>>>>>
> >>>>>>>>>> //buffer is sent down the pipeline
> >>>>>>>>>> // Buffer is sent to video device (who does compression of raw data) and
> >>>>>>>>>> writes to a file
> >>>>>>>>>> dma_buf_attach
> >>>>>>>>>> dma_map_attachment (buffer needs to be cleaned)
> >>>>>>>>>> [video device writes to buffer]
> >>>>>>>>>> dma_buf_unmap_attachment
> >>>>>>>>>> dma_buf_detach (device cannot stay attached because it is being sent down
> >>>>>>>>>> the pipeline and Video doesn't know the end of the use case)
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>>> Also, ION no longer provides DMA-ready memory, so if you are not doing CPU
> >>>>>>>>>>>> access then there is no requirement (that I am aware of) for you to call
> >>>>>>>>>>>> {begin,end}_cpu_access before passing the buffer to the device, and if this
> >>>>>>>>>>>> buffer is cached and your device is not IO-coherent then the cache maintenance
> >>>>>>>>>>>> in ion_map_dma_buf and ion_unmap_dma_buf is required.
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> If I am not doing any CPU access then why do I need CPU cache
> >>>>>>>>>>> maintenance on the buffer?
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Because ION no longer provides DMA-ready memory.
> >>>>>>>>>> Take the above example.
> >>>>>>>>>>
> >>>>>>>>>> ION allocates memory from the buddy allocator and requests zeroing.
> >>>>>>>>>> Zeros are written to the cache.
> >>>>>>>>>>
> >>>>>>>>>> You pass the buffer to the camera device, which is not IO-coherent.
> >>>>>>>>>> The camera device writes directly to the buffer in DDR.
> >>>>>>>>>> Since you didn't clean the buffer, a dirty cache line (one of the zeros) is
> >>>>>>>>>> evicted from the cache; this zero overwrites data the camera device has
> >>>>>>>>>> written, which corrupts your data.
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> The zeroing *is* a CPU access, therefore it should handle the needed CMO
> >>>>>>>>> for CPU access at the time of zeroing.
> >>>>>>>>>
> >>>>>>>
> >>>>>>> Actually that should be at the point of the first non-coherent device
> >>>>>>> mapping the buffer, right? No point in doing a CMO if the future accesses
> >>>>>>> are coherent.
> >>>>>>
> >>>>>> I see your point; as long as the zeroing is guaranteed to be the first
> >>>>>> access to this buffer then it should be safe.
> >>>>>>
> >>>>>> Andrew
> >>>>>>
> >>>>>>>
> >>>>>>> Cheers,
> >>>>>>> -Brian
> >>>>>>>
> >>>>>>>>> Andrew
> >>>>>>>>>
> >>>>>>>>>> Liam
> >>>>>>>>>>
> >>>>>>>>>> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> >>>>>>>>>> a Linux Foundation Collaborative Project
> >>>>>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>>

Best,
Sumit.