Subject: Re: [RFC 2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control words to netdev dma client
From: Lars-Peter Clausen <lars@metafoo.de>
Date: Wed, 18 Apr 2018 15:06:13 +0200
To: Peter Ujfalusi, Radhey Shyam Pandey, Vinod Koul
Cc: michal.simek@xilinx.com, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, dan.j.williams@intel.com, Appana Durga Kedareswara Rao, linux-arm-kernel@lists.infradead.org
On 04/18/2018 08:31 AM, Peter Ujfalusi wrote:
>
> On 2018-04-17 18:54, Lars-Peter Clausen wrote:
>> On 04/17/2018 04:53 PM, Peter Ujfalusi wrote:
>>> On 2018-04-17 16:58, Lars-Peter Clausen wrote:
>>>>>> There are two options.
>>>>>>
>>>>>> Either you extend the generic interfaces so they can cover your use case
>>>>>> in a generic way, e.g. the ability to attach metadata to a transfer.
>>>>>
>>>>> Fwiw I have this patch as part of a bigger work to achieve similar results:
>>>>
>>>> That's good stuff. Is this in a public tree somewhere?
>>>
>>> Not atm. I can not send the user of the new API and I did not want to
>>> send something like this out of the blue w/o context.
>>>
>>> But as it is a generic patch, I can send it as well. The only concern is
>>> the need for the memcpy, so I might end up with
>>> ptr = get_metadata_ptr(desc, &size); /* size: in RX the valid size */
>>>
>>> and set_metadata_size(); /* in TX to tell how much data the client placed */
>>>
>>> Or something like that. The attach_metadata() as it is works just fine,
>>> but high throughput might not like the memcpy.
>>>
>>
>> In the most abstracted way I'd say metadata and data are two different data
>> streams that are correlated and sent/received at the same time.
>
> In my case the metadata is sideband information or parameters for/from
> the remote end, like timestamps, algorithm parameters, keys, etc.
>
> It is tied to the data payload, but it is not part of it.
>
> But the API should be generic enough to cover other use cases where
> clients need to provide additional information.
> For me, the metadata is part of the descriptor we give to and receive back
> from the DMA; others might have a sideband channel to send it.
>
> For metadata handling we could have:
>
> struct dma_desc_metadata_ops {
> 	/* To give a buffer for the DMA with the metadata, as it was in my
> 	 * original patch
> 	 */
> 	int (*desc_attach_metadata)(struct dma_async_tx_descriptor *desc,
> 				    void *data, size_t len);
>
> 	void *(*desc_get_metadata_ptr)(struct dma_async_tx_descriptor *desc,
> 				       size_t *payload_len, size_t *max_len);
> 	int (*desc_set_payload_len)(struct dma_async_tx_descriptor *desc,
> 				    size_t payload_len);
> };
>
> Probably a simple flag variable to indicate which of the two modes is
> supported:
> 1. Client provided metadata buffer handling
>    Clients provide the buffer via desc_attach_metadata(); the DMA driver
>    will do whatever it needs to do: copy it in place, send it differently,
>    use the parameters.
>    In RX the received metadata is going to be placed into the provided buffer.
> 2. Ability to give the metadata pointer to the user to work on it.
>    In TX, clients can use desc_get_metadata_ptr() to get the pointer, the
>    current payload size and the maximum size of the metadata, and can work
>    directly on the buffer to place the data. Then desc_set_payload_len() to
>    let the DMA know how much data was actually placed there.
>    In RX, desc_get_metadata_ptr() will give the user the pointer and the
>    payload size so it can process that information correctly.
>
> DMA drivers can implement either or both, but clients must only use
> either 1 or 2 to work with the metadata.
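
Just to check that I read the two modes right, a TX client would do roughly
the following, depending on which mode the driver advertises. The my_metadata
struct, ts, and reaching the ops through desc->metadata_ops directly are only
placeholders to sketch the flow, not something taken from your patch:

/* Placeholder client-side metadata, purely for illustration */
struct my_metadata {
	u64 timestamp;
	u32 key_id;
};

struct dma_async_tx_descriptor *desc;
struct my_metadata md, *slot;
size_t payload_len, max_len;

desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
				   DMA_PREP_INTERRUPT);

/* Mode 1: client-owned buffer, the DMA driver copies or interprets it */
md.timestamp = ts;
md.key_id = 3;
if (desc->metadata_ops->desc_attach_metadata(desc, &md, sizeof(md)))
	goto err;

/* Mode 2 (on a driver that supports that instead): build the metadata
 * directly in the descriptor's buffer, so no extra memcpy is needed,
 * then report how much of it was filled.
 */
slot = desc->metadata_ops->desc_get_metadata_ptr(desc, &payload_len,
						 &max_len);
if (!slot || max_len < sizeof(*slot))
	goto err;
slot->timestamp = ts;
slot->key_id = 3;
desc->metadata_ops->desc_set_payload_len(desc, sizeof(*slot));

dmaengine_submit(desc);
dma_async_issue_pending(chan);

For RX it would be the other way around: attach (or get the pointer to) the
buffer before submitting, and read the metadata back from the completed
descriptor, with payload_len telling how much the hardware actually wrote.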
>
>> Think multi-planar transfer, like for audio when the right and left channel
>> are in separate buffers and not interleaved. Or video with different
>> color/luminance components in separate buffers. This is something that is at
>> the moment not covered by the dmaengine API either.
>
> Hrm, true, but it is hardly the metadata use case. It is more like a
> different DMA transfer type.

When I look at this with my astronaut architect view from high, high up above,
I do not see a difference between metadata and multi-planar data.

Both split the data that is sent to the peripheral into multiple sub-streams,
each carrying part of the data. I'm sure there are peripherals that interleave
data and metadata on the same data stream, similar to how we have the left and
right channels interleaved in an audio stream.

What about metadata that is not contiguous and is split into multiple
segments? How do you handle passing an sgl to the metadata interface? And then
it suddenly looks quite similar to the normal DMA descriptor interface. But
maybe that's just one abstraction level too high.

>>>>>> Or you can implement an interface that is specific to your DMA controller
>>>>>> and any client using this interface knows it is talking to your DMA
>>>>>> controller.
>>>>>
>>>>> Hrm, so we can have DMA driver specific calls? The reason why TI's keystone 2
>>>>> navigator DMA support was rejected was that it was introducing NAV specific
>>>>> calls for clients to configure features not yet supported by the framework.
>>>>
>>>> In my opinion it is OK, somebody else might have different ideas. I mean it
>>>> is not nice, but it is better than the alternative of overloading the
>>>> generic API with driver specific semantics or introducing some kind of IOCTL
>>>> catch-all callback.
>>>
>>> True, but the generic API can be extended as well to cover new ground and
>>> features, like this metadata thing.
>>>
>>>> If there is tight coupling between the DMA core and the client and there is
>>>> no intention of using a generic client, the best solution might even be to
>>>> not use DMAengine at all.
>>>
>>> This is how the knav stuff ended up. Well, it is only used by networking
>>> atm, so it is 'fine' to have a custom API, but it is not portable.
>>
>> I totally agree generic APIs are better, but not everybody has the resources
>> to rewrite the whole framework just because they want to do this tiny thing
>> that isn't covered by the framework yet. In that case it is better to go
>> with a custom API (that might evolve into a generic API), rather than
>> overloading the generic API and putting a strain on everybody who works on
>> the generic API.
>
> At some point a threshold is reached where the burden of maintaining a
> custom API is more costly than investing in the extension of the framework.
> What happens when an existing driver (using the DMAengine API) needs to be
> supported on a platform where only custom DMA code is available, or when you
> want to migrate to a new DMA from the custom API the driver is wired for?
> It is just plain pain.
> At the end, what we want to do with DMA is move data from one place to
> another (in an oversimplified view).

I think we fully agree on this :)