Date: Thu, 21 May 2020 04:40:31 +0300
From: Serge Semin
To: Vinod Koul
CC: Serge Semin, Andy Shevchenko, Mark Brown, Viresh Kumar,
    Dan Williams, Alexey Malahov, Thomas Bogendoerfer, Paul Burton,
    Ralf Baechle, Arnd Bergmann, Rob Herring, devicetree, dmaengine,
    Linux Kernel Mailing List
Subject: Re: [PATCH v2 4/6] dmaengine: dw: Print warning if multi-block is unsupported
Message-ID: <20200521014031.7r54unsig477csfu@mobilestation>
References: <20200511115813.GG8216@sirena.org.uk>
 <20200511134502.hjbu5evkiuh75chr@mobilestation>
 <20200511193255.t6orpcdz5ukmwmqo@mobilestation>
 <20200511210714.GO185537@smile.fi.intel.com>
 <20200511210800.GP185537@smile.fi.intel.com>
 <20200512124206.l3uv5hg2zimi24dq@mobilestation>
 <20200515063039.GH333670@vkoul-mobl>
 <20200517192347.h3hiibsifwfyyr7z@mobilestation>
 <20200519170246.GS374218@vkoul-mobl.Dlink>
In-Reply-To: <20200519170246.GS374218@vkoul-mobl.Dlink>

On Tue, May 19, 2020 at 10:32:46PM +0530, Vinod Koul wrote:
> On 17-05-20, 22:23, Serge Semin wrote:
> > On Fri, May 15, 2020 at 12:00:39PM +0530, Vinod Koul wrote:
> > > Hi Serge,
> > >
> > > On 12-05-20, 15:42, Serge Semin wrote:
> > > > Vinod,
> > > >
> > > > Could you join the discussion for a little bit?
> > > >
> > > > In order to properly fix the problem discussed in this topic, we
> > > > need to introduce an additional capability exported by DMA channel
> > > > handlers on a per-channel basis. It must be a number indicating the
> > > > upper limit on the amount of SG list entries. Something like this
> > > > would do it:
> > > > struct dma_slave_caps {
> > > > 	...
> > > > 	unsigned int max_sg_nents;
> > > > 	...
> > >
> > > Looking at the discussion, I agree we should add this to the
> > > interface. The max_dma_len suggests the maximum descriptor length
> > > allowed; it does not convey the number of SG entries supported, which
> > > in the no-LLP case is one.
> > >
> > > Btw, is this a real hardware issue? I have found that such hardware
> > > is quite rare, and people did fix it up in subsequent revs by adding
> > > LLP support.
> >
> > Yes, it is. My DW DMAC doesn't support LLP and there isn't going to be
> > a new SoC version produced. :(
>
> Ouch
>
> > > Also, another question is why this cannot be handled in the driver.
> > > I agree your hardware does not support LLP, but that does not stop
> > > you from breaking a multi-entry SG list into N hardware descriptors
> > > and keep submitting them (for this to work, submission should be done
> > > in the ISR and not in the BH; unfortunately very few drivers take
> > > that route).
> >
> > The current DW DMA driver does that, but this isn't enough. The problem
> > is that in order to fix the issue in the DMA hardware driver we would
> > need to introduce an inter-dependent channels abstraction and
> > synchronously feed both Tx and Rx DMA channels with hardware
> > descriptors (LLP entries) one-by-one. This is hardly needed by any
> > slave device driver other than SPI, whose Tx and Rx buffers are
> > inter-dependent. So Andy's idea was to move the fix to the SPI driver
> > (feed the DMA engine channels with Tx and Rx data buffers
> > synchronously), with the DMA engine providing the info on whether such
> > a fix is required. This can be determined from the maximum-SG-entries
> > capability.
>
> Okay, but having the SW limitation removed would also be a good idea, so
> you can handle any user. I will leave it up to you; either way is okay.
>
> > (Note max_sg_nents isn't a limitation on the number of SG entries
> > supported by the DMA driver, but on the number of SG entries handled
> > by the DMA engine in a single DMA transaction.)
> >
> > > TBH the max_sg_nents or max_dma_len are HW restrictions and SW *can*
> > > deal with them :-)
> >
> > Yes, it can, but that only works when individual DMA channels are
> > utilized.
> > The DMA hardware driver doesn't know that the target and source slave
> > device buffers (SPI Tx and Rx FIFOs) are inter-dependent: writing to
> > one implicitly pushes data to the other. So, due to the interrupt
> > handling latency, the Tx DMA channel is restarted faster than the Rx
> > DMA channel is reinitialized. This causes an SPI Rx FIFO overflow and
> > data loss.
> >
> > > In an ideal world, you should break the submitted SW descriptor into
> > > N HW descriptors, submit them to the hardware and let the user know
> > > when the SW descriptor is completed. Of course we do not do that :(
> >
> > Well, the current DW DMA driver does that. But due to the
> > inter-dependency of the two slave device buffers this isn't enough to
> > perform safe DMA transactions. Due to the interrupt handling latency
> > the Tx DMA channel pushes data to the slave device buffer faster than
> > the Rx DMA channel starts to handle the incoming data. This causes the
> > SPI Rx FIFO overflow.
> >
> > > > };
> > > > As Andy suggested, its value should be interpreted as:
> > > > 0          - unlimited number of entries,
> > > > 1:MAX_UINT - actual limit on the number of entries.
> > >
> > > Hmm, why 0? Why not MAX_UINT for unlimited?
> >
> > 0 is much better for many reasons. First of all, MAX_UINT is a lot,
> > but it's still a number; on an x64 platform it might be an actual limit
> > if, for instance, the block-size register is 32 bits wide. Secondly,
> > interpreting 0 as an unlimited number of entries is more suitable since
> > most of the drivers support the LLP functionality and we wouldn't need
> > to update their code to set MAX_UINT. Thirdly, DMA engines which don't
> > support LLPs would set this parameter to 1, so if we did as you say and
> > interpreted an unlimited number of LLPs as MAX_UINT, 0 would be left
> > unused.
> >
> > To sum up, I also think that using 0 for an unlimited number of
> > supported SG entries is much better.
>
> ok
>
> > > > In addition to that, seeing that the dma_get_slave_caps() method
> > > > provides the caps only by getting them from the DMA device
> > > > descriptor, while we need the info on a per-channel basis, it would
> > > > be good to introduce a new DMA-device callback like:
> > > > struct dma_device {
> > > > 	...
> > > > 	int (*device_caps)(struct dma_chan *chan,
> > > > 			   struct dma_slave_caps *caps);
> > >
> > > Do you have a controller where the caps are on a per-channel basis?
> >
> > Yes, I do. Our DW DMA controller has the maximum burst length
> > non-uniformly distributed across the DMA channels. The controller
> > supports eight channels, of which the first two can burst up to 32
> > transfer words, while the rest support bursting up to 4 transfer
> > words.
> >
> > So having such a device_caps() callback to customize the device
> > capabilities on a per-DMA-channel basis would be very useful! What do
> > you think?
>
> Okay, looks like the per-channel basis is the way forward!

Great! Thanks. I'll send v3 with the updates we've come up with in this
discussion. (Rough sketches of both interface changes follow below.)

-Sergey

>
> --
> ~Vinod
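P.S. To make the proposal a bit more concrete, here is a minimal
client-side sketch of how a slave-device driver (e.g. an SPI controller
driver) could use the suggested capability. dma_get_slave_caps() is the
existing dmaengine API; the max_sg_nents field, its 0-means-unlimited
semantics and the helper name are taken from the proposal above or are
purely illustrative:

#include <linux/dmaengine.h>

/*
 * Hypothetical helper for a slave-device driver: returns true if the
 * channel can handle only one SG entry per DMA transaction (no LLP
 * support), in which case the Tx and Rx buffers would have to be fed
 * to the DMA engine synchronously, one entry at a time.
 */
static bool spi_dma_needs_single_entry_xfers(struct dma_chan *chan)
{
	struct dma_slave_caps caps;

	if (dma_get_slave_caps(chan, &caps))
		return false;

	/* Per the proposal: 0 - unlimited, 1 - no multi-block (LLP) support */
	return caps.max_sg_nents == 1;
}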
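And a rough sketch of how the core could honor the per-channel callback:
dma_get_slave_caps() would fill the caps from the device-wide fields as
it does today and then let the driver override them for the given
channel. The callback name and signature follow the proposal above; the
exact integration point in drivers/dma/dmaengine.c is an assumption:

int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
{
	struct dma_device *device;

	if (!chan || !caps)
		return -EINVAL;

	device = chan->device;

	/* ... fill *caps from the device-wide fields as is done today ... */

	/*
	 * Let the DMA driver customize the caps for this very channel,
	 * e.g. a per-channel max burst length or max_sg_nents.
	 */
	if (device->device_caps)
		return device->device_caps(chan, caps);

	return 0;
}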