From: Serge Semin
To: Vinod Koul, Viresh Kumar, Dan Williams
CC: Serge Semin, Serge Semin, Andy Shevchenko, Alexey Malahov,
	Thomas Bogendoerfer, Arnd Bergmann, Rob Herring
Subject: [PATCH v7 04/11] dmaengine: Introduce max SG list entries capability
Date: Fri, 10 Jul 2020 01:45:43 +0300
Message-ID: <20200709224550.15539-5-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200709224550.15539-1-Sergey.Semin@baikalelectronics.ru>
References: <20200709224550.15539-1-Sergey.Semin@baikalelectronics.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Some devices lack hardware support for automatically walking through and
executing the entries of an SG list. In that case the burden of SG list
traversal and DMA engine re-initialization lies on the DMA engine driver
(normally implemented by using the DMA transfer completion IRQ to recharge
the DMA device with the next SG list entry). But such a solution may not be
suitable for some DMA consumers. In particular, SPI devices need the Tx and
Rx DMA channels to work synchronously in order to avoid an Rx FIFO
overflow. If the Rx DMA channel is paused for some time while the Tx DMA
channel keeps working, implicitly pulling data into the Rx FIFO, the latter
will eventually overflow and data will be lost. So if SG list entries
aren't automatically fetched by the DMA engine, but are manually selected
for execution one by one in ISRs/deferred work/etc., such a problem will
eventually happen due to the non-deterministic latencies of that servicing.

In order to let the DMA consumer know about the DMA device capabilities
regarding hardware accelerated SG list traversal, introduce the
max_sg_nents capability. The DMA engine driver is supposed to initialize
it with 0 if there is no limit on the number of SG entries executed
atomically, or with a non-zero value if there is such a constraint, in
which case the upper limit is the number set in the property.

Suggested-by: Andy Shevchenko
Signed-off-by: Serge Semin
Reviewed-by: Andy Shevchenko
Cc: Alexey Malahov
Cc: Thomas Bogendoerfer
Cc: Arnd Bergmann
Cc: Rob Herring
Cc: linux-mips@vger.kernel.org
Cc: devicetree@vger.kernel.org

---

Changelog v3:
- This is a new patch created as a result of the discussion with Vinod
  and Andy in the framework of DW DMA burst and LLP capabilities.

Changelog v4:
- Fix an of->if typo. It should definitely be of.
---
 drivers/dma/dmaengine.c   | 1 +
 include/linux/dmaengine.h | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index b332ffe52780..ad56ad58932c 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -592,6 +592,7 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
 	caps->directions = device->directions;
 	caps->min_burst = device->min_burst;
 	caps->max_burst = device->max_burst;
+	caps->max_sg_nents = device->max_sg_nents;
 	caps->residue_granularity = device->residue_granularity;
 	caps->descriptor_reuse = device->descriptor_reuse;
 	caps->cmd_pause = !!device->device_pause;
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 0c7403b27133..a7e4d8dfdd19 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -467,6 +467,9 @@ enum dma_residue_granularity {
  *	should be checked by controller as well
  * @min_burst: min burst capability per-transfer
  * @max_burst: max burst capability per-transfer
+ * @max_sg_nents: max number of SG list entries executed in a single atomic
+ *	DMA transaction with no intermediate IRQ for reinitialization. Zero
+ *	value means unlimited number of entries.
  * @cmd_pause: true, if pause is supported (i.e. for reading residue or
  *	for resume later)
  * @cmd_resume: true, if resume is supported
@@ -481,6 +484,7 @@ struct dma_slave_caps {
 	u32 directions;
 	u32 min_burst;
 	u32 max_burst;
+	u32 max_sg_nents;
 	bool cmd_pause;
 	bool cmd_resume;
 	bool cmd_terminate;
@@ -773,6 +777,9 @@ struct dma_filter {
  *	should be checked by controller as well
  * @min_burst: min burst capability per-transfer
  * @max_burst: max burst capability per-transfer
+ * @max_sg_nents: max number of SG list entries executed in a single atomic
+ *	DMA transaction with no intermediate IRQ for reinitialization. Zero
+ *	value means unlimited number of entries.
  * @residue_granularity: granularity of the transfer residue reported
  *	by tx_status
  * @device_alloc_chan_resources: allocate resources and return the
@@ -844,6 +851,7 @@ struct dma_device {
 	u32 directions;
 	u32 min_burst;
 	u32 max_burst;
+	u32 max_sg_nents;
 	bool descriptor_reuse;
 	enum dma_residue_granularity residue_granularity;
-- 
2.26.2