From: "M'boumba Cedric Madianga" <cedric.madianga@gmail.com>
To: vinod.koul@intel.com, robh+dt@kernel.org, mark.rutland@arm.com,
	mcoquelin.stm32@gmail.com, alexandre.torgue@st.com,
	dan.j.williams@intel.com, dmaengine@vger.kernel.org,
	devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: "M'boumba Cedric Madianga" <cedric.madianga@gmail.com>
Subject: [PATCH 5/9] dmaengine: stm32-dma: Rework starting transfer management
Date: Tue, 13 Dec 2016 14:40:47 +0100
Message-Id: <1481636451-27863-6-git-send-email-cedric.madianga@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1481636451-27863-1-git-send-email-cedric.madianga@gmail.com>
References: <1481636451-27863-1-git-send-email-cedric.madianga@gmail.com>

This patch reworks the way transfer starting is managed.

Starting a DMA transfer is now only allowed when the channel is not
busy. As a consequence, stm32_dma_start_transfer() is declared as void,
since its return value is no longer used. Finally, after each transfer
completion, the next transfer is started if a new descriptor has been
queued in the issued list during the ongoing transfer.

Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Reviewed-by: Ludovic BARRE
---
 drivers/dma/stm32-dma.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
index 3056ce7..adb846c 100644
--- a/drivers/dma/stm32-dma.c
+++ b/drivers/dma/stm32-dma.c
@@ -421,7 +421,7 @@ static void stm32_dma_dump_reg(struct stm32_dma_chan *chan)
 	dev_dbg(chan2dev(chan), "SFCR:  0x%08x\n", sfcr);
 }
 
-static int stm32_dma_start_transfer(struct stm32_dma_chan *chan)
+static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 {
 	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
 	struct virt_dma_desc *vdesc;
@@ -432,12 +432,12 @@ static int stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 
 	ret = stm32_dma_disable_chan(chan);
 	if (ret < 0)
-		return ret;
+		return;
 
 	if (!chan->desc) {
 		vdesc = vchan_next_desc(&chan->vchan);
 		if (!vdesc)
-			return -EPERM;
+			return;
 
 		chan->desc = to_stm32_dma_desc(vdesc);
 		chan->next_sg = 0;
@@ -471,7 +471,7 @@ static int stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 
 	chan->busy = true;
 
-	return 0;
+	dev_dbg(chan2dev(chan), "vchan %p: started\n", &chan->vchan);
 }
 
 static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
@@ -552,15 +552,13 @@ static void stm32_dma_issue_pending(struct dma_chan *c)
 {
 	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
 	unsigned long flags;
-	int ret;
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
-	if (!chan->busy) {
-		if (vchan_issue_pending(&chan->vchan) && !chan->desc) {
-			ret = stm32_dma_start_transfer(chan);
-			if ((!ret) && (chan->desc->cyclic))
-				stm32_dma_configure_next_sg(chan);
-		}
+	if (vchan_issue_pending(&chan->vchan) && !chan->desc && !chan->busy) {
+		dev_dbg(chan2dev(chan), "vchan %p: issued\n", &chan->vchan);
+		stm32_dma_start_transfer(chan);
+		if (chan->desc->cyclic)
+			stm32_dma_configure_next_sg(chan);
 	}
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
 }
-- 
1.9.1
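
For context, the hunks above do not show the completion path that this
rework relies on: because stm32_dma_start_transfer() is now void and
simply returns when the issued list is empty, the transfer-complete
interrupt path can call it unconditionally for non-cyclic transfers.
The sketch below illustrates that logic; it reuses helper and field
names from drivers/dma/stm32-dma.c, but its exact body is an
assumption for illustration, not a hunk taken from this patch.

/*
 * Sketch only (not part of this mail's diff): how the
 * transfer-complete path can chain to the next issued descriptor
 * once stm32_dma_start_transfer() is void and does nothing when
 * no descriptor is pending.
 */
static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan)
{
	if (!chan->desc)
		return;

	if (chan->desc->cyclic) {
		/* Cyclic transfer: signal the period, arm the next sg. */
		vchan_cyclic_callback(&chan->desc->vdesc);
		chan->next_sg++;
		stm32_dma_configure_next_sg(chan);
	} else {
		chan->busy = false;
		if (chan->next_sg == chan->desc->num_sgs) {
			/* Whole sg list done: complete the cookie. */
			list_del(&chan->desc->vdesc.node);
			vchan_cookie_complete(&chan->desc->vdesc);
			chan->desc = NULL;
		}
		/*
		 * Start the next transfer if a descriptor was issued
		 * while this one was in flight; this is a no-op when
		 * the issued list is empty.
		 */
		stm32_dma_start_transfer(chan);
	}
}

The key design point is that stm32_dma_start_transfer() is now safe to
call speculatively: vchan_next_desc() returning NULL makes it a no-op,
which is what lets the completion handler chain issued descriptors
without inspecting the list itself.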