Date: Wed, 11 Feb 2015 13:38:07 +0530
Subject: Re: [PATCH v5 1/3] dmaengine: Add support for APM X-Gene SoC DMA engine driver
From: Rameshwar Sahu
To: Vinod Koul
Cc: dan.j.williams@intel.com, dmaengine@vger.kernel.org, Arnd Bergmann, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, ddutile@redhat.com, jcm@redhat.com, patches@apm.com, Loc Ho

Hi Vinod,

On Fri, Feb 6, 2015 at 1:41 AM, Vinod Koul wrote:
> On Thu, Feb 05, 2015 at 05:29:06PM +0530, Rameshwar Sahu wrote:
>> Hi Vinod,
>>
>> Thanks for reviewing this patch.
>>
>> Please see inline.
> Please STOP top posting
>
>> >
>> >> +}
>> >> +
>> >> +static void xgene_dma_issue_pending(struct dma_chan *channel)
>> >> +{
>> >> +        /* Nothing to do */
>> >> +}
>> > What do you mean by nothing to do here
>> > See Documentation/dmaengine/client.txt Section 4 & 5
>> That doc only applies to slave DMA operations; we don't support
>> slave DMA, it's master only.
>> Our hw engine is designed in such a way that there is no scope for
>> flushing pending transactions explicitly from sw.
>> We have a circular descriptor ring dedicated to the engine. In the submit
>> callback we queue the descriptor and inform the engine, and after that
>> it is internal to the hw to execute the descriptors one by one.
> But the API expectations on this are the _same_.
>
> No, the API expects you to maintain a SW queue, then push to your ring buffer
> when you get issue_pending. issue_pending is the start of the data transfer,
> and your client will expect it to behave accordingly.

Okay, I will maintain a sw queue and push the sw descriptors to hw in
this callback.

>
>> >> +        /* Run until we are out of length */
>> >> +        do {
>> >> +                /* Create the largest transaction possible */
>> >> +                copy = min_t(size_t, len, DMA_MAX_64BDSC_BYTE_CNT);
>> >> +
>> >> +                /* Prepare DMA descriptor */
>> >> +                xgene_dma_prep_cpy_desc(chan, slot, dst, src, copy);
>> >> +
>> > This is wrong. The descriptor is supposed to be already prepared and now it
>> > has to be submitted to the queue.
>>
>> Due to a race in the tx_submit call from the client, we need to serialize
>> the submission of H/W DMA descriptors.
>> So we make a shadow copy in the prepare DMA routine and prepare the
>> actual descriptor during the tx_submit call.
> That's an abuse of the API, and I don't see a reason why this race should
> happen in the first place.
>
> So you get a prep call and you prepare a desc in SW. Then submit pushes it to
> a queue. Finally, in issue_pending you push them to HW. Simple..?

I agree, I will do it and post another version soon.
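For reference, here is a minimal sketch of the flow you describe: the prep
call builds a complete sw descriptor, tx_submit only assigns a cookie and
queues it under the channel lock, and issue_pending flushes the queue onto
the hw ring. The structures and the xgene_chan_push_to_hw_ring() helper
below are placeholders I made up for illustration, not the actual driver
code.

/*
 * Illustrative sketch only, with placeholder structures and a
 * hypothetical xgene_chan_push_to_hw_ring() helper -- not the actual
 * X-Gene driver. dma_cookie_assign() comes from the private
 * drivers/dma/dmaengine.h header used by drivers under drivers/dma/.
 */
#include <linux/dmaengine.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include "dmaengine.h"          /* for dma_cookie_assign() */

struct xgene_dma_desc_sw {
        struct dma_async_tx_descriptor tx;
        struct list_head node;
        /* the fully-built hw descriptor image would live here */
};

struct xgene_dma_chan {
        struct dma_chan chan;
        spinlock_t lock;
        struct list_head ld_pending;    /* submitted, not yet on the hw ring */
        struct list_head ld_running;    /* pushed to the hw ring */
};

/* Hypothetical helper: program one hw ring descriptor and ring the doorbell */
static void xgene_chan_push_to_hw_ring(struct xgene_dma_chan *chan,
                                       struct xgene_dma_desc_sw *desc)
{
        /* hw-specific ring write omitted in this sketch */
}

static dma_cookie_t xgene_dma_tx_submit(struct dma_async_tx_descriptor *tx)
{
        struct xgene_dma_chan *chan =
                container_of(tx->chan, struct xgene_dma_chan, chan);
        struct xgene_dma_desc_sw *desc =
                container_of(tx, struct xgene_dma_desc_sw, tx);
        dma_cookie_t cookie;

        spin_lock_bh(&chan->lock);
        /* Descriptor was fully prepared in the prep call; just queue it */
        cookie = dma_cookie_assign(tx);
        list_add_tail(&desc->node, &chan->ld_pending);
        spin_unlock_bh(&chan->lock);

        return cookie;
}

static void xgene_dma_issue_pending(struct dma_chan *dchan)
{
        struct xgene_dma_chan *chan =
                container_of(dchan, struct xgene_dma_chan, chan);
        struct xgene_dma_desc_sw *desc, *tmp;

        spin_lock_bh(&chan->lock);
        /* Start of the data transfer: move the sw queue onto the hw ring */
        list_for_each_entry_safe(desc, tmp, &chan->ld_pending, node) {
                xgene_chan_push_to_hw_ring(chan, desc);
                list_move_tail(&desc->node, &chan->ld_running);
        }
        spin_unlock_bh(&chan->lock);
}

This is also roughly what the virt-dma helpers in drivers/dma/virt-dma.h
provide generically, so reusing those may be an option.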
> --
> ~Vinod