Date: Tue, 22 May 2018 19:37:01 +0530
From: Abhishek Sahu
To: Miquel Raynal
Cc: Boris Brezillon, David Woodhouse, Brian Norris, Marek Vasut,
    Richard Weinberger, linux-arm-msm@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mtd@lists.infradead.org,
    Andy Gross, Archit Taneja
Subject: Re: [PATCH v2 05/14] mtd: rawnand: qcom: wait for desc completion in all BAM channels
In-Reply-To: <20180522084754.0c3a53a4@xps13>
References: <1525350041-22995-1-git-send-email-absahu@codeaurora.org>
 <1525350041-22995-6-git-send-email-absahu@codeaurora.org>
 <20180522084754.0c3a53a4@xps13>
Message-ID: <8f101f393618923129342f97e4a842f7@codeaurora.org>

On 2018-05-22 12:17, Miquel Raynal wrote:
> Hi Abhishek,
>
> On Thu, 3 May 2018 17:50:32 +0530, Abhishek Sahu wrote:
>
>> The BAM has 3 channels - tx, rx and command. command channel
>> is used for register read/writes, tx channel for data writes
>> and rx channel for data reads. Currently, the driver assumes the
>> transfer completion once it gets all the command descriptor
>> completed. Sometimes, there is race condition in data channel
>
> "Sometimes, there is a race condition between the data channel (rx/tx)
> and the command channel completion. In these cases, ..."
>
>> (tx/rx) and command channel completion and in these cases,
>> the data in buffer is not valid during the small window between
>
> ^ present in the buffer ?
>
>> command descriptor completion and data descriptor completion.
>>
>> Now, the changes have been made to assign the callback for
>
> It is preferable to use a descriptive tense when you expose what the
> patch does. Something like "Change to assign ..."
>

Thanks Miquel for your review. I will change the sentence.

>> channel's final descriptor. The DMA will generate the callback
>> when all the descriptors have completed in that channel.
>> The NAND transfer will be completed only when all required
>> DMA channels have generated the completion callback.
>>
>
> It looks like this is a fix that is a good candidate for stable trees,
> you might want to add the relevant tags.

Sure. I will add the relevant tags.

>
>> Signed-off-by: Abhishek Sahu
>> ---
>> * Changes from v1:
>>
>>   NONE
>>
>> 1. Removed the custom logic and used the helper function.
>>
>>  drivers/mtd/nand/raw/qcom_nandc.c | 55 ++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 54 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
>> index a8d71ce..3d1ff54 100644
>> --- a/drivers/mtd/nand/raw/qcom_nandc.c
>> +++ b/drivers/mtd/nand/raw/qcom_nandc.c
>> @@ -213,6 +213,8 @@
>>  #define QPIC_PER_CW_CMD_SGL		32
>>  #define QPIC_PER_CW_DATA_SGL		8
>>
>> +#define QPIC_NAND_COMPLETION_TIMEOUT	msecs_to_jiffies(2000)
>
> That's huge, but why not, it's a timeout anyway.
>

Correct. This timeout will never be hit in the normal case; it will
only trigger if something bad has happened on the board.

>> +
>>  /*
>>   * Flags used in DMA descriptor preparation helper functions
>>   * (i.e. read_reg_dma/write_reg_dma/read_data_dma/write_data_dma)
>> @@ -245,6 +247,11 @@
>>   * @tx_sgl_start - start index in data sgl for tx.
>>   * @rx_sgl_pos - current index in data sgl for rx.
>>   * @rx_sgl_start - start index in data sgl for rx.
>> + * @first_chan_done - if current transfer already has got first channel
>> + *                    DMA desc completion.
>> + * @txn_done - completion for nand transfer.
>
> s/nand/NAND/
>
>> + * @last_data_desc - last DMA desc in data channel (tx/rx).
>> + * @last_cmd_desc - last DMA desc in command channel.
>>   */
>>  struct bam_transaction {
>>  	struct bam_cmd_element *bam_ce;
>> @@ -258,6 +265,10 @@ struct bam_transaction {
>>  	u32 tx_sgl_start;
>>  	u32 rx_sgl_pos;
>>  	u32 rx_sgl_start;
>> +	bool first_chan_done;
>> +	struct completion txn_done;
>> +	struct dma_async_tx_descriptor *last_data_desc;
>> +	struct dma_async_tx_descriptor *last_cmd_desc;
>>  };
>>
>>  /*
>> @@ -504,6 +515,8 @@ static void free_bam_transaction(struct qcom_nand_controller *nandc)
>>
>>  	bam_txn->data_sgl = bam_txn_buf;
>>
>> +	init_completion(&bam_txn->txn_done);
>> +
>>  	return bam_txn;
>>  }
>>
>> @@ -523,11 +536,36 @@ static void clear_bam_transaction(struct qcom_nand_controller *nandc)
>>  	bam_txn->tx_sgl_start = 0;
>>  	bam_txn->rx_sgl_pos = 0;
>>  	bam_txn->rx_sgl_start = 0;
>> +	bam_txn->last_data_desc = NULL;
>> +	bam_txn->first_chan_done = false;
>
> Are you sure you don't want to reinit last_cmd_desc here?

Each NAND data transfer will definitely have at least one command
desc, so that reinit is redundant. But some NAND transfers can have
only command descriptors (i.e. no data desc), so we need to reinit
last_data_desc.

>
>>
>>  	sg_init_table(bam_txn->cmd_sgl, nandc->max_cwperpage *
>>  		      QPIC_PER_CW_CMD_SGL);
>>  	sg_init_table(bam_txn->data_sgl, nandc->max_cwperpage *
>>  		      QPIC_PER_CW_DATA_SGL);
>> +
>> +	reinit_completion(&bam_txn->txn_done);
>> +}
>> +
>> +/* Callback for DMA descriptor completion */
>> +static void qpic_bam_dma_done(void *data)
>> +{
>> +	struct bam_transaction *bam_txn = data;
>> +
>> +	/*
>> +	 * In case of data transfer with NAND, 2 callbacks will be generated.
>> +	 * One for command channel and another one for data channel.
>> +	 * If current transaction has data descriptors then check if it has
>> +	 * already got one DMA channel completion callback.
In this case >> + * make the NAND transfer complete otherwise mark first_chan_done >> true >> + * and wait for next channel DMA completion callback. >> + */ >> + if (bam_txn->last_data_desc && !bam_txn->first_chan_done) { >> + bam_txn->first_chan_done = true; >> + return; >> + } > > There is a lot of new variables just to wait for two bam_dma_done(). > Why not just creating a boolean like "wait_second completion", > initialize it in prepare_bam_async_desc to true when needed and > complete txn_done when it's false, that's all: > > if (bam_txn->wait_second_completion) { > bam_txn->wait_second_completion = false; > return; > } > >> + >> + complete(&bam_txn->txn_done); >> } >> >> static inline struct qcom_nand_host *to_qcom_nand_host(struct >> nand_chip *chip) >> @@ -756,6 +794,12 @@ static int prepare_bam_async_desc(struct >> qcom_nand_controller *nandc, >> >> desc->dma_desc = dma_desc; >> >> + /* update last data/command descriptor */ >> + if (chan == nandc->cmd_chan) >> + bam_txn->last_cmd_desc = dma_desc; >> + else >> + bam_txn->last_data_desc = dma_desc; >> + > > Is there a reason for the "last_" prefix? why not current_*_desc or > just *_desc? (this is a real question :) ). Correct me if I'm wrong but > you have a scatter-gather list of DMA transfers that are mapped to form > one DMA descriptor, so there is no "last" descriptor, right? > We have 3 DMA channels (tx/rx and command) and each channel has multiple DMA descriptors. The callback needs to be set for last descriptor only for that channel. > Otherwise, as I told you above, why not just a: > > if (chan == nandc->data_chan) > bam_txn->wait_second_completion = true; > This is nice idea. I will change the implementation accordingly. 
>>  	list_add_tail(&desc->node, &nandc->desc_list);
>>
>>  	return 0;
>> @@ -1273,10 +1317,19 @@ static int submit_descs(struct qcom_nand_controller *nandc)
>>  		cookie = dmaengine_submit(desc->dma_desc);
>>
>>  	if (nandc->props->is_bam) {
>> +		bam_txn->last_cmd_desc->callback = qpic_bam_dma_done;
>> +		bam_txn->last_cmd_desc->callback_param = bam_txn;
>> +		if (bam_txn->last_data_desc) {
>> +			bam_txn->last_data_desc->callback = qpic_bam_dma_done;
>> +			bam_txn->last_data_desc->callback_param = bam_txn;
>> +		}
>
> Why don't you do this directly in prepare_bam_async_desc?
>
> With:
>
> 	dma_desc->callback = ...
> 	dma_desc->callback_param = ...
>

prepare_bam_async_desc can be called multiple times since each channel
can have a list of DMA descriptors. We want to set the callback only
for the last DMA descriptor in that channel.

	CMD desc1 -> Data desc1 -> Data desc2 -> CMD desc2 -> CMD desc3

In the above case, the callback should be set for Data desc2 and CMD
desc3.

Thanks,
Abhishek

>> +
>>  		dma_async_issue_pending(nandc->tx_chan);
>>  		dma_async_issue_pending(nandc->rx_chan);
>> +		dma_async_issue_pending(nandc->cmd_chan);
>>
>> -		if (dma_sync_wait(nandc->cmd_chan, cookie) != DMA_COMPLETE)
>> +		if (!wait_for_completion_timeout(&bam_txn->txn_done,
>> +						 QPIC_NAND_COMPLETION_TIMEOUT))
>>  			return -ETIMEDOUT;
>>  	} else {
>>  		if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)