Date: Tue, 22 May 2018 08:47:54 +0200
From: Miquel Raynal
To: Abhishek Sahu
Cc: Boris Brezillon, David Woodhouse, Brian Norris, Marek Vasut,
 Richard Weinberger, linux-arm-msm@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mtd@lists.infradead.org,
 Andy Gross, Archit Taneja
Subject: Re: [PATCH v2 05/14] mtd: rawnand: qcom: wait for desc completion
 in all BAM channels
Message-ID: <20180522084754.0c3a53a4@xps13>
In-Reply-To: <1525350041-22995-6-git-send-email-absahu@codeaurora.org>
References: <1525350041-22995-1-git-send-email-absahu@codeaurora.org>
 <1525350041-22995-6-git-send-email-absahu@codeaurora.org>
Organization: Bootlin

Hi Abhishek,

On Thu, 3 May 2018 17:50:32 +0530, Abhishek Sahu wrote:

> The BAM has 3 channels - tx, rx and command. command channel
> is used for register read/writes, tx channel for data writes
> and rx channel for data reads. Currently, the driver assumes the
> transfer completion once it gets all the command descriptor
> completed. Sometimes, there is race condition in data channel

"Sometimes, there is a race condition between the data channel
(rx/tx) and the command channel completion. In these cases, ..."

> (tx/rx) and command channel completion and in these cases,
> the data in buffer is not valid during the small window between
           ^ present in the buffer?
> command descriptor completion and data descriptor completion.
>
> Now, the changes have been made to assign the callback for

It is preferable to use a descriptive tense when you expose what the
patch does. Something like "Change to assign ...".

> channel's final descriptor. The DMA will generate the callback
> when all the descriptors have completed in that channel.
> The NAND transfer will be completed only when all required
> DMA channels have generated the completion callback.
>

It looks like this is a fix that is a good candidate for stable trees;
you might want to add the relevant tags.

> Signed-off-by: Abhishek Sahu
> ---
> * Changes from v1:
>
>   NONE
>
>   1. Removed the custom logic and used the helper fuction.
>
>  drivers/mtd/nand/raw/qcom_nandc.c | 55 ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 54 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
> index a8d71ce..3d1ff54 100644
> --- a/drivers/mtd/nand/raw/qcom_nandc.c
> +++ b/drivers/mtd/nand/raw/qcom_nandc.c
> @@ -213,6 +213,8 @@
>  #define QPIC_PER_CW_CMD_SGL		32
>  #define QPIC_PER_CW_DATA_SGL		8
>  
> +#define QPIC_NAND_COMPLETION_TIMEOUT	msecs_to_jiffies(2000)

That's huge, but why not, it's a timeout anyway.

> +
>  /*
>   * Flags used in DMA descriptor preparation helper functions
>   * (i.e. read_reg_dma/write_reg_dma/read_data_dma/write_data_dma)
> @@ -245,6 +247,11 @@
>   * @tx_sgl_start - start index in data sgl for tx.
>   * @rx_sgl_pos - current index in data sgl for rx.
>   * @rx_sgl_start - start index in data sgl for rx.
> + * @first_chan_done - if current transfer already has got first channel
> + *		      DMA desc completion.
> + * @txn_done - completion for nand transfer.

s/nand/NAND/
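As an aside, for anyone following along: @txn_done is a plain struct
completion and the patch uses it in the usual way. Roughly (simplified
sketch only, not the actual driver code):

	/* once, when the bam_transaction is allocated */
	init_completion(&bam_txn->txn_done);

	/* when the transaction is cleared for reuse */
	reinit_completion(&bam_txn->txn_done);

	/* from the final DMA completion callback */
	complete(&bam_txn->txn_done);

	/* in submit_descs(), a bounded wait replaces dma_sync_wait() */
	if (!wait_for_completion_timeout(&bam_txn->txn_done,
					 QPIC_NAND_COMPLETION_TIMEOUT))
		return -ETIMEDOUT;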
> + * @last_data_desc - last DMA desc in data channel (tx/rx).
> + * @last_cmd_desc - last DMA desc in command channel.
>   */
>  struct bam_transaction {
>  	struct bam_cmd_element *bam_ce;
> @@ -258,6 +265,10 @@ struct bam_transaction {
>  	u32 tx_sgl_start;
>  	u32 rx_sgl_pos;
>  	u32 rx_sgl_start;
> +	bool first_chan_done;
> +	struct completion txn_done;
> +	struct dma_async_tx_descriptor *last_data_desc;
> +	struct dma_async_tx_descriptor *last_cmd_desc;
>  };
>  
>  /*
> @@ -504,6 +515,8 @@ static void free_bam_transaction(struct qcom_nand_controller *nandc)
>  
>  	bam_txn->data_sgl = bam_txn_buf;
>  
> +	init_completion(&bam_txn->txn_done);
> +
>  	return bam_txn;
>  }
>  
> @@ -523,11 +536,36 @@ static void clear_bam_transaction(struct qcom_nand_controller *nandc)
>  	bam_txn->tx_sgl_start = 0;
>  	bam_txn->rx_sgl_pos = 0;
>  	bam_txn->rx_sgl_start = 0;
> +	bam_txn->last_data_desc = NULL;
> +	bam_txn->first_chan_done = false;

Are you sure you don't want to reinit last_cmd_desc here?

>  
>  	sg_init_table(bam_txn->cmd_sgl, nandc->max_cwperpage *
>  		      QPIC_PER_CW_CMD_SGL);
>  	sg_init_table(bam_txn->data_sgl, nandc->max_cwperpage *
>  		      QPIC_PER_CW_DATA_SGL);
> +
> +	reinit_completion(&bam_txn->txn_done);
> +}
> +
> +/* Callback for DMA descriptor completion */
> +static void qpic_bam_dma_done(void *data)
> +{
> +	struct bam_transaction *bam_txn = data;
> +
> +	/*
> +	 * In case of data transfer with NAND, 2 callbacks will be generated.
> +	 * One for command channel and another one for data channel.
> +	 * If current transaction has data descriptors then check if its
> +	 * already got one DMA channel completion callback. In this case
> +	 * make the NAND transfer complete otherwise mark first_chan_done true
> +	 * and wait for next channel DMA completion callback.
> +	 */
> +	if (bam_txn->last_data_desc && !bam_txn->first_chan_done) {
> +		bam_txn->first_chan_done = true;
> +		return;
> +	}

There are a lot of new variables just to wait for two bam_dma_done().
Why not just create a boolean like "wait_second_completion", initialize
it to true in prepare_bam_async_desc() when needed, and complete
txn_done when it is false, that's all:

	if (bam_txn->wait_second_completion) {
		bam_txn->wait_second_completion = false;
		return;
	}

> +
> +	complete(&bam_txn->txn_done);
> +}
>  
>  static inline struct qcom_nand_host *to_qcom_nand_host(struct nand_chip *chip)
> @@ -756,6 +794,12 @@ static int prepare_bam_async_desc(struct qcom_nand_controller *nandc,
>  
>  	desc->dma_desc = dma_desc;
>  
> +	/* update last data/command descriptor */
> +	if (chan == nandc->cmd_chan)
> +		bam_txn->last_cmd_desc = dma_desc;
> +	else
> +		bam_txn->last_data_desc = dma_desc;
> +

Is there a reason for the "last_" prefix? Why not current_*_desc or
just *_desc? (this is a real question :) ). Correct me if I'm wrong,
but you have a scatter-gather list of DMA transfers that are mapped to
form one DMA descriptor, so there is no "last" descriptor, right?

Otherwise, as I told you above, why not just:

	if (chan == nandc->data_chan)
		bam_txn->wait_second_completion = true;

>  	list_add_tail(&desc->node, &nandc->desc_list);
>  
>  	return 0;
> @@ -1273,10 +1317,19 @@ static int submit_descs(struct qcom_nand_controller *nandc)
>  	cookie = dmaengine_submit(desc->dma_desc);
>  
>  	if (nandc->props->is_bam) {
> +		bam_txn->last_cmd_desc->callback = qpic_bam_dma_done;
> +		bam_txn->last_cmd_desc->callback_param = bam_txn;
> +		if (bam_txn->last_data_desc) {
> +			bam_txn->last_data_desc->callback = qpic_bam_dma_done;
> +			bam_txn->last_data_desc->callback_param = bam_txn;
> +		}

Why don't you do this directly in prepare_bam_async_desc? With:

	dma_desc->callback = ...
	dma_desc->callback_param = ...
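Putting the two remarks together, the end result could look like this
(completely untested sketch; it assumes prepare_bam_async_desc() is
called at most once per channel per transaction, as questioned above,
and "wait_second_completion" is just a name I made up):

	/* in prepare_bam_async_desc(), right after dma_desc is created */
	dma_desc->callback = qpic_bam_dma_done;
	dma_desc->callback_param = bam_txn;
	if (chan == nandc->data_chan)
		bam_txn->wait_second_completion = true;

and the callback shrinks to:

	static void qpic_bam_dma_done(void *data)
	{
		struct bam_transaction *bam_txn = data;

		/* data + command channels: skip the first callback */
		if (bam_txn->wait_second_completion) {
			bam_txn->wait_second_completion = false;
			return;
		}

		complete(&bam_txn->txn_done);
	}

That way first_chan_done and both last_*_desc members could go away
entirely.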
> +
>  		dma_async_issue_pending(nandc->tx_chan);
>  		dma_async_issue_pending(nandc->rx_chan);
> +		dma_async_issue_pending(nandc->cmd_chan);
>  
> -		if (dma_sync_wait(nandc->cmd_chan, cookie) != DMA_COMPLETE)
> +		if (!wait_for_completion_timeout(&bam_txn->txn_done,
> +						 QPIC_NAND_COMPLETION_TIMEOUT))
>  			return -ETIMEDOUT;
>  	} else {
>  		if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)

-- 
Miquel Raynal, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com