From: Anup Patel
Subject: [PATCH v4 2/4] async_tx: Fix DMA_PREP_FENCE usage in do_async_gen_syndrome()
Date: Tue, 14 Feb 2017 12:21:50 +0530
Message-ID: <1487055112-5185-3-git-send-email-anup.patel@broadcom.com>
In-Reply-To: <1487055112-5185-1-git-send-email-anup.patel@broadcom.com>
References: <1487055112-5185-1-git-send-email-anup.patel@broadcom.com>
To: Vinod Koul, Rob Herring, Mark Rutland, Herbert Xu, "David S . Miller", Jassi Brar
Cc: Dan Williams, Ray Jui, Scott Branden, Jon Mason, Rob Rice,
    bcm-kernel-feedback-list@broadcom.com, dmaengine@vger.kernel.org,
    devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-raid@vger.kernel.org, Anup Patel

The DMA_PREP_FENCE flag is to be used when preparing a Tx descriptor
whose output will be consumed by the next/dependent Tx descriptor.

DMA_PREP_FENCE will not be set correctly in do_async_gen_syndrome()
when calling dma->device_prep_dma_pq() under the following conditions:
1. ASYNC_TX_FENCE is not set in submit->flags
2. DMA_PREP_FENCE is not set in dma_flags
3. src_cnt (= (disks - 2)) is greater than dma_maxpq(dma, dma_flags)

This patch fixes DMA_PREP_FENCE usage in do_async_gen_syndrome(),
taking inspiration from the do_async_xor() implementation.

Signed-off-by: Anup Patel
Reviewed-by: Ray Jui
Reviewed-by: Scott Branden
---
 crypto/async_tx/async_pq.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/crypto/async_tx/async_pq.c b/crypto/async_tx/async_pq.c
index f83de99..56bd612 100644
--- a/crypto/async_tx/async_pq.c
+++ b/crypto/async_tx/async_pq.c
@@ -62,9 +62,6 @@ do_async_gen_syndrome(struct dma_chan *chan,
 	dma_addr_t dma_dest[2];
 	int src_off = 0;
 
-	if (submit->flags & ASYNC_TX_FENCE)
-		dma_flags |= DMA_PREP_FENCE;
-
 	while (src_cnt > 0) {
 		submit->flags = flags_orig;
 		pq_src_cnt = min(src_cnt, dma_maxpq(dma, dma_flags));
@@ -83,6 +80,8 @@ do_async_gen_syndrome(struct dma_chan *chan,
 			if (cb_fn_orig)
 				dma_flags |= DMA_PREP_INTERRUPT;
 		}
+		if (submit->flags & ASYNC_TX_FENCE)
+			dma_flags |= DMA_PREP_FENCE;
 
 		/* Drivers force forward progress in case they can not provide
 		 * a descriptor
-- 
2.7.4
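
To see why the fence flag has to be derived inside the per-segment loop,
here is a minimal user-space sketch of the control flow. It is not the
kernel code itself: the flag values, the MOCK_MAXPQ limit and the
gen_syndrome() helper below are simplified stand-ins for async_pq.c,
invented purely for illustration. When a caller that did not ask for a
fence submits more sources than one descriptor can carry, every
intermediate segment still needs DMA_PREP_FENCE so that the next segment
sees its output; only the final segment may go unfenced.

/*
 * Standalone illustration, not kernel code: flag values, the MOCK_MAXPQ
 * limit and the gen_syndrome() helper are simplified stand-ins for
 * async_pq.c, used only to show the control flow of the fix.
 */
#include <stdio.h>

#define ASYNC_TX_FENCE  (1U << 0)   /* stand-in for the async_tx flag */
#define DMA_PREP_FENCE  (1U << 1)   /* stand-in for the dmaengine flag */
#define MOCK_MAXPQ      8           /* pretend dma_maxpq() limit */

static void gen_syndrome(int src_cnt, unsigned int submit_flags)
{
        unsigned int flags_orig = submit_flags;

        while (src_cnt > 0) {
                unsigned int dma_flags = 0;
                int seg = src_cnt < MOCK_MAXPQ ? src_cnt : MOCK_MAXPQ;

                submit_flags = flags_orig;
                if (src_cnt > seg)
                        /* intermediate segment: the next segment depends
                         * on this one's output, so force a fence */
                        submit_flags |= ASYNC_TX_FENCE;

                /* the fix: derive DMA_PREP_FENCE after submit_flags may
                 * have been updated for this segment, i.e. inside the loop */
                if (submit_flags & ASYNC_TX_FENCE)
                        dma_flags |= DMA_PREP_FENCE;

                printf("segment of %d srcs: DMA_PREP_FENCE %s\n",
                       seg, (dma_flags & DMA_PREP_FENCE) ? "set" : "clear");

                src_cnt -= seg;
        }
}

int main(void)
{
        /* 20 sources, caller did not ask for a fence: the two intermediate
         * segments still need DMA_PREP_FENCE, only the last one does not */
        gen_syndrome(20, 0);
        return 0;
}

Running the sketch prints "set" for the two intermediate segments and
"clear" for the final one, which matches the behaviour the patch restores;
with the fence check placed before the loop, all three segments would
print "clear".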