Date: Fri, 29 Jun 2018 12:55:52 +0530
From: Vinod
To: Andrea Merello
Cc: dan.j.williams@intel.com, michal.simek@xilinx.com, appana.durga.rao@xilinx.com, dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Radhey Shyam Pandey
Subject: Re: [PATCH v3 1/5] dmaengine: xilinx_dma: in axidma slave_sg and dma_cylic mode align split descriptors
Message-ID: <20180629072552.GY22377@vkoul-mobl>
References: <20180625092724.22164-1-andrea.merello@gmail.com>
In-Reply-To: <20180625092724.22164-1-andrea.merello@gmail.com>
User-Agent: Mutt/1.9.2 (2017-12-15)

On 25-06-18, 11:27, Andrea Merello wrote:
> Whenever a single or cyclic transaction is prepared, the driver
> may split it over several SG descriptors in order to deal with
> the HW maximum transfer length.
>
> This could result in DMA operations starting from a misaligned
> address, which is fatal for the HW if DRE is not enabled.
>
> This patch adjusts the transfer size to make sure all operations
> start from an aligned address.
>
> Cc: Radhey Shyam Pandey
> Signed-off-by: Andrea Merello
> Reviewed-by: Radhey Shyam Pandey
> ---
> Changes in v2:
> - don't introduce copy_mask field, rather rely on already-existent
>   copy_align field.
>   Suggested by Radhey Shyam Pandey
> - reword title
> Changes in v3:
> - fix bug introduced in v2: wrong copy size when DRE is enabled;
>   use implementation suggested by Radhey Shyam Pandey
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 27b523530c4a..113d9bf1b6a1 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1793,6 +1793,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
>  			 */
>  			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
>  				     XILINX_DMA_MAX_TRANS_LEN);
> +
> +			if ((copy + sg_used < sg_dma_len(sg)) &&
> +			    chan->xdev->common.copy_align) {
> +				/*
> +				 * If this is not the last descriptor, make sure
> +				 * the next one will be properly aligned
> +				 */
> +				copy = rounddown(copy,
> +						 (1 << chan->xdev->common.copy_align));
> +			}
>  			hw = &segment->hw;
>
>  			/* Fill in the descriptor */
> @@ -1898,6 +1908,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_dma_cyclic(
>  			 */
>  			copy = min_t(size_t, period_len - sg_used,
>  				     XILINX_DMA_MAX_TRANS_LEN);
> +
> +			if ((copy + sg_used < period_len) &&
> +			    chan->xdev->common.copy_align) {
> +				/*
> +				 * If this is not the last descriptor, make sure
> +				 * the next one will be properly aligned
> +				 */
> +				copy = rounddown(copy,
> +						 (1 << chan->xdev->common.copy_align));
> +			}

Same code pasted twice; can we have a routine for this? Perhaps more
code can be made common too.

-- 
~Vinod
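For reference, one possible shape for such a helper is sketched below. This is only an illustration of the factoring being asked for, not the driver's actual code: the function name, the open-coded equivalents of the kernel's min_t()/rounddown() macros, and the XILINX_DMA_MAX_TRANS_LEN value are assumptions made so the sketch compiles standalone.

```c
#include <stddef.h>

/* Assumed value for illustration; the driver derives its limit differently. */
#define XILINX_DMA_MAX_TRANS_LEN 0x7FFFFF

/*
 * Hypothetical helper factoring out the logic duplicated between
 * xilinx_dma_prep_slave_sg() and xilinx_dma_prep_dma_cyclic():
 * clamp the remaining bytes to the HW maximum transfer length, then,
 * if this is not the last chunk, round the size down so the next
 * chunk starts on an address aligned to 2^copy_align bytes.
 */
static size_t xilinx_dma_calc_copysize(size_t total_len, size_t done,
				       int copy_align)
{
	size_t copy = total_len - done;

	/* Open-coded min_t(size_t, ...): clamp to the HW limit */
	if (copy > XILINX_DMA_MAX_TRANS_LEN)
		copy = XILINX_DMA_MAX_TRANS_LEN;

	/*
	 * Open-coded rounddown(): only applied when another chunk
	 * follows and the channel has an alignment requirement
	 * (copy_align == 0 when DRE handles realignment in HW).
	 */
	if ((copy + done < total_len) && copy_align)
		copy -= copy % ((size_t)1 << copy_align);

	return copy;
}
```

Both prep routines could then collapse their duplicated blocks into a single call, e.g. `copy = xilinx_dma_calc_copysize(sg_dma_len(sg), sg_used, chan->xdev->common.copy_align);` in the slave_sg loop and the analogous call with `period_len` in the cyclic loop.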