Subject: Re: [PATCH 3/7] dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs
From: Li Yang
Date: Mon, 29 Oct 2018 09:45:33 -0500
To: Peng Ma
Cc: Leo Li, Vinod, Rob Herring, Mark Rutland, Shawn Guo, Dan Williams, dmaengine@vger.kernel.org, open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS, lkml, moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE, linuxppc-dev, Wen He, Jiaheng Fan
References: <20181026095240.33668-1-peng.ma@nxp.com> <20181026095240.33668-3-peng.ma@nxp.com>
X-Mailing-List: linux-kernel@vger.kernel.org

> On Oct 29, 2018, at 4:51 AM, Peng Ma wrote:
>
>> -----Original Message-----
>> From: Li Yang
>> Sent: October 27, 2018 4:48
>> To: Peng Ma
>> Cc: Vinod; Rob Herring; Mark Rutland; Shawn Guo; Dan Williams;
>> dmaengine@vger.kernel.org; open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE
BINDINGS; lkml; moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE;
>> linuxppc-dev; Wen He; Jiaheng Fan
>> Subject: Re: [PATCH 3/7] dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs
>>
>>> On Fri, Oct 26, 2018 at 4:57 AM Peng Ma wrote:
>>>
>>> NXP Queue DMA controller (qDMA) on Layerscape SoCs supports channel
>>> virtualization by allowing DMA jobs to be enqueued into different
>>> command queues.
>>>
>>> Note that this module depends on NXP DPAA.
>>
>> It is not clear whether you are saying that the driver can only work on
>> SoCs with a DPAA hardware block, or that the driver actually depends
>> on the DPAA drivers as well. If it is the latter case, you should also
>> express that in the Kconfig you added below.
>>
> [Peng Ma] Ok, I will express it in the Kconfig.
>>>
>>> Signed-off-by: Wen He
>>> Signed-off-by: Jiaheng Fan
>>> Signed-off-by: Peng Ma
>>> ---
>>> change in v10:
>>> - no
>>>
>>>  drivers/dma/Kconfig    |   13 +
>>>  drivers/dma/Makefile   |    1 +
>>>  drivers/dma/fsl-qdma.c | 1257 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>  3 files changed, 1271 insertions(+), 0 deletions(-)
>>>  create mode 100644 drivers/dma/fsl-qdma.c
>>>
>>> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
>>> index dacf3f4..50e19d7 100644
>>> --- a/drivers/dma/Kconfig
>>> +++ b/drivers/dma/Kconfig
>>> @@ -218,6 +218,19 @@ config FSL_EDMA
>>>           multiplexing capability for DMA request sources(slot).
>>>           This module can be found on Freescale Vybrid and LS-1 SoCs.
>>>
>>> +config FSL_QDMA
>>> +        tristate "NXP Layerscape qDMA engine support"
>>> +        depends on ARM || ARM64
>>> +        select DMA_ENGINE
>>> +        select DMA_VIRTUAL_CHANNELS
>>> +        select DMA_ENGINE_RAID
>>> +        select ASYNC_TX_ENABLE_CHANNEL_SWITCH
>>> +        help
>>> +          Support the NXP Layerscape qDMA engine with command queue and legacy mode.
>>> +          Channel virtualization is supported through enqueuing of DMA
>>> +          jobs to, or dequeuing DMA jobs from, different work queues.
>>> +          This module can be found on NXP Layerscape SoCs.
>>> +
>>>  config FSL_RAID
>>>         tristate "Freescale RAID engine Support"
>>>         depends on FSL_SOC && !ASYNC_TX_ENABLE_CHANNEL_SWITCH
>>> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
>>> index c91702d..2d1b586 100644
>>> --- a/drivers/dma/Makefile
>>> +++ b/drivers/dma/Makefile
>>> @@ -32,6 +32,7 @@ obj-$(CONFIG_DW_DMAC_CORE) += dw/
>>>  obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
>>>  obj-$(CONFIG_FSL_DMA) += fsldma.o
>>>  obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
>>> +obj-$(CONFIG_FSL_QDMA) += fsl-qdma.o
>>>  obj-$(CONFIG_FSL_RAID) += fsl_raid.o
>>>  obj-$(CONFIG_HSU_DMA) += hsu/
>>>  obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
>>> diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
>>> new file mode 100644
>>> index 0000000..404869e
>>> --- /dev/null
>>> +++ b/drivers/dma/fsl-qdma.c
>>> @@ -0,0 +1,1257 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +// Copyright 2018 NXP
>>
>> I'm not sure if this is really the case. The driver has been sent out
>> since at least 2015. We should keep these copyright claims, even the
>> legacy Freescale copyright claims.
>>
> [Peng Ma]
> I am not sure this patch was sent out in 2015, but the git log earliest shows the patch created at Dec 20 2017. So if I changed the "Copyright 2018 NXP" to "Copyright 2017-2018 NXP"

You can find early versions of this patch in previous SDK releases or the upstream versions with a Google search. These existing copyright claims should not be removed in the first place.
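Regarding the Kconfig dependency discussed earlier in the thread, a minimal sketch of how an explicit build-time dependency on the DPAA driver could be expressed; the `FSL_DPAA` symbol used here is an assumption about which driver the qDMA actually requires:

```kconfig
config FSL_QDMA
        tristate "NXP Layerscape qDMA engine support"
        depends on ARM || ARM64
        # Hypothetical: make the DPAA driver dependency explicit,
        # assuming FSL_DPAA is the symbol this driver requires.
        depends on FSL_DPAA
        select DMA_ENGINE
        select DMA_VIRTUAL_CHANNELS
```

A `depends on` (rather than `select`) is the usual choice when the prerequisite is a full driver subsystem with its own dependencies.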
>
> Best regards
> Peng Ma
>>> +
>>> +/*
>>> + * Driver for NXP Layerscape Queue Direct Memory Access Controller
>>> + *
>>> + * Author:
>>> + *  Wen He
>>> + *  Jiaheng Fan
>>> + *
>>> + */
>>> +
>>> +#include <linux/module.h>
>>> +#include <linux/delay.h>
>>> +#include <linux/of_irq.h>
>>> +#include <linux/of_platform.h>
>>> +#include <linux/of_dma.h>
>>> +#include <linux/dma-mapping.h>
>>> +
>>> +#include "virt-dma.h"
>>> +#include "fsldma.h"
>>> +
>>> +/* Register related definition */
>>> +#define FSL_QDMA_DMR                    0x0
>>> +#define FSL_QDMA_DSR                    0x4
>>> +#define FSL_QDMA_DEIER                  0xe00
>>> +#define FSL_QDMA_DEDR                   0xe04
>>> +#define FSL_QDMA_DECFDW0R               0xe10
>>> +#define FSL_QDMA_DECFDW1R               0xe14
>>> +#define FSL_QDMA_DECFDW2R               0xe18
>>> +#define FSL_QDMA_DECFDW3R               0xe1c
>>> +#define FSL_QDMA_DECFQIDR               0xe30
>>> +#define FSL_QDMA_DECBR                  0xe34
>>> +
>>> +#define FSL_QDMA_BCQMR(x)               (0xc0 + 0x100 * (x))
>>> +#define FSL_QDMA_BCQSR(x)               (0xc4 + 0x100 * (x))
>>> +#define FSL_QDMA_BCQEDPA_SADDR(x)       (0xc8 + 0x100 * (x))
>>> +#define FSL_QDMA_BCQDPA_SADDR(x)        (0xcc + 0x100 * (x))
>>> +#define FSL_QDMA_BCQEEPA_SADDR(x)       (0xd0 + 0x100 * (x))
>>> +#define FSL_QDMA_BCQEPA_SADDR(x)        (0xd4 + 0x100 * (x))
>>> +#define FSL_QDMA_BCQIER(x)              (0xe0 + 0x100 * (x))
>>> +#define FSL_QDMA_BCQIDR(x)              (0xe4 + 0x100 * (x))
>>> +
>>> +#define FSL_QDMA_SQDPAR                 0x80c
>>> +#define FSL_QDMA_SQEPAR                 0x814
>>> +#define FSL_QDMA_BSQMR                  0x800
>>> +#define FSL_QDMA_BSQSR                  0x804
>>> +#define FSL_QDMA_BSQICR                 0x828
>>> +#define FSL_QDMA_CQMR                   0xa00
>>> +#define FSL_QDMA_CQDSCR1                0xa08
>>> +#define FSL_QDMA_CQDSCR2                0xa0c
>>> +#define FSL_QDMA_CQIER                  0xa10
>>> +#define FSL_QDMA_CQEDR                  0xa14
>>> +#define FSL_QDMA_SQCCMR                 0xa20
>>> +
>>> +/* Registers for bit and genmask */
>>> +#define FSL_QDMA_CQIDR_SQT              BIT(15)
>>> +#define QDMA_CCDF_FOTMAT                BIT(29)
>>> +#define QDMA_CCDF_SER                   BIT(30)
>>> +#define QDMA_SG_FIN                     BIT(30)
>>> +#define QDMA_SG_LEN_MASK                GENMASK(29, 0)
>>> +#define QDMA_CCDF_MASK                  GENMASK(28, 20)
>>> +
>>> +#define FSL_QDMA_DEDR_CLEAR             GENMASK(31, 0)
>>> +#define FSL_QDMA_BCQIDR_CLEAR           GENMASK(31, 0)
>>> +#define FSL_QDMA_DEIER_CLEAR
GENMASK(31, 0)
>>> +
>>> +#define FSL_QDMA_BCQIER_CQTIE           BIT(15)
>>> +#define FSL_QDMA_BCQIER_CQPEIE          BIT(23)
>>> +#define FSL_QDMA_BSQICR_ICEN            BIT(31)
>>> +
>>> +#define FSL_QDMA_BSQICR_ICST(x)         ((x) << 16)
>>> +#define FSL_QDMA_CQIER_MEIE             BIT(31)
>>> +#define FSL_QDMA_CQIER_TEIE             BIT(0)
>>> +#define FSL_QDMA_SQCCMR_ENTER_WM        BIT(21)
>>> +
>>> +#define FSL_QDMA_BCQMR_EN               BIT(31)
>>> +#define FSL_QDMA_BCQMR_EI               BIT(30)
>>> +#define FSL_QDMA_BCQMR_CD_THLD(x)       ((x) << 20)
>>> +#define FSL_QDMA_BCQMR_CQ_SIZE(x)       ((x) << 16)
>>> +
>>> +#define FSL_QDMA_BCQSR_QF               BIT(16)
>>> +#define FSL_QDMA_BCQSR_XOFF             BIT(0)
>>> +
>>> +#define FSL_QDMA_BSQMR_EN               BIT(31)
>>> +#define FSL_QDMA_BSQMR_DI               BIT(30)
>>> +#define FSL_QDMA_BSQMR_CQ_SIZE(x)       ((x) << 16)
>>> +
>>> +#define FSL_QDMA_BSQSR_QE               BIT(17)
>>> +
>>> +#define FSL_QDMA_DMR_DQD                BIT(30)
>>> +#define FSL_QDMA_DSR_DB                 BIT(31)
>>> +
>>> +/* Size related definition */
>>> +#define FSL_QDMA_QUEUE_MAX              8
>>> +#define FSL_QDMA_COMMAND_BUFFER_SIZE    64
>>> +#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
>>> +#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN 64
>>> +#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX 16384
>>> +#define FSL_QDMA_QUEUE_NUM_MAX          8
>>> +
>>> +/* Field definition for CMD */
>>> +#define FSL_QDMA_CMD_RWTTYPE            0x4
>>> +#define FSL_QDMA_CMD_LWC                0x2
>>> +#define FSL_QDMA_CMD_RWTTYPE_OFFSET     28
>>> +#define FSL_QDMA_CMD_NS_OFFSET          27
>>> +#define FSL_QDMA_CMD_DQOS_OFFSET        24
>>> +#define FSL_QDMA_CMD_WTHROTL_OFFSET     20
>>> +#define FSL_QDMA_CMD_DSEN_OFFSET        19
>>> +#define FSL_QDMA_CMD_LWC_OFFSET         16
>>> +
>>> +/* Field definition for Descriptor offset */
>>> +#define QDMA_CCDF_STATUS                20
>>> +#define QDMA_CCDF_OFFSET                20
>>> +
>>> +/* Field definition for safe loop count */
>>> +#define FSL_QDMA_HALT_COUNT             1500
>>> +#define FSL_QDMA_MAX_SIZE               16385
>>> +#define FSL_QDMA_COMP_TIMEOUT           1000
>>> +#define FSL_COMMAND_QUEUE_OVERFLLOW     10
>>> +
>>> +#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x)          \
>>> +        (((fsl_qdma_engine)->block_offset) * (x))
>>> +
>>> +/**
>>> + * struct fsl_qdma_format - This is the struct describing the compound
>>> + *                          descriptor format used by the qDMA.
>>> + * @status:          Command status and enqueue status notification.
>>> + * @cfg:             Frame offset and frame format.
>>> + * @addr_lo:         Holds the lower 32 bits of the compound descriptor's
>>> + *                   40-bit memory address.
>>> + * @addr_hi:         Same as above, but holds the high 8 bits of the
>>> + *                   40-bit memory address.
>>> + * @__reserved1:     Reserved field.
>>> + * @cfg8b_w1:        Compound descriptor command queue origin produced
>>> + *                   by qDMA and dynamic debug field.
>>> + * @data:            Pointer to the 40-bit memory address; describes DMA
>>> + *                   source and DMA destination information.
>>> + */
>>> +struct fsl_qdma_format {
>>> +        __le32 status;
>>> +        __le32 cfg;
>>> +        union {
>>> +                struct {
>>> +                        __le32 addr_lo;
>>> +                        u8 addr_hi;
>>> +                        u8 __reserved1[2];
>>> +                        u8 cfg8b_w1;
>>> +                } __packed;
>>> +                __le64 data;
>>> +        };
>>> +} __packed;
>>> +
>>> +/* qDMA status notification pre information */
>>> +struct fsl_pre_status {
>>> +        u64 addr;
>>> +        u8 queue;
>>> +};
>>> +
>>> +static DEFINE_PER_CPU(struct fsl_pre_status, pre);
>>> +
>>> +struct fsl_qdma_chan {
>>> +        struct virt_dma_chan vchan;
>>> +        struct virt_dma_desc vdesc;
>>> +        enum dma_status status;
>>> +        struct fsl_qdma_engine *qdma;
>>> +        struct fsl_qdma_queue *queue;
>>> +};
>>> +
>>> +struct fsl_qdma_queue {
>>> +        struct fsl_qdma_format *virt_head;
>>> +        struct fsl_qdma_format *virt_tail;
>>> +        struct list_head comp_used;
>>> +        struct list_head comp_free;
>>> +        struct dma_pool *comp_pool;
>>> +        struct dma_pool *desc_pool;
>>> +        spinlock_t queue_lock;
>>> +        dma_addr_t bus_addr;
>>> +        u32 n_cq;
>>> +        u32 id;
>>> +        struct fsl_qdma_format *cq;
>>> +        void __iomem *block_base;
>>> +};
>>> +
>>> +struct fsl_qdma_comp {
>>> +        dma_addr_t bus_addr;
>>> +        dma_addr_t desc_bus_addr;
>>> +        struct fsl_qdma_format *virt_addr;
>>> +        struct fsl_qdma_format *desc_virt_addr;
>>> +        struct
fsl_qdma_chan *qchan;
>>> +        struct virt_dma_desc vdesc;
>>> +        struct list_head list;
>>> +};
>>> +
>>> +struct fsl_qdma_engine {
>>> +        struct dma_device dma_dev;
>>> +        void __iomem *ctrl_base;
>>> +        void __iomem *status_base;
>>> +        void __iomem *block_base;
>>> +        u32 n_chans;
>>> +        u32 n_queues;
>>> +        struct mutex fsl_qdma_mutex;
>>> +        int error_irq;
>>> +        int *queue_irq;
>>> +        u32 feature;
>>> +        struct fsl_qdma_queue *queue;
>>> +        struct fsl_qdma_queue **status;
>>> +        struct fsl_qdma_chan *chans;
>>> +        int block_number;
>>> +        int block_offset;
>>> +        int irq_base;
>>> +        int desc_allocated;
>>> +};
>>> +
>>> +static inline u64
>>> +qdma_ccdf_addr_get64(const struct fsl_qdma_format *ccdf)
>>> +{
>>> +        return le64_to_cpu(ccdf->data) & (U64_MAX >> 24);
>>> +}
>>> +
>>> +static inline void
>>> +qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
>>> +{
>>> +        ccdf->addr_hi = upper_32_bits(addr);
>>> +        ccdf->addr_lo = cpu_to_le32(lower_32_bits(addr));
>>> +}
>>> +
>>> +static inline u8
>>> +qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
>>> +{
>>> +        return ccdf->cfg8b_w1 & U8_MAX;
>>> +}
>>> +
>>> +static inline int
>>> +qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
>>> +{
>>> +        return (le32_to_cpu(ccdf->cfg) & QDMA_CCDF_MASK) >> QDMA_CCDF_OFFSET;
>>> +}
>>> +
>>> +static inline void
>>> +qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
>>> +{
>>> +        ccdf->cfg = cpu_to_le32(QDMA_CCDF_FOTMAT | offset);
>>> +}
>>> +
>>> +static inline int
>>> +qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
>>> +{
>>> +        return (le32_to_cpu(ccdf->status) & QDMA_CCDF_MASK) >> QDMA_CCDF_STATUS;
>>> +}
>>> +
>>> +static inline void
>>> +qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
>>> +{
>>> +        ccdf->status = cpu_to_le32(QDMA_CCDF_SER | status);
>>> +}
>>> +
>>> +static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
>>> +{
>>> +        csgf->cfg = cpu_to_le32(len & QDMA_SG_LEN_MASK);
>>> +}
>>> +
>>> +static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
>>> +{
>>> +        csgf->cfg = cpu_to_le32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
>>> +}
>>> +
>>> +static u32 qdma_readl(struct fsl_qdma_engine *qdma, void __iomem *addr)
>>> +{
>>> +        return FSL_DMA_IN(qdma, addr, 32);
>>> +}
>>> +
>>> +static void qdma_writel(struct fsl_qdma_engine *qdma, u32 val,
>>> +                        void __iomem *addr)
>>> +{
>>> +        FSL_DMA_OUT(qdma, addr, val, 32);
>>> +}
>>> +
>>> +static struct fsl_qdma_chan *to_fsl_qdma_chan(struct dma_chan *chan)
>>> +{
>>> +        return container_of(chan, struct fsl_qdma_chan, vchan.chan);
>>> +}
>>> +
>>> +static struct fsl_qdma_comp *to_fsl_qdma_comp(struct virt_dma_desc *vd)
>>> +{
>>> +        return container_of(vd, struct fsl_qdma_comp, vdesc);
>>> +}
>>> +
>>> +static void fsl_qdma_free_chan_resources(struct dma_chan *chan)
>>> +{
>>> +        struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
>>> +        struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
>>> +        struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
>>> +        struct fsl_qdma_comp *comp_temp, *_comp_temp;
>>> +        unsigned long flags;
>>> +        LIST_HEAD(head);
>>> +
>>> +        spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
>>> +        vchan_get_all_descriptors(&fsl_chan->vchan, &head);
>>> +        spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
>>> +
>>> +        vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
>>> +
>>> +        if (!fsl_queue->comp_pool && !fsl_queue->desc_pool)
>>> +                return;
>>> +
>>> +        list_for_each_entry_safe(comp_temp, _comp_temp,
>>> +                                 &fsl_queue->comp_used, list) {
>>> +                dma_pool_free(fsl_queue->comp_pool,
>>> +                              comp_temp->virt_addr,
>>> +                              comp_temp->bus_addr);
>>> +                dma_pool_free(fsl_queue->desc_pool,
>>> +                              comp_temp->desc_virt_addr,
>>> +                              comp_temp->desc_bus_addr);
>>> +                list_del(&comp_temp->list);
>>> +                kfree(comp_temp);
>>> +        }
>>> +
>>> +        list_for_each_entry_safe(comp_temp, _comp_temp,
>>> +                                 &fsl_queue->comp_free, list) {
>>> +                dma_pool_free(fsl_queue->comp_pool,
comp_temp->virt_addr,
>>> +                              comp_temp->bus_addr);
>>> +                dma_pool_free(fsl_queue->desc_pool,
>>> +                              comp_temp->desc_virt_addr,
>>> +                              comp_temp->desc_bus_addr);
>>> +                list_del(&comp_temp->list);
>>> +                kfree(comp_temp);
>>> +        }
>>> +
>>> +        dma_pool_destroy(fsl_queue->comp_pool);
>>> +        dma_pool_destroy(fsl_queue->desc_pool);
>>> +
>>> +        fsl_qdma->desc_allocated--;
>>> +        fsl_queue->comp_pool = NULL;
>>> +        fsl_queue->desc_pool = NULL;
>>> +}
>>> +
>>> +static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
>>> +                                      dma_addr_t dst, dma_addr_t src, u32 len)
>>> +{
>>> +        struct fsl_qdma_format *sdf, *ddf;
>>> +        struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest;
>>> +
>>> +        ccdf = fsl_comp->virt_addr;
>>> +        csgf_desc = fsl_comp->virt_addr + 1;
>>> +        csgf_src = fsl_comp->virt_addr + 2;
>>> +        csgf_dest = fsl_comp->virt_addr + 3;
>>> +        sdf = fsl_comp->desc_virt_addr;
>>> +        ddf = fsl_comp->desc_virt_addr + 1;
>>> +
>>> +        memset(fsl_comp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
>>> +        memset(fsl_comp->desc_virt_addr, 0, FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
>>> +        /* Head Command Descriptor (Frame Descriptor) */
>>> +        qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
>>> +        qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(ccdf));
>>> +        qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(ccdf));
>>> +        /* Status notification is enqueued to status queue. */
>>> +        /* Compound Command Descriptor (Frame List Table) */
>>> +        qdma_desc_addr_set64(csgf_desc, fsl_comp->desc_bus_addr);
>>> +        /* It must be 32 as Compound S/G Descriptor */
>>> +        qdma_csgf_set_len(csgf_desc, 32);
>>> +        qdma_desc_addr_set64(csgf_src, src);
>>> +        qdma_csgf_set_len(csgf_src, len);
>>> +        qdma_desc_addr_set64(csgf_dest, dst);
>>> +        qdma_csgf_set_len(csgf_dest, len);
>>> +        /* This entry is the last entry.
*/
>>> +        qdma_csgf_set_f(csgf_dest, len);
>>> +        /* Descriptor Buffer */
>>> +        sdf->data =
>>> +                cpu_to_le64(FSL_QDMA_CMD_RWTTYPE <<
>>> +                            FSL_QDMA_CMD_RWTTYPE_OFFSET);
>>> +        ddf->data =
>>> +                cpu_to_le64(FSL_QDMA_CMD_RWTTYPE <<
>>> +                            FSL_QDMA_CMD_RWTTYPE_OFFSET);
>>> +        ddf->data |=
>>> +                cpu_to_le64(FSL_QDMA_CMD_LWC << FSL_QDMA_CMD_LWC_OFFSET);
>>> +}
>>> +
>>> +/*
>>> + * Pre-request full command descriptor for enqueue.
>>> + */
>>> +static int fsl_qdma_pre_request_enqueue_desc(struct fsl_qdma_queue *queue)
>>> +{
>>> +        int i;
>>> +        struct fsl_qdma_comp *comp_temp, *_comp_temp;
>>> +
>>> +        for (i = 0; i < queue->n_cq + FSL_COMMAND_QUEUE_OVERFLLOW; i++) {
>>> +                comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL);
>>> +                if (!comp_temp)
>>> +                        goto err_alloc;
>>> +                comp_temp->virt_addr =
>>> +                        dma_pool_alloc(queue->comp_pool, GFP_KERNEL,
>>> +                                       &comp_temp->bus_addr);
>>> +                if (!comp_temp->virt_addr)
>>> +                        goto err_dma_alloc;
>>> +
>>> +                comp_temp->desc_virt_addr =
>>> +                        dma_pool_alloc(queue->desc_pool, GFP_KERNEL,
>>> +                                       &comp_temp->desc_bus_addr);
>>> +                if (!comp_temp->desc_virt_addr)
>>> +                        goto err_desc_dma_alloc;
>>> +
>>> +                list_add_tail(&comp_temp->list, &queue->comp_free);
>>> +        }
>>> +
>>> +        return 0;
>>> +
>>> +err_desc_dma_alloc:
>>> +        dma_pool_free(queue->comp_pool, comp_temp->virt_addr,
>>> +                      comp_temp->bus_addr);
>>> +
>>> +err_dma_alloc:
>>> +        kfree(comp_temp);
>>> +
>>> +err_alloc:
>>> +        list_for_each_entry_safe(comp_temp, _comp_temp,
>>> +                                 &queue->comp_free, list) {
>>> +                if (comp_temp->virt_addr)
>>> +                        dma_pool_free(queue->comp_pool,
>>> +                                      comp_temp->virt_addr,
>>> +                                      comp_temp->bus_addr);
>>> +                if (comp_temp->desc_virt_addr)
>>> +                        dma_pool_free(queue->desc_pool,
>>> +                                      comp_temp->desc_virt_addr,
>>> +                                      comp_temp->desc_bus_addr);
>>> +
>>> +                list_del(&comp_temp->list);
>>> +                kfree(comp_temp);
>>> +        }
>>> +
>>> +        return -ENOMEM;
>>> +}
>>> +
>>> +/*
>>> + * Request a command descriptor for enqueue.
>>> + */
>>> +static struct fsl_qdma_comp
>>> +*fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
>>> +{
>>> +        unsigned long flags;
>>> +        struct fsl_qdma_comp *comp_temp;
>>> +        int timeout = FSL_QDMA_COMP_TIMEOUT;
>>> +        struct fsl_qdma_queue *queue = fsl_chan->queue;
>>> +
>>> +        while (timeout--) {
>>> +                spin_lock_irqsave(&queue->queue_lock, flags);
>>> +                if (!list_empty(&queue->comp_free)) {
>>> +                        comp_temp = list_first_entry(&queue->comp_free,
>>> +                                                     struct fsl_qdma_comp,
>>> +                                                     list);
>>> +                        list_del(&comp_temp->list);
>>> +
>>> +                        spin_unlock_irqrestore(&queue->queue_lock, flags);
>>> +                        comp_temp->qchan = fsl_chan;
>>> +                        return comp_temp;
>>> +                }
>>> +                spin_unlock_irqrestore(&queue->queue_lock, flags);
>>> +                udelay(1);
>>> +        }
>>> +
>>> +        return NULL;
>>> +}
>>> +
>>> +static struct fsl_qdma_queue
>>> +*fsl_qdma_alloc_queue_resources(struct platform_device *pdev,
>>> +                                struct fsl_qdma_engine *fsl_qdma)
>>> +{
>>> +        int ret, len, i, j;
>>> +        int queue_num, block_number;
>>> +        unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
>>> +        struct fsl_qdma_queue *queue_head, *queue_temp;
>>> +
>>> +        queue_num = fsl_qdma->n_queues;
>>> +        block_number = fsl_qdma->block_number;
>>> +
>>> +        if (queue_num > FSL_QDMA_QUEUE_MAX)
>>> +                queue_num = FSL_QDMA_QUEUE_MAX;
>>> +        len = sizeof(*queue_head) * queue_num * block_number;
>>> +        queue_head = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
>>> +        if (!queue_head)
>>> +                return NULL;
>>> +
>>> +        ret = device_property_read_u32_array(&pdev->dev, "queue-sizes",
>>> +                                             queue_size, queue_num);
>>> +        if (ret) {
>>> +                dev_err(&pdev->dev, "Can't get queue-sizes.\n");
>>> +                return NULL;
>>> +        }
>>> +        for (j = 0; j < block_number; j++) {
>>> +                for (i = 0; i < queue_num; i++) {
>>> +                        if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
>>> +                            queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
>>> +                                dev_err(&pdev->dev,
>>> +                                        "Get wrong queue-sizes.\n");
>>> +                                return NULL;
>>> +                        }
>>> +                        queue_temp
= queue_head + i + (j * queue_num);
>>> +
>>> +                        queue_temp->cq =
>>> +                                dma_alloc_coherent(&pdev->dev,
>>> +                                                   sizeof(struct fsl_qdma_format) *
>>> +                                                   queue_size[i],
>>> +                                                   &queue_temp->bus_addr,
>>> +                                                   GFP_KERNEL);
>>> +                        if (!queue_temp->cq)
>>> +                                return NULL;
>>> +                        queue_temp->block_base = fsl_qdma->block_base +
>>> +                                FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
>>> +                        queue_temp->n_cq = queue_size[i];
>>> +                        queue_temp->id = i;
>>> +                        queue_temp->virt_head = queue_temp->cq;
>>> +                        queue_temp->virt_tail = queue_temp->cq;
>>> +                        /*
>>> +                         * List for queue command buffer
>>> +                         */
>>> +                        INIT_LIST_HEAD(&queue_temp->comp_used);
>>> +                        spin_lock_init(&queue_temp->queue_lock);
>>> +                }
>>> +        }
>>> +        return queue_head;
>>> +}
>>> +
>>> +static struct fsl_qdma_queue
>>> +*fsl_qdma_prep_status_queue(struct platform_device *pdev)
>>> +{
>>> +        int ret;
>>> +        unsigned int status_size;
>>> +        struct fsl_qdma_queue *status_head;
>>> +        struct device_node *np = pdev->dev.of_node;
>>> +
>>> +        ret = of_property_read_u32(np, "status-sizes", &status_size);
>>> +        if (ret) {
>>> +                dev_err(&pdev->dev, "Can't get status-sizes.\n");
>>> +                return NULL;
>>> +        }
>>> +        if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
>>> +            status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
>>> +                dev_err(&pdev->dev, "Get wrong status_size.\n");
>>> +                return NULL;
>>> +        }
>>> +        status_head = devm_kzalloc(&pdev->dev,
>>> +                                   sizeof(*status_head), GFP_KERNEL);
>>> +        if (!status_head)
>>> +                return NULL;
>>> +
>>> +        /*
>>> +         * Buffer for queue command
>>> +         */
>>> +        status_head->cq = dma_alloc_coherent(&pdev->dev,
>>> +                                             sizeof(struct fsl_qdma_format) *
>>> +                                             status_size,
>>> +                                             &status_head->bus_addr,
>>> +                                             GFP_KERNEL);
>>> +        if (!status_head->cq) {
>>> +                devm_kfree(&pdev->dev, status_head);
>>> +                return NULL;
>>> +        }
>>> +        status_head->n_cq = status_size;
>>> +        status_head->virt_head = status_head->cq;
>>> +        status_head->virt_tail = status_head->cq;
>>> +        status_head->comp_pool =
NULL;
>>> +
>>> +        return status_head;
>>> +}
>>> +
>>> +static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
>>> +{
>>> +        u32 reg;
>>> +        int i, j, count = FSL_QDMA_HALT_COUNT;
>>> +        void __iomem *block, *ctrl = fsl_qdma->ctrl_base;
>>> +
>>> +        /* Disable the command queue and wait for idle state. */
>>> +        reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
>>> +        reg |= FSL_QDMA_DMR_DQD;
>>> +        qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
>>> +        for (j = 0; j < fsl_qdma->block_number; j++) {
>>> +                block = fsl_qdma->block_base +
>>> +                        FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
>>> +                for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
>>> +                        qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQMR(i));
>>> +        }
>>> +        while (1) {
>>> +                reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DSR);
>>> +                if (!(reg & FSL_QDMA_DSR_DB))
>>> +                        break;
>>> +                if (count-- < 0)
>>> +                        return -EBUSY;
>>> +                udelay(100);
>>> +        }
>>> +
>>> +        for (j = 0; j < fsl_qdma->block_number; j++) {
>>> +                block = fsl_qdma->block_base +
>>> +                        FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
>>> +
>>> +                /* Disable status queue. */
>>> +                qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BSQMR);
>>> +
>>> +                /*
>>> +                 * Clear the command queue interrupt detect register for
>>> +                 * all queues.
>>> +                 */
>>> +                qdma_writel(fsl_qdma, FSL_QDMA_BCQIDR_CLEAR,
>>> +                            block + FSL_QDMA_BCQIDR(0));
>>> +        }
>>> +
>>> +        return 0;
>>> +}
>>> +
>>> +static int
>>> +fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
>>> +                                 void *block,
>>> +                                 int id)
>>> +{
>>> +        bool duplicate;
>>> +        u32 reg, i, count;
>>> +        struct fsl_qdma_queue *temp_queue;
>>> +        struct fsl_qdma_format *status_addr;
>>> +        struct fsl_qdma_comp *fsl_comp = NULL;
>>> +        struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
>>> +        struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
>>> +
>>> +        count = FSL_QDMA_MAX_SIZE;
>>> +
>>> +        while (count--) {
>>> +                duplicate = 0;
>>> +                reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
>>> +                if (reg & FSL_QDMA_BSQSR_QE)
>>> +                        return 0;
>>> +
>>> +                status_addr = fsl_status->virt_head;
>>> +
>>> +                if (qdma_ccdf_get_queue(status_addr) ==
>>> +                    __this_cpu_read(pre.queue) &&
>>> +                    qdma_ccdf_addr_get64(status_addr) ==
>>> +                    __this_cpu_read(pre.addr))
>>> +                        duplicate = 1;
>>> +                i = qdma_ccdf_get_queue(status_addr) +
>>> +                    id * fsl_qdma->n_queues;
>>> +                __this_cpu_write(pre.addr, qdma_ccdf_addr_get64(status_addr));
>>> +                __this_cpu_write(pre.queue, qdma_ccdf_get_queue(status_addr));
>>> +                temp_queue = fsl_queue + i;
>>> +
>>> +                spin_lock(&temp_queue->queue_lock);
>>> +                if (list_empty(&temp_queue->comp_used)) {
>>> +                        if (!duplicate) {
>>> +                                spin_unlock(&temp_queue->queue_lock);
>>> +                                return -EAGAIN;
>>> +                        }
>>> +                } else {
>>> +                        fsl_comp = list_first_entry(&temp_queue->comp_used,
>>> +                                                    struct fsl_qdma_comp, list);
>>> +                        if (fsl_comp->bus_addr + 16 !=
>>> +                            __this_cpu_read(pre.addr)) {
>>> +                                if (!duplicate) {
>>> +                                        spin_unlock(&temp_queue->queue_lock);
>>> +                                        return -EAGAIN;
>>> +                                }
>>> +                        }
>>> +                }
>>> +
>>> +                if (duplicate) {
>>> +                        reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
>>> +                        reg |= FSL_QDMA_BSQMR_DI;
>>> +                        qdma_desc_addr_set64(status_addr, 0x0);
>>> +                        fsl_status->virt_head++;
>>> +                        if
(fsl_status->virt_head == fsl_status->cq
>>> +                            + fsl_status->n_cq)
>>> +                                fsl_status->virt_head = fsl_status->cq;
>>> +                        qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
>>> +                        spin_unlock(&temp_queue->queue_lock);
>>> +                        continue;
>>> +                }
>>> +                list_del(&fsl_comp->list);
>>> +
>>> +                reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
>>> +                reg |= FSL_QDMA_BSQMR_DI;
>>> +                qdma_desc_addr_set64(status_addr, 0x0);
>>> +                fsl_status->virt_head++;
>>> +                if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
>>> +                        fsl_status->virt_head = fsl_status->cq;
>>> +                qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
>>> +                spin_unlock(&temp_queue->queue_lock);
>>> +
>>> +                spin_lock(&fsl_comp->qchan->vchan.lock);
>>> +                vchan_cookie_complete(&fsl_comp->vdesc);
>>> +                fsl_comp->qchan->status = DMA_COMPLETE;
>>> +                spin_unlock(&fsl_comp->qchan->vchan.lock);
>>> +        }
>>> +
>>> +        return 0;
>>> +}
>>> +
>>> +static irqreturn_t fsl_qdma_error_handler(int irq, void *dev_id)
>>> +{
>>> +        unsigned int intr;
>>> +        struct fsl_qdma_engine *fsl_qdma = dev_id;
>>> +        void __iomem *status = fsl_qdma->status_base;
>>> +
>>> +        intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);
>>> +
>>> +        if (intr) {
>>> +                dev_err(fsl_qdma->dma_dev.dev, "DMA transaction error!\n");
>>> +                return IRQ_NONE;
>>> +        }
>>> +
>>> +        qdma_writel(fsl_qdma, FSL_QDMA_DEDR_CLEAR, status + FSL_QDMA_DEDR);
>>> +        return IRQ_HANDLED;
>>> +}
>>> +
>>> +static irqreturn_t fsl_qdma_queue_handler(int irq, void *dev_id)
>>> +{
>>> +        int id;
>>> +        unsigned int intr, reg;
>>> +        struct fsl_qdma_engine *fsl_qdma = dev_id;
>>> +        void __iomem *block, *ctrl = fsl_qdma->ctrl_base;
>>> +
>>> +        id = irq - fsl_qdma->irq_base;
>>> +        if (id < 0 || id > fsl_qdma->block_number) {
>>> +                dev_err(fsl_qdma->dma_dev.dev,
>>> +                        "irq %d is wrong, irq_base is %d\n",
>>> +                        irq, fsl_qdma->irq_base);
>>> +        }
>>> +
>>> +        block = fsl_qdma->block_base +
>>> +                FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
>>>
+ >>> + intr =3D qdma_readl(fsl_qdma, block + FSL_QDMA_BCQIDR(0)); >>> + >>> + if ((intr & FSL_QDMA_CQIDR_SQT) !=3D 0) >>> + intr =3D fsl_qdma_queue_transfer_complete(fsl_qdma, >> block, id); >>> + >>> + if (intr !=3D 0) { >>> + reg =3D qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR); >>> + reg |=3D FSL_QDMA_DMR_DQD; >>> + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR); >>> + qdma_writel(fsl_qdma, 0, block + >> FSL_QDMA_BCQIER(0)); >>> + dev_err(fsl_qdma->dma_dev.dev, "QDMA: status >> err!\n"); >>> + } >>> + >>> + /* Clear all detected events and interrupts. */ >>> + qdma_writel(fsl_qdma, FSL_QDMA_BCQIDR_CLEAR, >>> + block + FSL_QDMA_BCQIDR(0)); >>> + >>> + return IRQ_HANDLED; >>> +} >>> + >>> +static int >>> +fsl_qdma_irq_init(struct platform_device *pdev, >>> + struct fsl_qdma_engine *fsl_qdma) >>> +{ >>> + int i; >>> + int cpu; >>> + int ret; >>> + char irq_name[20]; >>> + >>> + fsl_qdma->error_irq =3D >>> + platform_get_irq_byname(pdev, "qdma-error"); >>> + if (fsl_qdma->error_irq < 0) { >>> + dev_err(&pdev->dev, "Can't get qdma controller irq.\n");= >>> + return fsl_qdma->error_irq; >>> + } >>> + >>> + ret =3D devm_request_irq(&pdev->dev, fsl_qdma->error_irq, >>> + fsl_qdma_error_handler, 0, >>> + "qDMA error", fsl_qdma); >>> + if (ret) { >>> + dev_err(&pdev->dev, "Can't register qDMA controller >> IRQ.\n"); >>> + return ret; >>> + } >>> + >>> + for (i =3D 0; i < fsl_qdma->block_number; i++) { >>> + sprintf(irq_name, "qdma-queue%d", i); >>> + fsl_qdma->queue_irq[i] =3D >>> + platform_get_irq_byname(pdev, >> irq_name); >>> + >>> + if (fsl_qdma->queue_irq[i] < 0) { >>> + dev_err(&pdev->dev, >>> + "Can't get qdma queue %d irq.\n", i); >>> + return fsl_qdma->queue_irq[i]; >>> + } >>> + >>> + ret =3D devm_request_irq(&pdev->dev, >>> + fsl_qdma->queue_irq[i], >>> + fsl_qdma_queue_handler, >>> + 0, >>> + "qDMA queue", >>> + fsl_qdma); >>> + if (ret) { >>> + dev_err(&pdev->dev, >>> + "Can't register qDMA queue >> IRQ.\n"); >>> + return ret; >>> + } >>> + >>> + cpu =3D i % 
num_online_cpus(); >>> + ret =3D irq_set_affinity_hint(fsl_qdma->queue_irq[i], >>> + get_cpu_mask(cpu)); >>> + if (ret) { >>> + dev_err(&pdev->dev, >>> + "Can't set cpu %d affinity to >> IRQ %d.\n", >>> + cpu, >>> + fsl_qdma->queue_irq[i]); >>> + return ret; >>> + } >>> + } >>> + >>> + return 0; >>> +} >>> + >>> +static void fsl_qdma_irq_exit(struct platform_device *pdev, >>> + struct fsl_qdma_engine *fsl_qdma) >>> +{ >>> + int i; >>> + >>> + devm_free_irq(&pdev->dev, fsl_qdma->error_irq, fsl_qdma); >>> + for (i =3D 0; i < fsl_qdma->block_number; i++) >>> + devm_free_irq(&pdev->dev, fsl_qdma->queue_irq[i], >> fsl_qdma); >>> +} >>> + >>> +static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma) >>> +{ >>> + u32 reg; >>> + int i, j, ret; >>> + struct fsl_qdma_queue *temp; >>> + void __iomem *status =3D fsl_qdma->status_base; >>> + void __iomem *block, *ctrl =3D fsl_qdma->ctrl_base; >>> + struct fsl_qdma_queue *fsl_queue =3D fsl_qdma->queue; >>> + >>> + /* Try to halt the qDMA engine first. */ >>> + ret =3D fsl_qdma_halt(fsl_qdma); >>> + if (ret) { >>> + dev_err(fsl_qdma->dma_dev.dev, "DMA halt failed!"); >>> + return ret; >>> + } >>> + >>> + for (i =3D 0; i < fsl_qdma->block_number; i++) { >>> + /* >>> + * Clear the command queue interrupt detect register >> for >>> + * all queues. >>> + */ >>> + >>> + block =3D fsl_qdma->block_base + >>> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, i); >>> + qdma_writel(fsl_qdma, FSL_QDMA_BCQIDR_CLEAR, >>> + block + FSL_QDMA_BCQIDR(0)); >>> + } >>> + >>> + for (j =3D 0; j < fsl_qdma->block_number; j++) { >>> + block =3D fsl_qdma->block_base + >>> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j); >>> + for (i =3D 0; i < fsl_qdma->n_queues; i++) { >>> + temp =3D fsl_queue + i + (j * >> fsl_qdma->n_queues); >>> + /* >>> + * Initialize Command Queue registers to >>> + * point to the first >>> + * command descriptor in memory. 
>>> + * Dequeue Pointer Address Registers >>> + * Enqueue Pointer Address Registers >>> + */ >>> + >>> + qdma_writel(fsl_qdma, temp->bus_addr, >>> + block + >> FSL_QDMA_BCQDPA_SADDR(i)); >>> + qdma_writel(fsl_qdma, temp->bus_addr, >>> + block + >> FSL_QDMA_BCQEPA_SADDR(i)); >>> + >>> + /* Initialize the queue mode. */ >>> + reg =3D FSL_QDMA_BCQMR_EN; >>> + reg |=3D >> FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4); >>> + reg |=3D >> FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6); >>> + qdma_writel(fsl_qdma, reg, block + >> FSL_QDMA_BCQMR(i)); >>> + } >>> + >>> + /* >>> + * Workaround for erratum: ERR010812. >>> + * We must enable XOFF to avoid the enqueue rejection >> occurs. >>> + * Setting SQCCMR ENTER_WM to 0x20. >>> + */ >>> + >>> + qdma_writel(fsl_qdma, >> FSL_QDMA_SQCCMR_ENTER_WM, >>> + block + FSL_QDMA_SQCCMR); >>> + >>> + /* >>> + * Initialize status queue registers to point to the fir= st >>> + * command descriptor in memory. >>> + * Dequeue Pointer Address Registers >>> + * Enqueue Pointer Address Registers >>> + */ >>> + >>> + qdma_writel(fsl_qdma, fsl_qdma->status[j]->bus_addr, >>> + block + FSL_QDMA_SQEPAR); >>> + qdma_writel(fsl_qdma, fsl_qdma->status[j]->bus_addr, >>> + block + FSL_QDMA_SQDPAR); >>> + /* Initialize status queue interrupt. */ >>> + qdma_writel(fsl_qdma, FSL_QDMA_BCQIER_CQTIE, >>> + block + FSL_QDMA_BCQIER(0)); >>> + qdma_writel(fsl_qdma, FSL_QDMA_BSQICR_ICEN | >>> + FSL_QDMA_BSQICR_ICST(5) | >> 0x8000, >>> + block + FSL_QDMA_BSQICR); >>> + qdma_writel(fsl_qdma, FSL_QDMA_CQIER_MEIE | >>> + FSL_QDMA_CQIER_TEIE, >>> + block + FSL_QDMA_CQIER); >>> + >>> + /* Initialize the status queue mode. */ >>> + reg =3D FSL_QDMA_BSQMR_EN; >>> + reg |=3D FSL_QDMA_BSQMR_CQ_SIZE(ilog2 >>> + (fsl_qdma->status[j]->n_cq) - 6); >>> + >>> + qdma_writel(fsl_qdma, reg, block + >> FSL_QDMA_BSQMR); >>> + reg =3D qdma_readl(fsl_qdma, block + >> FSL_QDMA_BSQMR); >>> + } >>> + >>> + /* Initialize controller interrupt register. 
*/ >>> + qdma_writel(fsl_qdma, FSL_QDMA_DEDR_CLEAR, status + >> FSL_QDMA_DEDR); >>> + qdma_writel(fsl_qdma, FSL_QDMA_DEIER_CLEAR, status + >> FSL_QDMA_DEIER); >>> + >>> + reg =3D qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR); >>> + reg &=3D ~FSL_QDMA_DMR_DQD; >>> + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR); >>> + >>> + return 0; >>> +} >>> + >>> +static struct dma_async_tx_descriptor * >>> +fsl_qdma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst, >>> + dma_addr_t src, size_t len, unsigned long flags) >>> +{ >>> + struct fsl_qdma_comp *fsl_comp; >>> + struct fsl_qdma_chan *fsl_chan =3D to_fsl_qdma_chan(chan); >>> + >>> + fsl_comp =3D fsl_qdma_request_enqueue_desc(fsl_chan); >>> + >>> + if (!fsl_comp) >>> + return NULL; >>> + >>> + fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len); >>> + >>> + return vchan_tx_prep(&fsl_chan->vchan, &fsl_comp->vdesc, flags);= >>> +} >>> + >>> +static void fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan) >>> +{ >>> + u32 reg; >>> + struct virt_dma_desc *vdesc; >>> + struct fsl_qdma_comp *fsl_comp; >>> + struct fsl_qdma_queue *fsl_queue =3D fsl_chan->queue; >>> + void __iomem *block =3D fsl_queue->block_base; >>> + >>> + reg =3D qdma_readl(fsl_chan->qdma, block + >> FSL_QDMA_BCQSR(fsl_queue->id)); >>> + if (reg & (FSL_QDMA_BCQSR_QF | FSL_QDMA_BCQSR_XOFF)) >>> + return; >>> + vdesc =3D vchan_next_desc(&fsl_chan->vchan); >>> + if (!vdesc) >>> + return; >>> + list_del(&vdesc->node); >>> + fsl_comp =3D to_fsl_qdma_comp(vdesc); >>> + >>> + memcpy(fsl_queue->virt_head++, >>> + fsl_comp->virt_addr, sizeof(struct fsl_qdma_format)); >>> + if (fsl_queue->virt_head =3D=3D fsl_queue->cq + fsl_queue->n_cq)= >>> + fsl_queue->virt_head =3D fsl_queue->cq; >>> + >>> + list_add_tail(&fsl_comp->list, &fsl_queue->comp_used); >>> + barrier(); >>> + reg =3D qdma_readl(fsl_chan->qdma, block + >> FSL_QDMA_BCQMR(fsl_queue->id)); >>> + reg |=3D FSL_QDMA_BCQMR_EI; >>> + qdma_writel(fsl_chan->qdma, reg, block + >> FSL_QDMA_BCQMR(fsl_queue->id)); >>> + 
fsl_chan->status =3D DMA_IN_PROGRESS; >>> +} >>> + >>> +static void fsl_qdma_free_desc(struct virt_dma_desc *vdesc) >>> +{ >>> + unsigned long flags; >>> + struct fsl_qdma_comp *fsl_comp; >>> + struct fsl_qdma_queue *fsl_queue; >>> + >>> + fsl_comp =3D to_fsl_qdma_comp(vdesc); >>> + fsl_queue =3D fsl_comp->qchan->queue; >>> + >>> + spin_lock_irqsave(&fsl_queue->queue_lock, flags); >>> + list_add_tail(&fsl_comp->list, &fsl_queue->comp_free); >>> + spin_unlock_irqrestore(&fsl_queue->queue_lock, flags); >>> +} >>> + >>> +static void fsl_qdma_issue_pending(struct dma_chan *chan) >>> +{ >>> + unsigned long flags; >>> + struct fsl_qdma_chan *fsl_chan =3D to_fsl_qdma_chan(chan); >>> + struct fsl_qdma_queue *fsl_queue =3D fsl_chan->queue; >>> + >>> + spin_lock_irqsave(&fsl_queue->queue_lock, flags); >>> + spin_lock(&fsl_chan->vchan.lock); >>> + if (vchan_issue_pending(&fsl_chan->vchan)) >>> + fsl_qdma_enqueue_desc(fsl_chan); >>> + spin_unlock(&fsl_chan->vchan.lock); >>> + spin_unlock_irqrestore(&fsl_queue->queue_lock, flags); >>> +} >>> + >>> +static void fsl_qdma_synchronize(struct dma_chan *chan) >>> +{ >>> + struct fsl_qdma_chan *fsl_chan =3D to_fsl_qdma_chan(chan); >>> + >>> + vchan_synchronize(&fsl_chan->vchan); >>> +} >>> + >>> +static int fsl_qdma_terminate_all(struct dma_chan *chan) >>> +{ >>> + LIST_HEAD(head); >>> + unsigned long flags; >>> + struct fsl_qdma_chan *fsl_chan =3D to_fsl_qdma_chan(chan); >>> + >>> + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); >>> + vchan_get_all_descriptors(&fsl_chan->vchan, &head); >>> + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); >>> + vchan_dma_desc_free_list(&fsl_chan->vchan, &head); >>> + return 0; >>> +} >>> + >>> +static int fsl_qdma_alloc_chan_resources(struct dma_chan *chan) >>> +{ >>> + int ret; >>> + struct fsl_qdma_chan *fsl_chan =3D to_fsl_qdma_chan(chan); >>> + struct fsl_qdma_engine *fsl_qdma =3D fsl_chan->qdma; >>> + struct fsl_qdma_queue *fsl_queue =3D fsl_chan->queue; >>> + >>> + if 
(fsl_queue->comp_pool && fsl_queue->desc_pool) >>> + return fsl_qdma->desc_allocated; >>> + >>> + INIT_LIST_HEAD(&fsl_queue->comp_free); >>> + >>> + /* >>> + * The dma pool for queue command buffer >>> + */ >>> + fsl_queue->comp_pool =3D >>> + dma_pool_create("comp_pool", >>> + chan->device->dev, >>> + FSL_QDMA_COMMAND_BUFFER_SIZE, >>> + 64, 0); >>> + if (!fsl_queue->comp_pool) >>> + return -ENOMEM; >>> + >>> + /* >>> + * The dma pool for Descriptor(SD/DD) buffer >>> + */ >>> + fsl_queue->desc_pool =3D >>> + dma_pool_create("desc_pool", >>> + chan->device->dev, >>> + FSL_QDMA_DESCRIPTOR_BUFFER_SIZE, >>> + 32, 0); >>> + if (!fsl_queue->desc_pool) >>> + goto err_desc_pool; >>> + >>> + ret =3D fsl_qdma_pre_request_enqueue_desc(fsl_queue); >>> + if (ret) { >>> + dev_err(chan->device->dev, >>> + "failed to alloc dma buffer for S/G >> descriptor\n"); >>> + goto err_mem; >>> + } >>> + >>> + fsl_qdma->desc_allocated++; >>> + return fsl_qdma->desc_allocated; >>> + >>> +err_mem: >>> + dma_pool_destroy(fsl_queue->desc_pool); >>> +err_desc_pool: >>> + dma_pool_destroy(fsl_queue->comp_pool); >>> + return -ENOMEM; >>> +} >>> + >>> +static int fsl_qdma_probe(struct platform_device *pdev) >>> +{ >>> + int ret, i; >>> + int blk_num, blk_off; >>> + u32 len, chans, queues; >>> + struct resource *res; >>> + struct fsl_qdma_chan *fsl_chan; >>> + struct fsl_qdma_engine *fsl_qdma; >>> + struct device_node *np =3D pdev->dev.of_node; >>> + >>> + ret =3D of_property_read_u32(np, "dma-channels", &chans); >>> + if (ret) { >>> + dev_err(&pdev->dev, "Can't get dma-channels.\n"); >>> + return ret; >>> + } >>> + >>> + ret =3D of_property_read_u32(np, "block-offset", &blk_off); >>> + if (ret) { >>> + dev_err(&pdev->dev, "Can't get block-offset.\n"); >>> + return ret; >>> + } >>> + >>> + ret =3D of_property_read_u32(np, "block-number", &blk_num); >>> + if (ret) { >>> + dev_err(&pdev->dev, "Can't get block-number.\n"); >>> + return ret; >>> + } >>> + >>> + blk_num =3D min_t(int, blk_num, 
num_online_cpus()); >>> + >>> + len =3D sizeof(*fsl_qdma); >>> + fsl_qdma =3D devm_kzalloc(&pdev->dev, len, GFP_KERNEL); >>> + if (!fsl_qdma) >>> + return -ENOMEM; >>> + >>> + len =3D sizeof(*fsl_chan) * chans; >>> + fsl_qdma->chans =3D devm_kzalloc(&pdev->dev, len, GFP_KERNEL); >>> + if (!fsl_qdma->chans) >>> + return -ENOMEM; >>> + >>> + len =3D sizeof(struct fsl_qdma_queue *) * blk_num; >>> + fsl_qdma->status =3D devm_kzalloc(&pdev->dev, len, GFP_KERNEL); >>> + if (!fsl_qdma->status) >>> + return -ENOMEM; >>> + >>> + len =3D sizeof(int) * blk_num; >>> + fsl_qdma->queue_irq =3D devm_kzalloc(&pdev->dev, len, >> GFP_KERNEL); >>> + if (!fsl_qdma->queue_irq) >>> + return -ENOMEM; >>> + >>> + ret =3D of_property_read_u32(np, "fsl,dma-queues", &queues); >>> + if (ret) { >>> + dev_err(&pdev->dev, "Can't get queues.\n"); >>> + return ret; >>> + } >>> + >>> + fsl_qdma->desc_allocated =3D 0; >>> + fsl_qdma->n_chans =3D chans; >>> + fsl_qdma->n_queues =3D queues; >>> + fsl_qdma->block_number =3D blk_num; >>> + fsl_qdma->block_offset =3D blk_off; >>> + >>> + mutex_init(&fsl_qdma->fsl_qdma_mutex); >>> + >>> + for (i =3D 0; i < fsl_qdma->block_number; i++) { >>> + fsl_qdma->status[i] =3D >> fsl_qdma_prep_status_queue(pdev); >>> + if (!fsl_qdma->status[i]) >>> + return -ENOMEM; >>> + } >>> + res =3D platform_get_resource(pdev, IORESOURCE_MEM, 0); >>> + fsl_qdma->ctrl_base =3D devm_ioremap_resource(&pdev->dev, res); >>> + if (IS_ERR(fsl_qdma->ctrl_base)) >>> + return PTR_ERR(fsl_qdma->ctrl_base); >>> + >>> + res =3D platform_get_resource(pdev, IORESOURCE_MEM, 1); >>> + fsl_qdma->status_base =3D devm_ioremap_resource(&pdev->dev, >> res); >>> + if (IS_ERR(fsl_qdma->status_base)) >>> + return PTR_ERR(fsl_qdma->status_base); >>> + >>> + res =3D platform_get_resource(pdev, IORESOURCE_MEM, 2); >>> + fsl_qdma->block_base =3D devm_ioremap_resource(&pdev->dev, >> res); >>> + if (IS_ERR(fsl_qdma->block_base)) >>> + return PTR_ERR(fsl_qdma->block_base); >>> + fsl_qdma->queue =3D 
fsl_qdma_alloc_queue_resources(pdev, >> fsl_qdma); >>> + if (!fsl_qdma->queue) >>> + return -ENOMEM; >>> + >>> + ret =3D fsl_qdma_irq_init(pdev, fsl_qdma); >>> + if (ret) >>> + return ret; >>> + >>> + fsl_qdma->irq_base =3D platform_get_irq_byname(pdev, >> "qdma-queue0"); >>> + fsl_qdma->feature =3D of_property_read_bool(np, "big-endian"); >>> + INIT_LIST_HEAD(&fsl_qdma->dma_dev.channels); >>> + >>> + for (i =3D 0; i < fsl_qdma->n_chans; i++) { >>> + struct fsl_qdma_chan *fsl_chan =3D &fsl_qdma->chans[i]; >>> + >>> + fsl_chan->qdma =3D fsl_qdma; >>> + fsl_chan->queue =3D fsl_qdma->queue + i % >> (fsl_qdma->n_queues * >>> + >> fsl_qdma->block_number); >>> + fsl_chan->vchan.desc_free =3D fsl_qdma_free_desc; >>> + vchan_init(&fsl_chan->vchan, &fsl_qdma->dma_dev); >>> + } >>> + >>> + dma_cap_set(DMA_MEMCPY, fsl_qdma->dma_dev.cap_mask); >>> + >>> + fsl_qdma->dma_dev.dev =3D &pdev->dev; >>> + fsl_qdma->dma_dev.device_free_chan_resources =3D >>> + fsl_qdma_free_chan_resources; >>> + fsl_qdma->dma_dev.device_alloc_chan_resources =3D >>> + fsl_qdma_alloc_chan_resources; >>> + fsl_qdma->dma_dev.device_tx_status =3D dma_cookie_status; >>> + fsl_qdma->dma_dev.device_prep_dma_memcpy =3D >> fsl_qdma_prep_memcpy; >>> + fsl_qdma->dma_dev.device_issue_pending =3D >> fsl_qdma_issue_pending; >>> + fsl_qdma->dma_dev.device_synchronize =3D fsl_qdma_synchronize; >>> + fsl_qdma->dma_dev.device_terminate_all =3D >> fsl_qdma_terminate_all; >>> + >>> + dma_set_mask(&pdev->dev, DMA_BIT_MASK(40)); >>> + >>> + platform_set_drvdata(pdev, fsl_qdma); >>> + >>> + ret =3D dma_async_device_register(&fsl_qdma->dma_dev); >>> + if (ret) { >>> + dev_err(&pdev->dev, >>> + "Can't register NXP Layerscape qDMA >> engine.\n"); >>> + return ret; >>> + } >>> + >>> + ret =3D fsl_qdma_reg_init(fsl_qdma); >>> + if (ret) { >>> + dev_err(&pdev->dev, "Can't Initialize the qDMA >> engine.\n"); >>> + return ret; >>> + } >>> + >>> + return 0; >>> +} >>> + >>> +static void fsl_qdma_cleanup_vchan(struct dma_device 
*dmadev) >>> +{ >>> + struct fsl_qdma_chan *chan, *_chan; >>> + >>> + list_for_each_entry_safe(chan, _chan, >>> + &dmadev->channels, >> vchan.chan.device_node) { >>> + list_del(&chan->vchan.chan.device_node); >>> + tasklet_kill(&chan->vchan.task); >>> + } >>> +} >>> + >>> +static int fsl_qdma_remove(struct platform_device *pdev) >>> +{ >>> + int i; >>> + struct fsl_qdma_queue *status; >>> + struct device_node *np =3D pdev->dev.of_node; >>> + struct fsl_qdma_engine *fsl_qdma =3D platform_get_drvdata(pdev);= >>> + >>> + fsl_qdma_irq_exit(pdev, fsl_qdma); >>> + fsl_qdma_cleanup_vchan(&fsl_qdma->dma_dev); >>> + of_dma_controller_free(np); >>> + dma_async_device_unregister(&fsl_qdma->dma_dev); >>> + >>> + for (i =3D 0; i < fsl_qdma->block_number; i++) { >>> + status =3D fsl_qdma->status[i]; >>> + dma_free_coherent(&pdev->dev, sizeof(struct >> fsl_qdma_format) * >>> + status->n_cq, status->cq, >> status->bus_addr); >>> + } >>> + return 0; >>> +} >>> + >>> +static const struct of_device_id fsl_qdma_dt_ids[] =3D { >>> + { .compatible =3D "fsl,ls1021a-qdma", }, >>> + { /* sentinel */ } >>> +}; >>> +MODULE_DEVICE_TABLE(of, fsl_qdma_dt_ids); >>> + >>> +static struct platform_driver fsl_qdma_driver =3D { >>> + .driver =3D { >>> + .name =3D "fsl-qdma", >>> + .of_match_table =3D fsl_qdma_dt_ids, >>> + }, >>> + .probe =3D fsl_qdma_probe, >>> + .remove =3D fsl_qdma_remove, >>> +}; >>> + >>> +module_platform_driver(fsl_qdma_driver); >>> + >>> +MODULE_ALIAS("platform:fsl-qdma"); >>> +MODULE_DESCRIPTION("NXP Layerscape qDMA engine driver"); >>> -- >>> 1.7.1 >>>=20