2019-03-07 01:47:35

by Long Cheng

Subject: [PATCH v11 0/4] add uart DMA function

In MediaTek SoCs, the UART can support DMA transfers.
Based on the DMA engine framework, we add DMA code to support the UART, and put the code under drivers/dma/mediatek.

This series contains the devicetree binding document, a Kconfig option to control
whether the function is enabled, device tree changes including the interrupt and
DMA device nodes, and the UART DMA driver code.
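
For context, a client such as the 8250 serial driver is expected to drive
these channels through the standard dmaengine slave API. A minimal sketch of
the RX setup under that assumption (the helper name and its parameters are
illustrative, not taken from this series):

#include <linux/dmaengine.h>
#include <linux/err.h>

/* Hypothetical helper: arm one RX transfer over the whole ring buffer. */
static int example_start_rx(struct device *dev, dma_addr_t rx_dma_addr,
                            size_t rx_size, dma_async_tx_callback rx_done,
                            void *param)
{
        struct dma_slave_config cfg = {
                .direction = DMA_DEV_TO_MEM,
                .src_addr = rx_dma_addr,
                .src_port_window_size = rx_size,  /* ring length */
        };
        struct dma_async_tx_descriptor *desc;
        struct dma_chan *chan;

        chan = dma_request_chan(dev, "rx");       /* matches dma-names in DT */
        if (IS_ERR(chan))
                return PTR_ERR(chan);

        dmaengine_slave_config(chan, &cfg);
        desc = dmaengine_prep_slave_single(chan, rx_dma_addr, rx_size,
                                           DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
        if (!desc) {
                dma_release_channel(chan);
                return -ENOMEM;
        }

        desc->callback = rx_done;   /* runs once data has landed in the ring */
        desc->callback_param = param;
        dmaengine_submit(desc);
        dma_async_issue_pending(chan);
        return 0;
}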

Changes compared to v10:
-modify DMA tx status function
-modify 8250_mtk for DMA rx
-add notes to the binding document
Changes compared to v9:
-rename dt-bindings file
-remove direction from device_config
-simplify code
Changes compared to v8:
-revise missing items
Changes compared to v7:
-modify APDMA uart tx
Changes compared to v6:
-correct spelling
Changes compared to v5:
-move IRQ request to channel allocation
-remove tasklet
Changes compared to v4:
-modify Kconfig dependencies
Changes compared to v3:
-fix CONFIG_PM build failure
Changes compared to v2:
-remove unimportant parameters
-use virtual channel APIs instead of the cookie
-use of_dma_xlate_by_chan_id
Changes compared to v1:
-main revised file: 8250_mtk_dma.c
--rename parameters to standard names
--remove atomic operations

Long Cheng (4):
dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
arm: dts: mt2712: add uart APDMA to device tree
dt-bindings: dma: uart: rename binding
serial: 8250-mtk: modify uart DMA rx

.../devicetree/bindings/dma/8250_mtk_dma.txt | 33 -
.../devicetree/bindings/dma/mtk-uart-apdma.txt | 55 ++
arch/arm64/boot/dts/mediatek/mt2712e.dtsi | 51 ++
drivers/dma/mediatek/Kconfig | 11 +
drivers/dma/mediatek/Makefile | 1 +
drivers/dma/mediatek/mtk-uart-apdma.c | 660 ++++++++++++++++++++
drivers/tty/serial/8250/8250_mtk.c | 53 +-
7 files changed, 801 insertions(+), 63 deletions(-)
delete mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
create mode 100644 Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c

--
1.7.9.5



2019-03-07 01:46:31

by Long Cheng

Subject: [PATCH v11 3/4] dt-bindings: dma: uart: rename binding

Rename the binding document so that its filename matches the driver,
mtk-uart-apdma.c; "mtk-uart-apdma.txt" is the better name.
Also add some new properties.

Signed-off-by: Long Cheng <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
---
.../devicetree/bindings/dma/8250_mtk_dma.txt | 33 ------------
.../devicetree/bindings/dma/mtk-uart-apdma.txt | 55 ++++++++++++++++++++
2 files changed, 55 insertions(+), 33 deletions(-)
delete mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
create mode 100644 Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt

diff --git a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
deleted file mode 100644
index 3fe0961..0000000
--- a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
+++ /dev/null
@@ -1,33 +0,0 @@
-* Mediatek UART APDMA Controller
-
-Required properties:
-- compatible should contain:
- * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
- * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
-
-- reg: The base address of the APDMA register bank.
-
-- interrupts: A single interrupt specifier.
-
-- clocks : Must contain an entry for each entry in clock-names.
- See ../clocks/clock-bindings.txt for details.
-- clock-names: The APDMA clock for register accesses
-
-Examples:
-
- apdma: dma-controller@11000380 {
- compatible = "mediatek,mt2712-uart-dma";
- reg = <0 0x11000380 0 0x400>;
- interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 64 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 65 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 66 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 67 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 68 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 69 IRQ_TYPE_LEVEL_LOW>,
- <GIC_SPI 70 IRQ_TYPE_LEVEL_LOW>;
- clocks = <&pericfg CLK_PERI_AP_DMA>;
- clock-names = "apdma";
- #dma-cells = <1>;
- };
-
diff --git a/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt b/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
new file mode 100644
index 0000000..e0424b3
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
@@ -0,0 +1,55 @@
+* Mediatek UART APDMA Controller
+
+Required properties:
+- compatible should contain:
+ * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
+ * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
+
+- reg: The base address of the APDMA register bank.
+
+- interrupts: A single interrupt specifier.
+ One interrupt per dma-requests, or 8 if no dma-requests property is present
+
+- dma-requests: The number of DMA channels
+
+- clocks : Must contain an entry for each entry in clock-names.
+ See ../clocks/clock-bindings.txt for details.
+- clock-names: The APDMA clock for register accesses
+
+- mediatek,dma-33bits: Present if the DMA requires support
+
+Examples:
+
+ apdma: dma-controller@11000400 {
+ compatible = "mediatek,mt2712-uart-dma";
+ reg = <0 0x11000400 0 0x80>,
+ <0 0x11000480 0 0x80>,
+ <0 0x11000500 0 0x80>,
+ <0 0x11000580 0 0x80>,
+ <0 0x11000600 0 0x80>,
+ <0 0x11000680 0 0x80>,
+ <0 0x11000700 0 0x80>,
+ <0 0x11000780 0 0x80>,
+ <0 0x11000800 0 0x80>,
+ <0 0x11000880 0 0x80>,
+ <0 0x11000900 0 0x80>,
+ <0 0x11000980 0 0x80>;
+ interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 105 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 106 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 107 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 108 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 109 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 110 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 111 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 112 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 114 IRQ_TYPE_LEVEL_LOW>;
+ dma-requests = <12>;
+ clocks = <&pericfg CLK_PERI_AP_DMA>;
+ clock-names = "apdma";
+ mediatek,dma-33bits;
+ #dma-cells = <1>;
+ };
+
--
1.7.9.5


2019-03-07 01:46:41

by Long Cheng

Subject: [PATCH v11 1/4] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support

In the DMA engine framework, add 8250 UART DMA support for the MediaTek UART.
If the MediaTek UART is enabled (SERIAL_8250_MT6577) and you want to improve
its performance, you can enable this function.

Signed-off-by: Long Cheng <[email protected]>
---
drivers/dma/mediatek/Kconfig | 11 +
drivers/dma/mediatek/Makefile | 1 +
drivers/dma/mediatek/mtk-uart-apdma.c | 660 +++++++++++++++++++++++++++++++++
3 files changed, 672 insertions(+)
create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c

diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
index 680fc05..ac49eb6 100644
--- a/drivers/dma/mediatek/Kconfig
+++ b/drivers/dma/mediatek/Kconfig
@@ -24,3 +24,14 @@ config MTK_CQDMA

This controller provides the channels which is dedicated to
memory-to-memory transfer to offload from CPU.
+
+config MTK_UART_APDMA
+ tristate "MediaTek SoCs APDMA support for UART"
+ depends on OF && SERIAL_8250_MT6577
+ select DMA_ENGINE
+ select DMA_VIRTUAL_CHANNELS
+ help
+ Support for the UART DMA engine found on MediaTek MTK SoCs.
+ When SERIAL_8250_MT6577 is enabled, and if you want to use DMA,
+ you can enable the config. The DMA engine can only be used
+ with MediaTek SoCs.
diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
index 41bb381..61a6d29 100644
--- a/drivers/dma/mediatek/Makefile
+++ b/drivers/dma/mediatek/Makefile
@@ -1,2 +1,3 @@
+obj-$(CONFIG_MTK_UART_APDMA) += mtk-uart-apdma.o
obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
obj-$(CONFIG_MTK_CQDMA) += mtk-cqdma.o
diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
new file mode 100644
index 0000000..9ed7a49
--- /dev/null
+++ b/drivers/dma/mediatek/mtk-uart-apdma.c
@@ -0,0 +1,660 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * MediaTek Uart APDMA driver.
+ *
+ * Copyright (c) 2018 MediaTek Inc.
+ * Author: Long Cheng <[email protected]>
+ */
+
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "../virt-dma.h"
+
+/* The default number of virtual channel */
+#define MTK_UART_APDMA_NR_VCHANS 8
+
+#define VFF_EN_B BIT(0)
+#define VFF_STOP_B BIT(0)
+#define VFF_FLUSH_B BIT(0)
+#define VFF_4G_SUPPORT_B BIT(0)
+#define VFF_RX_INT_EN0_B BIT(0) /* rx valid size >= vff thre */
+#define VFF_RX_INT_EN1_B BIT(1)
+#define VFF_TX_INT_EN_B BIT(0) /* tx left size >= vff thre */
+#define VFF_WARM_RST_B BIT(0)
+#define VFF_RX_INT_CLR_B (BIT(0) | BIT(1))
+#define VFF_TX_INT_CLR_B 0
+#define VFF_STOP_CLR_B 0
+#define VFF_INT_EN_CLR_B 0
+#define VFF_4G_SUPPORT_CLR_B 0
+
+/* interrupt trigger level for tx */
+#define VFF_TX_THRE(n) ((n) * 7 / 8)
+/* interrupt trigger level for rx */
+#define VFF_RX_THRE(n) ((n) * 3 / 4)
+
+#define VFF_RING_SIZE 0xffffU
+/* invert this bit when wrap ring head again */
+#define VFF_RING_WRAP 0x10000U
+
+#define VFF_INT_FLAG 0x00
+#define VFF_INT_EN 0x04
+#define VFF_EN 0x08
+#define VFF_RST 0x0c
+#define VFF_STOP 0x10
+#define VFF_FLUSH 0x14
+#define VFF_ADDR 0x1c
+#define VFF_LEN 0x24
+#define VFF_THRE 0x28
+#define VFF_WPT 0x2c
+#define VFF_RPT 0x30
+/* TX: the buffer size HW can read. RX: the buffer size SW can read. */
+#define VFF_VALID_SIZE 0x3c
+/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
+#define VFF_LEFT_SIZE 0x40
+#define VFF_DEBUG_STATUS 0x50
+#define VFF_4G_SUPPORT 0x54
+
+struct mtk_uart_apdmadev {
+ struct dma_device ddev;
+ struct clk *clk;
+ bool support_33bits;
+ unsigned int dma_requests;
+ unsigned int *dma_irq;
+};
+
+struct mtk_uart_apdma_desc {
+ struct virt_dma_desc vd;
+
+ unsigned int avail_len;
+};
+
+struct mtk_chan {
+ struct virt_dma_chan vc;
+ struct dma_slave_config cfg;
+ void __iomem *base;
+ struct mtk_uart_apdma_desc *desc;
+
+ enum dma_transfer_direction dir;
+
+ bool requested;
+
+ unsigned int rx_status;
+};
+
+static inline struct mtk_uart_apdmadev *
+to_mtk_uart_apdma_dev(struct dma_device *d)
+{
+ return container_of(d, struct mtk_uart_apdmadev, ddev);
+}
+
+static inline struct mtk_chan *to_mtk_uart_apdma_chan(struct dma_chan *c)
+{
+ return container_of(c, struct mtk_chan, vc.chan);
+}
+
+static inline struct mtk_uart_apdma_desc *to_mtk_uart_apdma_desc
+ (struct dma_async_tx_descriptor *t)
+{
+ return container_of(t, struct mtk_uart_apdma_desc, vd.tx);
+}
+
+static void mtk_uart_apdma_write(struct mtk_chan *c,
+ unsigned int reg, unsigned int val)
+{
+ writel(val, c->base + reg);
+}
+
+static unsigned int mtk_uart_apdma_read(struct mtk_chan *c, unsigned int reg)
+{
+ return readl(c->base + reg);
+}
+
+static void mtk_uart_apdma_desc_free(struct virt_dma_desc *vd)
+{
+ struct dma_chan *chan = vd->tx.chan;
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+
+ kfree(c->desc);
+}
+
+static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
+{
+ unsigned int len, send, left, wpt, d_wpt, tmp;
+ int ret;
+
+ left = mtk_uart_apdma_read(c, VFF_LEFT_SIZE);
+ if (!left) {
+ mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+ return;
+ }
+
+ /* Wait 1sec for flush, can't sleep */
+ ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
+ tmp != VFF_FLUSH_B, 0, 1000000);
+ if (ret)
+ dev_warn(c->vc.chan.device->dev, "tx: fail, debug=0x%x\n",
+ mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
+
+ send = min_t(unsigned int, left, c->desc->avail_len);
+ wpt = mtk_uart_apdma_read(c, VFF_WPT);
+ len = c->cfg.dst_port_window_size;
+
+ d_wpt = wpt + send;
+ if ((d_wpt & VFF_RING_SIZE) >= len) {
+ d_wpt = d_wpt - len;
+ d_wpt = d_wpt ^ VFF_RING_WRAP;
+ }
+ mtk_uart_apdma_write(c, VFF_WPT, d_wpt);
+
+ c->desc->avail_len -= send;
+
+ mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+ if (mtk_uart_apdma_read(c, VFF_FLUSH) == 0U)
+ mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
+}
+
+static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
+{
+ struct mtk_uart_apdma_desc *d = c->desc;
+ unsigned int len, wg, rg;
+ int cnt;
+
+ if ((mtk_uart_apdma_read(c, VFF_VALID_SIZE) == 0U) ||
+ !d || !vchan_next_desc(&c->vc))
+ return;
+
+ len = c->cfg.src_port_window_size;
+ rg = mtk_uart_apdma_read(c, VFF_RPT);
+ wg = mtk_uart_apdma_read(c, VFF_WPT);
+ cnt = (wg & VFF_RING_SIZE) - (rg & VFF_RING_SIZE);
+ /*
+ * The buffer is ring buffer. If wrap bit different,
+ * represents the start of the next cycle for WPT
+ */
+ if ((rg ^ wg) & VFF_RING_WRAP)
+ cnt += len;
+
+ c->rx_status = d->avail_len - cnt;
+ mtk_uart_apdma_write(c, VFF_RPT, wg);
+
+ list_del(&d->vd.node);
+ vchan_cookie_complete(&d->vd);
+}
+
+static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
+{
+ struct dma_chan *chan = (struct dma_chan *)dev_id;
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ struct mtk_uart_apdma_desc *d;
+ unsigned long flags;
+
+ spin_lock_irqsave(&c->vc.lock, flags);
+ if (c->dir == DMA_DEV_TO_MEM) {
+ mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
+ mtk_uart_apdma_start_rx(c);
+ } else if (c->dir == DMA_MEM_TO_DEV) {
+ d = c->desc;
+
+ mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+
+ if (d->avail_len != 0U) {
+ mtk_uart_apdma_start_tx(c);
+ } else {
+ list_del(&d->vd.node);
+ vchan_cookie_complete(&d->vd);
+ }
+ }
+ spin_unlock_irqrestore(&c->vc.lock, flags);
+
+ return IRQ_HANDLED;
+}
+
+static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
+{
+ struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ unsigned int tmp;
+ int ret;
+
+ pm_runtime_get_sync(mtkd->ddev.dev);
+
+ mtk_uart_apdma_write(c, VFF_ADDR, 0);
+ mtk_uart_apdma_write(c, VFF_THRE, 0);
+ mtk_uart_apdma_write(c, VFF_LEN, 0);
+ mtk_uart_apdma_write(c, VFF_RST, VFF_WARM_RST_B);
+
+ ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
+ if (ret) {
+ dev_err(chan->device->dev, "dma reset: fail, timeout\n");
+ return ret;
+ }
+
+ if (!c->requested) {
+ c->requested = true;
+ ret = request_irq(mtkd->dma_irq[chan->chan_id],
+ mtk_uart_apdma_irq_handler, IRQF_TRIGGER_NONE,
+ KBUILD_MODNAME, chan);
+ if (ret < 0) {
+ dev_err(chan->device->dev, "Can't request dma IRQ\n");
+ return -EINVAL;
+ }
+ }
+
+ if (mtkd->support_33bits)
+ mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+
+ return ret;
+}
+
+static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)
+{
+ struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+
+ if (c->requested) {
+ c->requested = false;
+ free_irq(mtkd->dma_irq[chan->chan_id], chan);
+ }
+
+ tasklet_kill(&c->vc.task);
+
+ vchan_free_chan_resources(&c->vc);
+
+ pm_runtime_put_sync(mtkd->ddev.dev);
+}
+
+static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
+ dma_cookie_t cookie,
+ struct dma_tx_state *txstate)
+{
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ enum dma_status ret;
+
+ ret = dma_cookie_status(chan, cookie, txstate);
+
+ dma_set_residue(txstate, c->rx_status);
+
+ return ret;
+}
+
+static void mtk_uart_apdma_config_write(struct dma_chan *chan,
+ struct dma_slave_config *cfg,
+ enum dma_transfer_direction dir)
+{
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ struct mtk_uart_apdmadev *mtkd =
+ to_mtk_uart_apdma_dev(c->vc.chan.device);
+ unsigned int tmp;
+
+ if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
+ return;
+
+ c->dir = dir;
+
+ if (dir == DMA_DEV_TO_MEM) {
+ tmp = cfg->src_port_window_size;
+
+ mtk_uart_apdma_write(c, VFF_ADDR, cfg->src_addr);
+ mtk_uart_apdma_write(c, VFF_LEN, tmp);
+ mtk_uart_apdma_write(c, VFF_THRE, VFF_RX_THRE(tmp));
+ mtk_uart_apdma_write(c, VFF_INT_EN,
+ VFF_RX_INT_EN0_B | VFF_RX_INT_EN1_B);
+ mtk_uart_apdma_write(c, VFF_RPT, 0);
+ mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
+ } else if (dir == DMA_MEM_TO_DEV) {
+ tmp = cfg->dst_port_window_size;
+
+ mtk_uart_apdma_write(c, VFF_ADDR, cfg->dst_addr);
+ mtk_uart_apdma_write(c, VFF_LEN, tmp);
+ mtk_uart_apdma_write(c, VFF_THRE, VFF_TX_THRE(tmp));
+ mtk_uart_apdma_write(c, VFF_WPT, 0);
+ mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+ }
+
+ mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
+
+ if (mtkd->support_33bits)
+ mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
+
+ if (mtk_uart_apdma_read(c, VFF_EN) != VFF_EN_B)
+ dev_err(chan->device->dev, "dir[%d] fail\n", dir);
+}
+
+/*
+ * dmaengine_prep_slave_single will call the function. and sglen is 1.
+ * 8250 uart using one ring buffer, and deal with one sg.
+ */
+static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
+ (struct dma_chan *chan, struct scatterlist *sgl,
+ unsigned int sglen, enum dma_transfer_direction dir,
+ unsigned long tx_flags, void *context)
+{
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ struct mtk_uart_apdma_desc *d;
+
+ if (!is_slave_direction(dir))
+ return NULL;
+
+ mtk_uart_apdma_config_write(chan, &c->cfg, dir);
+
+ /* Now allocate and setup the descriptor */
+ d = kzalloc(sizeof(*d), GFP_ATOMIC);
+ if (!d)
+ return NULL;
+
+ /* sglen is 1 */
+ d->avail_len = sg_dma_len(sgl);
+ c->rx_status = d->avail_len;
+
+ return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
+}
+
+static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
+{
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ struct virt_dma_desc *vd;
+ unsigned long flags;
+
+ spin_lock_irqsave(&c->vc.lock, flags);
+ if (vchan_issue_pending(&c->vc)) {
+ vd = vchan_next_desc(&c->vc);
+ c->desc = to_mtk_uart_apdma_desc(&vd->tx);
+ }
+
+ if (c->dir == DMA_DEV_TO_MEM)
+ mtk_uart_apdma_start_rx(c);
+ else if (c->dir == DMA_MEM_TO_DEV)
+ mtk_uart_apdma_start_tx(c);
+
+ spin_unlock_irqrestore(&c->vc.lock, flags);
+}
+
+static int mtk_uart_apdma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *config)
+{
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+
+ memcpy(&c->cfg, config, sizeof(*config));
+
+ return 0;
+}
+
+static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
+{
+ struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+ unsigned long flags;
+ unsigned int tmp;
+ int ret;
+
+ spin_lock_irqsave(&c->vc.lock, flags);
+
+ mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
+ /* Wait 1sec for flush, can't sleep */
+ ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
+ tmp != VFF_FLUSH_B, 0, 1000000);
+ if (ret)
+ dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
+ mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
+
+ /* set stop as 1 -> wait until en is 0 -> set stop as 0 */
+ mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_B);
+ ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
+ if (ret)
+ dev_err(c->vc.chan.device->dev, "stop: fail, debug=0x%x\n",
+ mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
+
+ mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_CLR_B);
+ mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+
+ if (c->dir == DMA_DEV_TO_MEM)
+ mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
+ else if (c->dir == DMA_MEM_TO_DEV)
+ mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+
+ spin_unlock_irqrestore(&c->vc.lock, flags);
+
+ return 0;
+}
+
+static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
+{
+ /* just for check caps pass */
+ dev_err(chan->device->dev, "Pause can't support\n");
+
+ return 0;
+}
+
+static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
+{
+ while (!list_empty(&mtkd->ddev.channels)) {
+ struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
+ struct mtk_chan, vc.chan.device_node);
+
+ list_del(&c->vc.chan.device_node);
+ tasklet_kill(&c->vc.task);
+ }
+}
+
+static const struct of_device_id mtk_uart_apdma_match[] = {
+ { .compatible = "mediatek,mt6577-uart-dma", },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mtk_uart_apdma_match);
+
+static int mtk_uart_apdma_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct mtk_uart_apdmadev *mtkd;
+ struct resource *res;
+ struct mtk_chan *c;
+ int bit_mask = 32, rc;
+ unsigned int i;
+
+ mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
+ if (!mtkd)
+ return -ENOMEM;
+
+ mtkd->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(mtkd->clk)) {
+ dev_err(&pdev->dev, "No clock specified\n");
+ rc = PTR_ERR(mtkd->clk);
+ return rc;
+ }
+
+ if (of_property_read_bool(np, "mediatek,dma-33bits"))
+ mtkd->support_33bits = true;
+
+ if (mtkd->support_33bits)
+ bit_mask = 33;
+
+ rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
+ if (rc)
+ return rc;
+
+ dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
+ mtkd->ddev.device_alloc_chan_resources =
+ mtk_uart_apdma_alloc_chan_resources;
+ mtkd->ddev.device_free_chan_resources =
+ mtk_uart_apdma_free_chan_resources;
+ mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
+ mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
+ mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
+ mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
+ mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
+ mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
+ mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+ mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+ mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+ mtkd->ddev.dev = &pdev->dev;
+ INIT_LIST_HEAD(&mtkd->ddev.channels);
+
+ mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
+ if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
+ dev_info(&pdev->dev,
+ "Using %u as missing dma-requests property\n",
+ MTK_UART_APDMA_NR_VCHANS);
+ }
+
+ mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
+ sizeof(*mtkd->dma_irq), GFP_KERNEL);
+ if (!mtkd->dma_irq)
+ return -ENOMEM;
+
+ for (i = 0; i < mtkd->dma_requests; i++) {
+ c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
+ if (!c) {
+ rc = -ENODEV;
+ goto err_no_dma;
+ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ if (!res) {
+ rc = -ENODEV;
+ goto err_no_dma;
+ }
+
+ c->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(c->base)) {
+ rc = PTR_ERR(c->base);
+ goto err_no_dma;
+ }
+ c->requested = false;
+ c->vc.desc_free = mtk_uart_apdma_desc_free;
+ vchan_init(&c->vc, &mtkd->ddev);
+
+ mtkd->dma_irq[i] = platform_get_irq(pdev, i);
+ if ((int)mtkd->dma_irq[i] < 0) {
+ dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
+ rc = -EINVAL;
+ goto err_no_dma;
+ }
+ }
+
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_set_active(&pdev->dev);
+
+ rc = dma_async_device_register(&mtkd->ddev);
+ if (rc)
+ goto rpm_disable;
+
+ platform_set_drvdata(pdev, mtkd);
+
+ /* Device-tree DMA controller registration */
+ rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
+ if (rc)
+ goto dma_remove;
+
+ return rc;
+
+dma_remove:
+ dma_async_device_unregister(&mtkd->ddev);
+rpm_disable:
+ pm_runtime_disable(&pdev->dev);
+err_no_dma:
+ mtk_uart_apdma_free(mtkd);
+ return rc;
+}
+
+static int mtk_uart_apdma_remove(struct platform_device *pdev)
+{
+ struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
+
+ if (pdev->dev.of_node)
+ of_dma_controller_free(pdev->dev.of_node);
+
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+
+ dma_async_device_unregister(&mtkd->ddev);
+ mtk_uart_apdma_free(mtkd);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int mtk_uart_apdma_suspend(struct device *dev)
+{
+ struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+ if (!pm_runtime_suspended(dev))
+ clk_disable_unprepare(mtkd->clk);
+
+ return 0;
+}
+
+static int mtk_uart_apdma_resume(struct device *dev)
+{
+ int ret;
+ struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+ if (!pm_runtime_suspended(dev)) {
+ ret = clk_prepare_enable(mtkd->clk);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+#endif /* CONFIG_PM_SLEEP */
+
+#ifdef CONFIG_PM
+static int mtk_uart_apdma_runtime_suspend(struct device *dev)
+{
+ struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+ clk_disable_unprepare(mtkd->clk);
+
+ return 0;
+}
+
+static int mtk_uart_apdma_runtime_resume(struct device *dev)
+{
+ int ret;
+ struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+ ret = clk_prepare_enable(mtkd->clk);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+#endif /* CONFIG_PM */
+
+static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
+ SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
+ mtk_uart_apdma_runtime_resume, NULL)
+};
+
+static struct platform_driver mtk_uart_apdma_driver = {
+ .probe = mtk_uart_apdma_probe,
+ .remove = mtk_uart_apdma_remove,
+ .driver = {
+ .name = KBUILD_MODNAME,
+ .pm = &mtk_uart_apdma_pm_ops,
+ .of_match_table = of_match_ptr(mtk_uart_apdma_match),
+ },
+};
+
+module_platform_driver(mtk_uart_apdma_driver);
+
+MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
+MODULE_AUTHOR("Long Cheng <[email protected]>");
+MODULE_LICENSE("GPL v2");
+
--
1.7.9.5
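
Aside: the VFF ring bookkeeping above (VFF_WPT/VFF_RPT plus the VFF_RING_WRAP
flag) is compact but easy to misread. Below is a small standalone sketch of
the same pointer arithmetic, using the driver's mask and wrap constants; the
ring length and the numbers in main() are made up for illustration:

#include <stdio.h>

#define VFF_RING_SIZE 0xffffU   /* mask over the offset bits of a pointer */
#define VFF_RING_WRAP 0x10000U  /* toggled each time the offset wraps */

/* Advance the TX write pointer by 'send' bytes in a ring of 'len' bytes. */
static unsigned int advance_wpt(unsigned int wpt, unsigned int send,
                                unsigned int len)
{
        unsigned int d_wpt = wpt + send;

        if ((d_wpt & VFF_RING_SIZE) >= len) {   /* ran past the ring end */
                d_wpt -= len;                   /* fold the offset back */
                d_wpt ^= VFF_RING_WRAP;         /* flip the wrap flag */
        }
        return d_wpt;
}

/* Bytes pending between read and write pointers, wrap flag included. */
static int ring_count(unsigned int rpt, unsigned int wpt, unsigned int len)
{
        int cnt = (int)(wpt & VFF_RING_SIZE) - (int)(rpt & VFF_RING_SIZE);

        if ((rpt ^ wpt) & VFF_RING_WRAP)        /* pointers on different laps */
                cnt += len;
        return cnt;
}

int main(void)
{
        unsigned int len = 0x1000;              /* 4 KiB ring, illustrative */
        unsigned int wpt = advance_wpt(0x0ff0, 0x20, len);

        /* 0x0ff0 + 0x20 wraps: offset 0x0010, wrap flag set -> 0x10010 */
        printf("wpt=0x%x count=%d\n", wpt, ring_count(0x0ff0, wpt, len));
        return 0;
}

The key point is that VFF_RING_SIZE is only a mask over the offset bits of a
pointer, while the actual ring length is whatever was programmed into VFF_LEN
(the port window size).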


2019-03-07 01:46:43

by Long Cheng

Subject: [PATCH v11 2/4] arm: dts: mt2712: add uart APDMA to device tree

1. Add the UART APDMA controller device node.
2. Add the DMA function for UARTs 0/1/2/3/4/5.

Signed-off-by: Long Cheng <[email protected]>
---
arch/arm64/boot/dts/mediatek/mt2712e.dtsi | 51 +++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
index ee627a7..3469a6c 100644
--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
@@ -298,6 +298,9 @@
interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_LOW>;
clocks = <&baud_clk>, <&sys_clk>;
clock-names = "baud", "bus";
+ dmas = <&apdma 10
+ &apdma 11>;
+ dma-names = "tx", "rx";
status = "disabled";
};

@@ -346,6 +349,39 @@
(GIC_CPU_MASK_RAW(0x13) | IRQ_TYPE_LEVEL_HIGH)>;
};

+ apdma: dma-controller@11000400 {
+ compatible = "mediatek,mt2712-uart-dma",
+ "mediatek,mt6577-uart-dma";
+ reg = <0 0x11000400 0 0x80>,
+ <0 0x11000480 0 0x80>,
+ <0 0x11000500 0 0x80>,
+ <0 0x11000580 0 0x80>,
+ <0 0x11000600 0 0x80>,
+ <0 0x11000680 0 0x80>,
+ <0 0x11000700 0 0x80>,
+ <0 0x11000780 0 0x80>,
+ <0 0x11000800 0 0x80>,
+ <0 0x11000880 0 0x80>,
+ <0 0x11000900 0 0x80>,
+ <0 0x11000980 0 0x80>;
+ interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 105 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 106 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 107 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 108 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 109 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 110 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 111 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 112 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_SPI 114 IRQ_TYPE_LEVEL_LOW>;
+ dma-requests = <12>;
+ clocks = <&pericfg CLK_PERI_AP_DMA>;
+ clock-names = "apdma";
+ #dma-cells = <1>;
+ };
+
auxadc: adc@11001000 {
compatible = "mediatek,mt2712-auxadc";
reg = <0 0x11001000 0 0x1000>;
@@ -362,6 +398,9 @@
interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>;
clocks = <&baud_clk>, <&sys_clk>;
clock-names = "baud", "bus";
+ dmas = <&apdma 0
+ &apdma 1>;
+ dma-names = "tx", "rx";
status = "disabled";
};

@@ -372,6 +411,9 @@
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_LOW>;
clocks = <&baud_clk>, <&sys_clk>;
clock-names = "baud", "bus";
+ dmas = <&apdma 2
+ &apdma 3>;
+ dma-names = "tx", "rx";
status = "disabled";
};

@@ -382,6 +424,9 @@
interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_LOW>;
clocks = <&baud_clk>, <&sys_clk>;
clock-names = "baud", "bus";
+ dmas = <&apdma 4
+ &apdma 5>;
+ dma-names = "tx", "rx";
status = "disabled";
};

@@ -392,6 +437,9 @@
interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_LOW>;
clocks = <&baud_clk>, <&sys_clk>;
clock-names = "baud", "bus";
+ dmas = <&apdma 6
+ &apdma 7>;
+ dma-names = "tx", "rx";
status = "disabled";
};

@@ -402,6 +450,9 @@
interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_LOW>;
clocks = <&baud_clk>, <&sys_clk>;
clock-names = "baud", "bus";
+ dmas = <&apdma 8
+ &apdma 9>;
+ dma-names = "tx", "rx";
status = "disabled";
};

--
1.7.9.5


2019-03-07 01:47:28

by Long Cheng

Subject: [PATCH v11 4/4] serial: 8250-mtk: modify uart DMA rx

Modify the UART DMA RX path and its completion handling.

Signed-off-by: Long Cheng <[email protected]>
---
drivers/tty/serial/8250/8250_mtk.c | 53 ++++++++++++++++--------------------
1 file changed, 23 insertions(+), 30 deletions(-)

diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
index e2c4076..f9b2fd5 100644
--- a/drivers/tty/serial/8250/8250_mtk.c
+++ b/drivers/tty/serial/8250/8250_mtk.c
@@ -30,7 +30,6 @@
#define MTK_UART_DMA_EN_TX 0x2
#define MTK_UART_DMA_EN_RX 0x5

-#define MTK_UART_TX_SIZE UART_XMIT_SIZE
#define MTK_UART_RX_SIZE 0x8000
#define MTK_UART_TX_TRIGGER 1
#define MTK_UART_RX_TRIGGER MTK_UART_RX_SIZE
@@ -64,28 +63,30 @@ static void mtk8250_dma_rx_complete(void *param)
struct mtk8250_data *data = up->port.private_data;
struct tty_port *tty_port = &up->port.state->port;
struct dma_tx_state state;
+ int copied, cnt, tmp;
unsigned char *ptr;
- int copied;

- dma_sync_single_for_cpu(dma->rxchan->device->dev, dma->rx_addr,
- dma->rx_size, DMA_FROM_DEVICE);
+ if (data->rx_status == DMA_RX_SHUTDOWN)
+ return;

dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+ cnt = dma->rx_size - state.residue;
+ tmp = cnt;

- if (data->rx_status == DMA_RX_SHUTDOWN)
- return;
+ if ((data->rx_pos + cnt) > dma->rx_size)
+ tmp = dma->rx_size - data->rx_pos;

- if ((data->rx_pos + state.residue) <= dma->rx_size) {
- ptr = (unsigned char *)(data->rx_pos + dma->rx_buf);
- copied = tty_insert_flip_string(tty_port, ptr, state.residue);
- } else {
- ptr = (unsigned char *)(data->rx_pos + dma->rx_buf);
- copied = tty_insert_flip_string(tty_port, ptr,
- dma->rx_size - data->rx_pos);
+ ptr = (unsigned char *)(data->rx_pos + dma->rx_buf);
+ copied = tty_insert_flip_string(tty_port, ptr, tmp);
+ data->rx_pos += tmp;
+
+ if (cnt > tmp) {
ptr = (unsigned char *)(dma->rx_buf);
- copied += tty_insert_flip_string(tty_port, ptr,
- data->rx_pos + state.residue - dma->rx_size);
+ tmp = cnt - tmp;
+ copied += tty_insert_flip_string(tty_port, ptr, tmp);
+ data->rx_pos = tmp;
}
+
up->port.icount.rx += copied;

tty_flip_buffer_push(tty_port);
@@ -96,9 +97,7 @@ static void mtk8250_dma_rx_complete(void *param)
static void mtk8250_rx_dma(struct uart_8250_port *up)
{
struct uart_8250_dma *dma = up->dma;
- struct mtk8250_data *data = up->port.private_data;
struct dma_async_tx_descriptor *desc;
- struct dma_tx_state state;

desc = dmaengine_prep_slave_single(dma->rxchan, dma->rx_addr,
dma->rx_size, DMA_DEV_TO_MEM,
@@ -113,12 +112,6 @@ static void mtk8250_rx_dma(struct uart_8250_port *up)

dma->rx_cookie = dmaengine_submit(desc);

- dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
- data->rx_pos = state.residue;
-
- dma_sync_single_for_device(dma->rxchan->device->dev, dma->rx_addr,
- dma->rx_size, DMA_FROM_DEVICE);
-
dma_async_issue_pending(dma->rxchan);
}

@@ -131,13 +124,13 @@ static void mtk8250_dma_enable(struct uart_8250_port *up)
if (data->rx_status != DMA_RX_START)
return;

- dma->rxconf.direction = DMA_DEV_TO_MEM;
- dma->rxconf.src_addr_width = dma->rx_size / 1024;
- dma->rxconf.src_addr = dma->rx_addr;
+ dma->rxconf.direction = DMA_DEV_TO_MEM;
+ dma->rxconf.src_port_window_size = dma->rx_size;
+ dma->rxconf.src_addr = dma->rx_addr;

- dma->txconf.direction = DMA_MEM_TO_DEV;
- dma->txconf.dst_addr_width = MTK_UART_TX_SIZE / 1024;
- dma->txconf.dst_addr = dma->tx_addr;
+ dma->txconf.direction = DMA_MEM_TO_DEV;
+ dma->txconf.dst_port_window_size = UART_XMIT_SIZE;
+ dma->txconf.dst_addr = dma->tx_addr;

serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR_CLEAR_RCVR |
UART_FCR_CLEAR_XMIT);
@@ -217,7 +210,7 @@ static void mtk8250_shutdown(struct uart_port *port)
* Mediatek UARTs use an extra highspeed register (UART_MTK_HIGHS)
*
* We need to recalcualte the quot register, as the claculation depends
- * on the vaule in the highspeed register.
+ * on the value in the highspeed register.
*
* Some baudrates are not supported by the chip, so we use the next
* lower rate supported and update termios c_flag.
--
1.7.9.5
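
Aside: after this change, mtk8250_dma_rx_complete() computes the number of
newly received bytes as rx_size - state.residue (the APDMA driver reports
avail_len - cnt as the residue) and copies them out of the ring in at most
two chunks. A standalone sketch of that copy logic, detached from the tty
layer (all names here are illustrative):

#include <string.h>
#include <stddef.h>

/*
 * Copy 'cnt' new bytes out of a ring buffer of 'size' bytes, starting at
 * '*pos'. In the driver, cnt would come from: cnt = rx_size - state.residue.
 */
static size_t ring_read(const unsigned char *ring, size_t size, size_t *pos,
                        size_t cnt, unsigned char *out)
{
        size_t first = cnt;

        if (*pos + cnt > size)          /* the new data wraps past the end */
                first = size - *pos;

        memcpy(out, ring + *pos, first);
        *pos += first;

        if (cnt > first) {              /* second chunk from the ring start */
                memcpy(out + first, ring, cnt - first);
                *pos = cnt - first;
        }
        return cnt;
}

In the real code the two memcpy() calls are tty_insert_flip_string() calls,
and the running position lives in data->rx_pos.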


2019-03-10 11:20:43

by Nicolas Boichat

Subject: Re: [PATCH v11 1/4] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support

On Thu, Mar 7, 2019 at 9:45 AM Long Cheng <[email protected]> wrote:
>
> In the DMA engine framework, add 8250 UART DMA support for the MediaTek UART.
> If the MediaTek UART is enabled (SERIAL_8250_MT6577) and you want to improve
> its performance, you can enable this function.
>
> Signed-off-by: Long Cheng <[email protected]>
> ---
> [...]
> +#define VFF_RING_SIZE 0xffffU

Drop the U, it's not very useful (there are a few more below, grep
for [0-9a-f]U).

> +/* invert this bit when wrap ring head again */
> +#define VFF_RING_WRAP 0x10000U
> [...]
> +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> +{
> + /* just for check caps pass */
> + dev_err(chan->device->dev, "Pause can't support\n");
> +
> + return 0;
> +}

I think we've said repeatedly that leaving this stub function is incorrect.
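
For reference, the dmaengine core derives the pause capability from whether
the hook is populated at all, so dropping the stub (rather than keeping a
no-op) would make dma_get_slave_caps() report pause as unsupported. From
drivers/dma/dmaengine.c (paraphrased for illustration):

        /* dma_get_slave_caps(): capability flags come from populated hooks */
        caps->cmd_pause = !!device->device_pause;
        caps->cmd_resume = !!device->device_resume;

So simply not assigning mtkd->ddev.device_pause in probe should be enough
for clients that check the caps.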

> [...]

2019-03-11 00:46:41

by Sean Wang

Subject: Re: [PATCH v11 1/4] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support

Hi, Long

Some comments below; this week I will find a board to test with and then
help improve the driver.

Sean

On Wed, Mar 6, 2019 at 5:45 PM Long Cheng <[email protected]> wrote:
>
> In the DMA engine framework, add 8250 UART DMA support for the MediaTek UART.
> If the MediaTek UART is enabled (SERIAL_8250_MT6577) and you want to improve
> its performance, you can enable this function.
>
> Signed-off-by: Long Cheng <[email protected]>
> ---
> [...]
> +static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
> +{
> + unsigned int len, send, left, wpt, d_wpt, tmp;
> + int ret;
> +
> + left = mtk_uart_apdma_read(c, VFF_LEFT_SIZE);
> + if (!left) {
> + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> + return;
> + }
> +
> + /* Wait 1sec for flush, can't sleep */
> + ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> + tmp != VFF_FLUSH_B, 0, 1000000);

It is really not a good idea to poll for up to 1 second in interrupt
context.
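
One (untested) way to bound the busy-wait, assuming the flush normally
completes within microseconds, is the atomic poll helper with a much
tighter timeout:

        /* sketch: cap the in-IRQ wait at 100us instead of 1s */
        ret = readx_poll_timeout_atomic(readl, c->base + VFF_FLUSH, tmp,
                                        tmp != VFF_FLUSH_B, 0, 100);

Anything that really needs a 1-second budget should be deferred out of
interrupt context, e.g. to a workqueue.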

> + if (ret)
> + dev_warn(c->vc.chan.device->dev, "tx: fail, debug=0x%x\n",
> + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> +
> + send = min_t(unsigned int, left, c->desc->avail_len);
> + wpt = mtk_uart_apdma_read(c, VFF_WPT);
> + len = c->cfg.dst_port_window_size;
> +
> + d_wpt = wpt + send;
> + if ((d_wpt & VFF_RING_SIZE) >= len) {

I am confused about what the VFF size is. Is it VFF_RING_SIZE or
c->cfg.dst_port_window_size?

> + d_wpt = d_wpt - len;
> + d_wpt = d_wpt ^ VFF_RING_WRAP;
> + }
> + mtk_uart_apdma_write(c, VFF_WPT, d_wpt);
> +
> + c->desc->avail_len -= send;
> +
> + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);

Why do we need to program the interrupt enable bit again?

> + if (mtk_uart_apdma_read(c, VFF_FLUSH) == 0U)
> + mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> +}
> +
> +static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
> +{
> + struct mtk_uart_apdma_desc *d = c->desc;
> + unsigned int len, wg, rg;
> + int cnt;
> +
> + if ((mtk_uart_apdma_read(c, VFF_VALID_SIZE) == 0U) ||
> + !d || !vchan_next_desc(&c->vc))
> + return;

If the current descriptor is not available, the hardware should be
idle or stopped, so I think the condition can be removed, or else
there is somewhere you handle descriptors incorrectly.

> +
> + len = c->cfg.src_port_window_size;
> + rg = mtk_uart_apdma_read(c, VFF_RPT);
> + wg = mtk_uart_apdma_read(c, VFF_WPT);
> + cnt = (wg & VFF_RING_SIZE) - (rg & VFF_RING_SIZE);

Is it possible that rg and wg would be greater than VFF_RING_SIZE?

> + /*
> + * The buffer is ring buffer. If wrap bit different,
> + * represents the start of the next cycle for WPT
> + */
> + if ((rg ^ wg) & VFF_RING_WRAP)
> + cnt += len;

Again, I am confused about what the VFF size is. Is it VFF_RING_SIZE
or c->cfg.dst_port_window_size?

> +
> + c->rx_status = d->avail_len - cnt;
> + mtk_uart_apdma_write(c, VFF_RPT, wg);
> +
> + list_del(&d->vd.node);
> + vchan_cookie_complete(&d->vd);
> +}
> +
> +static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
> +{
> + struct dma_chan *chan = (struct dma_chan *)dev_id;
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + struct mtk_uart_apdma_desc *d;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&c->vc.lock, flags);
> + if (c->dir == DMA_DEV_TO_MEM) {
> + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> + mtk_uart_apdma_start_rx(c);
> + } else if (c->dir == DMA_MEM_TO_DEV) {
> + d = c->desc;
> +
> + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> +
> + if (d->avail_len != 0U) {
> + mtk_uart_apdma_start_tx(c);
> + } else {
> + list_del(&d->vd.node);
> + vchan_cookie_complete(&d->vd);
> + }
> + }
> + spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> + return IRQ_HANDLED;
> +}
> +
> +static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
> +{
> + struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + unsigned int tmp;
> + int ret;
> +
> + pm_runtime_get_sync(mtkd->ddev.dev);

Add error handling, something like:

        err = pm_runtime_get_sync(mtkd->ddev.dev);
        if (err < 0) {
                pm_runtime_put_noidle(mtkd->ddev.dev);
                ...
        }

> +
> + mtk_uart_apdma_write(c, VFF_ADDR, 0);
> + mtk_uart_apdma_write(c, VFF_THRE, 0);
> + mtk_uart_apdma_write(c, VFF_LEN, 0);
> + mtk_uart_apdma_write(c, VFF_RST, VFF_WARM_RST_B);
> +
> + ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> + if (ret) {
> + dev_err(chan->device->dev, "dma reset: fail, timeout\n");
> + return ret;
> + }
> +
> + if (!c->requested) {
> + c->requested = true;

The variable c->requested can be dropped, since the same channel
shouldn't be requested more than once.

> + ret = request_irq(mtkd->dma_irq[chan->chan_id],
> + mtk_uart_apdma_irq_handler, IRQF_TRIGGER_NONE,
> + KBUILD_MODNAME, chan);
> + if (ret < 0) {
> + dev_err(chan->device->dev, "Can't request dma IRQ\n");
> + return -EINVAL;
> + }
> + }
> +
> + if (mtkd->support_33bits)
> + mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> +
> + return ret;
> +}
> +
> +static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)
> +{
> + struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +
> + if (c->requested) {

ditto as the above

> + c->requested = false;
> + free_irq(mtkd->dma_irq[chan->chan_id], chan);
> + }
> +
> + tasklet_kill(&c->vc.task);
> +
> + vchan_free_chan_resources(&c->vc);
> +
> + pm_runtime_put_sync(mtkd->ddev.dev);
> +}
> +
> +static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
> + dma_cookie_t cookie,
> + struct dma_tx_state *txstate)
> +{
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + enum dma_status ret;
> +
> + ret = dma_cookie_status(chan, cookie, txstate);
> +
> + dma_set_residue(txstate, c->rx_status);
> +

This handling is not enough. You should look up the descriptor
corresponding to the cookie, then calculate and return the tx status
from that descriptor.
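
A rough sketch (untested) of a cookie-based lookup using the virt-dma
helper, assuming the remaining length were kept up to date in the
descriptor's avail_len rather than in the per-channel rx_status:

        struct virt_dma_desc *vd;
        unsigned long flags;
        enum dma_status ret;

        ret = dma_cookie_status(chan, cookie, txstate);
        if (ret == DMA_COMPLETE || !txstate)
                return ret;

        spin_lock_irqsave(&c->vc.lock, flags);
        vd = vchan_find_desc(&c->vc, cookie);
        if (vd)
                dma_set_residue(txstate,
                                to_mtk_uart_apdma_desc(&vd->tx)->avail_len);
        spin_unlock_irqrestore(&c->vc.lock, flags);

        return ret;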

> + return ret;
> +}
> +
> +static void mtk_uart_apdma_config_write(struct dma_chan *chan,
> + struct dma_slave_config *cfg,
> + enum dma_transfer_direction dir)
> +{
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + struct mtk_uart_apdmadev *mtkd =
> + to_mtk_uart_apdma_dev(c->vc.chan.device);
> + unsigned int tmp;
> +
> + if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
> + return;
> +
> + c->dir = dir;

The direction is fixed by the device, so I don't think it needs to be
kept in software state.

> +
> + if (dir == DMA_DEV_TO_MEM) {
> + tmp = cfg->src_port_window_size;
> +
> + mtk_uart_apdma_write(c, VFF_ADDR, cfg->src_addr);

That is wrong. ->src_addr is the physical address from which DMA slave
data should be read (RX), not the memory address.

You should program the VFF_ADDR and VFF_LEN registers with the sg
address and length passed to device_prep_slave_sg.
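
Something like the following (untested), where "addr" would be a new,
hypothetical dma_addr_t field in struct mtk_uart_apdma_desc:

        /* in device_prep_slave_sg: capture the single sg entry */
        d->addr = sg_dma_address(sgl);
        d->avail_len = sg_dma_len(sgl);

        /* when the descriptor is actually started: program the ring */
        mtk_uart_apdma_write(c, VFF_ADDR, d->addr);
        mtk_uart_apdma_write(c, VFF_LEN, d->avail_len);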

> + mtk_uart_apdma_write(c, VFF_LEN, tmp);
> + mtk_uart_apdma_write(c, VFF_THRE, VFF_RX_THRE(tmp));
> + mtk_uart_apdma_write(c, VFF_INT_EN,
> + VFF_RX_INT_EN0_B | VFF_RX_INT_EN1_B);
> + mtk_uart_apdma_write(c, VFF_RPT, 0);
> + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> + } else if (dir == DMA_MEM_TO_DEV) {
> + tmp = cfg->dst_port_window_size;
> +
> + mtk_uart_apdma_write(c, VFF_ADDR, cfg->dst_addr);

That is also wrong. ->dst_addr is the physical address to which DMA
slave data should be written (TX), not the memory address, similar to
the above explanation.

> + mtk_uart_apdma_write(c, VFF_LEN, tmp);
> + mtk_uart_apdma_write(c, VFF_THRE, VFF_TX_THRE(tmp));
> + mtk_uart_apdma_write(c, VFF_WPT, 0);
> + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> + }
> +
> + mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
> +
> + if (mtkd->support_33bits)
> + mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> +
> + if (mtk_uart_apdma_read(c, VFF_EN) != VFF_EN_B)
> + dev_err(chan->device->dev, "dir[%d] fail\n", dir);
> +}
> +
> +/*
> + * dmaengine_prep_slave_single will call the function. and sglen is 1.
> + * 8250 uart using one ring buffer, and deal with one sg.
> + */
> +static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
> + (struct dma_chan *chan, struct scatterlist *sgl,
> + unsigned int sglen, enum dma_transfer_direction dir,
> + unsigned long tx_flags, void *context)
> +{
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + struct mtk_uart_apdma_desc *d;
> +
> + if (!is_slave_direction(dir))
> + return NULL;
> +
> + mtk_uart_apdma_config_write(chan, &c->cfg, dir);
> +
> + /* Now allocate and setup the descriptor */
> + d = kzalloc(sizeof(*d), GFP_ATOMIC);
> + if (!d)
> + return NULL;
> +
> + /* sglen is 1 */
> + d->avail_len = sg_dma_len(sgl);
> + c->rx_status = d->avail_len;
> +
> + return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> +}
> +
> +static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
> +{
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + struct virt_dma_desc *vd;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&c->vc.lock, flags);
> + if (vchan_issue_pending(&c->vc)) {
> + vd = vchan_next_desc(&c->vc);
> + c->desc = to_mtk_uart_apdma_desc(&vd->tx);
> + }
> +
> + if (c->dir == DMA_DEV_TO_MEM)
> + mtk_uart_apdma_start_rx(c);
> + else if (c->dir == DMA_MEM_TO_DEV)
> + mtk_uart_apdma_start_tx(c);
> +
> + spin_unlock_irqrestore(&c->vc.lock, flags);
> +}
> +
> +static int mtk_uart_apdma_slave_config(struct dma_chan *chan,
> + struct dma_slave_config *config)
> +{
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +
> + memcpy(&c->cfg, config, sizeof(*config));
> +
> + return 0;
> +}
> +
> +static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
> +{
> + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> + unsigned long flags;
> + unsigned int tmp;
> + int ret;
> +
> + spin_lock_irqsave(&c->vc.lock, flags);
> +
> + mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> + /* Wait 1sec for flush, can't sleep */
> + ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> + tmp != VFF_FLUSH_B, 0, 1000000);

It is extremely bad to spin for so long while holding the lock taken
with spin_lock_irqsave.

> + if (ret)
> + dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
> + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> +
> + /* set stop as 1 -> wait until en is 0 -> set stop as 0 */
> + mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_B);
> + ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> + if (ret)
> + dev_err(c->vc.chan.device->dev, "stop: fail, debug=0x%x\n",
> + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> +
> + mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_CLR_B);
> + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> +
> + if (c->dir == DMA_DEV_TO_MEM)
> + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> + else if (c->dir == DMA_MEM_TO_DEV)
> + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> +
> + spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> + return 0;
> +}
> +
> +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> +{
> + /* just for check caps pass */
> + dev_err(chan->device->dev, "Pause can't support\n");
> +

If the device can't support a hardware pause, we can implement a
software pause based on the vdesc list.
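
For example (untested), with a hypothetical "paused" flag in struct
mtk_chan that the issue_pending and IRQ paths would check before
starting the hardware:

        static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
        {
                struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
                unsigned long flags;

                spin_lock_irqsave(&c->vc.lock, flags);
                c->paused = true;       /* pending vdescs stay queued */
                spin_unlock_irqrestore(&c->vc.lock, flags);

                return 0;
        }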

> + return 0;
> +}
> +
> +static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
> +{
> + while (!list_empty(&mtkd->ddev.channels)) {
> + struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> + struct mtk_chan, vc.chan.device_node);
> +
> + list_del(&c->vc.chan.device_node);
> + tasklet_kill(&c->vc.task);
> + }
> +}
> +
> +static const struct of_device_id mtk_uart_apdma_match[] = {
> + { .compatible = "mediatek,mt6577-uart-dma", },
> + { /* sentinel */ },
> +};
> +MODULE_DEVICE_TABLE(of, mtk_uart_apdma_match);
> +
> +static int mtk_uart_apdma_probe(struct platform_device *pdev)
> +{
> + struct device_node *np = pdev->dev.of_node;
> + struct mtk_uart_apdmadev *mtkd;
> + struct resource *res;
> + struct mtk_chan *c;
> + int bit_mask = 32, rc;
> + unsigned int i;
> +
> + mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> + if (!mtkd)
> + return -ENOMEM;
> +
> + mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> + if (IS_ERR(mtkd->clk)) {
> + dev_err(&pdev->dev, "No clock specified\n");
> + rc = PTR_ERR(mtkd->clk);
> + return rc;
> + }
> +
> + if (of_property_read_bool(np, "mediatek,dma-33bits"))
> + mtkd->support_33bits = true;
> +
> + if (mtkd->support_33bits)
> + bit_mask = 33;
> +
> + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
> + if (rc)
> + return rc;
> +
> + dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> + mtkd->ddev.device_alloc_chan_resources =
> + mtk_uart_apdma_alloc_chan_resources;
> + mtkd->ddev.device_free_chan_resources =
> + mtk_uart_apdma_free_chan_resources;
> + mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
> + mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
> + mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
> + mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
> + mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
> + mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
> + mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> + mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> + mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> + mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> + mtkd->ddev.dev = &pdev->dev;
> + INIT_LIST_HEAD(&mtkd->ddev.channels);
> +
> + mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
> + if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
> + dev_info(&pdev->dev,
> + "Using %u as missing dma-requests property\n",
> + MTK_UART_APDMA_NR_VCHANS);
> + }
> +
> + mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
> + sizeof(*mtkd->dma_irq), GFP_KERNEL);
> + if (!mtkd->dma_irq)
> + return -ENOMEM;
> +
> + for (i = 0; i < mtkd->dma_requests; i++) {
> + c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> + if (!c) {
> + rc = -ENODEV;
> + goto err_no_dma;
> + }
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> + if (!res) {
> + rc = -ENODEV;
> + goto err_no_dma;
> + }
> +
> + c->base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(c->base)) {
> + rc = PTR_ERR(c->base);
> + goto err_no_dma;
> + }
> + c->requested = false;
> + c->vc.desc_free = mtk_uart_apdma_desc_free;
> + vchan_init(&c->vc, &mtkd->ddev);
> +
> + mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> + if ((int)mtkd->dma_irq[i] < 0) {
> + dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> + rc = -EINVAL;
> + goto err_no_dma;
> + }
> + }
> +
> + pm_runtime_enable(&pdev->dev);
> + pm_runtime_set_active(&pdev->dev);
> +
> + rc = dma_async_device_register(&mtkd->ddev);
> + if (rc)
> + goto rpm_disable;
> +
> + platform_set_drvdata(pdev, mtkd);
> +
> + /* Device-tree DMA controller registration */
> + rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
> + if (rc)
> + goto dma_remove;
> +
> + return rc;
> +
> +dma_remove:
> + dma_async_device_unregister(&mtkd->ddev);
> +rpm_disable:
> + pm_runtime_disable(&pdev->dev);
> +err_no_dma:
> + mtk_uart_apdma_free(mtkd);
> + return rc;
> +}
> +
> +static int mtk_uart_apdma_remove(struct platform_device *pdev)
> +{
> + struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
> +
> + if (pdev->dev.of_node)
> + of_dma_controller_free(pdev->dev.of_node);
> +
> + pm_runtime_disable(&pdev->dev);
> + pm_runtime_put_noidle(&pdev->dev);

That pm_runtime_put_noidle should be removed, or it will cause an
inconsistency with the probe handler.
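
I.e. (untested) the remove path would just undo what probe did:

        of_dma_controller_free(pdev->dev.of_node);
        dma_async_device_unregister(&mtkd->ddev);
        mtk_uart_apdma_free(mtkd);
        pm_runtime_disable(&pdev->dev);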

> +
> + dma_async_device_unregister(&mtkd->ddev);
> + mtk_uart_apdma_free(mtkd);
> +
> + return 0;
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int mtk_uart_apdma_suspend(struct device *dev)
> +{
> + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> + if (!pm_runtime_suspended(dev))
> + clk_disable_unprepare(mtkd->clk);
> +
> + return 0;
> +}
> +
> +static int mtk_uart_apdma_resume(struct device *dev)
> +{
> + int ret;
> + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> + if (!pm_runtime_suspended(dev)) {
> + ret = clk_prepare_enable(mtkd->clk);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +#endif /* CONFIG_PM_SLEEP */
> +
> +#ifdef CONFIG_PM
> +static int mtk_uart_apdma_runtime_suspend(struct device *dev)
> +{
> + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> + clk_disable_unprepare(mtkd->clk);
> +
> + return 0;
> +}
> +
> +static int mtk_uart_apdma_runtime_resume(struct device *dev)
> +{
> + int ret;
> + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> + ret = clk_prepare_enable(mtkd->clk);
> + if (ret)
> + return ret;
> +
> + return 0;
> +}
> +#endif /* CONFIG_PM */
> +
> +static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
> + SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
> + SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
> + mtk_uart_apdma_runtime_resume, NULL)
> +};

It probably causes a build error when CONFIG_PM is not enabled. Also,
use UNIVERSAL_DEV_PM_OPS, because the runtime suspend/resume and the
system suspend/resume for the DMA are almost the same.
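
A sketch of that (untested), assuming the clk gate/ungate pair is safe
for both system and runtime PM, which would also replace both #ifdef
blocks above:

        static UNIVERSAL_DEV_PM_OPS(mtk_uart_apdma_pm_ops,
                                    mtk_uart_apdma_runtime_suspend,
                                    mtk_uart_apdma_runtime_resume,
                                    NULL);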

> +
> +static struct platform_driver mtk_uart_apdma_driver = {
> + .probe = mtk_uart_apdma_probe,
> + .remove = mtk_uart_apdma_remove,
> + .driver = {
> + .name = KBUILD_MODNAME,
> + .pm = &mtk_uart_apdma_pm_ops,
> + .of_match_table = of_match_ptr(mtk_uart_apdma_match),
> + },
> +};
> +
> +module_platform_driver(mtk_uart_apdma_driver);
> +
> +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> +MODULE_AUTHOR("Long Cheng <[email protected]>");
> +MODULE_LICENSE("GPL v2");
> +
> --
> 1.7.9.5
>

2019-04-09 09:43:22

by Long Cheng

[permalink] [raw]
Subject: Re: [PATCH v11 1/4] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support

On Sun, 2019-03-10 at 19:15 +0800, Nicolas Boichat wrote:
> On Thu, Mar 7, 2019 at 9:45 AM Long Cheng <[email protected]> wrote:
> >
> > In DMA engine framework, add 8250 uart dma to support MediaTek uart.
> > If MediaTek uart enabled(SERIAL_8250_MT6577), and want to improve
> > the performance, can enable the function.
> >
> > Signed-off-by: Long Cheng <[email protected]>
> > ---
> > drivers/dma/mediatek/Kconfig | 11 +
> > drivers/dma/mediatek/Makefile | 1 +
> > drivers/dma/mediatek/mtk-uart-apdma.c | 660 +++++++++++++++++++++++++++++++++
> > 3 files changed, 672 insertions(+)
> > create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c
> >
> > diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> > index 680fc05..ac49eb6 100644
> > --- a/drivers/dma/mediatek/Kconfig
> > +++ b/drivers/dma/mediatek/Kconfig
> > @@ -24,3 +24,14 @@ config MTK_CQDMA
> >
> > This controller provides the channels which is dedicated to
> > memory-to-memory transfer to offload from CPU.
> > +
> > +config MTK_UART_APDMA
> > + tristate "MediaTek SoCs APDMA support for UART"
> > + depends on OF && SERIAL_8250_MT6577
> > + select DMA_ENGINE
> > + select DMA_VIRTUAL_CHANNELS
> > + help
> > + Support for the UART DMA engine found on MediaTek MTK SoCs.
> > + When SERIAL_8250_MT6577 is enabled, and if you want to use DMA,
> > + you can enable the config. The DMA engine can only be used
> > + with MediaTek SoCs.
> > diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> > index 41bb381..61a6d29 100644
> > --- a/drivers/dma/mediatek/Makefile
> > +++ b/drivers/dma/mediatek/Makefile
> > @@ -1,2 +1,3 @@
> > +obj-$(CONFIG_MTK_UART_APDMA) += mtk-uart-apdma.o
> > obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> > obj-$(CONFIG_MTK_CQDMA) += mtk-cqdma.o
> > diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
> > new file mode 100644
> > index 0000000..9ed7a49
> > --- /dev/null
> > +++ b/drivers/dma/mediatek/mtk-uart-apdma.c
> > @@ -0,0 +1,660 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * MediaTek Uart APDMA driver.
> > + *
> > + * Copyright (c) 2018 MediaTek Inc.
> > + * Author: Long Cheng <[email protected]>
> > + */
> > +
> > +#include <linux/clk.h>
> > +#include <linux/dmaengine.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/err.h>
> > +#include <linux/init.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/iopoll.h>
> > +#include <linux/kernel.h>
> > +#include <linux/list.h>
> > +#include <linux/module.h>
> > +#include <linux/of_device.h>
> > +#include <linux/of_dma.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/slab.h>
> > +#include <linux/spinlock.h>
> > +
> > +#include "../virt-dma.h"
> > +
> > +/* The default number of virtual channel */
> > +#define MTK_UART_APDMA_NR_VCHANS 8
> > +
> > +#define VFF_EN_B BIT(0)
> > +#define VFF_STOP_B BIT(0)
> > +#define VFF_FLUSH_B BIT(0)
> > +#define VFF_4G_SUPPORT_B BIT(0)
> > +#define VFF_RX_INT_EN0_B BIT(0) /* rx valid size >= vff thre */
> > +#define VFF_RX_INT_EN1_B BIT(1)
> > +#define VFF_TX_INT_EN_B BIT(0) /* tx left size >= vff thre */
> > +#define VFF_WARM_RST_B BIT(0)
> > +#define VFF_RX_INT_CLR_B (BIT(0) | BIT(1))
> > +#define VFF_TX_INT_CLR_B 0
> > +#define VFF_STOP_CLR_B 0
> > +#define VFF_INT_EN_CLR_B 0
> > +#define VFF_4G_SUPPORT_CLR_B 0
> > +
> > +/* interrupt trigger level for tx */
> > +#define VFF_TX_THRE(n) ((n) * 7 / 8)
> > +/* interrupt trigger level for rx */
> > +#define VFF_RX_THRE(n) ((n) * 3 / 4)
> > +
> > +#define VFF_RING_SIZE 0xffffU
>
> Drop the U, it's not very useful (there are a few more below; grep
> for [0-9a-f]U).
>

OK, I will fix these.

> > +/* invert this bit when wrap ring head again */
> > +#define VFF_RING_WRAP 0x10000U
> > +
> > +#define VFF_INT_FLAG 0x00
> > +#define VFF_INT_EN 0x04
> > +#define VFF_EN 0x08
> > +#define VFF_RST 0x0c
> > +#define VFF_STOP 0x10
> > +#define VFF_FLUSH 0x14
> > +#define VFF_ADDR 0x1c
> > +#define VFF_LEN 0x24
> > +#define VFF_THRE 0x28
> > +#define VFF_WPT 0x2c
> > +#define VFF_RPT 0x30
> > +/* TX: the buffer size HW can read. RX: the buffer size SW can read. */
> > +#define VFF_VALID_SIZE 0x3c
> > +/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
> > +#define VFF_LEFT_SIZE 0x40
> > +#define VFF_DEBUG_STATUS 0x50
> > +#define VFF_4G_SUPPORT 0x54
> > +
> > +struct mtk_uart_apdmadev {
> > + struct dma_device ddev;
> > + struct clk *clk;
> > + bool support_33bits;
> > + unsigned int dma_requests;
> > + unsigned int *dma_irq;
> > +};
> > +
> > +struct mtk_uart_apdma_desc {
> > + struct virt_dma_desc vd;
> > +
> > + unsigned int avail_len;
> > +};
> > +
> > +struct mtk_chan {
> > + struct virt_dma_chan vc;
> > + struct dma_slave_config cfg;
> > + void __iomem *base;
> > + struct mtk_uart_apdma_desc *desc;
> > +
> > + enum dma_transfer_direction dir;
> > +
> > + bool requested;
> > +
> > + unsigned int rx_status;
> > +};
> > +
> > +static inline struct mtk_uart_apdmadev *
> > +to_mtk_uart_apdma_dev(struct dma_device *d)
> > +{
> > + return container_of(d, struct mtk_uart_apdmadev, ddev);
> > +}
> > +
> > +static inline struct mtk_chan *to_mtk_uart_apdma_chan(struct dma_chan *c)
> > +{
> > + return container_of(c, struct mtk_chan, vc.chan);
> > +}
> > +
> > +static inline struct mtk_uart_apdma_desc *to_mtk_uart_apdma_desc
> > + (struct dma_async_tx_descriptor *t)
> > +{
> > + return container_of(t, struct mtk_uart_apdma_desc, vd.tx);
> > +}
> > +
> > +static void mtk_uart_apdma_write(struct mtk_chan *c,
> > + unsigned int reg, unsigned int val)
> > +{
> > + writel(val, c->base + reg);
> > +}
> > +
> > +static unsigned int mtk_uart_apdma_read(struct mtk_chan *c, unsigned int reg)
> > +{
> > + return readl(c->base + reg);
> > +}
> > +
> > +static void mtk_uart_apdma_desc_free(struct virt_dma_desc *vd)
> > +{
> > + struct dma_chan *chan = vd->tx.chan;
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +
> > + kfree(c->desc);
> > +}
> > +
> > +static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
> > +{
> > + unsigned int len, send, left, wpt, d_wpt, tmp;
> > + int ret;
> > +
> > + left = mtk_uart_apdma_read(c, VFF_LEFT_SIZE);
> > + if (!left) {
> > + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> > + return;
> > + }
> > +
> > + /* Wait 1sec for flush, can't sleep */
> > + ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> > + tmp != VFF_FLUSH_B, 0, 1000000);
> > + if (ret)
> > + dev_warn(c->vc.chan.device->dev, "tx: fail, debug=0x%x\n",
> > + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> > +
> > + send = min_t(unsigned int, left, c->desc->avail_len);
> > + wpt = mtk_uart_apdma_read(c, VFF_WPT);
> > + len = c->cfg.dst_port_window_size;
> > +
> > + d_wpt = wpt + send;
> > + if ((d_wpt & VFF_RING_SIZE) >= len) {
> > + d_wpt = d_wpt - len;
> > + d_wpt = d_wpt ^ VFF_RING_WRAP;
> > + }
> > + mtk_uart_apdma_write(c, VFF_WPT, d_wpt);
> > +
> > + c->desc->avail_len -= send;
> > +
> > + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> > + if (mtk_uart_apdma_read(c, VFF_FLUSH) == 0U)
> > + mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> > +}
> > +
> > +static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
> > +{
> > + struct mtk_uart_apdma_desc *d = c->desc;
> > + unsigned int len, wg, rg;
> > + int cnt;
> > +
> > + if ((mtk_uart_apdma_read(c, VFF_VALID_SIZE) == 0U) ||
> > + !d || !vchan_next_desc(&c->vc))
> > + return;
> > +
> > + len = c->cfg.src_port_window_size;
> > + rg = mtk_uart_apdma_read(c, VFF_RPT);
> > + wg = mtk_uart_apdma_read(c, VFF_WPT);
> > + cnt = (wg & VFF_RING_SIZE) - (rg & VFF_RING_SIZE);
> > + /*
> > + * The buffer is ring buffer. If wrap bit different,
> > + * represents the start of the next cycle for WPT
> > + */
> > + if ((rg ^ wg) & VFF_RING_WRAP)
> > + cnt += len;
> > +
> > + c->rx_status = d->avail_len - cnt;
> > + mtk_uart_apdma_write(c, VFF_RPT, wg);
> > +
> > + list_del(&d->vd.node);
> > + vchan_cookie_complete(&d->vd);
> > +}
> > +
> > +static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
> > +{
> > + struct dma_chan *chan = (struct dma_chan *)dev_id;
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct mtk_uart_apdma_desc *d;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&c->vc.lock, flags);
> > + if (c->dir == DMA_DEV_TO_MEM) {
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> > + mtk_uart_apdma_start_rx(c);
> > + } else if (c->dir == DMA_MEM_TO_DEV) {
> > + d = c->desc;
> > +
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> > +
> > + if (d->avail_len != 0U) {
> > + mtk_uart_apdma_start_tx(c);
> > + } else {
> > + list_del(&d->vd.node);
> > + vchan_cookie_complete(&d->vd);
> > + }
> > + }
> > + spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > + return IRQ_HANDLED;
> > +}
> > +
> > +static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + unsigned int tmp;
> > + int ret;
> > +
> > + pm_runtime_get_sync(mtkd->ddev.dev);
> > +
> > + mtk_uart_apdma_write(c, VFF_ADDR, 0);
> > + mtk_uart_apdma_write(c, VFF_THRE, 0);
> > + mtk_uart_apdma_write(c, VFF_LEN, 0);
> > + mtk_uart_apdma_write(c, VFF_RST, VFF_WARM_RST_B);
> > +
> > + ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> > + if (ret) {
> > + dev_err(chan->device->dev, "dma reset: fail, timeout\n");
> > + return ret;
> > + }
> > +
> > + if (!c->requested) {
> > + c->requested = true;
> > + ret = request_irq(mtkd->dma_irq[chan->chan_id],
> > + mtk_uart_apdma_irq_handler, IRQF_TRIGGER_NONE,
> > + KBUILD_MODNAME, chan);
> > + if (ret < 0) {
> > + dev_err(chan->device->dev, "Can't request dma IRQ\n");
> > + return -EINVAL;
> > + }
> > + }
> > +
> > + if (mtkd->support_33bits)
> > + mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> > +
> > + return ret;
> > +}
> > +
> > +static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +
> > + if (c->requested) {
> > + c->requested = false;
> > + free_irq(mtkd->dma_irq[chan->chan_id], chan);
> > + }
> > +
> > + tasklet_kill(&c->vc.task);
> > +
> > + vchan_free_chan_resources(&c->vc);
> > +
> > + pm_runtime_put_sync(mtkd->ddev.dev);
> > +}
> > +
> > +static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
> > + dma_cookie_t cookie,
> > + struct dma_tx_state *txstate)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + enum dma_status ret;
> > +
> > + ret = dma_cookie_status(chan, cookie, txstate);
> > +
> > + dma_set_residue(txstate, c->rx_status);
> > +
> > + return ret;
> > +}
> > +
> > +static void mtk_uart_apdma_config_write(struct dma_chan *chan,
> > + struct dma_slave_config *cfg,
> > + enum dma_transfer_direction dir)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct mtk_uart_apdmadev *mtkd =
> > + to_mtk_uart_apdma_dev(c->vc.chan.device);
> > + unsigned int tmp;
> > +
> > + if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
> > + return;
> > +
> > + c->dir = dir;
> > +
> > + if (dir == DMA_DEV_TO_MEM) {
> > + tmp = cfg->src_port_window_size;
> > +
> > + mtk_uart_apdma_write(c, VFF_ADDR, cfg->src_addr);
> > + mtk_uart_apdma_write(c, VFF_LEN, tmp);
> > + mtk_uart_apdma_write(c, VFF_THRE, VFF_RX_THRE(tmp));
> > + mtk_uart_apdma_write(c, VFF_INT_EN,
> > + VFF_RX_INT_EN0_B | VFF_RX_INT_EN1_B);
> > + mtk_uart_apdma_write(c, VFF_RPT, 0);
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> > + } else if (dir == DMA_MEM_TO_DEV) {
> > + tmp = cfg->dst_port_window_size;
> > +
> > + mtk_uart_apdma_write(c, VFF_ADDR, cfg->dst_addr);
> > + mtk_uart_apdma_write(c, VFF_LEN, tmp);
> > + mtk_uart_apdma_write(c, VFF_THRE, VFF_TX_THRE(tmp));
> > + mtk_uart_apdma_write(c, VFF_WPT, 0);
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> > + }
> > +
> > + mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
> > +
> > + if (mtkd->support_33bits)
> > + mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> > +
> > + if (mtk_uart_apdma_read(c, VFF_EN) != VFF_EN_B)
> > + dev_err(chan->device->dev, "dir[%d] fail\n", dir);
> > +}
> > +
> > +/*
> > + * dmaengine_prep_slave_single will call the function. and sglen is 1.
> > + * 8250 uart using one ring buffer, and deal with one sg.
> > + */
> > +static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
> > + (struct dma_chan *chan, struct scatterlist *sgl,
> > + unsigned int sglen, enum dma_transfer_direction dir,
> > + unsigned long tx_flags, void *context)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct mtk_uart_apdma_desc *d;
> > +
> > + if (!is_slave_direction(dir))
> > + return NULL;
> > +
> > + mtk_uart_apdma_config_write(chan, &c->cfg, dir);
> > +
> > + /* Now allocate and setup the descriptor */
> > + d = kzalloc(sizeof(*d), GFP_ATOMIC);
> > + if (!d)
> > + return NULL;
> > +
> > + /* sglen is 1 */
> > + d->avail_len = sg_dma_len(sgl);
> > + c->rx_status = d->avail_len;
> > +
> > + return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> > +}
> > +
> > +static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct virt_dma_desc *vd;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&c->vc.lock, flags);
> > + if (vchan_issue_pending(&c->vc)) {
> > + vd = vchan_next_desc(&c->vc);
> > + c->desc = to_mtk_uart_apdma_desc(&vd->tx);
> > + }
> > +
> > + if (c->dir == DMA_DEV_TO_MEM)
> > + mtk_uart_apdma_start_rx(c);
> > + else if (c->dir == DMA_MEM_TO_DEV)
> > + mtk_uart_apdma_start_tx(c);
> > +
> > + spin_unlock_irqrestore(&c->vc.lock, flags);
> > +}
> > +
> > +static int mtk_uart_apdma_slave_config(struct dma_chan *chan,
> > + struct dma_slave_config *config)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +
> > + memcpy(&c->cfg, config, sizeof(*config));
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + unsigned long flags;
> > + unsigned int tmp;
> > + int ret;
> > +
> > + spin_lock_irqsave(&c->vc.lock, flags);
> > +
> > + mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> > + /* Wait 1sec for flush, can't sleep */
> > + ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> > + tmp != VFF_FLUSH_B, 0, 1000000);
> > + if (ret)
> > + dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
> > + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> > +
> > + /* set stop as 1 -> wait until en is 0 -> set stop as 0 */
> > + mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_B);
> > + ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> > + if (ret)
> > + dev_err(c->vc.chan.device->dev, "stop: fail, debug=0x%x\n",
> > + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> > +
> > + mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_CLR_B);
> > + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> > +
> > + if (c->dir == DMA_DEV_TO_MEM)
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> > + else if (c->dir == DMA_MEM_TO_DEV)
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> > +
> > + spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> > +{
> > + /* just for check caps pass */
> > + dev_err(chan->device->dev, "Pause can't support\n");
> > +
> > + return 0;
> > +}
>
> I think we've said repeatedly that leaving this stub function is incorrect.
>

I will try to implement it. Thanks.

> > +
> > +static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
> > +{
> > + while (!list_empty(&mtkd->ddev.channels)) {
> > + struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> > + struct mtk_chan, vc.chan.device_node);
> > +
> > + list_del(&c->vc.chan.device_node);
> > + tasklet_kill(&c->vc.task);
> > + }
> > +}
> > +
> > +static const struct of_device_id mtk_uart_apdma_match[] = {
> > + { .compatible = "mediatek,mt6577-uart-dma", },
> > + { /* sentinel */ },
> > +};
> > +MODULE_DEVICE_TABLE(of, mtk_uart_apdma_match);
> > +
> > +static int mtk_uart_apdma_probe(struct platform_device *pdev)
> > +{
> > + struct device_node *np = pdev->dev.of_node;
> > + struct mtk_uart_apdmadev *mtkd;
> > + struct resource *res;
> > + struct mtk_chan *c;
> > + int bit_mask = 32, rc;
> > + unsigned int i;
> > +
> > + mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> > + if (!mtkd)
> > + return -ENOMEM;
> > +
> > + mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> > + if (IS_ERR(mtkd->clk)) {
> > + dev_err(&pdev->dev, "No clock specified\n");
> > + rc = PTR_ERR(mtkd->clk);
> > + return rc;
> > + }
> > +
> > + if (of_property_read_bool(np, "mediatek,dma-33bits"))
> > + mtkd->support_33bits = true;
> > +
> > + if (mtkd->support_33bits)
> > + bit_mask = 33;
> > +
> > + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
> > + if (rc)
> > + return rc;
> > +
> > + dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> > + mtkd->ddev.device_alloc_chan_resources =
> > + mtk_uart_apdma_alloc_chan_resources;
> > + mtkd->ddev.device_free_chan_resources =
> > + mtk_uart_apdma_free_chan_resources;
> > + mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
> > + mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
> > + mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
> > + mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
> > + mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
> > + mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
> > + mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> > + mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> > + mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> > + mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> > + mtkd->ddev.dev = &pdev->dev;
> > + INIT_LIST_HEAD(&mtkd->ddev.channels);
> > +
> > + mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
> > + if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
> > + dev_info(&pdev->dev,
> > + "Using %u as missing dma-requests property\n",
> > + MTK_UART_APDMA_NR_VCHANS);
> > + }
> > +
> > + mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
> > + sizeof(*mtkd->dma_irq), GFP_KERNEL);
> > + if (!mtkd->dma_irq)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < mtkd->dma_requests; i++) {
> > + c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> > + if (!c) {
> > + rc = -ENODEV;
> > + goto err_no_dma;
> > + }
> > +
> > + res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> > + if (!res) {
> > + rc = -ENODEV;
> > + goto err_no_dma;
> > + }
> > +
> > + c->base = devm_ioremap_resource(&pdev->dev, res);
> > + if (IS_ERR(c->base)) {
> > + rc = PTR_ERR(c->base);
> > + goto err_no_dma;
> > + }
> > + c->requested = false;
> > + c->vc.desc_free = mtk_uart_apdma_desc_free;
> > + vchan_init(&c->vc, &mtkd->ddev);
> > +
> > + mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> > + if ((int)mtkd->dma_irq[i] < 0) {
> > + dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> > + rc = -EINVAL;
> > + goto err_no_dma;
> > + }
> > + }
> > +
> > + pm_runtime_enable(&pdev->dev);
> > + pm_runtime_set_active(&pdev->dev);
> > +
> > + rc = dma_async_device_register(&mtkd->ddev);
> > + if (rc)
> > + goto rpm_disable;
> > +
> > + platform_set_drvdata(pdev, mtkd);
> > +
> > + /* Device-tree DMA controller registration */
> > + rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
> > + if (rc)
> > + goto dma_remove;
> > +
> > + return rc;
> > +
> > +dma_remove:
> > + dma_async_device_unregister(&mtkd->ddev);
> > +rpm_disable:
> > + pm_runtime_disable(&pdev->dev);
> > +err_no_dma:
> > + mtk_uart_apdma_free(mtkd);
> > + return rc;
> > +}
> > +
> > +static int mtk_uart_apdma_remove(struct platform_device *pdev)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
> > +
> > + if (pdev->dev.of_node)
> > + of_dma_controller_free(pdev->dev.of_node);
> > +
> > + pm_runtime_disable(&pdev->dev);
> > + pm_runtime_put_noidle(&pdev->dev);
> > +
> > + dma_async_device_unregister(&mtkd->ddev);
> > + mtk_uart_apdma_free(mtkd);
> > +
> > + return 0;
> > +}
> > +
> > +#ifdef CONFIG_PM_SLEEP
> > +static int mtk_uart_apdma_suspend(struct device *dev)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + if (!pm_runtime_suspended(dev))
> > + clk_disable_unprepare(mtkd->clk);
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_resume(struct device *dev)
> > +{
> > + int ret;
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + if (!pm_runtime_suspended(dev)) {
> > + ret = clk_prepare_enable(mtkd->clk);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +#endif /* CONFIG_PM_SLEEP */
> > +
> > +#ifdef CONFIG_PM
> > +static int mtk_uart_apdma_runtime_suspend(struct device *dev)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + clk_disable_unprepare(mtkd->clk);
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_runtime_resume(struct device *dev)
> > +{
> > + int ret;
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + ret = clk_prepare_enable(mtkd->clk);
> > + if (ret)
> > + return ret;
> > +
> > + return 0;
> > +}
> > +#endif /* CONFIG_PM */
> > +
> > +static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
> > + SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
> > + SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
> > + mtk_uart_apdma_runtime_resume, NULL)
> > +};
> > +
> > +static struct platform_driver mtk_uart_apdma_driver = {
> > + .probe = mtk_uart_apdma_probe,
> > + .remove = mtk_uart_apdma_remove,
> > + .driver = {
> > + .name = KBUILD_MODNAME,
> > + .pm = &mtk_uart_apdma_pm_ops,
> > + .of_match_table = of_match_ptr(mtk_uart_apdma_match),
> > + },
> > +};
> > +
> > +module_platform_driver(mtk_uart_apdma_driver);
> > +
> > +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> > +MODULE_AUTHOR("Long Cheng <[email protected]>");
> > +MODULE_LICENSE("GPL v2");
> > +
> > --
> > 1.7.9.5
> >


2019-04-11 08:23:08

by Long Cheng

[permalink] [raw]
Subject: Re: [PATCH v11 1/4] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support

On Sun, 2019-03-10 at 17:31 -0700, Sean Wang wrote:
> Hi, Long
>
> I've listed some comments below, and this week I will find a board to
> test on and then improve the driver.
>
> Sean
>
> On Wed, Mar 6, 2019 at 5:45 PM Long Cheng <[email protected]> wrote:
> >
> > In DMA engine framework, add 8250 uart dma to support MediaTek uart.
> > If MediaTek uart enabled(SERIAL_8250_MT6577), and want to improve
> > the performance, can enable the function.
> >
> > Signed-off-by: Long Cheng <[email protected]>
> > ---
> > drivers/dma/mediatek/Kconfig | 11 +
> > drivers/dma/mediatek/Makefile | 1 +
> > drivers/dma/mediatek/mtk-uart-apdma.c | 660 +++++++++++++++++++++++++++++++++
> > 3 files changed, 672 insertions(+)
> > create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c
> >
> > diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> > index 680fc05..ac49eb6 100644
> > --- a/drivers/dma/mediatek/Kconfig
> > +++ b/drivers/dma/mediatek/Kconfig
> > @@ -24,3 +24,14 @@ config MTK_CQDMA
> >
> > This controller provides the channels which is dedicated to
> > memory-to-memory transfer to offload from CPU.
> > +
> > +config MTK_UART_APDMA
> > + tristate "MediaTek SoCs APDMA support for UART"
> > + depends on OF && SERIAL_8250_MT6577
> > + select DMA_ENGINE
> > + select DMA_VIRTUAL_CHANNELS
> > + help
> > + Support for the UART DMA engine found on MediaTek MTK SoCs.
> > + When SERIAL_8250_MT6577 is enabled, and if you want to use DMA,
> > + you can enable the config. The DMA engine can only be used
> > + with MediaTek SoCs.
> > diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> > index 41bb381..61a6d29 100644
> > --- a/drivers/dma/mediatek/Makefile
> > +++ b/drivers/dma/mediatek/Makefile
> > @@ -1,2 +1,3 @@
> > +obj-$(CONFIG_MTK_UART_APDMA) += mtk-uart-apdma.o
> > obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> > obj-$(CONFIG_MTK_CQDMA) += mtk-cqdma.o
> > diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
> > new file mode 100644
> > index 0000000..9ed7a49
> > --- /dev/null
> > +++ b/drivers/dma/mediatek/mtk-uart-apdma.c
> > @@ -0,0 +1,660 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * MediaTek Uart APDMA driver.
> > + *
> > + * Copyright (c) 2018 MediaTek Inc.
> > + * Author: Long Cheng <[email protected]>
> > + */
> > +
> > +#include <linux/clk.h>
> > +#include <linux/dmaengine.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/err.h>
> > +#include <linux/init.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/iopoll.h>
> > +#include <linux/kernel.h>
> > +#include <linux/list.h>
> > +#include <linux/module.h>
> > +#include <linux/of_device.h>
> > +#include <linux/of_dma.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/slab.h>
> > +#include <linux/spinlock.h>
> > +
> > +#include "../virt-dma.h"
> > +
> > +/* The default number of virtual channel */
> > +#define MTK_UART_APDMA_NR_VCHANS 8
> > +
> > +#define VFF_EN_B BIT(0)
> > +#define VFF_STOP_B BIT(0)
> > +#define VFF_FLUSH_B BIT(0)
> > +#define VFF_4G_SUPPORT_B BIT(0)
> > +#define VFF_RX_INT_EN0_B BIT(0) /* rx valid size >= vff thre */
> > +#define VFF_RX_INT_EN1_B BIT(1)
> > +#define VFF_TX_INT_EN_B BIT(0) /* tx left size >= vff thre */
> > +#define VFF_WARM_RST_B BIT(0)
> > +#define VFF_RX_INT_CLR_B (BIT(0) | BIT(1))
> > +#define VFF_TX_INT_CLR_B 0
> > +#define VFF_STOP_CLR_B 0
> > +#define VFF_INT_EN_CLR_B 0
> > +#define VFF_4G_SUPPORT_CLR_B 0
> > +
> > +/* interrupt trigger level for tx */
> > +#define VFF_TX_THRE(n) ((n) * 7 / 8)
> > +/* interrupt trigger level for rx */
> > +#define VFF_RX_THRE(n) ((n) * 3 / 4)
> > +
> > +#define VFF_RING_SIZE 0xffffU
> > +/* invert this bit when wrap ring head again */
> > +#define VFF_RING_WRAP 0x10000U
> > +
> > +#define VFF_INT_FLAG 0x00
> > +#define VFF_INT_EN 0x04
> > +#define VFF_EN 0x08
> > +#define VFF_RST 0x0c
> > +#define VFF_STOP 0x10
> > +#define VFF_FLUSH 0x14
> > +#define VFF_ADDR 0x1c
> > +#define VFF_LEN 0x24
> > +#define VFF_THRE 0x28
> > +#define VFF_WPT 0x2c
> > +#define VFF_RPT 0x30
> > +/* TX: the buffer size HW can read. RX: the buffer size SW can read. */
> > +#define VFF_VALID_SIZE 0x3c
> > +/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
> > +#define VFF_LEFT_SIZE 0x40
> > +#define VFF_DEBUG_STATUS 0x50
> > +#define VFF_4G_SUPPORT 0x54
> > +
> > +struct mtk_uart_apdmadev {
> > + struct dma_device ddev;
> > + struct clk *clk;
> > + bool support_33bits;
> > + unsigned int dma_requests;
> > + unsigned int *dma_irq;
> > +};
> > +
> > +struct mtk_uart_apdma_desc {
> > + struct virt_dma_desc vd;
> > +
> > + unsigned int avail_len;
> > +};
> > +
> > +struct mtk_chan {
> > + struct virt_dma_chan vc;
> > + struct dma_slave_config cfg;
> > + void __iomem *base;
> > + struct mtk_uart_apdma_desc *desc;
> > +
> > + enum dma_transfer_direction dir;
> > +
> > + bool requested;
> > +
> > + unsigned int rx_status;
> > +};
> > +
> > +static inline struct mtk_uart_apdmadev *
> > +to_mtk_uart_apdma_dev(struct dma_device *d)
> > +{
> > + return container_of(d, struct mtk_uart_apdmadev, ddev);
> > +}
> > +
> > +static inline struct mtk_chan *to_mtk_uart_apdma_chan(struct dma_chan *c)
> > +{
> > + return container_of(c, struct mtk_chan, vc.chan);
> > +}
> > +
> > +static inline struct mtk_uart_apdma_desc *to_mtk_uart_apdma_desc
> > + (struct dma_async_tx_descriptor *t)
> > +{
> > + return container_of(t, struct mtk_uart_apdma_desc, vd.tx);
> > +}
> > +
> > +static void mtk_uart_apdma_write(struct mtk_chan *c,
> > + unsigned int reg, unsigned int val)
> > +{
> > + writel(val, c->base + reg);
> > +}
> > +
> > +static unsigned int mtk_uart_apdma_read(struct mtk_chan *c, unsigned int reg)
> > +{
> > + return readl(c->base + reg);
> > +}
> > +
> > +static void mtk_uart_apdma_desc_free(struct virt_dma_desc *vd)
> > +{
> > + struct dma_chan *chan = vd->tx.chan;
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +
> > + kfree(c->desc);
> > +}
> > +
> > +static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
> > +{
> > + unsigned int len, send, left, wpt, d_wpt, tmp;
> > + int ret;
> > +
> > + left = mtk_uart_apdma_read(c, VFF_LEFT_SIZE);
> > + if (!left) {
> > + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> > + return;
> > + }
> > +
> > + /* Wait 1sec for flush, can't sleep */
> > + ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> > + tmp != VFF_FLUSH_B, 0, 1000000);
>
> It is really not a good idea to poll for up to 1 second in interrupt
> context.
>

I will modify it in the next version.

> > + if (ret)
> > + dev_warn(c->vc.chan.device->dev, "tx: fail, debug=0x%x\n",
> > + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> > +
> > + send = min_t(unsigned int, left, c->desc->avail_len);
> > + wpt = mtk_uart_apdma_read(c, VFF_WPT);
> > + len = c->cfg.dst_port_window_size;
> > +
> > + d_wpt = wpt + send;
> > + if ((d_wpt & VFF_RING_SIZE) >= len) {
>
> I am confused about what the VFF size is. Is it VFF_RING_SIZE or
> c->cfg.dst_port_window_size?
>

VFF_RING_SIZE is the max length that the HW can support.
c->cfg.dst_port_window_size is the actual length.

> > + d_wpt = d_wpt - len;
> > + d_wpt = d_wpt ^ VFF_RING_WRAP;
> > + }
> > + mtk_uart_apdma_write(c, VFF_WPT, d_wpt);
> > +
> > + c->desc->avail_len -= send;
> > +
> > + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
>
> Why do we need to program the interrupt enable bit again?

It's a HW requirement; the step is mandatory.

>
> > + if (mtk_uart_apdma_read(c, VFF_FLUSH) == 0U)
> > + mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> > +}
> > +
> > +static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
> > +{
> > + struct mtk_uart_apdma_desc *d = c->desc;
> > + unsigned int len, wg, rg;
> > + int cnt;
> > +
> > + if ((mtk_uart_apdma_read(c, VFF_VALID_SIZE) == 0U) ||
> > + !d || !vchan_next_desc(&c->vc))
> > + return;
>
> If the current descriptor is not available, the hardware should be
> idle or stopped, so I think the condition can be removed, or else
> there is somewhere you handle descriptors incorrectly.

I will modify it in the next version.

>
> > +
> > + len = c->cfg.src_port_window_size;
> > + rg = mtk_uart_apdma_read(c, VFF_RPT);
> > + wg = mtk_uart_apdma_read(c, VFF_WPT);
> > + cnt = (wg & VFF_RING_SIZE) - (rg & VFF_RING_SIZE);
>
> Is it possible that rg and wg would be greater than VFF_RING_SIZE?
>

No.

> > + /*
> > + * The buffer is ring buffer. If wrap bit different,
> > + * represents the start of the next cycle for WPT
> > + */
> > + if ((rg ^ wg) & VFF_RING_WRAP)
> > + cnt += len;
>
> Again, I am confused about what the VFF size is. Is it VFF_RING_SIZE
> or c->cfg.dst_port_window_size?
>
> > +
> > + c->rx_status = d->avail_len - cnt;
> > + mtk_uart_apdma_write(c, VFF_RPT, wg);
> > +
> > + list_del(&d->vd.node);
> > + vchan_cookie_complete(&d->vd);
> > +}
> > +
> > +static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
> > +{
> > + struct dma_chan *chan = (struct dma_chan *)dev_id;
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct mtk_uart_apdma_desc *d;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&c->vc.lock, flags);
> > + if (c->dir == DMA_DEV_TO_MEM) {
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> > + mtk_uart_apdma_start_rx(c);
> > + } else if (c->dir == DMA_MEM_TO_DEV) {
> > + d = c->desc;
> > +
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> > +
> > + if (d->avail_len != 0U) {
> > + mtk_uart_apdma_start_tx(c);
> > + } else {
> > + list_del(&d->vd.node);
> > + vchan_cookie_complete(&d->vd);
> > + }
> > + }
> > + spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > + return IRQ_HANDLED;
> > +}
> > +
> > +static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + unsigned int tmp;
> > + int ret;
> > +
> > + pm_runtime_get_sync(mtkd->ddev.dev);
>
> Add error handling, something like:
>
>         err = pm_runtime_get_sync(mtkd->ddev.dev);
>         if (err < 0) {
>                 pm_runtime_put_noidle(mtkd->ddev.dev);
>                 ...
>         }
>

I will modify it in the next version.

> > +
> > + mtk_uart_apdma_write(c, VFF_ADDR, 0);
> > + mtk_uart_apdma_write(c, VFF_THRE, 0);
> > + mtk_uart_apdma_write(c, VFF_LEN, 0);
> > + mtk_uart_apdma_write(c, VFF_RST, VFF_WARM_RST_B);
> > +
> > + ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> > + if (ret) {
> > + dev_err(chan->device->dev, "dma reset: fail, timeout\n");
> > + return ret;
> > + }
> > +
> > + if (!c->requested) {
> > + c->requested = true;
>
> The variable c->requested can be dropped, since the same channel
> shouldn't be requested more than once.
>
I will modify it in the next version.

> > + ret = request_irq(mtkd->dma_irq[chan->chan_id],
> > + mtk_uart_apdma_irq_handler, IRQF_TRIGGER_NONE,
> > + KBUILD_MODNAME, chan);
> > + if (ret < 0) {
> > + dev_err(chan->device->dev, "Can't request dma IRQ\n");
> > + return -EINVAL;
> > + }
> > + }
> > +
> > + if (mtkd->support_33bits)
> > + mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> > +
> > + return ret;
> > +}
> > +
> > +static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +
> > + if (c->requested) {
>
> ditto as the above

I will modify it in the next version.
>
> > + c->requested = false;
> > + free_irq(mtkd->dma_irq[chan->chan_id], chan);
> > + }
> > +
> > + tasklet_kill(&c->vc.task);
> > +
> > + vchan_free_chan_resources(&c->vc);
> > +
> > + pm_runtime_put_sync(mtkd->ddev.dev);
> > +}
> > +
> > +static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
> > + dma_cookie_t cookie,
> > + struct dma_tx_state *txstate)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + enum dma_status ret;
> > +
> > + ret = dma_cookie_status(chan, cookie, txstate);
> > +
> > + dma_set_residue(txstate, c->rx_status);
> > +
>
> This handling is not enough. You should look up the descriptor
> corresponding to the cookie, then calculate and return the tx status
> from that descriptor.
>

Because the UART can't get any interrupt except the DMA interrupt, the
APDMA driver needs to notify the UART to fetch the data, and then the
descriptor is released. So we need to keep this solution.

> > + return ret;
> > +}
> > +
> > +static void mtk_uart_apdma_config_write(struct dma_chan *chan,
> > + struct dma_slave_config *cfg,
> > + enum dma_transfer_direction dir)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct mtk_uart_apdmadev *mtkd =
> > + to_mtk_uart_apdma_dev(c->vc.chan.device);
> > + unsigned int tmp;
> > +
> > + if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
> > + return;
> > +
> > + c->dir = dir;
>
> The direction is fixed by the device, so I don't think it needs to
> be kept in software state.

We need to save it, because the RX and TX handling isn't all the same.

>
> > +
> > + if (dir == DMA_DEV_TO_MEM) {
> > + tmp = cfg->src_port_window_size;
> > +
> > + mtk_uart_apdma_write(c, VFF_ADDR, cfg->src_addr);
>
> That is wrong. ->src_addr is the physical address from which DMA
> slave data should be read (RX), not the memory address.
>
> You should program the VFF_ADDR and VFF_LEN registers with the sg
> address and length passed to device_prep_slave_sg.

I will modify it in the next version.
>
> > + mtk_uart_apdma_write(c, VFF_LEN, tmp);
> > + mtk_uart_apdma_write(c, VFF_THRE, VFF_RX_THRE(tmp));
> > + mtk_uart_apdma_write(c, VFF_INT_EN,
> > + VFF_RX_INT_EN0_B | VFF_RX_INT_EN1_B);
> > + mtk_uart_apdma_write(c, VFF_RPT, 0);
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> > + } else if (dir == DMA_MEM_TO_DEV) {
> > + tmp = cfg->dst_port_window_size;
> > +
> > + mtk_uart_apdma_write(c, VFF_ADDR, cfg->dst_addr);
>
> That is also wrong. ->dst_addr is the physical address to which DMA
> slave data should be written (TX), not the memory address, similar to
> the above explanation.

I will modify it in the next version.
>
> > + mtk_uart_apdma_write(c, VFF_LEN, tmp);
> > + mtk_uart_apdma_write(c, VFF_THRE, VFF_TX_THRE(tmp));
> > + mtk_uart_apdma_write(c, VFF_WPT, 0);
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> > + }
> > +
> > + mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
> > +
> > + if (mtkd->support_33bits)
> > + mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> > +
> > + if (mtk_uart_apdma_read(c, VFF_EN) != VFF_EN_B)
> > + dev_err(chan->device->dev, "dir[%d] fail\n", dir);
> > +}
> > +
> > +/*
> > + * dmaengine_prep_slave_single will call the function. and sglen is 1.
> > + * 8250 uart using one ring buffer, and deal with one sg.
> > + */
> > +static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
> > + (struct dma_chan *chan, struct scatterlist *sgl,
> > + unsigned int sglen, enum dma_transfer_direction dir,
> > + unsigned long tx_flags, void *context)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct mtk_uart_apdma_desc *d;
> > +
> > + if (!is_slave_direction(dir))
> > + return NULL;
> > +
> > + mtk_uart_apdma_config_write(chan, &c->cfg, dir);
> > +
> > + /* Now allocate and setup the descriptor */
> > + d = kzalloc(sizeof(*d), GFP_ATOMIC);
> > + if (!d)
> > + return NULL;
> > +
> > + /* sglen is 1 */
> > + d->avail_len = sg_dma_len(sgl);
> > + c->rx_status = d->avail_len;
> > +
> > + return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> > +}
> > +
> > +static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + struct virt_dma_desc *vd;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&c->vc.lock, flags);
> > + if (vchan_issue_pending(&c->vc)) {
> > + vd = vchan_next_desc(&c->vc);
> > + c->desc = to_mtk_uart_apdma_desc(&vd->tx);
> > + }
> > +
> > + if (c->dir == DMA_DEV_TO_MEM)
> > + mtk_uart_apdma_start_rx(c);
> > + else if (c->dir == DMA_MEM_TO_DEV)
> > + mtk_uart_apdma_start_tx(c);
> > +
> > + spin_unlock_irqrestore(&c->vc.lock, flags);
> > +}
> > +
> > +static int mtk_uart_apdma_slave_config(struct dma_chan *chan,
> > + struct dma_slave_config *config)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +
> > + memcpy(&c->cfg, config, sizeof(*config));
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
> > +{
> > + struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > + unsigned long flags;
> > + unsigned int tmp;
> > + int ret;
> > +
> > + spin_lock_irqsave(&c->vc.lock, flags);
> > +
> > + mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> > + /* Wait 1sec for flush, can't sleep */
> > + ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> > + tmp != VFF_FLUSH_B, 0, 1000000);
>
> It is extremely bad to poll for up to one second while holding
> spin_lock_irqsave.

I will modify it in the next version.
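
The plan is to do the flush polling before taking the lock, roughly
like this (untested):

	mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
	/* wait up to 1sec for the flush, outside of the vc.lock */
	ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
				 tmp != VFF_FLUSH_B, 0, 1000000);
	if (ret)
		dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));

	spin_lock_irqsave(&c->vc.lock, flags);
	/* only the stop/clear register writes remain under the lock */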
>
> > + if (ret)
> > + dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
> > + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> > +
> > + /* set stop as 1 -> wait until en is 0 -> set stop as 0 */
> > + mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_B);
> > + ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> > + if (ret)
> > + dev_err(c->vc.chan.device->dev, "stop: fail, debug=0x%x\n",
> > + mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> > +
> > + mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_CLR_B);
> > + mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> > +
> > + if (c->dir == DMA_DEV_TO_MEM)
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> > + else if (c->dir == DMA_MEM_TO_DEV)
> > + mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> > +
> > + spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> > +{
> > + /* just for check caps pass */
> > + dev_err(chan->device->dev, "Pause can't support\n");
> > +
>
> If the device can't support a hardware pause, we can implement it as
> a software pause based on the vdesc.
>

OK, I will try it.
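
A minimal software-pause sketch I could try (untested; 'paused' would
be a new field in struct mtk_chan):

	static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
	{
		struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
		unsigned long flags;

		spin_lock_irqsave(&c->vc.lock, flags);
		/* block new transfers; hardware keeps its state */
		c->paused = true;
		spin_unlock_irqrestore(&c->vc.lock, flags);

		return 0;
	}

mtk_uart_apdma_issue_pending() would then skip starting a new transfer
while c->paused is set, and device_resume would clear the flag and kick
the queue.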

> > + return 0;
> > +}
> > +
> > +static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
> > +{
> > + while (!list_empty(&mtkd->ddev.channels)) {
> > + struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> > + struct mtk_chan, vc.chan.device_node);
> > +
> > + list_del(&c->vc.chan.device_node);
> > + tasklet_kill(&c->vc.task);
> > + }
> > +}
> > +
> > +static const struct of_device_id mtk_uart_apdma_match[] = {
> > + { .compatible = "mediatek,mt6577-uart-dma", },
> > + { /* sentinel */ },
> > +};
> > +MODULE_DEVICE_TABLE(of, mtk_uart_apdma_match);
> > +
> > +static int mtk_uart_apdma_probe(struct platform_device *pdev)
> > +{
> > + struct device_node *np = pdev->dev.of_node;
> > + struct mtk_uart_apdmadev *mtkd;
> > + struct resource *res;
> > + struct mtk_chan *c;
> > + int bit_mask = 32, rc;
> > + unsigned int i;
> > +
> > + mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> > + if (!mtkd)
> > + return -ENOMEM;
> > +
> > + mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> > + if (IS_ERR(mtkd->clk)) {
> > + dev_err(&pdev->dev, "No clock specified\n");
> > + rc = PTR_ERR(mtkd->clk);
> > + return rc;
> > + }
> > +
> > + if (of_property_read_bool(np, "mediatek,dma-33bits"))
> > + mtkd->support_33bits = true;
> > +
> > + if (mtkd->support_33bits)
> > + bit_mask = 33;
> > +
> > + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
> > + if (rc)
> > + return rc;
> > +
> > + dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> > + mtkd->ddev.device_alloc_chan_resources =
> > + mtk_uart_apdma_alloc_chan_resources;
> > + mtkd->ddev.device_free_chan_resources =
> > + mtk_uart_apdma_free_chan_resources;
> > + mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
> > + mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
> > + mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
> > + mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
> > + mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
> > + mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
> > + mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> > + mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> > + mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> > + mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> > + mtkd->ddev.dev = &pdev->dev;
> > + INIT_LIST_HEAD(&mtkd->ddev.channels);
> > +
> > + mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
> > + if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
> > + dev_info(&pdev->dev,
> > + "Using %u as missing dma-requests property\n",
> > + MTK_UART_APDMA_NR_VCHANS);
> > + }
> > +
> > + mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
> > + sizeof(*mtkd->dma_irq), GFP_KERNEL);
> > + if (!mtkd->dma_irq)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < mtkd->dma_requests; i++) {
> > + c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> > + if (!c) {
> > + rc = -ENODEV;
> > + goto err_no_dma;
> > + }
> > +
> > + res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> > + if (!res) {
> > + rc = -ENODEV;
> > + goto err_no_dma;
> > + }
> > +
> > + c->base = devm_ioremap_resource(&pdev->dev, res);
> > + if (IS_ERR(c->base)) {
> > + rc = PTR_ERR(c->base);
> > + goto err_no_dma;
> > + }
> > + c->requested = false;
> > + c->vc.desc_free = mtk_uart_apdma_desc_free;
> > + vchan_init(&c->vc, &mtkd->ddev);
> > +
> > + mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> > + if ((int)mtkd->dma_irq[i] < 0) {
> > + dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> > + rc = -EINVAL;
> > + goto err_no_dma;
> > + }
> > + }
> > +
> > + pm_runtime_enable(&pdev->dev);
> > + pm_runtime_set_active(&pdev->dev);
> > +
> > + rc = dma_async_device_register(&mtkd->ddev);
> > + if (rc)
> > + goto rpm_disable;
> > +
> > + platform_set_drvdata(pdev, mtkd);
> > +
> > + /* Device-tree DMA controller registration */
> > + rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
> > + if (rc)
> > + goto dma_remove;
> > +
> > + return rc;
> > +
> > +dma_remove:
> > + dma_async_device_unregister(&mtkd->ddev);
> > +rpm_disable:
> > + pm_runtime_disable(&pdev->dev);
> > +err_no_dma:
> > + mtk_uart_apdma_free(mtkd);
> > + return rc;
> > +}
> > +
> > +static int mtk_uart_apdma_remove(struct platform_device *pdev)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
> > +
> > + if (pdev->dev.of_node)
> > + of_dma_controller_free(pdev->dev.of_node);
> > +
> > + pm_runtime_disable(&pdev->dev);
> > + pm_runtime_put_noidle(&pdev->dev);
>
> That pm_runtime_put_noidle() should be removed; otherwise it causes
> an inconsistency with the probe handler.
>

Ok, I will remove it.

> > +
> > + dma_async_device_unregister(&mtkd->ddev);
> > + mtk_uart_apdma_free(mtkd);
> > +
> > + return 0;
> > +}
> > +
> > +#ifdef CONFIG_PM_SLEEP
> > +static int mtk_uart_apdma_suspend(struct device *dev)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + if (!pm_runtime_suspended(dev))
> > + clk_disable_unprepare(mtkd->clk);
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_resume(struct device *dev)
> > +{
> > + int ret;
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + if (!pm_runtime_suspended(dev)) {
> > + ret = clk_prepare_enable(mtkd->clk);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +#endif /* CONFIG_PM_SLEEP */
> > +
> > +#ifdef CONFIG_PM
> > +static int mtk_uart_apdma_runtime_suspend(struct device *dev)
> > +{
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + clk_disable_unprepare(mtkd->clk);
> > +
> > + return 0;
> > +}
> > +
> > +static int mtk_uart_apdma_runtime_resume(struct device *dev)
> > +{
> > + int ret;
> > + struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> > +
> > + ret = clk_prepare_enable(mtkd->clk);
> > + if (ret)
> > + return ret;
> > +
> > + return 0;
> > +}
> > +#endif /* CONFIG_PM */
> > +
> > +static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
> > + SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
> > + SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
> > + mtk_uart_apdma_runtime_resume, NULL)
> > +};
>
> It probably causes a build error when CONFIG_PM is not enabled.
> Also, use UNIVERSAL_DEV_PM_OPS, because the runtime suspend/resume
> and system suspend/resume paths for the DMA are almost the same.
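>
> For example (untested), with one pair of callbacks that just disable
> and enable the clock:
>
> static UNIVERSAL_DEV_PM_OPS(mtk_uart_apdma_pm_ops,
> 			    mtk_uart_apdma_suspend,
> 			    mtk_uart_apdma_resume, NULL);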
>

As I remember, these configurations were tested and the build passed.

> > +
> > +static struct platform_driver mtk_uart_apdma_driver = {
> > + .probe = mtk_uart_apdma_probe,
> > + .remove = mtk_uart_apdma_remove,
> > + .driver = {
> > + .name = KBUILD_MODNAME,
> > + .pm = &mtk_uart_apdma_pm_ops,
> > + .of_match_table = of_match_ptr(mtk_uart_apdma_match),
> > + },
> > +};
> > +
> > +module_platform_driver(mtk_uart_apdma_driver);
> > +
> > +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> > +MODULE_AUTHOR("Long Cheng <[email protected]>");
> > +MODULE_LICENSE("GPL v2");
> > +
> > --
> > 1.7.9.5
> >